Solving the Cauchy Problem Related to the Helmholtz Equation
through a Genetic Algorithm
JAMAL DAOUDI1, CHAKIR TAJANI2
1,2SMAD Team, Department of Mathematics
Polydisciplinary Faculty of Larache
Abdelmalek Essaadi University
MOROCCO
Abstract: - The Cauchy problem associated with the Helmholtz equation is an ill-posed inverse problem
that is challenging to solve due to its instability and sensitivity to noise. In this paper, we propose a
metaheuristic approach to solve this problem using Genetic Algorithms in conjunction with Tikhonov
regularization. Our approach is able to produce stable, convergent, and accurate solutions for the Cauchy
problem, even in the presence of noise. Numerical results on both regular and irregular domains show the
eectiveness and accuracy of our approach.
Key-Words: - Inverse Problem, Helmholtz Equation, Tikhonov Regularization, Optimization, Genetic
Algorithms
Received: May 19, 2023. Revised: August 21, 2023. Accepted: September 27, 2023. Published: October 9, 2023.
1 Introduction
Let Ω be an open and bounded domain in R² with
a smooth boundary Γ. We divide the boundary
into two disjoint parts, Γ = Γi ∪ Γc, where Γi ∩ Γc = ∅
and mes(Γc) ≠ 0.
The mathematical formulation of the Cauchy problem for the Helmholtz equation can be expressed as:

(P):   Δu + κ²u = 0   in Ω
       u = f          on Γc
       ∂_n u = g      on Γc        (1)

where Δ is the Laplacian operator, ∂_n denotes the
outward normal derivative, κ is a complex number
(the wave number), and f and g are the Cauchy data
available on the accessible boundary Γc.
This problem arises in many important physical applications related to wave propagation and
vibration phenomena; see [1], [2], [3], [4].
The Helmholtz equation is a fundamental equa-
tion in physics that describes the propagation of
waves. In the past century, extensive research
has been carried out on the direct problem of
the Helmholtz equation, which involves nding
the solution to the equation given boundary data
(Dirichlet, Neumann and Dirichlet-Neumann).
However, in practical situations, it is often not
possible to obtain boundary data for the entire
boundary. Instead, we may only have access to
noisy data related to a specic section of the
boundary or some points within the domain. This
leads to inverse problems, in which the goal is to
find the solution to the equation given incomplete
or noisy data.
The Cauchy problem for the Helmholtz equation is an example of an ill-posed inverse problem:
small perturbations in the given data can result in
significant changes to the solution, and the solution
does not depend continuously on the given Cauchy
data. This makes it difficult to find accurate solutions
to the Cauchy problem, and special methods are
often required.
References [5] and [6] discuss the ill-posedness
of the Cauchy problem for the Helmholtz equation
in more detail. Conventional numerical methods
are not sufficient for solving the problem under
investigation. To remedy this, numerous numerical
methods have been suggested in order to solve
the Cauchy problem for the Helmholtz equation,
such as the method of fundamental solutions [7],
the method of plane waves [8], the Landweber
approach [9], Fourier regularization [10], [11],
the conjugate gradient method [12], [13],
the boundary element minimal error method [14],
the spherical wave expansion method [15], and
the boundary knot method [16].
For solving the Cauchy problem associated
with the Helmholtz equation, two main ap-
proaches are commonly used: iterative methods
and direct methods. Iterative methods start with
an initial guess of the solution and then iteratively
improve the guess by minimizing a cost function,
WSEAS TRANSACTIONS on MATHEMATICS
DOI: 10.37394/23206.2023.22.79
Jamal Daoudi, Chakir Tajani
E-ISSN: 2224-2880
719
Volume 22, 2023
such as the error between the calculated and mea-
sured data. This process can be computationally
intensive, as the problem must be solved at each
iteration. However, iterative methods are often
more robust than direct methods and can be used
to solve problems that are ill-posed. On the other
hand, direct methods require less computational
time as the problem is discretized only once, but
they may be susceptible to numerical instability.
It should be noted that the aforementioned
methods are deterministic techniques. However,
the deterministic approach has limitations, particularly when dealing with complex systems that are
influenced by many variables and factors. In such
cases, deterministic models may not be able to
account for all the variables and uncertainties involved, leading to inaccuracies and an incomplete
understanding of the system. Besides deterministic
techniques, there is a second class called stochastic
techniques. Stochastic techniques refer to a class
of mathematical methods that deal with randomness, uncertainty, and probability.
Metaheuristic algorithms draw inspiration from
natural processes such as biological evolution,
swarm intelligence, and other phenomena. For
instance, genetic algorithms [17] emulate natural selection and evolution, while particle swarm
optimization [18] mimics the collective behavior
of flocks of birds or swarms of insects. Similarly,
ant colony optimization [19] is based on the behavior of real ant colonies, and the bat algorithm
[20] imitates the echolocation behavior of bats.
These algorithms are designed to efficiently explore a vast search space by iteratively generating
and evaluating candidate solutions, with the goal
of finding an optimal or near-optimal solution.
This study proposes a new computational algorithm for solving the Cauchy problem related
to the Helmholtz equation. The method is based
on a genetic algorithm coupled with Tikhonov
regularization, and considers the solution on the
underspecified boundary Γi as a control in a direct mixed well-posed problem. The proposed approach aims at accurately fitting the Cauchy data
on the overspecified boundary Γc by minimizing
a cost function that measures the discrepancies
between the available data and the corresponding
calculated values.
The rest of this paper is outlined as follows.
Section 2 introduces the formulation of the inverse
problem under consideration. In Section 3, we offer a concise overview of genetic algorithms and
explore the capabilities of the real-coded genetic
algorithm, which has been tailored to solve the
inverse problem under consideration. To demonstrate the accuracy and efficiency of the proposed
method, Section 4 presents two numerical examples featuring regular and irregular domains. Finally, Section 5 summarizes the key findings of the
research and offers concluding remarks.
2 Formulation of the problem as an
optimization problem
2.1 Optimization problem
The purpose of this paper is to use an adapted
real-coded genetic algorithm combined with the
finite element method to estimate the Cauchy data
on the inaccessible part of the boundary Γi from
the available data f and g on Γc.
Since φ and φ′ on the boundary Γi are to be
determined, two direct problems are considered:

(P_D):  Δu + κ²u = 0   in Ω
        u = φ          on Γi
        ∂_n u = g      on Γc        (2)

(P_N):  Δu + κ²u = 0   in Ω
        u = f          on Γc
        ∂_n u = φ′     on Γi        (3)

It should be noted that if φ ∈ H^{1/2}(Γi) and
g ∈ H^{−1/2}(Γc) (resp. f ∈ H^{1/2}(Γc) and φ′ ∈
H^{−1/2}(Γi)), then there is a unique solution u(φ, g)
(resp. u(φ′, f)) of the direct problem Eq.(2) (resp.
Eq.(3)); see [21]. We are looking for φ (resp. φ′)
such that:

u(φ, g) = f        on Γc
∂_n u(φ′, f) = g   on Γc        (4)

which leads to minimizing the least-squares functionals J_D and J_N defined by:

J_D(φ) = (1/2) ‖u(φ, g) − f‖²_{L²(Γc)}        (5)

and

J_N(φ′) = (1/2) ‖u(φ′, f) − g‖²_{L²(Γc)}        (6)
2.2 Tikhonov regularization
In an inverse problem, the observed data is typically affected by noise and measurement errors,
which can lead to instability and poor accuracy
in the estimation of the unknown parameters.
Tikhonov regularization helps to overcome these
issues by introducing a regularization term.
In this case, the Tikhonov regularization method is
used to convert the two objective functions Eq.(5)
and Eq.(6) to the well-posed form given as follows:
J_DR(φ) = (1/2) ‖u(φ, g) − f‖²_{L²(Γc)} + (α/2) ‖φ‖²_{L²(Γi)}        (7)

and

J_NR(φ′) = (1/2) ‖u(φ′, f) − g‖²_{L²(Γc)} + (β/2) ‖φ′‖²_{L²(Γi)}        (8)

where α and β are the regularization parameters,
and (α/2)‖φ‖² and (β/2)‖φ′‖² are the well-known
Tikhonov regularization terms. In the literature,
various effective techniques are recommended for
choosing the most suitable value of the regularization parameter, including the L-curve method
[22] and the discrepancy principle [23]. These
approaches avoid the need to use excessively small
or large positive values of α (resp. β) to ensure the
stability of the solution.
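Numerically, the regularized functional Eq.(7) is a weighted sum of squares. The following sketch (Python/NumPy; the function name, array layout, and quadrature weights are illustrative assumptions, since the paper evaluates these norms inside FreeFem++) shows its discrete counterpart:

```python
import numpy as np

def j_dr(u_trace, f, phi, alpha, w_c, w_i):
    """Discrete Tikhonov-regularized cost J_DR of Eq.(7) (a sketch).

    u_trace : values of u(phi, g) on Gamma_c (output of a direct solve)
    f       : Dirichlet data on Gamma_c
    phi     : candidate Dirichlet values on Gamma_i
    w_c, w_i: quadrature weights approximating the L2 boundary measures
    """
    misfit = 0.5 * np.sum(w_c * (u_trace - f) ** 2)   # (1/2)||u(phi,g) - f||^2 on Gamma_c
    penalty = 0.5 * alpha * np.sum(w_i * phi ** 2)    # (alpha/2)||phi||^2 on Gamma_i
    return misfit + penalty
```

The Neumann counterpart J_NR of Eq.(8) has the same structure, with the misfit computed from ∂_n u − g and α replaced by β.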
3 Application of genetic algorithms
to inverse problem
3.1 Overview of genetic algorithms
Genetic algorithms, [24], have proven to be eec-
tive in solving a variety of optimization problems.
They are based on the principles of biological evo-
lution and operate as a searching method. A pop-
ulation of chromosomes is used to represent po-
tential solutions and genetic operators are applied
to progressively improve each chromosome, which
becomes the basis for the next generation. This
process continues until the desired number of gen-
erations has been completed or a predened stop-
ping criteria value has been reached.
Genetic algorithms oer a number of advan-
tages over other optimization approaches. First,
they search from a population of solutions instead
of just one. Second, they can use any tness func-
tion, even if it is not continuous. Third, they
use random operators to generate new solutions.
Fourth, they do not need to know anything about
the problem to nd a good solution.
Genetic algorithms typically consist of the following basic elements:
1. Initialization: The genetic algorithm begins
by creating a population of potential solutions
to the problem being solved. This is typically
done by randomly generating a set of individuals, where each individual is a potential
solution represented as a set of genes or chromosomes.
2. Fitness function: The fitness function is used
to evaluate each individual in the population
and assign a fitness score based on how well
it solves the problem being considered. The
fitness score is used to select individuals for
reproduction in the next generation.
3. Selection: The selection process involves
choosing the fittest individuals from the current generation to be parents for the next generation. The individuals are selected using
various techniques, such as roulette wheel selection or tournament selection.
4. Crossover: Crossover is the process of combining genetic material from two parents to
create a new individual in the next generation. This is typically done by selecting two
parents based on their fitness scores and swapping genetic material between them to create
a new individual.
5. Mutation: Mutation introduces random
changes to the genetic material of an individual, leading to potentially new and improved
solutions. It is typically applied to a small
fraction of individuals in the population to
maintain genetic diversity.
6. Termination: The algorithm terminates when
a stopping criterion is met, such as reaching
a desired fitness score or running for a certain
number of generations.
We can summarize these steps in the following diagram, Fig.1.
Figure 1: Flowchart of genetic algorithms.
These elements work together to produce a population of increasingly fit individuals that can be
used to find optimal solutions to a wide range of
problems.
3.2 Genetic operators
In order to address the Cauchy problem associated with the Helmholtz equation, we consider a
real-coded (floating-point) GA (RCGA), which
performs better than a binary-coded GA. Here the
chromosome corresponds to a vector of real parameters, the gene corresponds to a real number,
and the allele corresponds to a real value.
3.2.1 Crossover
This operator is the most important one in
the genetic process: two individuals are selected according to a probability pc [25], [26], producing new offspring. Several crossover operators
have been developed, adapted to the type of encoding used. In this study, we consider the arithmetic crossover operator with real encoding.
Typically, parents are denoted as:

Par^(1) = (Par_1^(1), …, Par_n^(1))
Par^(2) = (Par_1^(2), …, Par_n^(2))        (9)

The representation of the offspring is given by:

Off^(1) = (Off_1^(1), …, Off_n^(1))
Off^(2) = (Off_1^(2), …, Off_n^(2))        (10)

where Off^(i) and Par^(j) represent the i-th offspring and j-th parent, respectively. The variable
n denotes the number of genes in each individual.
In arithmetic crossover, two parents produce
two offspring, which can be expressed using Eq.(11)
given as follows:

Off_i^(1) = α_i Par_i^(1) + (1 − α_i) Par_i^(2)
Off_i^(2) = α_i Par_i^(2) + (1 − α_i) Par_i^(1)        (11)

where α_i represents uniformly distributed random
numbers.
It is important to mention that α_i can be regenerated at each generation; in this case we speak of
non-uniform arithmetic crossover [27].
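As a concrete illustration, Eq.(11) amounts to a few lines of code (a Python/NumPy sketch; the function name and the choice of drawing one α_i per gene are assumptions of this illustration, not prescribed by the paper):

```python
import numpy as np

def arithmetic_crossover(par1, par2, rng):
    """Arithmetic crossover of Eq.(11): each offspring gene is a convex
    combination of the parent genes, with alpha_i ~ U[0, 1] per gene."""
    par1 = np.asarray(par1, dtype=float)
    par2 = np.asarray(par2, dtype=float)
    alpha = rng.random(par1.size)                  # alpha_i, one per gene
    off1 = alpha * par1 + (1.0 - alpha) * par2
    off2 = alpha * par2 + (1.0 - alpha) * par1
    return off1, off2
```

Gene-wise, each offspring lies between its parents, and off1 + off2 = par1 + par2, so the operator never leaves the feasible box [l_i, u_i] spanned by the parents.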
3.2.2 Mutation
The mutation operator is applied to specic ele-
ments of selected chromosomes. If we consider the
selected chromosome at the kth generation, in the
following form:
Off = (Off1, . . . , Of fi, . . . , Offn)
the form of the obtained chromosome, knowing
that Offiis the element to be mutated, is given
by:
Off0= (Off1, . . . , Of f 0
i, . . . , Offn)
Non-uniform mutation is one of the commonly
used mutation operators in real-coded genetic al-
gorithms (RCGAs), [28], [29]. It is dened as fol-
lows:
Off0
i=Offi+δ(k, uiOffi),if τ= 0
Offiδ(k, Offili),if τ= 1
(12)
where, τis a random digit which takes either the
value 0or 1and, liand uiare the upper and
lower bounds of Offi. The function δ(k, y)yields
a value within the range [0, y]and it is designed
such that the likelihood of the value being close to
0 becomes higher as kincreases. The value of the
function δis given as follows:
δ(k, y) = y1η1k
Tb(13)
where,
ηis a uniformly distributed random number
in the interval [0,1],
kis the current generation,
Tis the maximal generation number,
bis a system parameter determining the de-
gree of non-uniformity.
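Eqs.(12)–(13) can be sketched as follows (Python/NumPy; the function names and argument order are choices of this illustration):

```python
import numpy as np

def delta(k, y, T, b, rng):
    """Eq.(13): delta(k, y) = y * (1 - eta**((1 - k/T)**b))."""
    eta = rng.random()
    return y * (1.0 - eta ** ((1.0 - k / T) ** b))

def nonuniform_mutation(off, i, k, lo, hi, T, b, rng):
    """Eq.(12): mutate gene i of chromosome `off` at generation k,
    moving toward the upper bound u_i (tau = 0) or lower bound l_i (tau = 1)."""
    off = np.asarray(off, dtype=float).copy()
    tau = rng.integers(2)
    if tau == 0:
        off[i] += delta(k, hi - off[i], T, b, rng)
    else:
        off[i] -= delta(k, off[i] - lo, T, b, rng)
    return off
```

Because (1 − k/T)^b → 0 as k → T, the exponent of η tends to 0 and δ shrinks toward 0, so mutations become increasingly local in late generations; at k = T the gene is left unchanged.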
3.3 Computation procedure for the GA
Optimization
In this section, we describe the steps involved
in using the proposed genetic algorithm (GA) to
solve the Cauchy problem for the Helmholtz equa-
tion.
The dierent steps of the proposed procedure are
given by:
Step 1: Parameter setting:
- N: population size
- pc: probability of crossover
- pm: probability of mutation
- MaxGen: maximum number of generations

Step 2: Random generation of the initial population φ_p^(0), with p = 0, …, N.

Step 3: Solve the direct problem Eq.(14) below, for each given φ_p^(0), by the finite element
method.

(P_GA)_p:  Δu + κ²u = 0    in Ω
           u = φ_p^(0)     on Γi
           ∂_n u = g       on Γc        (14)
Step 4 : Compute the tness value for each
individual using JDR φ(0)
p(Eq.(7)).
Step 5 : Create the next generation φ(1)
pusing
the GA process given by:
φ(1)
p=Mu.Cr.Se(φ(0)
p)
where:
Se:Random selection,
Cr:Arithmetic Crossover,
Mu:Non-uniform mutation.
Step 6 : Return to step 3 and replace φ(0)
p
with φ(1)
p.
Step 7 : The genetic process continue for φ(m)
p,
m= 1,2,· · · ,MaxGen.
The purpose of this procedure is to establish
the Dirichlet condition on Γi. However, if we want
to determine the Neumann condition instead, we
can modify the procedure by implementing certain
adjustments. Specifically, in Step 2, we should replace φ_p^(0) with (φ_p^(0))′ and in Step 3 (P_GA)_p by
(P_GA)′_p such that:

(P_GA)′_p:  Δu + κ²u = 0        in Ω
            u = f               on Γc
            ∂_n u = (φ_p^(0))′  on Γi        (15)

Finally, in Steps 4 and 5, we need to substitute
Eq.(8) for Eq.(7).
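Steps 1–7 can be sketched end to end in Python. Since the finite element solve of Step 3 is performed in FreeFem++ in the paper, the sketch below stands in a hypothetical linear surrogate A·φ for the trace of u(φ, g) on Γc, and a simple random-reset mutation stands in for the non-uniform mutation of Eq.(12) to keep the sketch short; the parameter values are likewise assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear surrogate for Step 3: the trace of u(phi, g) on
# Gamma_c is modeled as A @ phi (the paper uses a FreeFem++ FE solve).
A = np.array([[1.0, 0.3], [0.2, 1.0], [0.1, 0.4]])
phi_true = np.array([2.0, -1.0])
f = A @ phi_true                         # synthetic Cauchy data on Gamma_c

def j_dr(phi, alpha=1e-5):               # Step 4: fitness J_DR, Eq.(7)
    return 0.5 * np.sum((A @ phi - f) ** 2) + 0.5 * alpha * np.sum(phi ** 2)

# Step 1: parameters;  Step 2: random initial population phi_p^(0)
n_pop, max_gen, pc, pm, lo, hi = 40, 150, 0.9, 0.05, -5.0, 5.0
pop = rng.uniform(lo, hi, (n_pop, 2))

for gen in range(max_gen):               # Steps 3-7: evolve phi_p^(m)
    scores = np.array([j_dr(p) for p in pop])
    new = [pop[scores.argmin()].copy()]  # elitism, as in Section 4
    while len(new) < n_pop:
        # Se: selection by pairwise fitness comparison
        i, j = rng.integers(n_pop, size=2)
        p1 = pop[i].copy() if scores[i] < scores[j] else pop[j].copy()
        i, j = rng.integers(n_pop, size=2)
        p2 = pop[i].copy() if scores[i] < scores[j] else pop[j].copy()
        if rng.random() < pc:            # Cr: arithmetic crossover, Eq.(11)
            a = rng.random()
            p1, p2 = a * p1 + (1 - a) * p2, a * p2 + (1 - a) * p1
        for c in (p1, p2):               # Mu: random-reset mutation (stand-in)
            mask = rng.random(2) < pm
            c[mask] = rng.uniform(lo, hi, mask.sum())
        new += [p1, p2]
    pop = np.array(new[:n_pop])

phi_best = min(pop, key=j_dr)            # best recovered boundary control
```

On this toy surrogate the loop drives J_DR to a small value and recovers φ close to the control that generated the data.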
4 Numerical results and discussion
The aim of this study is to nd an approximation
of the missing Dirichlet and Neumann boundary
conditions. Since we do not know the exact
form of the solution, we will use the polynomial
approximation. In order to illustrate the conver-
gence and the stability of the proposed numerical
method, we solve the Cauchy problem for the
Helmholtz equation by considering two cases of
domains in 2D.
The genetic algorithm used for evolving each
individual population employed the following ge-
netic operators and parameters:
Number of Generations: MaxGen = 200,
Population size: npop = 60,
Crossover operator: Arithmetic Crossover,
with pc= 0.9,
Mutation operator: Non-uniform Mutation,
with pm= 0.01,
Insertion: We consider the principle of elitism
to conserve the best solution in the next gen-
eration.
The experiments were conducted on a machine
with an Intel(R) Core(TM) i7-8565U CPU @
1.80GHz. The implementation of the algorithm was done using the software FreeFem++
[30], which is free software for solving partial
differential equations (PDEs) in R² and R³ using the
finite element method. It is worth noting that the
FreeFem++ language enables the rapid specification of the PDE (the direct problem resulting from
the considered optimization problem) by writing
its variational formulation.
We also investigate the stability of the proposed
algorithm by perturbing the Cauchy data f and g
as follows:

(f_per, g_per) = (1 + νθ)(f, g)        (16)

where ν denotes the noise level and θ is a random number in the range [−1, 1] sampled from a
uniform distribution.
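The perturbation Eq.(16) is straightforward to reproduce (a Python sketch; drawing an independent θ per boundary value, as done here, is one plausible reading of Eq.(16) — a single θ per data vector would be another):

```python
import numpy as np

def perturb(f, g, nu, rng):
    """Multiplicative noise of Eq.(16): (f_per, g_per) = (1 + nu*theta)(f, g),
    with theta uniform in [-1, 1], drawn independently per entry here."""
    f = np.asarray(f, dtype=float)
    g = np.asarray(g, dtype=float)
    theta_f = rng.uniform(-1.0, 1.0, f.shape)
    theta_g = rng.uniform(-1.0, 1.0, g.shape)
    return (1.0 + nu * theta_f) * f, (1.0 + nu * theta_g) * g
```

With ν = 0.01 (1% noise), each data value is scaled by a factor in [0.99, 1.01].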
4.1 First case:
In the rst case, the numerical tests are made on
a unit square domain =]0,1[2(Fig.2), where the
boundary Γ = is divided into two parts:
Γi={(0, y) : 0 < y < 1}
Γc= Γ\Γi
and the exact solution of the problem Eq.(1) with
k2= 5 is given by:
uex(x, y) = exp(2xy)
Figure 2: Unit square with mesh.
Fig.3 presents the analytical solution in the
whole domain.
Figure 3: Analytical solution.
4.1.1 Choice of regularization parameter
Table 1 reveals that as the values of α and β decrease from 1e−01 to 1e−08, the cost functions
J_DR(φ) and J_NR(φ′) also decrease, indicating
a better fit to the data. However, the rate of decrease slows down as we move towards smaller values of α and β, balancing the accuracy of the fit
with the complexity of the solution. The results
indicate that a small amount of regularization is
sufficient to prevent overfitting for both J_DR(φ)
and J_NR(φ′), as the lowest cost is obtained for
α = β = 1e−05.
Fig.4 and Fig.6 illustrate the progressive convergence of the numerical solution towards the analytical solution throughout the iterative process.
Initially, the numerical solution exhibits substantial deviation from the exact solution, but this discrepancy diminishes rapidly with each iteration.
Such behavior highlights the efficacy of the iterative method in effectively resolving the inverse
problem.
Figure 4: Trace of u on Γi.
Figure 5: Objective function J_DR(φ).
Figure 6: The normal derivative of u (∂_n u) on Γi.
Figure 7: Objective function J_NR(φ′) for different
iterations.
Furthermore, Fig.5 and Fig.7 demonstrate the
significant reduction in the objective functions
J_DR(φ) and J_NR(φ′) during the initial iterations.
As the iterations progress, the convergence rate of
the objective function gradually slows down, although it ultimately reaches a low value by iteration k = 200.
α and β  | 1e-01    | 1e-02    | 1e-03    | 1e-04     | 1e-05       | 1e-06    | 1e-07    | 1e-08
J_DR(φ)  | 1.72e-02 | 2.09e-03 | 5.9e-04  | 1.41e-04  | 2.56e-05    | 1.31e-04 | 3.52e-05 | 6.103e-04
J_NR(φ′) | 7.108e-02| 9.31e-03 | 1.36e-03 | 6.604e-04 | 5.10466e-04 | 5.19e-04 | 5.201e-04| 5.48e-04

Table 1: Cost function J_DR(φ) (resp. J_NR(φ′)) for various values of α (resp. β).
4.1.2 Stability of the Proposed Method
Fig.8 and Fig.10 illustrate a comparison between
the numerical solution and the analytical solution
across varying levels of noise in the measurement
data. The numerical solution exhibits a slight deviation from the exact solution for low noise levels. However, this disparity increases when the
noise level reaches high values.
Figure 8: u|Γi for various levels of noise.
Figure 9: J_DR(φ) for various levels of noise.
In Fig.9 and Fig.11, the cost function is
displayed for different noise levels, specifically
ν = 1%, 3%, 5%, 7%. The figures indicate that
as the noise level increases, the cost function
also increases, indicating a less precise fit to the
data. Nevertheless, for low noise levels, the cost
function remains relatively low, suggesting that
the numerical solution still offers a satisfactory fit
to the data.
Figure 10: ∂_n u|Γi for various levels of noise.
Figure 11: J_NR(φ′) for various levels of noise.
4.2 Second case:
In the second case, the numerical tests are performed on a unit disc (Fig.12), where the boundary
of this domain is divided into two parts:

Γi = {(x, y) : x² + y² = 1, y > 0, x > 0}
Γc = Γ \ Γi

and the exact solution of the problem Eq.(1) with
κ² = 2 is given by:

u_ex(x, y) = sin(x) sin(y)
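One can check symbolically that this u_ex satisfies the Helmholtz equation with κ² = 2 (a quick SymPy verification, not part of the paper's pipeline):

```python
import sympy as sp

x, y = sp.symbols("x y")
u = sp.sin(x) * sp.sin(y)                                # exact solution of the second case
residual = sp.diff(u, x, 2) + sp.diff(u, y, 2) + 2 * u   # Delta u + kappa^2 u, kappa^2 = 2
print(sp.simplify(residual))                             # prints 0
```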
Fig.13 shows the analytical solution in the whole
domain.
Figure 12: Unit disc with mesh.
Figure 13: Analytical solution.
4.2.1 Choice of regularization parameter
Table 2 demonstrates that reducing the values of
α and β from 1e−01 to 1e−08 leads to a decrease in the cost functions J_DR(φ) and J_NR(φ′),
indicating an improved fit to the data. However,
as α and β approach smaller values, the rate of
decrease in the cost functions slows down, striking
a balance between fit accuracy and solution complexity. These results suggest that a small degree
of regularization effectively prevents overfitting for
both J_DR(φ) and J_NR(φ′), with the lowest cost
achieved when α = 1e−04 and β = 1e−07.
Figure 14: Trace of u on Γi.
Fig.14 and Fig.16 illustrate the progressive convergence of the numerical solution towards the analytical solution throughout the iterative process.
Initially, the numerical solution exhibits substantial deviation from the exact solution, but this discrepancy diminishes rapidly with each iteration.
Such behavior highlights the efficacy of the iterative method in effectively resolving the inverse
problem.
Figure 15: Objective function J_DR(φ).
Furthermore, Fig.15 and Fig.17 demonstrate
the significant reduction in the objective functions
J_DR(φ) and J_NR(φ′) during the initial iterations.
As the iterations progress, the convergence rate of
the objective function gradually slows down, although it ultimately reaches a low value by iteration k = 200. These observations indicate that the
obtained solution is highly precise and represents
an excellent fit to the available data.
Figure 16: The normal derivative of u (∂_n u) on Γi.
α and β  | 1e-01    | 1e-02    | 1e-03     | 1e-04     | 1e-05    | 1e-06    | 1e-07    | 1e-08
J_DR(φ)  | 4.65e-03 | 6.69e-04 | 1.354e-04 | 2.19e-05  | 5.12e-05 | 3.574e-04| 8.63e-04 | 5.42e-04
J_NR(φ′) | 1.76e-02 | 2.12e-03 | 4.18e-04  | 6.308e-04 | 1.31e-03 | 1.07e-04 | 1.94e-05 | 1.82e-03

Table 2: Cost function J_DR(φ) (resp. J_NR(φ′)) for various values of α (resp. β).
Figure 17: Objective function JN R(φ0)for dier-
ent iterations.
4.2.2 Stability of the Proposed Method
Fig.18 and Fig.20 illustrate a comparison between
the numerical solution and the analytical solution
across varying levels of noise in the measurement
data. For low noise levels, the numerical solution exhibits a slight deviation from the exact
solution. However, this disparity increases for
larger noise levels, in particular for the Neumann
condition near the extremities of the domain.
Figure 18: u|Γi for various levels of noise.
Fig.19 and Fig.21 depict the cost function for various noise levels, specifically ν =
1%, 2%, 3%, 5%. These figures show that as the
noise level increases, the cost function also increases, indicating a decreased level of accuracy in
fitting the data. However, even with higher noise
levels, the cost function remains relatively low, indicating that the numerical solution still provides
a reasonably good fit to the data.
It is important to note that reconstruct-
ing Dirichlet boundary conditions on inaccessible
parts of a boundary is typically more accurate and
reliable than reconstructing Neumann boundary
conditions. This is because Dirichlet boundary
conditions provide more information about the so-
lution than Neumann boundary conditions. The
factors that contribute to this dierence in accu-
racy and reliability can vary, and may depend on
the geometry of the domain being studied or the
regularity of the solution being reconstructed.
Figure 19: J_DR(φ) for various levels of noise.
Figure 20: ∂_n u|Γi for various levels of noise.
Figure 21: J_NR(φ′) for various levels of noise.
5 Conclusion
In this research paper, we address the challeng-
ing ill-posed inverse problem associated with the
Cauchy problem for the Helmholtz equation. Var-
ious optimization methods have been developed
to approximate solutions for such problems. In
this study, we explore the use of genetic algorithms, which have the advantage of not requiring specific regularity assumptions for the underlying functional. To achieve this objective, we
propose an optimization formulation that incor-
porates a Tikhonov regularization term. The ef-
fectiveness of our approach is evaluated through
numerical experiments conducted on both regular
and irregular domains. The results demonstrate
the eciency of the real-coded genetic algorithm,
enhanced with adapted genetic operators, in suc-
cessfully solving the Cauchy problem associated
with the Helmholtz equation. However, as with
any other investigation, the present study has lim-
itations related to computational complexity, the
need for parameter tuning, uncertainties in solu-
tion quality, and sensitivity to initial population.
These limitations create opportunities for future
research exploring the utilization of parallel com-
puting and self-adaptive algorithms.
Acknowledgment:
The authors would like to thank the editors and
the anonymous reviewers for their comments and
suggestions.
References:
[1] Kirsch, A. (2011). An introduction to the
mathematical theory of inverse problems
(Vol. 120). New York: Springer.
[2] Delillo, T., Isakov, V., Valdivia, N. & Wang,
L. (2001). The detection of the source of
acoustical noise in two dimensions. SIAM
Journal On Applied Mathematics. 61,
2104-2121.
[3] Colton, D., Kress, R. & Kress, R. (1998).
Inverse acoustic and electromagnetic
scattering theory.
[4] Hall, W. S., & Mao, X. Q. (1995). A
boundary element investigation of irregular
frequencies in electromagnetic scattering.
Engineering Analysis with Boundary
Elements, 16(3), 245-252.
[5] Hadamard, J. (1923). Lectures on Cauchy’s
problem in linear partial dierential equations
(Vol. 15). Yale university press.
[6] V. Isakov, Inverse Problems for Partial
Dierential Equations, Applied Mathematical
Sciences, vol. 127, Springer-Verlag, New York,
1998.
[7] Marin, L., & Lesnic, D. (2005). The method
of fundamental solutions for the Cauchy
problem associated with two-dimensional
Helmholtz-type equations. Computers &
Structures, 83(4-5), 267-278.
[8] Jin, B., & Marin, L. (2008). The plane wave
method for inverse problems associated with
Helmholtz-type equations. Engineering
Analysis with Boundary Elements, 32(3),
223-240.
[9] Marin, L., Elliott, L., Heggs, P. J., Ingham,
D. B., Lesnic, D., & Wen, X. (2004). BEM
solution for the Cauchy problem associated
with Helmholtz-type equations by the
Landweber method. Engineering Analysis
with Boundary Elements, 28(9), 1025-1034.
[10] Elden, L., Berntsson, F., & Reginska, T.
(2000). Wavelet and Fourier methods for
solving the sideways heat equation. SIAM
Journal on Scientic Computing, 21(6),
2187-2205.
[11] Fu, C. L., Feng, X. L., & Qian, Z. (2009).
The Fourier regularization for solving the
Cauchy problem for the Helmholtz equation.
Applied Numerical Mathematics, 59(10),
2625-2640.
[12] Marin, L., Elliott, L., Heggs, P. J., Ingham,
D. B., Lesnic, D., & Wen, X. (2003).
Conjugate gradient-boundary element
solution to the Cauchy problem for
Helmholtz-type equations. Computational
Mechanics, 31, 367-377.
[13] Marin, L., Elliott, L., Heggs, P. J., Ingham,
D. B., Lesnic, D., & Wen, X. (2004).
Comparison of regularization methods for
solving the Cauchy problem associated with
the Helmholtz equation. International Journal
for Numerical Methods in Engineering,
60(11), 1933-1947.
[14] Marin, L. (2009). Boundary
element–minimal error method for the
Cauchy problem associated with
Helmholtz-type equations. Computational
Mechanics, 44, 205-219.
[15] Yu, C., Zhou, Z., & Zhuang, M. (2008). An
acoustic intensity-based method for
reconstruction of radiated elds. The Journal
of the Acoustical Society of America, 123(4),
1892-1901.
[16] Jin, B., & Zheng, Y. (2005). Boundary knot
method for some inverse problems associated
with the Helmholtz equation. International
Journal for Numerical Methods in
Engineering, 62(12), 1636-1651.
[17] De Jong, K. (1988). Learning with genetic
algorithms: An overview. Machine learning,
3, 121-138.
[18] Kennedy, J., & Eberhart, R. (1995). Particle
swarm optimization. In Proceedings of
ICNN’95-international conference on neural
networks (Vol. 4, pp. 1942-1948). IEEE.
[19] Socha, K., & Dorigo, M. (2008). Ant colony
optimization for continuous domains.
European journal of operational research,
185(3), 1155-1173.
[20] Yang, X. S., & Hossein Gandomi, A. (2012).
Bat algorithm: a novel approach for global
engineering optimization. Engineering
computations, 29(5), 464-483.
[21] Evans, L. C. (2010). Partial Differential
Equations (Vol. 19 of Graduate Studies in
Mathematics). American Mathematical
Society, Providence.
[22] Vogel, C. R. (1996). Non-convergence of the
L-curve regularization parameter selection
method. Inverse problems, 12(4), 535.
[23] Engl, H. W. (1987). Discrepancy principles
for Tikhonov regularization of ill-posed
problems leading to optimal convergence
rates. Journal of optimization theory and
applications, 52, 209-215.
[24] Hayes-Roth, F. (1975). Review of
”Adaptation in Natural and Articial Systems
by John H. Holland”, The U. of Michigan
Press. Acm Sigart Bulletin, (53), 15-15.
[25] Arumugam, M. S., Rao, M. V. C., &
Palaniappan, R. (2005). New hybrid genetic
operators for real coded genetic algorithm to
compute optimal control of a class of hybrid
systems. Applied Soft Computing, 6(1), 38-52.
[26] Kaelo, P., & Ali, M. M. (2007). Integrated
crossover rules in real coded genetic
algorithms. European Journal of Operational
Research, 176(1), 60-76.
[27] Herrera, F., Lozano, M., & Verdegay, J. L.
(1998). Tackling real-coded genetic
algorithms: Operators and tools for
behavioural analysis. Articial intelligence
review, 12, 265-319.
[28] Khouja, M., Michalewicz, Z., & Wilmot, M.
(1998). The use of genetic algorithms to solve
the economic lot size scheduling problem.
European Journal of Operational Research,
110(3), 509-524.
[29] Michalewicz, Z. (1996). Heuristic methods
for evolutionary computation techniques.
Journal of Heuristics, 1, 177-206.
[30] Hecht, F. (2012). New development in
FreeFem++. Journal of numerical
mathematics, 20(3-4), 251-266.
Contribution of Individual Authors to the
Creation of a Scientific Article (Ghostwriting
Policy)
The authors equally contributed in the present research, at all stages from the formulation of the
problem to the final findings and solution.
Sources of Funding for Research Presented in a
Scientific Article or Scientific Article Itself
No funding was received for conducting this study.
Conicts of Interest
The authors have no conicts of interest to
declare that are relevant to the content of this
article.
Creative Commons Attribution License 4.0
(Attribution 4.0 International , CC BY 4.0)
This article is published under the terms of the
Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en_US