Parameter Estimations of Normal Distribution via Genetic Algorithm
and Its Application to Carbonation Depth
SOMCHIT BOONTHIEM1, CHATCHAI SUTIKASANA2,
WATCHARIN KLONGDEE3, WEENAKORN IEOSANURAK3
1Mathematics and Statistics Program, Sakon Nakhon Rajabhat University, Sakon Nakhon, THAILAND
2Logistics Department, Faculty of Business Administration and Information Technology,
Rajamangala University of Technology Isan Khonkaen Campus, THAILAND
3Department of Mathematics, Faculty of Science, Khon Kaen University, THAILAND
*Corresponding author
Abstract: - In this paper, we propose a method for estimating the parameters of the Normal distribution using a genetic algorithm. The main purpose of this research is to identify the most efficient of three estimators for the Normal distribution: the maximum likelihood method (ML), the least square method (LS), and the genetic algorithm (GA). The comparison uses numerical simulation and three real data sets of carbonation depth of concrete girder bridges, and is based on performance measures such as the root mean square error (RMSE), the Kolmogorov-Smirnov test, and the Chi-squared test. Simulation studies are conducted to evaluate the performance of the estimators, and a statistical analysis of the real data sets is provided. The numerical results, in particular the χ2 values, show that the genetic algorithm performs better than the other methods for both actual and simulated data unless the sample size is small.
Key-Words: - Normal distribution, Parameter estimation, Maximum likelihood method, Genetic algorithm, The
Least square method, Carbonation depth
Received: July 11, 2022. Revised: January 12, 2023. Accepted: February 8, 2023. Published: March 2, 2023.
1 Introduction
The Normal distribution, or Gaussian distribution, plays an important role in several fields of mathematics and its applications. The Normal distribution has been widely used to describe the probability distribution of carbonation depth, [1],[2],[3],[4]. Carbonation depth is a key deterioration factor for determining the durability of concrete structures, and its characterization is essential for the carbonation reliability analysis of concrete girder bridges. Carbonation depth is usually employed as a deterministic coefficient in the carbonation service life prediction of existing concrete girder bridges.
Several estimation methods, [5],[6],[7], have been proposed for estimating these parameters. The authors in [7] used the Markov Chain Monte Carlo (MCMC) method to estimate the parameters. Li, Yan, Wang and Hou, [6], proposed two parameter estimation methods for the Normal distribution, the least square method and the Bayesian quantile method, and found that the least square method performed best. The genetic algorithm, introduced by Holland, [8], is a population-based optimization method. It finds approximate solutions to optimization problems and is widely used in several fields, [9],[10]. In parameter estimation, some researchers, [11],[12],[13], studied methods for finding parameters using genetic algorithms. The authors in [11] studied genetic algorithms and proposed a new genetic algorithm; they found that genetic algorithms are effective in improving performance indicators. The authors in [12] used a genetic algorithm (GA) to find estimators of the Skew Normal distribution and found that the GA performs well where traditional search techniques fail. In this paper, we study the genetic algorithm, a well-known search technique inspired by the process of evolution observed in nature.
The main purpose of this research is to identify the most efficient of the three estimators for the Normal distribution using actual and simulated data. The rest of this paper is organized as follows: Section 2 gives a short introduction to the Normal distribution, and Section 3 describes the parameter estimation methods together with the accuracy judgment criteria. The performances of all the methods are compared via a detailed simulation study in Section 4. The three parameter estimation methods are applied to three real data sets of carbonation depth of concrete girder bridges in Section 5. Finally, the main conclusions of this study are summarized in the last section.
2 Normal Distribution
The Normal distribution is also called Gaussian dis-
tribution.
A random variable X has a Normal distribution if its probability density function is defined by Equation (1).
The probability density function (pdf) of the Normal distribution can be written as

f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}},    (1)

where µ and σ are the location parameter and the scale parameter, respectively.
The cumulative distribution function (CDF) of the Normal distribution is given by

F(x) = \int_{-\infty}^{x} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{t-\mu}{\sigma}\right)^{2}}\, dt,    (2)

or

F(x) = \Phi\left(\frac{x-\mu}{\sigma}\right), \quad \text{where} \quad \Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\frac{t^{2}}{2}}\, dt.
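As a minimal numerical companion to Equations (1) and (2), the following Python sketch evaluates the pdf and CDF of the Normal distribution; it assumes NumPy and SciPy are available and simply uses SciPy's standard Normal CDF in the role of Φ.

    import numpy as np
    from scipy.stats import norm

    def normal_pdf(x, mu, sigma):
        # Equation (1): (1 / (sigma * sqrt(2*pi))) * exp(-0.5 * ((x - mu) / sigma)**2)
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    def normal_cdf(x, mu, sigma):
        # Equation (2): F(x) = Phi((x - mu) / sigma), with Phi evaluated by SciPy
        return norm.cdf((x - mu) / sigma)

    # Quick consistency check against SciPy's reference implementation
    x = np.linspace(-3.0, 3.0, 7)
    assert np.allclose(normal_pdf(x, 0.0, 1.0), norm.pdf(x))
    assert np.allclose(normal_cdf(x, 0.0, 1.0), norm.cdf(x))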
3 Estimation Methods
In this section, the three considered estimation methods are described for obtaining estimates of the parameters µ and σ of the Normal distribution.
3.1 Maximum Likelihood Method
Let x1, x2, . . . , xn be observed values of X1, X2, . . . , Xn, n independent random variables having the Normal distribution with parameters µ and σ.
The likelihood function of the sample, denoted by L(µ, σ | x1, x2, . . . , xn), is given by

L(\mu,\sigma \mid x_1,\dots,x_n) = \prod_{i=1}^{n} f(x_i) = \prod_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x_i-\mu}{\sigma}\right)^{2}}.

Taking the natural logarithm, we get

\ln L(\mu,\sigma \mid x_1,\dots,x_n) = -\frac{n}{2}\ln(2\pi\sigma^{2}) - \frac{1}{2\sigma^{2}}\sum_{i=1}^{n}(x_i-\mu)^{2}.    (3)

To obtain the maximum likelihood estimators, we maximize Equation (3): the partial derivatives of ln L(µ, σ | x1, x2, . . . , xn) with respect to each parameter are taken and set equal to 0 as follows:

\frac{\partial}{\partial\mu}\ln L(\mu,\sigma \mid x_1,\dots,x_n) = 0, \qquad \frac{\partial}{\partial\sigma}\ln L(\mu,\sigma \mid x_1,\dots,x_n) = 0.

The maximum likelihood estimators of µ and σ, denoted by \hat{\mu}_{ML} and \hat{\sigma}_{ML}, respectively, are given by

\hat{\mu}_{ML} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \hat{\sigma}_{ML} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})^{2}},

where \bar{x} is the sample mean.
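A minimal sketch of these closed-form estimators, assuming NumPy, is given below; the scale estimate uses the (n - 1) denominator exactly as written above.

    import numpy as np

    def normal_mle(x):
        # Closed-form estimators of Section 3.1: the sample mean and the square root
        # of the sum of squared deviations divided by (n - 1), as written in the text.
        x = np.asarray(x, dtype=float)
        mu_hat = x.mean()
        sigma_hat = np.sqrt(np.sum((x - mu_hat) ** 2) / (len(x) - 1))
        return mu_hat, sigma_hat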
3.2 Least Square Method
Let X1, X2, . . . , Xn be n independent random variables having the Normal distribution with parameters µ and σ, and suppose that X_{(1)} \le X_{(2)} \le \dots \le X_{(n)} are the order statistics. The empirical distribution function of X is denoted by

F_n(x) = \begin{cases} 0, & x < X_{(1)}, \\ \frac{k}{n}, & X_{(k)} \le x < X_{(k+1)},\ k = 1,2,\dots,n-1, \\ 1, & x \ge X_{(n)}. \end{cases}    (4)

The cumulative distribution function F(x) is calculated by

F(x) = \frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\frac{1}{2}\left(\frac{t-\mu}{\sigma}\right)^{2}}\, dt.    (5)

The Normal CDF F(x) can be written as

F(x) = \Phi\left(\frac{x-\mu}{\sigma}\right),    (6)

where Φ is approximated by the truncated Taylor series

\Phi(x) \approx \frac{1}{2} + \frac{1}{\sqrt{2\pi}}\sum_{k=0}^{N}\frac{(-1)^{k} x^{2k+1}}{2^{k} k! (2k+1)}.

For the Normal distribution, the least square estimates \hat{\mu} and \hat{\sigma} of the parameters µ and σ, respectively, are obtained by minimizing the function

E(\mu,\sigma) = \sum_{i=1}^{n}\bigl(F(x_i) - F_n(x_i)\bigr)^{2},    (7)

where F(x_i) and F_n(x_i) are obtained from Equations (5) and (4), respectively, by solving the following equations:

\frac{\partial}{\partial\mu}E(\mu,\sigma) = 0, \qquad \frac{\partial}{\partial\sigma}E(\mu,\sigma) = 0.

Denote

A(x_i,\mu,\sigma) = \sum_{k=0}^{N}\frac{(-1)^{k}\left(\frac{x_i-\mu}{\sigma}\right)^{2k}}{2^{k} k!}
\quad\text{and}\quad
B(x_i,\mu,\sigma) = \sum_{k=0}^{N}\frac{(-1)^{k+1}\left(\frac{x_i-\mu}{\sigma}\right)^{2k}}{2^{k} k! (2k+1)},

and let \hat{\mu}, \hat{\sigma} be the estimators of the parameters µ and σ, respectively. After some algebraic manipulation, the estimators satisfy the following equations:

\hat{\sigma} = \frac{\sum_{i=1}^{n} F_n(x_i)\, A(x_i,\mu,\sigma)}{\sum_{i=1}^{n} F(x_i)\,\frac{1}{\sigma}\, A(x_i,\mu,\sigma)},
\qquad
\hat{\mu} = \frac{\sum_{i=1}^{n}\bigl(F(x_i)-F_n(x_i)\bigr)\, x_i\, A(x_i,\mu,\sigma)}{\sum_{i=1}^{n}\bigl(F(x_i)-F_n(x_i)\bigr)\, A(x_i,\mu,\sigma)}.

These equations are solved iteratively.
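A minimal numerical sketch of this least square fit is given below. It assumes NumPy and SciPy, uses the exact Normal CDF in place of the truncated Taylor expansion, and minimizes Equation (7) with a general-purpose optimizer instead of iterating the two equations above, so it is an illustrative variant rather than the exact procedure.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def normal_ls(data):
        # Least square estimates: minimize E(mu, sigma) of Equation (7), the squared
        # distance between the model CDF and the empirical CDF F_n of Equation (4).
        x = np.sort(np.asarray(data, dtype=float))
        n = len(x)
        Fn = np.arange(1, n + 1) / n        # F_n evaluated at the order statistics

        def objective(theta):
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)        # reparameterize to keep sigma > 0
            return np.sum((norm.cdf(x, loc=mu, scale=sigma) - Fn) ** 2)

        start = np.array([x.mean(), np.log(x.std(ddof=1))])
        res = minimize(objective, start, method="Nelder-Mead")
        return res.x[0], np.exp(res.x[1])    # (mu_hat, sigma_hat)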
3.3 Genetic Algorithm
The main steps of the genetic algorithm (GA) are selection, crossover, and mutation. In GA, each chromosome (an individual in the population; here, a vector of parameters) represents a possible solution to the problem and is composed of a string of genes. Kalra and Singh, [14], proposed a pseudo code of GA for optimization of scheduling problems, which we adapt as follows:

Procedure GA
Determine the number of generations (Npop = 10000) and the mutation rate (MR = 0.5). Each chromosome has 2 genes (µ and σ).
1. Initialization: Generate an initial population P consisting of N = 100 chromosomes. Every gene represents a parameter (variable) of the solution; the collection of parameters that forms a solution is a chromosome, and the population is a collection of chromosomes. Assume that the initial population P(1) is denoted by

P(1) = [w_1^{(1)}, w_2^{(1)}, \dots, w_N^{(1)}],

where w_i^{(1)} = [\mu^{(1)}, \sigma^{(1)}]^{t} is a vector of parameters for i = 1, 2, . . . , N. The vector w_i^{(m)}, i = 1, 2, . . . , N, m = 1, 2, . . . , Npop, represents the values of the ith chromosome in the population at the mth iteration.
2. Fitness: Calculate the fitness value of each chromosome using a fitness function. In this study, the fitness value f_i^{(m)} of the ith chromosome at the mth iteration, i = 1, 2, . . . , N, m = 1, 2, . . . , Npop, is taken as the reciprocal of the chromosome's objective (error) value.
3. Selection: Select the chromosomes for producing the next generation using the selection operator; the worst chromosomes are replaced by new chromosomes generated randomly from the search space. We used roulette wheel selection, in which the probability of choosing chromosome i is

p_i = \frac{f_i^{(m)}}{\sum_{j=1}^{N} f_j^{(m)}},

where f_i^{(m)} is the fitness value of the ith chromosome in the population at the mth iteration.
4. Crossover: Perform the crossover operation on the pairs of chromosomes obtained in step 3.
5. Mutation: Perform the mutation operation on the chromosomes. Gene k, for k = 1, 2, is mutated as follows: draw u_k ~ U(0, 1); if u_k < MR, then mutate that gene of chromosome i.
6. Replacement: Update the population P(m), m = 2, 3, . . ., by replacing bad solutions with better chromosomes from the offspring.
7. Repeat steps 3 to 6 until the stopping condition is met. The stopping condition may be the maximum number of iterations or no change in the fitness value of the chromosomes for consecutive iterations.
8. Output the best chromosome (the best parameters) as the final solution.
End Procedure
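A compact Python sketch of this procedure is given below. The pseudo code above does not fix every operator, so several choices here are assumptions: the fitness is taken as the reciprocal of the least square error E(µ, σ) of Equation (7), the crossover is an arithmetic blend, the mutation adds Gaussian noise, the replacement is elitist, and the number of generations is reduced for speed.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def ls_error(x_sorted, Fn, mu, sigma):
        # Objective of Equation (7): squared distance between model CDF and empirical CDF.
        return np.sum((norm.cdf(x_sorted, loc=mu, scale=sigma) - Fn) ** 2)

    def ga_estimate(data, pop_size=100, generations=500, mutation_rate=0.5):
        x = np.sort(np.asarray(data, dtype=float))
        Fn = np.arange(1, len(x) + 1) / len(x)
        mu0, s0 = x.mean(), x.std(ddof=1)

        # 1. Initialization: each chromosome is a (mu, sigma) pair drawn around the data.
        pop = np.column_stack([
            rng.uniform(mu0 - 3 * s0, mu0 + 3 * s0, pop_size),
            rng.uniform(1e-3, 3 * s0, pop_size),
        ])

        for _ in range(generations):
            # 2. Fitness: reciprocal of the least square error (assumed fitness function).
            errors = np.array([ls_error(x, Fn, mu, s) for mu, s in pop])
            fitness = 1.0 / (errors + 1e-12)

            # 3. Selection: roulette wheel, probability proportional to fitness.
            probs = fitness / fitness.sum()
            idx = rng.choice(pop_size, size=(pop_size, 2), p=probs)
            parents = pop[idx]                      # shape (pop_size, 2 parents, 2 genes)

            # 4. Crossover: arithmetic blend of each parent pair (illustrative operator).
            w = rng.uniform(size=(pop_size, 1))
            offspring = w * parents[:, 0, :] + (1.0 - w) * parents[:, 1, :]

            # 5. Mutation: perturb each gene with probability mutation_rate.
            mask = rng.uniform(size=offspring.shape) < mutation_rate
            offspring += mask * rng.normal(0.0, 0.1 * s0, size=offspring.shape)
            offspring[:, 1] = np.abs(offspring[:, 1]) + 1e-6   # keep sigma strictly positive

            # 6. Replacement: elitism, carry the current best chromosome forward.
            offspring[0] = pop[np.argmax(fitness)]
            pop = offspring

        errors = np.array([ls_error(x, Fn, mu, s) for mu, s in pop])
        best_mu, best_sigma = pop[np.argmin(errors)]
        return best_mu, best_sigma                  # 8. best parameters found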
3.4 Goodness-of-Fit
To show how well a theoretical probability function matches the observed data, three kinds of statistical errors are considered as goodness-of-fit measures.
Generally, the smaller the errors, the better the fit. Let n be the number of data points and k be the number of classes, calculated by Sturges' formula,

k = 1 + 3.322 \log_{10} n.

The first criterion is the root mean square error (RMSE), defined as

RMSE = \sqrt{\frac{1}{k}\sum_{i=1}^{k}(O_i - E_i)^{2}},    (8)

where O_i is the observed value in the ith class and E_i is the corresponding value computed from the fitted distribution.
The second criterion is the Kolmogorov-Smirnov test statistic (KS), defined as the maximum error between the CDFs,

KS = \max_{x}\,|F_n(x) - F(x)|,    (9)

where F_n(x) is the empirical cumulative distribution function and F(x) is the CDF of the Normal distribution.
The third judgment criterion is the Chi-squared test statistic, given by

\chi^{2} = \sum_{i=1}^{k}\frac{(O_i - E_i)^{2}}{E_i}.    (10)
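The three criteria can be computed as in the following sketch, assuming NumPy and SciPy. How the classes are formed and how O_i and E_i are normalized is not fully specified above, so the equal-width binning and expected class counts used here are one reasonable reading.

    import numpy as np
    from scipy.stats import norm

    def goodness_of_fit(data, mu, sigma):
        x = np.sort(np.asarray(data, dtype=float))
        n = len(x)
        k = int(np.ceil(1 + 3.322 * np.log10(n)))      # Sturges' formula

        # Observed and expected counts per class (one reading of O_i and E_i)
        edges = np.linspace(x.min(), x.max(), k + 1)
        observed, _ = np.histogram(x, bins=edges)
        expected = n * np.diff(norm.cdf(edges, loc=mu, scale=sigma))

        rmse = np.sqrt(np.mean((observed - expected) ** 2))       # Equation (8)
        chi2 = np.sum((observed - expected) ** 2 / expected)      # Equation (10)

        # Equation (9): largest gap between empirical and model CDF at the data points
        Fn = np.arange(1, n + 1) / n
        ks = np.max(np.abs(Fn - norm.cdf(x, loc=mu, scale=sigma)))
        return rmse, ks, chi2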
4 A Simulation Study
In this section, a simulation study was performed to compare the performance of the different methods discussed in Section 3. 2000 random samples of sizes n = 10, 20, 30, 50, 100, 500, 1000, and 2000 were generated from the Normal distribution. Since any Normal distribution data can be standardized to have a location parameter of 0 and a scale parameter of 1, only samples with parameters µ = 0 and σ = 1 were generated.
In order to compare the goodness-of-fit of various pdfs to sample data, several statistics have been used in related studies. The most frequently used ones are the root mean square error (RMSE), [15],[16], the Kolmogorov-Smirnov test (KS), [15],[17], and the Chi-squared test (χ2), [15]. The RMSE is most useful when large errors are particularly undesirable. The Kolmogorov-Smirnov test has the advantage of considering the distribution functions collectively. Advantages of the Chi-squared test include its robustness with respect to the data distribution and its ease of calculation.
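A hypothetical driver for this simulation, assuming NumPy and reading the text as 2000 replications per sample size, is sketched below for the ML estimator; the LS and GA estimators and the goodness-of-fit criteria of Section 3.4 would be plugged into the same loop.

    import numpy as np

    rng = np.random.default_rng(1)
    sample_sizes = [10, 20, 30, 50, 100, 500, 1000, 2000]
    n_rep = 2000   # assumed: 2000 replications for each sample size

    for n in sample_sizes:
        mu_hats, sigma_hats = [], []
        for _ in range(n_rep):
            sample = rng.normal(loc=0.0, scale=1.0, size=n)   # standardized case mu = 0, sigma = 1
            mu_hats.append(sample.mean())                      # ML estimator of mu (Section 3.1)
            sigma_hats.append(sample.std(ddof=1))              # scale estimator as written in Section 3.1
        print(f"n = {n:5d}: mean mu_hat = {np.mean(mu_hats):+.4f}, "
              f"mean sigma_hat = {np.mean(sigma_hats):.4f}")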
The results of the simulation study are presented in Tables 1 and 2.

 n     Parameter/criterion      ML          LS          GA
 10    µ                      0.5180      8.1768      0.6112
       σ                      0.8437      0.3664      0.8263
       RMSE                   0.7920      1.0227      0.7729
       KS test                0.9939      4.7753      1.2699
       χ2                     4.9517     16.5301      4.7405
 20    µ                      0.2101     -7.6186      0.2315
       σ                      0.9042      5.0261      0.9481
       RMSE                   0.7993      1.3470      0.7919
       KS test                0.7447      8.5721      1.0788
       χ2                     3.7719     19.3985      3.7027
 30    µ                      0.1314      0.1274      0.1330
       σ                      0.8238      1.1688      0.9273
       RMSE                   1.4151      1.5568      1.4122
       KS test                2.0833      3.7360      2.1236
       χ2                    13.7865     15.2259     12.9437
 50    µ                      0.0932      4.7193      0.0890
       σ                      0.9458      8.1408      1.0836
       RMSE                   1.2694      2.4223      1.4442
       KS test                1.7180     15.7167      3.2420
       χ2                    13.6696     25.6064      9.7582
 100   µ                     -0.0520     -2.8979     -0.0821
       σ                      0.9920      3.3682      1.0383
       RMSE                   1.6756      3.7792      1.6742
       KS test                2.6334     30.4071      1.9056
       χ2                     9.0434     46.5958      8.5079
 500   µ                      0.0236      0.2656      0.0276
       σ                      1.0694      1.2600      1.0831
       RMSE                   4.4704      7.1210      4.4631
       KS test                5.0129     51.8562      5.9734
       χ2                    17.2298     48.2707     16.9679
Table 1: Comparison of the estimation methods for n = 10, 20, 30, 50, 100, and 500.

The following conclusions can be drawn:
1. The estimators fluctuate around the true values of the parameters, sometimes exceeding them and sometimes falling below them. The biases of the maximum likelihood and genetic algorithm estimates tend to zero for large n.
2. As the sample size increases, the estimates of µ and σ generally approach their true values, and an increase in the sample size of the simulated Normal data generally results in an improvement of the three methods, although the raw RMSE, KS test, and χ2 values increase with the sample size.
3. The difference between ML and GA is very small for small sample sizes (n < 30) and slightly larger for bigger sample sizes (n ≥ 30).
4. The LS values are higher than those of the other methods.
Moreover, we found that:
1. The ML is a commonly used method for parameter estimation because it is simple and fast.
2. The LS is an iterative method, so the quality of the estimate depends on the initial parameter values. The genetic algorithm is also iterative, but its performance is better than that of ML and LS, while ML performs better than LS.
 n      Parameter/criterion      ML          LS          GA
 1000   µ                      0.0119      8.6916     -0.0064
        σ                      1.0588      7.3159      1.0875
        RMSE                   3.9538     28.6166      4.1776
        KS test                6.9662    262.3597      5.2007
        χ2                    22.8076    383.9043     18.9865
 2000   µ                     -0.0003     -0.2052     -0.0088
        σ                      1.0471      1.5086      1.0568
        RMSE                   5.6892     32.7658      5.5773
        KS test               14.8244     98.7142      6.0461
        χ2                    19.3856    358.0330     18.4112
Table 2: Comparison of the estimation methods for n = 1000 and 2000.
3. According to χ2, the genetic algorithm has a smaller χ2 value than the other methods.
Therefore, ML and GA show nearly identical performance for estimating the µ and σ parameters of the Normal distribution unless the sample size is large. For large sample sizes, the GA performs better than the other methods considered here, namely the ML and LS methods.
5 Application to Carbonation Depth
In this section, the parameter estimation methods described in Section 3 are applied to real carbonation-depth data. Three real data sets of carbonation depth are analyzed to compare the three considered estimation methods for the Normal distribution.
The first data set represents 12 measurements of the
carbonation depth of a reinforced concrete girder
bridge [6]: 12.5, 13.2, 13.9, 14.1, 14.3, 14.6, 14.9,
15, 15.3, 15.7, 16.4, 17.1 mm.
The second data set represents 18 measurements of
the carbonation depth of the Chorng-ching Viaduct
[2]: 8, 11, 15, 15, 17, 18, 20, 22, 22, 26, 28, 30, 30,
31, 33, 38, 38, 40 mm.
The third data set represents 27 measurements of the
carbonation depth of pier of a reinforced concrete
girder bridge [18]: 2, 2.1, 2.2, 2.3, 2.3, 2.3, 2.4, 2.5,
2.6, 2.7, 2.8, 2.9, 3.0, 3.2, 3.2, 3.3, 3.3, 3.3, 3.4, 3.4,
3.4, 3.5, 3.5, 3.6, 3.7, 3.8, 3.9 mm.
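As an illustration, the first data set can be fed to the estimator and goodness-of-fit sketches from Sections 3 and 3.4 as follows; because some operator and binning choices in those sketches are assumptions, the printed values need not reproduce Tables 3-5 exactly.

    # Assumes normal_mle, normal_ls, ga_estimate, and goodness_of_fit from the earlier sketches.
    depth_mm = [12.5, 13.2, 13.9, 14.1, 14.3, 14.6, 14.9,
                15.0, 15.3, 15.7, 16.4, 17.1]   # first data set, in mm

    for name, estimator in [("ML", normal_mle), ("LS", normal_ls), ("GA", ga_estimate)]:
        mu_hat, sigma_hat = estimator(depth_mm)
        rmse, ks, chi2 = goodness_of_fit(depth_mm, mu_hat, sigma_hat)
        print(f"{name}: mu = {mu_hat:.4f}, sigma = {sigma_hat:.4f}, "
              f"RMSE = {rmse:.4f}, KS = {ks:.4f}, chi2 = {chi2:.4f}")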
Method     µ          σ          RMSE      KS test     χ2
ML       14.7500     1.2923     0.7851     0.7174     5.1521
LSM      14.5703     1.2197     0.7954     0.2285     5.9343
GA       14.8241     1.2976     0.7861     0.9394     5.0878
Table 3: Parameter estimates, RMSE, KS test, and Chi-squared test for the first data set.

Method     µ          σ          RMSE      KS test     χ2
ML       24.5556     9.5808     1.1323     1.4957    11.7405
LSM      23.5642    10.6848     1.1626     1.1159    12.7121
GA       14.8356     1.2942     0.7865     0.9664     5.0868
Table 4: Parameter estimates, RMSE, KS test, and Chi-squared test for the second data set.

Method     µ          σ          RMSE      KS test     χ2
ML        2.9852     0.5702     1.5836     2.7463    15.3773
LSM       2.9697     0.6770     1.5492     2.2868    15.0384
GA        3.0193     0.6512     1.5134     2.3795    14.6748
Table 5: Parameter estimates, RMSE, KS test, and Chi-squared test for the third data set.
Tables 3, 4, and 5 show the estimates of the µ and σ parameters of the Normal distribution, together with the RMSE, KS test, and χ2 values, for the real carbonation-depth data. According to χ2, the genetic algorithm yields a smaller χ2 value than the other methods. According to the KS test, the least square method yields a smaller KS value than the other methods. According to the RMSE, the genetic algorithm yields a smaller RMSE value than the other methods.
The results indicate that the genetic algorithm is better than the other methods in terms of the RMSE and χ2 values. Hence, for the given real data sets of carbonation depth, we conclude that the genetic algorithm is the best among the three considered estimation methods.
6 Conclusions
We proposed a parameter estimation method, based on the genetic algorithm, for estimating the parameters of the Normal distribution. The proposed method and the most common estimation methods were applied to real data sets. We compared the performance of the three methods for the Normal distribution through a simulation study and three real data sets of carbonation depth. From both the simulated and the real data sets, we conclude that all the methods show broadly comparable performance for estimating the parameters of the Normal distribution, but the genetic algorithm performs better than the other methods, namely the maximum likelihood method and the least square method. In future work, we will adjust the genetic algorithm for parameter estimation.
Acknowledgements
This research is supported by Department of Math-
ematics, Faculty of Science, Khon Kaen University,
Fiscal Year 2022.
References:
[1] P. Benítez, F. Rodrigues, S. Gavilán, H. Varum, A. Costa, Carbonated structures in Paraguay: Durability strategies for maintenance planning, Procedia Struct., Vol. 11, 2018, pp. 60-67.
[2] M. T. Liang, R. Huang, S. A. Fang, Carbonation service life prediction of existing concrete viaduct/bridge using time-dependent reliability analysis, Journal of Marine Science and Technology, Vol. 21, No. 1, 2013, pp. 94-104.
[3] F. Lollini, E. Redaelli, L. Bertolini, Analysis of the parameters affecting probabilistic predictions of initiation time for carbonation-induced corrosion of reinforced concrete structures, Materials and Corrosion, Vol. 63, No. 12, 2012, pp. 1059-1068.
[4] U. J. Na, S. Kwon, S. R. Chaudhuri, M. Shinozuka, Stochastic model for service life prediction of RC structures exposed to carbonation using random field simulation, KSCE Journal of Civil Engineering, Vol. 16, No. 1, 2012, pp. 133-143.
[5] M. Cai, J. Yang, Parameter estimation of network signal normal distribution applied to carbonization depth in wireless networks, EURASIP Journal on Wireless Communications and Networking, Vol. 2020, No. 1, 2020, pp. 1-15.
[6] Y. Li, L. Yan, L. Wang, W. Hou, Estimation of normal distribution parameters and its application to carbonation depth of concrete girder bridges, Discrete & Continuous Dynamical Systems - S, Vol. 12, No. 4&5, 2018, pp. 1091-1100.
[7] S. Tasaka, M. Shinozuka, S. Ray Chaudhuri, U. J. Na, Bayesian inference for prediction of carbonation depth of concrete using MCMC, Mem Akashi Tech Coll, Vol. 52, 2009, pp. 45-50.
[8] J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, MIT Press, 1992.
[9] S. Mirjalili, Genetic Algorithm, in: Evolutionary Algorithms and Neural Networks, Studies in Computational Intelligence, Vol. 780, 2019, pp. 43-55.
[10] S. Katoch, S. S. Chauhan, V. Kumar, A review on genetic algorithm: past, present, and future, Multimedia Tools and Applications, Vol. 80, No. 5, 2021, pp. 8091-8126.
[11] A. Arias-Rosales, R. Mejía-Gutiérrez, Optimization of V-Trough photovoltaic concentrators through genetic algorithms with heuristics based on Weibull distributions, Applied Energy, Vol. 212, 2018, pp. 122-140.
[12] A. Yalçınkaya, B. Şenoğlu, U. Yolcu, Maximum likelihood estimation for the parameters of skew normal distribution using genetic algorithm, Swarm and Evolutionary Computation, Vol. 38, 2018, pp. 127-138.
[13] M. Wadi, W. Elmasry, Modeling of wind energy potential in Marmara region using different statistical distributions and genetic algorithms, 2021 International Conference on Electric Power Engineering - Palestine (ICEPE-P), 2021, pp. 1-7.
[14] M. Kalra, S. Singh, A review of metaheuristic scheduling techniques in cloud computing, Egyptian Informatics Journal, Vol. 16, No. 3, 2015, pp. 275-295.
[15] T. P. Chang, Estimation of wind energy potential using different probability density functions, Applied Energy, Vol. 88, No. 5, 2011, pp. 1848-1856.
[16] T. B. M. J. Ouarda, C. Charron, J.-Y. Shin, P. R. Marpu, A. H. Al-Mandoos, M. H. Al-Tamimi, H. Ghedira, T. N. Al Hosary, Probability distributions of wind speed in the UAE, Energy Conversion and Management, Vol. 93, 2015, pp. 414-434.
[17] M. Y. Sulaiman, A. M. Akaak, M. A. Wahab, A. Zakaria, Z. A. Sulaiman, J. Surad, Wind characteristics of Oman, Energy, Vol. 27, No. 1, 2002, pp. 35-46.
[18] X. Guan, D. T. Niu, J. B. Wang, Carbonation service life prediction of coal boardwalks bridges based on durability testing, Journal of Xi'an University of Architecture and Technology, Vol. 47, 2015, pp. 71-76.
Contribution of individual authors to
the creation of a scientific article
(ghostwriting policy)
Somchit Boonthiem: Investigation, Visualization and
Writing - original draft.
Chatchai Sutikasana: Writing - review & editing.
Watcharin Klongdee: Validation and Writing - review
& editing.
Weenakorn Ieosanurak: Project administration,
Methodology, Conceptualization, Visualization and
Writing - review & editing.
Sources of Funding for Research Presented in a Scientific Article or Scientific Article Itself
This research is supported by the Department of Mathematics, Faculty of Science, Khon Kaen University, Fiscal Year 2022.

Conflict of Interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Creative Commons Attribution License 4.0 (Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en_US