Development of a New Numerical Conjugate Gradient Technique for
Image Processing
HAWRAZ N. JABBAR1, YOKSAL A. LAYLANI1, ISSAM A.R. MOGHRABI2,
BASIM A. HASSAN3
1Department of Mathematics, College of Sciences,
University of Kirkuk,
IRAQ
2Department of Computer Science, College of Arts and Sciences,
University Central Asia, Naryn,
KYRGYZ REPUBLIC
3Department of Mathematics, College of Computers Sciences and Mathematics,
University of Mosul,
IRAQ
Abstract: - We present a new iterative conjugate gradient technique for image processing. The technique is
based on a new derivation of the conjugacy coefficient and develops a variant of the classical Fletcher-Reeves
conjugate gradient method. The derivation exploits a quadratic function model. The new method is intended to
minimize the presence of noise by utilizing the adaptive median filter (AMF) to reduce salt-and-pepper noise,
while the adaptive center-weighted median filter (ACWMF) is used to reduce random-valued noise. The
theoretical convergence properties of the method are proven and then tested on a basic set of images using
MATLAB. The results show that the proposed algorithm is more efficient than the classical Fletcher-Reeves
(FR) method, as measured by the peak signal-to-noise ratio (PSNR). The number of iterations and the number of
function evaluations are also lower for the proposed method. The favorable performance of the new algorithm
provides promise for deriving similar techniques that enhance the speed and efficiency of image-processing
libraries.
Key-Words: - Conjugate Gradient methods, image restoration, impulse noise reduction.
Received: June 7, 2022. Revised: September 11, 2023. Accepted: October 13, 2023. Available online: November 13, 2023.
1 Introduction
Extensive research and practical application have
been devoted to the field of image restoration across
various scientific and engineering domains. This
field focuses on the restoration of an image from
a degraded observation. For instance, pictures
captured by telescopes and satellites often suffer
from degradation caused by air turbulence.
Moreover, images frequently encounter noise
originating from environmental effects, transmission
channels, and other associated elements throughout
the processes of acquisition, resizing, and
communication. Consequently, these factors
adversely affect the image quality, resulting in
distortion and loss of valuable information.
Moreover, noise can have a detrimental impact on
subsequent image-processing tasks, including image
analysis, image tracking, and video processing.
Therefore, image cleansing plays a pivotal role in
contemporary image processing systems.
The objective of image cleansing is to restore the
original image quality by reducing the presence of
noise in noisy images. Nevertheless, this task
presents a challenge due to the difficulty in
distinguishing between noise, edges, and textures, as
these elements often possess high-frequency
characteristics. Consequently, throughout the
cleansing process, restored images may
inadvertently lose certain significant details. In
essence, the primary challenge faced by image
processing systems lies in recovering relevant
information from noisy images while effectively
removing noise, ultimately leading to the generation
of high-quality images. In certain scenarios, it
becomes necessary to recover images, such as
stellar images, that cannot be observed directly
from within the Earth's atmosphere. The main
objective of this research is
WSEAS TRANSACTIONS on COMPUTER RESEARCH
DOI: 10.37394/232018.2024.12.12
Hawraz N. Jabbar, Yoksal A. Laylani,
Issam A. R. Moghrabi, Basim A. Hassan
E-ISSN: 2415-1521
123
Volume 12, 2024
to devise a set of optimization methodologies that
can effectively handle edge-preserving
regularization (EPR) objective functions.
When it comes to image processing methods, a
comparative analysis reveals the distinct strengths
and characteristics of various approaches, with
conjugate gradient methods standing out in specific
contexts. Classical methods, such as Fourier
Transform-based techniques, excel in capturing
global frequency information but may fall short
when dealing with localized features. Meanwhile,
wavelet-based methods offer a compromise by
combining both global and local information,
making them versatile for various applications.
Machine learning-based approaches, particularly
deep learning models like convolutional neural
networks (CNNs), have gained immense popularity
for their ability to learn complex hierarchical
features directly from data, showcasing remarkable
performance in tasks like image recognition.
Conjugate gradient methods have proven their
viability in image processing, especially when it
comes to noise reduction. The conjugate gradient
algorithm's rapid convergence and modest memory
requirements, even on the ill-conditioned systems
that arise in practice, make it well suited for
large-scale optimization tasks in image
processing. This iterative optimization technique
ensures that each iteration provides a substantial
reduction in the objective function, contributing to
the overall enhancement of image quality. The
methods can achieve robust and computationally
efficient solutions, aligning with the demands of
real-world applications where both accuracy and
speed are paramount.
To address impulse noise reduction, a recent
advancement was made in the form of a two-phase
technique described in, [1]. This technique utilizes
the adaptive median filter (AMF) to mitigate salt-
and-pepper noise, while for random-valued noise,
the adaptive center-weighted median filter
(ACWMF) is employed. The ACWMF is
further enhanced by implementing the variable
window technique, which enhances its ability to
detect and address severe damage in images, [1].
For this study, we exclusively focus on handling
salt-and-pepper noise.
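As an illustration of this two-phase setup, the sketch below (in Python, although the paper's experiments use MATLAB) adds salt-and-pepper noise to an image and applies a basic adaptive median filter that flags impulse pixels for the second phase. The window-growing rule is a simplified stand-in for the AMF of [1], not the authors' implementation.

```python
import numpy as np

def add_salt_and_pepper(img, r, rng=None):
    """Corrupt a fraction r of pixels with salt (255) or pepper (0) noise."""
    rng = np.random.default_rng(rng)
    noisy = img.copy()
    mask = rng.random(img.shape) < r
    noisy[mask] = rng.choice([0, 255], size=mask.sum())
    return noisy

def adaptive_median_filter(img, max_window=7):
    """Simplified AMF: grow the window until its median is not an impulse,
    then replace the pixel only if it looks like an impulse itself."""
    h = max_window // 2
    padded = np.pad(img.astype(float), h, mode="edge")
    out = img.astype(float)
    noisy_mask = np.zeros(img.shape, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            for w in range(1, h + 1):
                win = padded[i + h - w:i + h + w + 1, j + h - w:j + h + w + 1]
                med = np.median(win)
                if win.min() < med < win.max():       # median is not extreme
                    if not (win.min() < img[i, j] < win.max()):
                        out[i, j] = med               # pixel flagged as noisy
                        noisy_mask[i, j] = True
                    break
    return out, noisy_mask
```

The returned mask plays the role of the noisy-pixel index set detected in phase one; phase two then restores only those pixels.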
Let $y$ denote the observed image, let $A = \{1, \dots, M\} \times \{1, \dots, N\}$ represent the index
set of $y$, and let $\mathcal{N} \subset A$ refer to the set of noisy pixel
indices detected throughout the first phase. Also, let
$V_{ij}$ denote the set of the four nearest neighbors of
position $(i,j) \in A$. In addition, we use $u = [u_{ij}]_{(i,j) \in \mathcal{N}}$ to indicate a lexicographically organized
column vector of length $c$, where $c$ represents the
size of $\mathcal{N}$. Therefore, the minimization of the
following function will restore the noisy pixels:

$$F(u) = \sum_{(i,j) \in \mathcal{N}} \Big[\, |u_{ij} - y_{ij}| + \frac{\beta}{2}\big( 2 S^{(1)}_{ij} + S^{(2)}_{ij} \big) \Big], \qquad (1)$$

where $\beta$ is the regularization parameter,

$$S^{(1)}_{ij} = \sum_{(m,n) \in V_{ij} \setminus \mathcal{N}} \varphi(u_{ij} - y_{mn})$$

and

$$S^{(2)}_{ij} = \sum_{(m,n) \in V_{ij} \cap \mathcal{N}} \varphi(u_{ij} - u_{mn}).$$

Function (1) preserves edges provided the potential
$\varphi$ is chosen to be edge-preserving. Generally, impulsive
noise can be described with this function. The scheme
introduced in, [1], is fundamentally
based on minimizing (1). In practical applications,
the non-smooth data-fitting term $|u_{ij} - y_{ij}|$ can be omitted
since it is not necessary for the next phase, which
specifically aims to recover only the poor-quality
pixels after noise reduction. Consequently, various
optimization strategies can be employed to
minimize the following smooth EPR functional (such as, [2]):

$$F_\alpha(u) = \sum_{(i,j) \in \mathcal{N}} \Big[\, 2 \sum_{(m,n) \in V_{ij} \setminus \mathcal{N}} \varphi_\alpha(u_{ij} - y_{mn}) + \sum_{(m,n) \in V_{ij} \cap \mathcal{N}} \varphi_\alpha(u_{ij} - u_{mn}) \Big], \qquad (2)$$

where $\varphi_\alpha$ is a smooth edge-preserving potential.
Since the Conjugate Gradient (CG) methods
have low storage requirements, they prove to be
highly effective in tackling unconstrained
minimization problems expressed as:
$$\min f(x), \quad x \in \mathbb{R}^n, \qquad (3)$$

(see, [3]). To solve (3), subsequent solution
estimates are generated using

$$x_{k+1} = x_k + \alpha_k d_k, \qquad (4)$$

where the step length $\alpha_k$ is traditionally
approximated by performing a one-dimensional line
search. The approximation suffices since finding the
exact solution is time-consuming and may even be
impossible to obtain. However, for quadratic
functions, the step length $\alpha_k$ can be expressed
exactly as, [4], [5], [6], [8],

$$\alpha_k = -\frac{g_k^T d_k}{d_k^T Q d_k}. \qquad (5)$$

For non-quadratic problems, $\alpha_k$ is determined to
guarantee that the computed search direction is
sufficiently descent through enforcing the strong
Wolfe conditions, [8],

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k \qquad (6a)$$

and

$$|g(x_k + \alpha_k d_k)^T d_k| \le -\sigma g_k^T d_k, \qquad (6b)$$

where $0 < \delta < \sigma < 1$. The search directions for
CG methods are obtained using

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \quad d_0 = -g_0, \qquad (7)$$
where $\beta_k$ is taken as a conjugacy parameter. Both
$d_{k+1}$ and $d_k$ satisfy the condition for conjugacy

$$d_i^T Q d_j = 0, \quad i \ne j,$$

for a symmetric matrix $Q$.
Particularly intriguing are the global
convergence characteristics of CG algorithms.
According to, [9], the Fletcher and Reeves (FR)
formula for $\beta_k$ has the best convergence results. On
the other hand, the Hestenes-Stiefel (HS) method, a
highly recognized CG technique, fails to meet the
global convergence criterion under inexact line
search, [10]. The two choices of $\beta_k$ are given,
respectively, as:

$$\beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad \beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \qquad (8)$$

where $y_k = g_{k+1} - g_k$. An attractive property of
the Hestenes-Stiefel formula is the fact that it
satisfies the conjugacy criteria.
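In code, the two coefficients in (8) are one-liners (a Python sketch):

```python
import numpy as np

def beta_fr(g_new, g_old):
    """Fletcher-Reeves coefficient: ||g_{k+1}||^2 / ||g_k||^2."""
    return float(g_new @ g_new) / float(g_old @ g_old)

def beta_hs(g_new, g_old, d_old):
    """Hestenes-Stiefel coefficient: g_{k+1}^T y_k / d_k^T y_k,
    where y_k = g_{k+1} - g_k."""
    y = g_new - g_old
    return float(g_new @ y) / float(d_old @ y)
```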
Numerous alternative approaches have been
investigated to upgrade the numerical behavior of
CG methods, considering their advantageous storage
demands (refer to, [11], [12], [13], [14], for further
details). A wide range of problems can be addressed
with CG methods, including machine learning,
mechanics, nonlinear and differential equations, and
many others. Furthermore, an additional potential
domain for their application lies within Human
Performance Technology (HPT). HPT heavily relies
on the numerical Performance Improvement (PI)
attributes of computer systems, which are facilitated
by specialized algorithms enabling logical
assessments, [15]. An empirical study found that
CG methods improve the performance and
efficiency of mobile users and help them adopt
mobile Electronic Performance Support Systems
(EPSS), [16].
One useful approach that has proven viable in
improving the performance of CG methods is
based on incorporating the quasi-Newton idea
in developing better-converging CG methods, [7].
This is achieved by rewriting (7) as

$$d_{k+1} = -g_{k+1} + \frac{g_{k+1}^T Q d_k}{d_k^T Q d_k}\, d_k, \qquad (9)$$

where $Q$ is the Hessian matrix of the function
being minimized, [13].
Distinguishing itself from conventional CG
algorithms, the aforementioned approach exhibits a
distinctive quality of consistently generating
improved downhill directions while adhering to the
conjugacy properties, as demonstrated by the
reported outcomes. In the next section, a quadratic
model is utilized to derive a novel
conjugacy parameter $\beta_k$, leading to the
development of a new CG algorithm.
2 Deriving the New Parameter
The key idea of the derivation of the new CG
parameter is the utilization of a classical quadratic
model given by

$$f(x) \approx f(x_{k+1}) + g_{k+1}^T (x - x_{k+1}) + \frac{1}{2} (x - x_{k+1})^T Q (x - x_{k+1}), \qquad (10)$$

where $Q$ is the constant Hessian of the quadratic
function. The gradient of the model is expressed as

$$g(x) = g_{k+1} + Q (x - x_{k+1}), \qquad (11)$$

for $x \in \mathbb{R}^n$.
From (10) and (11), evaluated at $x = x_k$ with
$s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$, the
second-order curvature is given by

$$f(x_k) = f(x_{k+1}) - g_{k+1}^T s_k + \frac{1}{2} s_k^T Q s_k, \qquad (12)$$

or, equivalently,

$$s_k^T Q s_k = 2 \big( f(x_k) - f(x_{k+1}) + g_{k+1}^T s_k \big). \qquad (13)$$

Equation (13) leads to the following matrix
approximation:

$$Q \approx \frac{2 \big( f(x_k) - f(x_{k+1}) + g_{k+1}^T s_k \big)}{s_k^T s_k}\, I, \qquad (14)$$

where $I$ is the $n \times n$ identity matrix. Substituting
(14) into the denominator of (9), and using the secant
relation $y_k = Q s_k$ from (11) in the numerator, yields a new
conjugacy parameter as follows:

$$\beta_k^{New} = \frac{\alpha_k \, g_{k+1}^T y_k}{2 \big( f(x_k) - f(x_{k+1}) + g_{k+1}^T s_k \big)}. \qquad (15)$$
The algorithmic framework is given next.
BBD Algorithm:
i) Start with an initial solution point $x_0$. Set
$k = 0$ and $d_0 = -g_0$. If $\|g_0\| \le \epsilon$, then
terminate.
ii) Find $\alpha_k$ that satisfies conditions (6).
iii) Calculate $x_{k+1} = x_k + \alpha_k d_k$ and the
corresponding gradient $g_{k+1} = g(x_{k+1})$. If
$\|g_{k+1}\| \le \epsilon$, then halt.
iv) Calculate $\beta_k$ using (15) and construct $d_{k+1}$ from
(7).
v) Set $k = k + 1$ and go to (ii).
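A minimal Python sketch of the BBD loop is given below. The strong Wolfe search is replaced by a crude backtracking check, and $\beta_k$ follows the reconstructed formula (15); both are illustrative assumptions, not the authors' MATLAB code.

```python
import numpy as np

def wolfe_backtrack(f, grad, x, d, delta=1e-4, sigma=0.4, alpha=1.0):
    """Crude line search: halve alpha until (6a) and (6b) hold or alpha
    becomes tiny. A practical code would use full Wolfe bracketing."""
    g_d = grad(x) @ d
    for _ in range(50):
        if (f(x + alpha * d) <= f(x) + delta * alpha * g_d
                and abs(grad(x + alpha * d) @ d) <= -sigma * g_d):
            return alpha
        alpha *= 0.5
    return alpha

def bbd(f, grad, x0, eps=1e-6, max_iter=500):
    """Conjugate gradient iteration using the new coefficient (15)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:
            break
        alpha = wolfe_backtrack(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = 2.0 * (f(x) - f(x_new) + g_new @ s)          # see (13)
        beta = (alpha * (g_new @ y) / denom) if abs(denom) > 1e-12 else 0.0  # (15)
        d = -g_new + beta * d                                 # (7)
        x, g = x_new, g_new
    return x
```

On a quadratic, (15) reduces to the exact conjugacy coefficient $g_{k+1}^T Q d_k / d_k^T Q d_k$, so the sketch recovers classical CG behavior there.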
3 Convergence Analysis
To ensure the global convergence of the BBD
algorithm on uniformly convex problems, it is
necessary to rely on the following assumptions.
i. The level set $S = \{ x : f(x) \le f(x_0) \}$ is
bounded.
ii. There exists a constant $L > 0$ such that the
gradient $g$ of the objective function is Lipschitz
continuous in some neighborhood $U$ of $S$, such that

$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in U \qquad (16)$$

(see, [14], for more details).
In this particular situation, there is a stable
constant $\Gamma$ such that, provided that the above
assumptions are met, $\|g(x)\| \le \Gamma$ for all $x \in S$.
Theorem 1.
If $0 < \alpha_k \le 1$, the search directions given by (7),
using (15), are descent directions.
Proof. We have $g_0^T d_0 = -\|g_0\|^2 < 0$ since
$d_0 = -g_0$. Consider $g_k^T d_k < 0$
to be true. Multiplying
(7) by $g_{k+1}^T$ results in

$$g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2 + \frac{\alpha_k \, g_{k+1}^T y_k}{2 \Delta_k}\, g_{k+1}^T d_k. \qquad (17)$$

Let $\Delta_k = f(x_k) - f(x_{k+1}) + g_{k+1}^T s_k$; then, by the uniform
convexity of $f$, it is easy to show that

$$\Delta_k \ge \frac{\mu}{2} \|s_k\|^2. \qquad (18)$$

Now using the Lipschitz condition leads
to $\|y_k\| \le L \|s_k\|$ and
$|g_{k+1}^T y_k| \le L \|g_{k+1}\| \|s_k\|$. Thus,
it can be deduced that

$$g_{k+1}^T d_{k+1} \le -\|g_{k+1}\|^2 + \frac{\alpha_k L \|g_{k+1}\|}{\mu \|s_k\|}\, |g_{k+1}^T d_k|. \qquad (19)$$

Because $L$ and $\alpha_k$ are small, it is clear that:

$$g_{k+1}^T d_{k+1} < 0. \qquad (20)$$

The proof is established.
For any conjugate gradient approach, employing
the strong Wolfe conditions (6), the general
convergence results in, [15], apply and are stated in
Lemma 1 below.
Lemma 1.
If assumptions (i) and (ii) are true, then for any
conjugate gradient method using $d_{k+1} = -g_{k+1} + \beta_k d_k$,
with $\alpha_k$ selected to satisfy the strong Wolfe
conditions (6), the following applies:
If

$$\sum_{k \ge 1} \frac{1}{\|d_k\|^2} = \infty, \qquad (21)$$

then

$$\liminf_{k \to \infty} \|g_k\| = 0. \qquad (22)$$

The same results were used in [13], [15], [16], [17],
[18].
Theorem 2.
If a constant $\mu > 0$ exists such that it satisfies, for
any $x, y$:

$$(g(x) - g(y))^T (x - y) \ge \mu \|x - y\|^2, \qquad (23)$$

then, by Lemma 1, the following holds:

$$\liminf_{k \to \infty} \|g_k\| = 0. \qquad (24)$$

Proof. It is clear from (13) and (23) that:

$$2 \Delta_k = s_k^T Q s_k \ge \mu \|s_k\|^2, \qquad (25)$$

where $\Delta_k = f(x_k) - f(x_{k+1}) + g_{k+1}^T s_k$. Using Cauchy's inequality:

$$|g_{k+1}^T y_k| \le \|g_{k+1}\| \|y_k\| \le L \|g_{k+1}\| \|s_k\|. \qquad (26)$$

Therefore, (15) implies that:

$$|\beta_k| \le \frac{\alpha_k L \|g_{k+1}\| \|s_k\|}{\mu \|s_k\|^2}, \qquad (27)$$

so that $\|d_{k+1}\| \le \|g_{k+1}\| + |\beta_k| \|d_k\|$ remains bounded
and (21) holds. It follows that $\liminf_{k \to \infty} \|g_k\| = 0$ using Lemma 1.
4 Numerical Results
We evaluate the new algorithm's performance in the
context of reducing salt-and-pepper impulse noise
by minimizing (2). The test images utilized in this evaluation are
presented in Table 1. Additionally, Table 1 provides
the numerical results obtained from comparing the
classical CG FR method with the newly derived
algorithm. The comparison is based on parameters
that include the number of function/gradient
evaluations, count of iterations, and Peak signal-to-
noise ratio (PSNR), [2], [19], [20]. We use
MATLAB 2015a for all simulations. This study
focuses on developing an efficient and fast way to
restore noise-corrupted images via (2). We use the PSNR
value, [21], [22], to assess the corrected images'
pixel quality:

$$PSNR = 10 \log_{10} \frac{255^2}{\frac{1}{MN} \sum_{i,j} \big( u^r_{ij} - u^0_{ij} \big)^2}, \qquad (28)$$
where $u^r_{ij}$ and $u^0_{ij}$ refer to the pixel values of the
denoised and initial images, respectively. The
termination conditions for both procedures are as
follows:

$$\frac{|f(u_k) - f(u_{k-1})|}{|f(u_k)|} \le 10^{-4} \quad \text{and} \quad \|\nabla f(u_k)\| \le 10^{-4} \big( 1 + |f(u_k)| \big). \qquad (29)$$
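Equation (28) corresponds to the following helper (a Python sketch for 8-bit images):

```python
import numpy as np

def psnr(restored, original, peak=255.0):
    """Peak signal-to-noise ratio as in (28), in dB."""
    diff = np.asarray(restored, dtype=float) - np.asarray(original, dtype=float)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```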
Figure 1, Figure 2, Figure 3, and Figure 4
showcase the outcomes achieved by implementing
the algorithms on noisy images. Specifically, the
first image in Figures 1, 2, 3, and 4 depicts the
images corrupted with 70% salt-and-pepper noise.
The results obtained from the FR method are
represented in the second image of each figure. The
third image in Figure 1, Figure 2, Figure 3, and
Figure 4 exhibits the outcomes of the BBD method.
The proposed BBD image correction method is
demonstrated to be effective and efficient, as
evidenced by the visual representations.
Table 1. Numerical results of the FR and BBD algorithms

                            FR-Method                  BBD-Method
Image  Noise level r (%)    NI    NF   PSNR (dB)       NI    NF   PSNR (dB)
Le     50                   82   153   30.5            53   107   30.42
       70                   81   155   27.4            59   103   27.5087
       90                  108   211   22.8            59    93   22.03
Ho     50                   52    53   30.6            38    63   30.73
       70                   63   116   31.2            52    79   31.12
       90                  111   214   25.2            65    67   25.00
El     50                   35    36   33.9            33    37   33.88
       70                   38    39   31.8            38    39   31.80
       90                   65   114   28.2            52    53   28.14
c512   50                   59    87   35.5            41    45   35.38
       70                   78   142   30.6            46    48   30.65
       90                  121   236   24.3            50   100   24.89
[Figure panels: noisy image, FR restoration, and BBD restoration at r = 50% (FR 30.5529 dB, BBD 30.3958 dB) and r = 70% (FR 27.4824 dB, BBD 27.4178 dB).]
Fig. 1: Results of the FR and BBD algorithms on the 256 × 256 Lena image
Fig. 2: Results of the FR and BBD algorithms on the 256 × 256 house image.
[Figure panels: noisy image, FR restoration, and BBD restoration at r = 50% (FR 34.6845 dB, BBD 34.7307 dB), r = 70% (FR 31.2564 dB, BBD 31.1217 dB), and r = 90% (FR 25.287 dB, BBD 25.0083 dB; FR 22.8583 dB, BBD 22.7628 dB).]
Fig. 3: Results of the FR and BBD algorithms on the 256 × 256 Elaine image
[Figure panels: noisy image, FR restoration, and BBD restoration at r = 50% (FR 35.5359 dB, BBD 35.3898 dB; FR 30.6259 dB, BBD 30.6507 dB), r = 70% (FR 31.864 dB, BBD 33.889 dB; FR 28.2019 dB, BBD 31.8013 dB), and r = 90% (FR 31.864 dB, BBD 28.1487 dB; FR 24.9362 dB, BBD 24.8992 dB).]
Fig. 4: Results of the FR and BBD algorithms on the 512 × 512 Cameraman image.
5 Conclusion
In this paper, the primary objective was to develop
an innovative, modified conjugate gradient
formula that surpasses the performance of the
conventional Fletcher-Reeves (FR) conjugate gradient
approach, specifically in the context of image
restoration. Through a comprehensive analysis, the
experimental results validate the global convergence
of the proposed novel techniques, particularly when
subjected to the strong Wolfe line search conditions.
The application of the Wolfe conditions ensures
both sufficient decrease and curvature conditions in
the optimization process. The convergence analysis
reveals that, even in the presence of complex, ill-
conditioned systems inherent in image processing
tasks, the proposed method consistently converges
globally. The experimental results consistently
demonstrate that the newly introduced algorithm,
referred to as BBD, consistently achieves
remarkable reductions in iteration counts and
function evaluations. Remarkably, these efficiency
improvements are achieved without compromising
the quality of picture restoration. Further research
may focus on looking at other possibilities that
utilize more of the quasi-Newton methods within
CG algorithms, such as the ones proposed in, [23].
References:
[1] X. Wei, R. Junhong, Z. Xiao, L. Zhi, and
L. Yueyong, “A new DY conjugate gradient
method and applications to image denoising,”
IEICE Transactions on Information and Systems,
vol. 12, pp. 2984–2990, 2018.
[2] G. Q. Huang, G. Yang, G. Tang, “A fast two-
dimensional median filtering algorithm,” IEEE
Transactions on Acoustics, Speech, and Signal
Processing, vol. 27, no. 1, pp. 13–18, Feb.
1979, doi: 10.1109/TASSP.1979.1163188.
[3] R. Fletcher and C. M. Reeves, “Function minimization
by conjugate gradients,” Computer Journal,
vol. 7, no. 2, pp. 149–154, 1964.
[4] T. G. Woldu, H. Zhang, and Y. H. Fissuh, “A Scaled
Conjugate Gradient Method Based on New
BFGS Secant Equation with Modified
Nonmonotone Line Search,” American Journal
of Computational Mathematics, vol. 10, pp. 1–
22, 2020, doi: 10.4236/ajcm.2020.101001.
[5] B. Hassan and T. Mohammed, “A New
Variants of Quasi-Newton Equation Based on
the Quadratic Function for Unconstrained
Optimization,” Indonesian Journal of
Electrical Engineering and Computer Science,
vol. 19, no. 2, pp. 701–708, 2020.
[6] B. Hassan and M. Ghada, “A New Quasi-
Newton Equation on the Gradient Methods for
Optimization Minimization Problem,”
Indonesian Journal of Electrical Engineering
and Computer Science, vol. 19, no. 2, pp. 737–
744, 2020.
[7] Nocedal J. and Wright J., Numerical
Optimization-Springer Series In Operations
Research. New York: Springer Verlag, 2006.
[8] P. Wolfe, “Convergence conditions for ascent
methods. II: Some corrections,” SIAM Review,
vol. 3, pp. 185–188, 1971.
[9] R. Fletcher, “Function minimization by
conjugate gradients,” Computer Journal, vol.
7, no. 2, pp. 149–154, Feb. 1964, doi:
10.1093/comjnl/7.2.149.
[10] M. R. Hestenes and E. Stiefel, “Methods of
conjugate gradients for solving linear systems,”
Journal of Research of the National Bureau of
Standards, vol. 49, no. 6, p. 409, Dec. 1952,
doi: 10.6028/jres.049.044.
[11] A. Askar, “Interactive ebooks as a tool of
mobile learning for digital-natives in higher
education: Interactivity, preferences, and
ownership,” Educational Technology Research
Development, vol. 60, no. 1, pp. 7–13, 2014.
[12] A. Awasthi and H. Omrani, “A goal-oriented
approach based on fuzzy axiomatic design for
sustainable mobility project selection,”
International Journal of Systems Science:
Operations & Logistics, vol. 6, no. 1, pp. 86–
98, 2019, doi:
10.1080/23302674.2018.1435834.
[13] B. A. Hassan, Z. M. Abdullah, and H. N.
Jabbar, “A descent extension of the Dai - Yuan
conjugate gradient technique,” Indonesian
Journal of Electrical Engineering and
Computer Science, vol. 16, no. 2, pp. 661–668,
2019, doi: 10.11591/ijeecs.v16.i2.pp661-668.
[14] B. A. Hassan and I. A. R. Moghrabi, “A modified
secant equation quasi-Newton method for
unconstrained optimization,” Journal of Applied
Mathematics and Computing, vol. 69, pp. 451–464,
2023, doi: 10.1007/s12190-022-01750-x.
[15] Y. Dai, J. Han, G. Liu, D. Sun, H. Yin, and Y.
Yuan, “Convergence Properties of Nonlinear
Conjugate Gradient Methods,” SIAM J. Optim.,
vol. 10, no. 2, pp. 345–358, Jan. 2000, doi:
10.1137/S1052623494268443.
[16] Y. H. Dai and Y. Yuan, “A nonlinear conjugate
gradient method with a strong global
convergence property,” SIAM Journal
Optimimization, vol. 10, no. 1, pp. 177–182,
1999, doi: 10.1137/S1052623497318992.
[17] B. A. Hassan and R. M. Sulaiman, “A new
class of self-scaling for quasi-newton method
based on the quadratic model,” Indonesian
Journal of Electrical Engineering and
Computer Science, vol. 21, no. 3, pp. 1830–1836,
Mar. 2021, doi: 10.11591/ijeecs.v21.i3.pp1830-
1836.
[18] B. A. Hassan, H. O. Dahawi, and A. S.
Younus, “A new kind of parameter conjugate
gradient for unconstrained optimization,”
Indonesian Journal of Electrical Engineering
and Computer Science, vol. 17, no. 1, pp. 404
411, Jan. 2019, doi: 10.11591/ijeecs.v17.i1. pp.
404-411.
[19] C. Y. Wu and G. Q. Chen, “New type of conjugate
gradient algorithms for unconstrained
optimization problems,” Journal of Systems
Engineering and Electronics, vol. 21, no. 6, pp.
1000–1007, 2010.
[20] G. Zoutendijk, “Nonlinear programming,
computational methods,” in Integer and
Nonlinear Programming, J. Abadie, Ed.,
Amsterdam: North-Holland, 1970, pp. 37–86.
[21] M. Y. Waziri, K. Ahmed, and J. Sabi’u, “A Dai–Liao
conjugate gradient method via modified
secant equation for system of nonlinear
equations,” Arabian Journal of Mathematics, vol. 9, pp. 443–
457, 2020.
[22] Y. Narushima and H. Iiduka, “Conjugate gradient
methods using value of objective function for
unconstrained optimization,” Optimization
Letters, vol. 6, no. 5, pp. 941–955, 2011.
[23] I. A. R. Moghrabi, “A non-Secant quasi-
Newton Method for Unconstrained Nonlinear
Optimization,” Cogent Eng., vol. 9, no. 1, pp.
20–36, 2022.
Contribution of Individual Authors to the
Creation of a Scientific Article (Ghostwriting
Policy)
- Hawraz Jabbar contributed to the mathematical
derivations.
- Yoksal Laylani carried out the numerical tests on
the new method.
- Issam Moghrabi drafted the document and
contributed to the derivation.
- Basim Hassan did the coding necessary for
carrying out the tests.
Sources of Funding for Research Presented in a
Scientific Article or Scientific Article Itself
No funding was received for conducting this study.
Conflict of Interest
The authors have no conflicts of interest to declare.
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the
Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en
_US