The Convergence Properties of the Conjugate Gradient Method Using
the AMRI Parameter with Exact Line Search
LAILY DWI RETNO WAHYUNINGTIAS1, SALMAH1*
1Department of Mathematics, Universitas Gadjah Mada
Yogyakarta 55281, INDONESIA
Abstract: The conjugate gradient method is a simple way to solve unconstrained optimization problems. In this work, we propose a conjugate gradient algorithm using the AMRI parameter in which the step length is determined by an exact line search. Under several assumptions, the proposed algorithm satisfies the sufficient descent condition and is globally convergent. Computational results show that the modified method is effective.
Key-Words: conjugate gradient method, conjugate parameter, globally convergent, sufficient descent condition.
Received: September 8, 2022. Revised: March 11, 2023. Accepted: April 22, 2023. Published: May 31, 2023.
1 Introduction
Solving various optimization problems is an important task for engineers and scientists. One easy-to-use optimization technique is the conjugate gradient method. For example, Yuan et al. [1] discuss an image processing problem that can be formulated as an optimization problem, Abubakar et al. [2] discuss signal recovery problems that can be solved by constructing an optimization problem, and Helmig et al. [3] construct an optimization problem to estimate the number and spacing of sensors in the inverse calculation of temperature boundary conditions; see also [4], [5], [6], [7], and [8]. Its applications include optimization problems both with and without constraints.
The method that we discuss in this paper is the conjugate gradient method. It is a helpful and convenient way to solve unconstrained optimization problems because it requires little memory and is easy to compute. The method determines the direction of the next iterate from the gradient of the objective function, the conjugate parameter, and the search direction of the previous iteration. The development of conjugate gradient methods is very diverse, especially in modifications of the conjugate gradient parameter.
The first conjugate gradient parameter introduced was the FR parameter [9]. Powell [10] further investigated the FR parameter and found that, with exact line search, it can generate small step lengths without significant progress toward the optimal solution. Polak and Ribiere [11] then introduced the PR parameter. Numerically, the conjugate gradient method with the PR parameter performs better than with the FR parameter. Polak and Ribiere [11] showed that the PR parameter under exact line search yields global convergence for convex objective functions. Nevertheless, Powell [12] showed that this is not necessarily the case for non-convex functions. In addition, Powell [12] found that the PR parameter with exact line search can cycle indefinitely without approaching the solution point. This behavior can occur when the conjugate gradient parameter is negative, so Powell [12] suggested keeping the conjugate gradient parameter non-negative. Therefore, Gilbert and Nocedal [13] modified the PR parameter by taking the maximum of zero and the PR parameter, and obtained global convergence with inexact line search.
Rivaie et al. [14] modified the PR parameter by changing the denominator from the squared norm of the gradient of the objective function to the squared norm of the search direction. The result is called the RMIL parameter hereafter. In general, the RMIL parameter is better than the PR parameter because it solves more test functions in optimization problems. However, the RMIL parameter is not necessarily non-negative, so Dai [15] modified it so that whenever the RMIL parameter is negative, its value is set to zero. With this modification, a globally convergent method is obtained.
Another parameter obtained by modifying RMIL is the AMRI parameter [16]. This parameter always has a positive or zero value, and it is clear
that it satisfies the sufficient descent condition with inexact line search. Based on the numerical results in that paper, the AMRI parameter is better than the RMIL parameter.
In this work, the AMRI parameter with exact line search is also proved to satisfy the sufficient descent condition. Moreover, the AMRI parameter-based conjugate gradient method achieves global convergence with an exact line search step length. Several numerical computations and comparisons between methods are also presented in this paper.
The remainder of this paper is organized as follows. Some definitions and assumptions related to this study are given in Section 2. The AMRI parameter and its algorithm are described in Section 3. A convergence analysis, covering the sufficient descent condition and global convergence, is provided in Section 4. Numerical results and comparisons between the method with the AMRI parameter and methods with existing parameters are presented in Section 5. Finally, conclusions are given in Section 6.
2 Preliminaries
We consider the unconstrained optimization problem

\[
\min_{x \in \mathbb{R}^n} f(x), \tag{1}
\]

where f : R^n → R is a continuously differentiable function. The method discussed here is the modified conjugate gradient method, a numerical algorithm that is implemented as an iteration of the form

\[
x_{k+1} = x_k + \alpha_k d_k, \tag{2}
\]

where x_k is the iterate at the k-th iteration, α_k is the step length, and d_k is the search direction. The step length is determined by a one-dimensional search called a line search. The most commonly used is the exact line search,

\[
f(x_k + \alpha_k d_k) = \min_{\alpha \ge 0} f(x_k + \alpha d_k). \tag{3}
\]
The exact line search is the most frequently used way to determine the step length because it produces the optimal step length [17]. In 2015, research showed that modern technology, with faster processors and better tools, resolves the speed issues often encountered with equation (3), as indicated in Rivaie et al. [18].
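For illustration only (not part of the original paper), the minimization in (3) can be carried out numerically. The sketch below uses SciPy's bounded scalar minimizer as a stand-in for the exact minimizer; the bracketing interval [0, alpha_max] is our assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def exact_line_search(f, x, d, alpha_max=10.0):
    # Approximate the exact line search (3): minimize phi(alpha) = f(x + alpha*d)
    # over alpha >= 0. A bounded scalar minimizer stands in for the exact minimum.
    phi = lambda alpha: f(x + alpha * d)
    return minimize_scalar(phi, bounds=(0.0, alpha_max), method="bounded").x
```

For example, for f(x) = ‖x‖² and d = −∇f(x), the returned step is approximately 0.5, which is the exact minimizer of φ.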
The formula for the search direction is

\[
d_k = \begin{cases} -g_k, & k = 0,\\ -g_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} \tag{4}
\]

where g_k = ∇f(x_k) is the gradient of the function at x_k and β_k is the conjugate gradient parameter. As mentioned in Section 1, there are several well-known conjugate parameter formulas, such as the FR, PR, and RMIL parameters below:

\[
\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \tag{5}
\]

\[
\beta_k^{PR} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}, \tag{6}
\]

\[
\beta_k^{RMIL} = \frac{g_k^T (g_k - g_{k-1})}{\|d_{k-1}\|^2}, \tag{7}
\]
where ‖·‖ is the Euclidean norm. Equations (5)-(7) are known as the Fletcher and Reeves (FR) parameter [9], the Polak and Ribiere (PR) parameter [11], and the Rivaie, Mustafa, Ismail and Leong (RMIL) parameter [14], respectively. There are two important properties for analyzing the convergence of the conjugate gradient method, namely the sufficient descent condition and global convergence [19].
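For concreteness, the parameter formulas (5)-(7) translate directly into code. The sketch below is our own illustration, with g_new, g_old, and d_old denoting g_k, g_{k-1}, and d_{k-1}; the unused d_old argument in the first two functions only keeps the signatures uniform.

```python
import numpy as np

def beta_fr(g_new, g_old, d_old):
    # FR parameter (5): ratio of squared gradient norms.
    return (g_new @ g_new) / (g_old @ g_old)

def beta_pr(g_new, g_old, d_old):
    # PR parameter (6): gradient difference in the numerator.
    return g_new @ (g_new - g_old) / (g_old @ g_old)

def beta_rmil(g_new, g_old, d_old):
    # RMIL parameter (7): search-direction norm in the denominator.
    return g_new @ (g_new - g_old) / (d_old @ d_old)
```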
Definition 1: An algorithm is said to satisfy the sufficient descent condition if there is a constant c > 0 such that for any k ≥ 0,

\[
g_k^T d_k \le -c\,\|g_k\|^2. \tag{8}
\]

Definition 2: An algorithm of the conjugate gradient method is globally convergent if

\[
\liminf_{k \to \infty} \|g_k\| = 0. \tag{9}
\]
The following assumptions are required to prove equation (9) for this method.

Assumption 1: The function f is bounded below on the level set Ω = {x ∈ R^n : f(x) ≤ f(x_0)} and is continuously differentiable in a neighborhood N of Ω, where x_0 is the starting point.

Assumption 2: The gradient g is Lipschitz continuous in N; in other words, there exists L > 0 such that ‖g(x) − g(y)‖ ≤ L‖x − y‖ for any x, y ∈ N.
A lemma follows from Assumption 1 and Assumption 2, hereinafter referred to as the Zoutendijk condition. The lemma also applies when the step length is determined by an inexact line search.

Lemma 1: Consider any conjugate gradient method of the form (2)-(4), where α_k is given by the exact line search (3). If Assumption 1 and Assumption 2 hold, then

\[
\sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty. \tag{10}
\]

Zoutendijk proved this lemma in [20].
3 Conjugate Gradient Method With
AMRI Parameter
In this section, the conjugate gradient method algorithm is formed with the AMRI parameter and equation (3). The AMRI parameter is the conjugate parameter proposed by Abashar, Mamat, Rivaie, Ismail, and Omer [16], of the form

\[
\beta_k^{AMRI} = \frac{\|g_k\|^2 - \dfrac{\|g_k\|}{\|g_{k-1}\|}\, g_k^T g_{k-1}}{\|d_{k-1}\|^2}. \tag{11}
\]
The algorithm is given as follows, based on (2), (3), (4), and (11).
1) Initialization. Choose a starting point x_0 and a tolerance ε > 0, and set k = 0.
2) Calculate g_k. If ‖g_k‖ ≤ ε, then x_k is the solution point; stop. If ‖g_k‖ > ε, go to step 3.
3) Calculate β_k^{AMRI} based on (11).
4) Calculate d_k based on (4).
5) Calculate the step length α_k by equation (3).
6) Calculate the next iterate x_{k+1} by equation (2), set k := k + 1, and return to step 2.
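Putting steps 1)-6) together, a minimal end-to-end sketch of the method follows, assuming the reconstructed form of (11) above; the bounded scalar minimizer again stands in for the exact line search of step 5, and the interval bound, iteration cap, and non-negativity safeguard are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cg_amri(f, grad, x0, eps=1e-6, alpha_max=10.0, max_iter=10000):
    x = np.asarray(x0, dtype=float)       # step 1: initialization
    g = grad(x)
    d = -g                                # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:      # step 2: stopping test
            break
        phi = lambda a: f(x + a * d)      # step 5: phi(alpha) = f(x_k + alpha*d_k)
        alpha = minimize_scalar(phi, bounds=(0.0, alpha_max),
                                method="bounded").x
        x_new = x + alpha * d             # step 6: iterate update, eq. (2)
        g_new = grad(x_new)
        # step 3: AMRI parameter, assumed reconstruction of eq. (11)
        beta = (g_new @ g_new
                - np.linalg.norm(g_new) / np.linalg.norm(g) * (g_new @ g)) / (d @ d)
        beta = max(beta, 0.0)             # safeguard; Lemma 2 gives beta >= 0
        d = -g_new + beta * d             # step 4: new direction, eq. (4)
        x, g = x_new, g_new
    return x
```

For instance, cg_amri(lambda x: x @ x, lambda x: 2 * x, np.ones(4)) reaches the minimizer at the origin after a single exact step.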
4 Convergence Analysis
In this section, we prove that the conjugate gradient method with the AMRI conjugate parameter satisfies the sufficient descent condition and global convergence. To discuss these two matters, we first give a property showing that the AMRI parameter is always non-negative.
Lemma 2: Consider the conjugate gradient method algorithm with the AMRI parameter. For every k ≥ 0, β_k^{AMRI} ≥ 0.

Proof: Based on the algorithm, if k = 0 then β_0^{AMRI} = 0. If k ≥ 1, then using the Cauchy-Schwarz inequality g_k^T g_{k-1} ≤ ‖g_k‖ ‖g_{k-1}‖, we have

\[
\beta_k^{AMRI} = \frac{\|g_k\|^2 - \dfrac{\|g_k\|}{\|g_{k-1}\|}\, g_k^T g_{k-1}}{\|d_{k-1}\|^2}
\ge \frac{\|g_k\|^2 - \dfrac{\|g_k\|}{\|g_{k-1}\|}\,\|g_k\|\,\|g_{k-1}\|}{\|d_{k-1}\|^2} = 0. \tag{12}
\]
Theorem 1 below shows that the method with the AMRI parameter satisfies the sufficient descent condition.

Theorem 1: For a conjugate gradient method with d_k formulated by (4) and parameter β_k^{AMRI} formulated by (11),

\[
g_k^T d_k \le -c\,\|g_k\|^2 \tag{13}
\]

holds, where c > 0.
Proof: We prove that the conjugate gradient method with the AMRI parameter satisfies (13). Note that

\[
d_k = \begin{cases} -g_k, & k = 0,\\ -g_k + \beta_k^{AMRI} d_{k-1}, & k \ge 1. \end{cases} \tag{14}
\]

If k = 0, then we get

\[
g_0^T d_0 = -\|g_0\|^2. \tag{15}
\]

In other words, inequality (13) is satisfied for k = 0. Furthermore, if k ≥ 1, then multiplying (14) by g_k^T we get

\[
g_k^T d_k = -\|g_k\|^2 + \beta_k^{AMRI}\, g_k^T d_{k-1}.
\]

Applying the exact line search, we get g_k^T d_{k-1} = 0. Consequently, g_k^T d_k = -‖g_k‖². This means that d_k is a sufficient descent direction, so inequality (13) holds. We have finished the proof.
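As a quick empirical sanity check of Theorem 1 (our illustration, not part of the paper), one can verify that whenever g_k^T d_{k-1} = 0, as the exact line search guarantees, the AMRI direction satisfies g_k^T d_k = −‖g_k‖²:

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomized check of (13): enforce g^T d_old = 0 (the exact line search
# condition) and verify the AMRI direction is a sufficient descent direction.
# The beta formula below is the assumed reconstruction of eq. (11).
for _ in range(1000):
    g_old = rng.normal(size=5)
    d_old = rng.normal(size=5)
    g = rng.normal(size=5)
    g -= (g @ d_old) / (d_old @ d_old) * d_old      # make g^T d_old = 0
    beta = (g @ g
            - np.linalg.norm(g) / np.linalg.norm(g_old) * (g @ g_old)) / (d_old @ d_old)
    d = -g + beta * d_old
    assert g @ d <= -0.999 * (g @ g)                # (13) with c close to 1
```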
Next, we prove that the conjugate gradient method with the AMRI parameter has the global convergence property, in other words, that it satisfies equation (9). To analyze the global convergence of this modified method, we use Assumption 1, Assumption 2, and Lemma 1.

Theorem 2: Assume that Assumption 1, Assumption 2, and Theorem 1 are satisfied. Consider the conjugate gradient method given by equations (2)-(4), where β_k is computed by (11) and α_k is given by equation (3). If g_k ≠ 0 for every k ≥ 0, then

\[
\liminf_{k \to \infty} \|g_k\| = 0. \tag{16}
\]
Proof: Let θ_k be the angle between −g_k and d_k, so that cos θ_k = −g_k^T d_k / (‖g_k‖ ‖d_k‖). With equation (3) and equation (4), we have

\[
g_k^T d_k = -\|g_k\|^2. \tag{17}
\]

Using this fact, we get

\[
\cos\theta_k = \frac{\|g_k\|}{\|d_k\|}. \tag{18}
\]

Furthermore, by combining (17) and (18) with (4), (11), and the Cauchy-Schwarz inequality, we get

\[
\|d_k\| \le \|g_k\| + \beta_k^{AMRI}\,\|d_{k-1}\|
\le \|g_k\| + \left(\frac{\|g_k\|^2 + \dfrac{\|g_k\|}{\|g_{k-1}\|}\left|g_k^T g_{k-1}\right|}{\|d_{k-1}\|^2}\right)\|d_{k-1}\|
\le \|g_k\| + \frac{2\|g_k\|^2}{\|d_{k-1}\|}.
\]

With the exact line search and (4), we get

\[
\|d_{k-1}\| \ge \|g_{k-1}\|.
\]

Based on Lemma 2, we have

\[
\cos\theta_k = \frac{\|g_k\|}{\|d_k\|} \ge \frac{\|g_k\|}{\|g_k\| + \dfrac{2\|g_k\|^2}{\|g_{k-1}\|}} = \frac{1}{1 + 2\,\|g_k\|/\|g_{k-1}\|}. \tag{19}
\]
If (16) does not hold, then there is an ε > 0 such that for every k ≥ 0,

\[
\|g_k\| \ge \varepsilon. \tag{20}
\]

Because of Assumption 1 and, from Assumption 2, the fact that g is a Lipschitz continuous function, the gradient norm is bounded on the level set; hence there exist a constant γ > 0 and a non-negative integer k_0 such that for every k ≥ k_0,

\[
\|g_k\| \le \gamma. \tag{21}
\]

By (19)-(21) we have

\[
\cos\theta_k \ge \frac{1}{1 + 2\gamma/\varepsilon} = \frac{\varepsilon}{\varepsilon + 2\gamma}. \tag{22}
\]

Note that for every t ∈ [0, π/2), the following holds:

\[
\cos t > 0. \tag{23}
\]

Inequalities (22) and (23) induce

\[
\theta_k \le \bar{\theta} := \arccos\!\left(\frac{\varepsilon}{\varepsilon + 2\gamma}\right) < \frac{\pi}{2}. \tag{24}
\]

This shows that the angle θ_k is always less than a fixed angle θ̄ which is less than π/2. By Lemma 1 and (18), we get

\[
\sum_{k \ge 0} \cos^2\theta_k\, \|g_k\|^2 = \sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty. \tag{25}
\]

Since cos θ_k ≥ cos θ̄ > 0, this implies lim_{k→∞} ‖g_k‖ = 0, which contradicts (20). We have finished the proof.
5 Numerical Implementation and
Discussions
In this section, we provide a numerical implementation of the conjugate gradient method using the FR, PR, RMIL, and AMRI parameters to show each method's efficiency. The test functions used are taken from Andrei [21], with dimensions ranging from low to medium to high as in Rivaie et al. [14], namely 2, 4, 10, 50, 100, and 500. The criterion for stopping the iteration is ‖g_k‖ ≤ ε. Each test function uses a different set of starting points, ordered from the starting point closest to the optimal point to the one farthest from it. Table 1 gives the list of test functions.
Table 1. List of test functions (each solved from three different starting points).

No.  Function
1    Six-hump
2    Three-hump
3    Booth
4    Goldstein-Price
6    Zettl
7    Cube
8    Rosenbrock
9    Quartic
10   Ext. Maratos
11   Ext. White and Holst
12   Ext. Freudenstein and Roth
13   Beale
14   Raydan1
15   Liarwhd
16   Fletcher
17   Edencsch
18   Gen. Quartic
19   Ext. Denschnf
20   Ext. Denschnb
21   Himmelblau
22   Ext. Penalty
23   Tridiagonal1
Table 2. Summary of the numerical experiments.

Method   Total NI   Total CPU time   Successful
FR       14,344     2944.653350      98%
PRP      1,647      1396.406475      93%
RMIL     2,720      1710.859500      98%
AMRI     2,182      1461.500225      100%
The test functions given in Table 1 were solved in Matlab R2020a on a laptop with the following specifications: Intel(R) Celeron(R) processor, 4.00 GB RAM, 64-bit Windows 10 Home Single Language operating system. If a method produces a negative step length on a test function, that function is counted as a failure for the method. The number of iterations (NI) and the time taken by the CPU are compared. Table 2 gives a summary of the numerical results. From these results, performance profiles based on NI and CPU time are shown in Fig. 1 and Fig. 2, respectively, following the performance profile technique established by Dolan and More [22].
Fig. 1. Performance profile based on the number of iterations.

Fig. 2. Performance profile based on the time taken by the CPU.
Dolan and More [22] established how to evaluate and compare the performance of each method over a set of test functions. Let S be the set of solvers and P the set of test functions, with n_p the number of test functions. For each solver s ∈ S and problem p ∈ P, define t_{p,s} as the number of iterations or the CPU time that solver s needs to solve function p. The performance of solver s on test function p is compared with the best performance of any solver on the same test function; that is,

\[
r_{p,s} = \frac{t_{p,s}}{\min\{t_{p,s} : s \in S\}}.
\]
Then r_{p,s} is called the performance ratio. Choose a constant r_M such that r_M ≥ r_{p,s} for each p and s, and set r_{p,s} = r_M if and only if solver s fails to solve function p.

It is interesting to compare the performance of each solver on an individual test function, but we also want an overall comparison. If we define

\[
\rho_s(\tau) = \frac{1}{n_p}\,\bigl|\{\, p \in P : r_{p,s} \le \tau \,\}\bigr|,
\]

then ρ_s(τ) is the probability for solver s that the performance ratio r_{p,s} is within a factor τ of the best possible ratio. In general, the method with a high ρ_s(τ) score, whose curve lies on top, is the best solver.
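Computing ρ_s(τ) from a table of results takes only a few lines. The sketch below is our own illustration, assuming a cost matrix with one row per problem, one column per solver, and failures recorded as infinite cost.

```python
import numpy as np

def performance_profile(T, tau_grid):
    # T: (n_problems, n_solvers) array of costs (NI or CPU time); np.inf marks
    # a failure. Returns rho with rho[i, s] = fraction of problems on which
    # solver s has performance ratio r_{p,s} <= tau_grid[i].
    best = T.min(axis=1, keepdims=True)   # best cost per problem
    r = T / best                          # performance ratios r_{p,s}
    return np.array([(r <= tau).mean(axis=0) for tau in tau_grid])
```

Plotting each column of the result against tau_grid (for example np.linspace(1, 5, 200)) produces curves of the kind shown in Fig. 1 and Fig. 2.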
Fig. 1 and Fig. 2 respectively show the performance of the methods with the FR, PR, RMIL, and AMRI parameters based on the number of iterations and the total time spent by the CPU in solving each test function. The curve of the method with the AMRI parameter lies above those of the other parameters from left to right. Although the PRP method initially attains a higher probability value than the AMRI method, there are several functions that the PRP method is unable to solve. The AMRI method solves all of the test functions, the RMIL method solves 98% of them, and the PRP method solves 93%. The FR method solves 98% of the test functions, but its iteration counts are high enough that its performance is worse than that of the other methods. Because of these results, we can say that the method with the AMRI parameter achieves better performance than the methods with the FR, PR, and RMIL parameters.
6 Conclusion
This paper proposes an algorithm for the conjugate gradient method with the AMRI parameter using an exact line search step length. The AMRI method satisfies two important properties, namely the sufficient descent condition and global convergence with exact line search. Numerical experiments on the test functions show that the conjugate gradient method with the AMRI parameter is an efficient method.
Acknowledgments:
The authors would like to thank the Direktorat Riset, Teknologi, dan Pengabdian Kepada Masyarakat, Direktorat Jenderal Pendidikan Tinggi, Riset, dan Teknologi, Kementerian Pendidikan, Kebudayaan, Riset, dan Teknologi Republik Indonesia for funding and support under (089/E5/PG.02.00.PT/2022; 1957/UN1/DITLIT/Dit-Lit/PT.01.03/2022).
References:
[1] G. Yuan, T. Li and W. Hu, A conjugate
gradient algorithm for large scale nonlinear
equation and image restoration problems,
Applied Numerical Mathematics, vol. 147,
2020, pp. 129-141.
[2] A. B. Abubakar, P. Kumam and A. M. Awwal,
Global convergence via descent modified three-
term conjugate gradient projection algorithm
with applications to signal recovery, Results in
Applied Mathematics, vol. 4, 2019, p. 100069.
[3] T. Helmig, F. Al-Sibai and R. Kneer,
Estimating sensor number and spacing for
inverse calculation of thermal boundary
conditions using the conjugate gradient
method, International Journal of Heat and
Mass Transfer, vol. 153, 2020, p. 119638.
[4] J. Cao and J. Wu, A conjugate gradient
algorithm and its applications in image
restoration, Applied Numerical Mathematics,
vol. 152, 2020, pp. 243-252.
[5] A. H. Ibrahim, P. Kumam, A. B. Abubakar, W.
Jirakitpuwapat and J. Abubakar, A hybrid
conjugate gradient algorithm for constrained
monotone equations with application in
compressive sensing, Heliyon, vol. 6, no. 3,
2020, p. e03466.
[6] I. A. R. Moghrabi, A New Preconditioned
Conjugate Gradient Method for Optimization,
IAENG International Journal of Applied
Mathematics, vol. 49, no. 1, 2019, pp. 29-36.
[7] O. Kardani, A. V. Lyamin and K. Krabbenhoft,
A Comparative Study of Preconditioning
Techniques for Large Sparse Systems Arising
in Finite Element Limit Analysis, IAENG
International Journal of Applied Mathematics,
vol. 43, no. 4, 2013, pp. 195-203.
[8] D. Kumar, S. Gupta and P. Sehgal, Improved
Training of Predictive ANN with Gradient
Techniques, in Lecture Notes in Engineering
and Computer Science : Proceedings of The
International MultiConference of Engineers
and Computer Scientists 2014, Hong Kong.
[9] R. Fletcher and C. M. Reeves, Function
minimization by conjugate gradients, The
Computer Journal, vol. 7, no. 2, 1964, pp. 149-
154.
[10] M. J. D. Powell, Restart Procedures for the Conjugate Gradient Method, Mathematical Programming, vol. 12, 1977, pp. 241-254.
[11] E. Polak and G. Ribiere, Note sur la convergence de méthodes de directions conjuguées, ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, vol. 3, no. R1, 1969, pp. 35-43.
[12] M. J. D. Powell, Nonconvex minimization
calculations and the conjugate gradient method,
Lecture Notes in Mathematics, vol. 1066, 1984,
pp. 122-141.
[13] J. C. Gilbert and J. Nocedal, Global
Convergence Properties of Conjugate Gradient
Methods for Optimization, SIAM J. Optimizat,
vol. 2, no. 1, 1992, pp. 21-42.
[14] M. Rivaie, M. Mamat, L. W. June and I. Mohd,
A new class of nonlinear conjugate gradient
coefficient with global convergence properties,
Appl. Math. comput., vol. 218, 2012, pp.
11323-11332.
[15] Z. Dai, Comments on a new class of nonlinear
conjugate gradient coefficients with global
convergence properties, Applied Mathematics
and Computation, vol. 276, 2016, pp. 297-300.
[16] A. Abashar, M. Mamat, M. Rivaie, I. Mohd and O. Omer, The Proof of Sufficient Descent Condition for a New Type of Conjugate Gradient Methods, AIP Conf. Proc., vol. 1602, 2014, pp. 296-303.
[17] M. A. Hery, M. Mamat and L. W. June, BFGS
Method : A New Search Direction, Sains
Malaysiana, vol. 40, no. 10, 2014, pp. 1591-
1597.
[18] M. Rivaie, M. Mamat and A. Abashar, A new class of nonlinear conjugate gradient coefficients with exact and inexact line searches, Appl. Math. Comput., vol. 268, 2015, pp. 1152-1163.
[19] M. Malik, M. Mamat, S. S. Abas, I. M.
Sulaiman and Sukono, A New Modification of
NPRP Conjugate Gradient Method for
Unconstrained Optimization, Advances in
Mathematics : Scientific Journal, vol. 9, no. 7,
2020, pp. 4955-4970.
[20] G. Zoutendijk, Nonlinear programming,
computational methods, Integer and Nonlinear
Programming, 1970, pp. 37-86.
[21] N. Andrei, An Unconstrained Optimization Test Functions Collection, Advanced Modeling and Optimization, vol. 10, no. 1, 2008, pp. 147-161.
[22] E. D. Dolan and J. J. More, Benchmarking
optimization software with performance
profiles, Math. Prog., vol. 91, 2002, pp. 201-
213.
Contribution of Individual Authors to the
Creation of a Scientific Article (Ghostwriting
Policy)
The authors contributed equally to the present research, at all stages from the formulation of the problem to the final findings and solution.
Sources of Funding for Research Presented in a
Scientific Article or Scientific Article Itself
No funding was received for conducting this study.
Conflicts of Interest
The authors have no conflicts of interest to declare
that are relevant to the content of this article.
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the
Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en
_US