New Iterative Methods for Solving Nonlinear Equations and Their
Basins of Attraction
O. ABABNEH
Zarqa University
Department of Mathematics
JORDAN
Abstract: The purpose of this paper is to propose a new modified Newton's method for solving nonlinear equations that is free from the second derivative. Convergence results show that the order of convergence is four. Several numerical examples are given to illustrate that the new iterative algorithms are effective. Finally, we present the basins of attraction to observe the fractal behavior and dynamical aspects of the proposed algorithms.
Key–Words: Nonlinear equations, Newton's method, Basins of attraction, Order of convergence.
Received: March 15, 2021. Revised: November 10, 2021. Accepted: December 16, 2021. Published: January 3, 2022.
1 Introduction
Solving nonlinear equations is a non-trivial task with important applications in various branches of physics and engineering. Iterative methods are among the most important techniques for approximating the solutions of nonlinear equations [2, 3, 20], and this area of research has grown considerably over the last few years. In many applications in physics and engineering, numerical methods that depend on second derivatives are of limited practical use. In this paper, we consider iterative methods to find a simple root of a nonlinear equation f(x) = 0, where f is a scalar function. The classical Newton's method is given by
classical Newtons method is given by
xn+1 =xnf(xn)
f0(xn),n = 0,1,2,... (1)
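(For readers who want to experiment numerically, a minimal sketch of iteration (1) in Python is given below; it is our own illustrative code, with the tolerance and test function chosen arbitrarily, not part of the original paper.)

def newton(f, fprime, x0, tol=1e-14, max_iter=100):
    # Classical Newton iteration (1): x_{n+1} = x_n - f(x_n)/f'(x_n).
    x = x0
    for n in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter

# Illustrative use on f(x) = x^3 + 4x^2 - 10 (one of the test functions used later in Table 1)
root, iters = newton(lambda x: x**3 + 4*x**2 - 10, lambda x: 3*x**2 + 8*x, x0=1.5)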
This is an important and basic method, which converges quadratically. To improve the local order of convergence of Newton's method, many third-order methods have been developed. A family of third-order methods, called Chebyshev-Halley methods [14], is defined as [15, 9]

x_{n+1} = x_n - \left(1 + \frac{L_f(x_n)}{1 - \alpha L_f(x_n)}\right)\frac{f(x_n)}{f'(x_n)}, \qquad \alpha \in \mathbb{R},   (2)
where

L_f(x_n) = \frac{1}{2}\,\frac{f''(x_n)\,f(x_n)}{f'(x_n)^2}.   (3)
This family includes the classical Chebyshev's method (α = 0), Halley's method (α = 1) and the super-Halley method (α = 2). It is clear that to implement (2), one has to evaluate the second derivative of the function, which can create some problems. To overcome this drawback, several techniques have been developed [7, 12, 27, 28, 32, 21, 31]. On the other hand, the super-Halley method can be viewed as a particular case of the following general family:

x_{n+1} = x_n - \left(1 + L_f(x_n) + \frac{\beta L_f(x_n)^2}{1 - \alpha L_f(x_n)}\right)\frac{f(x_n)}{f'(x_n)}, \qquad \alpha, \beta \in \mathbb{R}.   (4)
This family is known to be of third order, but it depends on the second derivative, so its use is severely restricted in practical applications. It is therefore important and interesting to develop iterative methods which are free from the second derivative and whose order is higher than three if possible. The aim of the present paper is to improve the third-order family (4) and obtain a family of fourth-order methods free from the second derivative. Moreover, per iteration the new methods require two evaluations of the function and just one evaluation of its first derivative. The remaining part of the paper is organized as follows. In Section 2, the new methods are described. In Section 3, the convergence analysis is given and some examples of the new methods are suggested. In Section 4, the proposed methods are tested on several functions and the results are compared with other methods. In Section 5, the basins of attraction are presented, and Section 6 concludes the paper.
2 Description of the methods
In [8], Chun suggested new fourth-order modifications of Newton's method, obtained by combining any two existing fourth-order methods. As an illustrative example, Chun considered the double-Newton method with fourth-order convergence [43], given by

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} - \frac{f(y_n)}{f'(y_n)},   (5)
the fourth-order method given by

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} - \left[1 + 2\,\frac{f(y_n)}{f(x_n)} + \frac{f(y_n)^2}{f(x_n)^2}\right]\frac{f(y_n)}{f'(x_n)},   (6)
and King's fourth-order family of methods [26], given by

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} - \frac{f(x_n) + \beta f(y_n)}{f(x_n) + (\beta - 2) f(y_n)}\,\frac{f(y_n)}{f'(x_n)},   (7)
where β ∈ R. Approximately equating the correction terms of methods (5) and (6), he obtained an approximation to f'(y_n):

f'(y_n) \approx \frac{f'(x_n)\,f(x_n)^2}{f(x_n)^2 + 2 f(x_n) f(y_n) + f(y_n)^2}.   (8)
Approximately equating the correction terms of methods (5) and (7), he obtained another approximation to f'(y_n):

f'(y_n) \approx \frac{f'(x_n)\left[f(x_n) + (\beta - 2) f(y_n)\right]}{f(x_n) + \beta f(y_n)}.   (9)
Finally, he applied the respective approximations (8) and (9) to the fifth-order method proposed in [18], given by

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} - \frac{f'(y_n) + 3 f'(x_n)}{5 f'(y_n) - f'(x_n)}\,\frac{f(y_n)}{f'(x_n)}.   (10)
Per iteration this method requires two evaluations of the function and two evaluations of its first derivative. Using (8) in (10), one obtains the new method

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} - \frac{4 f(x_n)^2 + 6 f(x_n) f(y_n) + 3 f(y_n)^2}{4 f(x_n)^2 - 2 f(x_n) f(y_n) - f(y_n)^2}\,\frac{f(y_n)}{f'(x_n)}.   (11)
Using (9) in (10), one obtains the new family of methods

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} - \frac{2 f(x_n) + (2\beta - 1) f(y_n)}{2 f(x_n) + (2\beta - 5) f(y_n)}\,\frac{f(y_n)}{f'(x_n)},   (12)

where β ∈ R.
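As a quick sanity check on this algebra (our own verification, assuming the SymPy library is available and using our own symbol names), one can confirm symbolically that substituting approximation (8) into (10) reproduces the rational factor appearing in (11):

import sympy as sp

fx, fy, fpx = sp.symbols('fx fy fpx', positive=True)   # stand-ins for f(x_n), f(y_n), f'(x_n)
fpy = fpx * fx**2 / (fx + fy)**2                        # approximation (8) for f'(y_n)
factor_10 = (fpy + 3*fpx) / (5*fpy - fpx)               # factor in the fifth-order method (10)
factor_11 = (4*fx**2 + 6*fx*fy + 3*fy**2) / (4*fx**2 - 2*fx*fy - fy**2)   # factor in (11)
print(sp.simplify(factor_10 - factor_11))               # expected output: 0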
Noor [36] proposed a new fourth-order method defined by

x_{n+1} = y_n - \frac{f(y_n)}{f'(x_n)} - \frac{f(y_n)}{f'(x_n)}\left(1 - \frac{f'(y_n)}{f'(x_n)}\right) - \frac{1}{2}\left(\frac{f(y_n)}{f'(x_n)}\right)^{2}\frac{f''(y_n)}{f'(x_n)},   (13)

where y_n = x_n - f(x_n)/f'(x_n).
It is clear that to implement (13), one has to evaluate the second derivative of the function, which can create some problems. In [34], a second-derivative-free method is obtained by approximating the second derivative f''(y_n) in (13) by

f''(y_n) \approx \frac{f'(y_n) - f'(x_n)}{y_n - x_n}.   (14)
Noor and Khan [35] used the same approximation (14) of the second derivative in (13) to suggest the following iterative method:

x_{n+1} = y_n - \frac{2 f(y_n)}{f'(x_n)} + \frac{f(y_n) f'(y_n)}{f'(x_n)^2} + \frac{f'(y_n) - f'(x_n)}{2 f(x_n)}\left(\frac{f(y_n)}{f'(x_n)}\right)^{2}.   (15)
In [1], we rederived the method (15) to obtain a family of fourth-order methods free from the second derivative; per iteration these new methods require two evaluations of the function and just one evaluation of its first derivative. They are obtained as follows. Combining (8) and (15), we get the new iterative method

x_{n+1} = y_n - \frac{2 f(y_n)}{f'(x_n)} + \frac{f(x_n)^2 f(y_n)}{f'(x_n)\left(f(x_n) + f(y_n)\right)^2} - \frac{f(y_n) f'(x_n)\left(2 f(x_n) + f(y_n)\right)}{2 f(x_n)\left(f(x_n) + f(y_n)\right)^2}\left(\frac{f(y_n)}{f'(x_n)}\right)^{2}.   (16)
Using (9) in (15), we get a new family of iterative methods

x_{n+1} = y_n - \frac{2 f(y_n)}{f'(x_n)} + \frac{f(y_n)\left(f(x_n) + (\beta - 2) f(y_n)\right)}{f'(x_n)\left(f(x_n) + \beta f(y_n)\right)} - \frac{f'(x_n) f(y_n)}{f(x_n)\left(f(x_n) + \beta f(y_n)\right)}\left(\frac{f(y_n)}{f'(x_n)}\right)^{2}.   (17)
Numerical results show that the number of iterations of the new methods is always less than that of the classical Newton's method and of the method in (15).
2.1 New approximation of the second derivative and derivation of new methods
Let us consider the third-order method defined by

x_{n+1} = x_n - \frac{f(x_n)^2}{f'(x_n)\left(f(x_n) - f(y_n)\right)},   (18)

where

y_n = x_n - \frac{f(x_n)}{f'(x_n)},   (19)

which is often called the Newton-Steffensen method, and

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} - \frac{f(x_n)^2 f''(x_n)}{2 f'(x_n)^3},   (20)

which is known as the Householder iterative method [19]. We approximately equate the correction terms of both methods to obtain the following approximate expression:

\frac{f(x_n)^2}{f'(x_n)\left(f(x_n) - f(y_n)\right)} \approx \frac{f(x_n)}{f'(x_n)} + \frac{f(x_n)^2 f''(x_n)}{2 f'(x_n)^3}.   (21)

This gives the new approximation

f''(x_n) \approx \frac{2 f'(x_n)^2 f(y_n)}{f(x_n)\left(f(x_n) - f(y_n)\right)}.   (22)
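A rough numerical illustration of approximation (22) (our own check, not part of the original derivation; the test function and evaluation point are arbitrary choices) shows that the right-hand side is close to f''(x_n) when x_n is near a root:

# Rough check of approximation (22) with f(x) = x^3 + 4x^2 - 10 near its root 1.3652...
f   = lambda x: x**3 + 4*x**2 - 10
fp  = lambda x: 3*x**2 + 8*x          # exact first derivative
fpp = lambda x: 6*x + 8               # exact second derivative

x = 1.4                               # a point close to the root
y = x - f(x) / fp(x)                  # Newton step (19)
approx = 2 * fp(x)**2 * f(y) / (f(x) * (f(x) - f(y)))   # right-hand side of (22)
print(fpp(x), approx)                 # the two values should be close when x is near the root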
A family of third-order methods, called Chebyshev-Halley methods [14], is defined as [15, 9]

x_{n+1} = x_n - \left(1 + \frac{L_f(x_n)}{1 - \alpha L_f(x_n)}\right)\frac{f(x_n)}{f'(x_n)}, \qquad \alpha \in \mathbb{R},   (23)

where

L_f(x_n) = \frac{1}{2}\,\frac{f''(x_n)\,f(x_n)}{f'(x_n)^2}.   (24)
This family includes the classical Chebyshev's method (α = 0), Halley's method (α = 1) and the super-Halley method (α = 2). It is clear that to implement (23), one has to evaluate the second derivative of the function. On the other hand, the super-Halley method can be viewed as a particular case of the following general family:

x_{n+1} = x_n - \left(1 + L_f(x_n) + \frac{\beta L_f(x_n)^2}{1 - \alpha L_f(x_n)}\right)\frac{f(x_n)}{f'(x_n)}, \qquad \alpha, \beta \in \mathbb{R}.   (25)
This family is known to be of third order, but it depends on the second derivative, so its use is severely restricted in practical applications. It is therefore important and interesting to develop iterative methods which are free from the second derivative and whose order is higher than three if possible; this is the main motivation of this paper.
Using (22) in (24), we obtain

L_f(x_n) = \frac{1}{2}\,\frac{f''(x_n)\,f(x_n)}{f'(x_n)^2} \approx \frac{f(y_n)}{f(x_n) - f(y_n)}.   (26)

Using (26) in (25), we obtain a new family of second-derivative-free methods defined as follows:

x_{n+1} = y_n - \left(1 + \frac{\beta f(y_n)}{f(x_n) - (1 + \alpha) f(y_n)}\right)\frac{f(x_n)\,f(y_n)}{f'(x_n)\left(f(x_n) - f(y_n)\right)},   (27)

where

y_n = x_n - \frac{f(x_n)}{f'(x_n)}.   (28)
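Before turning to the analysis, a compact sketch of one step of family (27) may be helpful (this is our own illustrative Python code, not the implementation used in the paper; β is kept as a parameter even though the theorem below fixes β = 1):

def family27_step(f, fprime, x, alpha, beta=1.0):
    # One step of family (27):
    #   y      = x - f(x)/f'(x)
    #   x_next = y - [1 + beta*f(y)/(f(x) - (1+alpha)*f(y))]
    #                * f(x)*f(y) / (f'(x)*(f(x) - f(y)))
    fx, fpx = f(x), fprime(x)          # one function and one derivative evaluation
    y = x - fx / fpx
    fy = f(y)                          # second (and last) function evaluation
    bracket = 1.0 + beta * fy / (fx - (1.0 + alpha) * fy)
    return y - bracket * fx * fy / (fpx * (fx - fy))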
For the methods defined by (27), we have the following convergence result.
3 Convergence analysis
In order to establish the order of convergence and other properties of the new methods, we state some important definitions. A sequence x_0, x_1, x_2, ..., x_n, ... of approximations to the zero α, generated by an iterative function φ, presumably converges to α. More precisely, if lim_{n→∞} x_n = α, we say that the sequence of approximations {x_n} is convergent. Convergence conditions depend on the form of the iteration function, its properties and the chosen initial approximation.

Definition 1 [40] Let φ : R → R be an iterative function which defines the iterative process x_{n+1} = φ(x_n). If there exist a real number p and a nonzero constant λ such that lim_{n→∞} |φ(x_n) - α| / |x_n - α|^p = λ, then p is called the order of convergence and λ the asymptotic error constant. If p = 2 or 3, the convergence is said to be quadratic or cubic, respectively [39, 40].
Let e_n = x_n - α be the error of the approximation at the n-th iterative step. Then e_{n+1} = x_{n+1} - α = φ(x_n) - α. Hence,

lim_{n→∞} |φ(x_n) - α| / |x_n - α|^p = lim_{n→∞} |x_{n+1} - α| / |x_n - α|^p = lim_{n→∞} |e_{n+1}| / |e_n|^p = λ.
Definition 2 [39] Let r be the number of function evaluations per iteration of the method. The efficiency of the method is measured by the Efficiency Index, defined as E = p^{1/r}, where p is the order of convergence.
Theorem 3 Let α ∈ I be a simple zero of a sufficiently differentiable function f : I → R on an open interval I, and set e_n = x_n - α and c_k = f^{(k)}(α)/(k!\,f'(α)). If x_0 is sufficiently close to α, then the family defined by (27) is of fourth-order convergence for β = 1 and any value of the parameter α ∈ R.
Proof. Using the Taylor expansion of f(x_n) about α and taking into account that f'(α) ≠ 0, we have

f(x_n) = f'(\alpha)\left(e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4\right) + O(e_n^5).   (29)
Furthermore, we have

f'(x_n) = f'(\alpha)\left(1 + 2 c_2 e_n + 3 c_3 e_n^2 + 4 c_4 e_n^3\right) + O(e_n^4)   (30)

and

\frac{f(x_n)}{f'(x_n)} = e_n - c_2 e_n^2 + 2\left(c_2^2 - c_3\right) e_n^3 + \left(7 c_2 c_3 - 4 c_2^3 - 3 c_4\right) e_n^4 + O(e_n^5).   (31)
Substituting (31) into (28) yields

y_n - \alpha = c_2 e_n^2 - 2\left(c_2^2 - c_3\right) e_n^3 - \left(7 c_2 c_3 - 4 c_2^3 - 3 c_4\right) e_n^4 + O(e_n^5).   (32)
Expanding f(y_n) about α and using (32), we have

f(y_n) = f'(\alpha)\left[c_2 e_n^2 - 2\left(c_2^2 - c_3\right) e_n^3 - \left(7 c_2 c_3 - 5 c_2^3 - 3 c_4\right) e_n^4\right] + O(e_n^5).   (33)
Using (29)-(33) in method (27), we obtain the following error equation:

e_{n+1} = (1 - \beta)\, c_2^2 e_n^3 - c_2\left(3 c_2^2 - 5\beta c_2^2 + \alpha\beta c_2^2 - 3 c_3 + 4\beta c_3\right) e_n^4 + O(e_n^5).   (34)
Therefore, the iteration function defined by (27) is of order at least four for the value of β that makes the coefficient of e_n^3 in (34) vanish. Thus, we need β = 1, which means that the method defined by (27) is of fourth order. This completes the proof of the theorem.
If we consider the definition of the efficiency index [13] as p^{1/w}, where p is the order of the method and w is the number of function evaluations per iteration required by the method, then the new family (27) has efficiency index 4^{1/3} ≈ 1.5874.
3.1 Some examples
As examples of family (27), we take β = 1 and choose different values of α to obtain new fourth-order methods as follows.
For α = 1, we obtain the new fourth-order method

y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(x_n)}{f(x_n) - 2 f(y_n)}\,\frac{f(y_n)}{f'(x_n)}.   (35)

For α = 0, we obtain the new fourth-order method

y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(x_n)^2}{\left(f(x_n) - f(y_n)\right)^2}\,\frac{f(y_n)}{f'(x_n)}.   (36)

For α = -1, we obtain the new fourth-order method

y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(x_n) + f(y_n)}{f(x_n) - f(y_n)}\,\frac{f(y_n)}{f'(x_n)}.   (37)
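Written out directly, the three special cases (35)-(37) correspond to the following one-step maps (again a sketch in our own Python notation, assuming β = 1; these are not taken from the paper's code):

def oa1_step(f, fp, x):
    # Method (35): alpha = 1, beta = 1
    fx, fpx = f(x), fp(x)
    y = x - fx / fpx
    fy = f(y)
    return y - fx / (fx - 2*fy) * fy / fpx

def oa2_step(f, fp, x):
    # Method (36): alpha = 0, beta = 1
    fx, fpx = f(x), fp(x)
    y = x - fx / fpx
    fy = f(y)
    return y - fx**2 / (fx - fy)**2 * fy / fpx

def oa3_step(f, fp, x):
    # Method (37): alpha = -1, beta = 1
    fx, fpx = f(x), fp(x)
    y = x - fx / fpx
    fy = f(y)
    return y - (fx + fy) / (fx - fy) * fy / fpx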
4 Numerical Results
In this section, we present the results of some numerical tests to compare the efficiency of the new methods. The numerical computations reported here have been carried out in a Mathematica environment. The stopping criterion has been taken as |x_{n+1} - x_n| < ε, with the fixed tolerance ε = 10^{-14}. The test functions used are listed in Table 1. The results in Table 2 show that, for most of the functions we tested, the new methods proposed in this paper perform better than the other existing methods.
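A comparison in the spirit of Table 2 can be reproduced, up to implementation details, with a simple driver that repeats a one-step map until |x_{n+1} - x_n| < 10^{-14}; the sketch below is our own (any of the step functions above, or the Newton step defined here, can be passed in; the example function and starting point are borrowed from Tables 1 and 2):

import math

def newton_step(f, fp, x):
    return x - f(x) / fp(x)

def iterations_needed(step, f, fp, x0, tol=1e-14, max_iter=100):
    # Count iterations of a one-step map until |x_{n+1} - x_n| < tol;
    # return None when there is no convergence within max_iter ("NC" in Table 2).
    x = x0
    for n in range(1, max_iter + 1):
        x_new = step(f, fp, x)
        if abs(x_new - x) < tol:
            return n
        x = x_new
    return None

# Example: Newton's method on f_4(x) = cos(x) - x with x_0 = 1.0 (cf. Tables 1 and 2)
print(iterations_needed(newton_step, lambda x: math.cos(x) - x,
                        lambda x: -math.sin(x) - 1.0, 1.0))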
In some cases, the present methods (OA1), (OA2) and (OA3) also work better than the (TM) method. Note that NC in Table 2 means that the method does not converge to the root. As a conclusion, we can infer that the new methods perform better, in accordance with the theoretical analysis of the order.
Table 1: Test functions and their roots α.

f_1(x) = x^3 + 4x^2 - 10                       α = 1.365230013414096
f_2(x) = sin^2(x) - x^2 + 1                    α = 1.404491648215341
f_3(x) = x^2 - e^x - 3x + 2                    α = 0.257530285439860
f_4(x) = cos(x) - x                            α = 0.739085133215160
f_5(x) = (x - 1)^3 - 2                         α = 2.259921049894873
f_6(x) = x e^{x^2} - sin^2(x) + 3 cos(x) + 5   α = -1.207647827130919
f_7(x) = (x + 2) e^x - 1                       α = -0.442854401002388
f_8(x) = e^{x^2 + 7x - 30} - 1                 α = 3.000000000000000
It should be noted, however, that per iteration the new methods require just two function evaluations and one evaluation of the first derivative. Thus, the present methods can be of practical interest.
Table 2: Comparison of various iterative schemes with the new methods: number of iterations needed to approximate the zero. Newton's method (NM), Chebyshev's method (2) (CM), a modified super-Halley fourth-order optimal method (MH) [9], the Traub-Ostrowski method (TM) [38], and the new methods (35)-(37), denoted (OA1), (OA2) and (OA3), respectively.

f(x), x_0            NM   CM   MH   TM   OA1  OA2  OA3
f_1, x_0 = -0.3      53    8   75   60   31   11    7
f_2, x_0 =  2.0       5    4    3    3    3    3    3
f_2, x_0 =  1.0       6    5    3    3    3    3    4
f_3, x_0 =  1.0       4    3    3    2    2    2    2
f_3, x_0 =  2.0       5    3    4    3    3    3    3
f_4, x_0 =  1.0       4    3    3    2    2    2    2
f_4, x_0 =  1.7       4    4    3    3    3    3    3
f_5, x_0 =  0.3       9   30    5   10   10    3    5
f_5, x_0 =  0.0      NC    5   NC    9    9    6    7
f_6, x_0 = -1.0       5    4    3    3    3    3    3
f_6, x_0 = -3.0      14    9    8    6    6    7    8
f_7, x_0 =  2.0       8    6    5    4    4    4    5
f_7, x_0 = -0.5       4    3    3    2    2    2    2
f_8, x_0 =  3.5      12    8    7    5    5    6    7
f_8, x_0 =  1.0      19   13   11    8    8   10   11
5 Basins of Attraction
The basin of attraction of a root is the region of the phase space, in which the iterations are defined, such that any point in that region is eventually iterated into the corresponding attractor. It can be used to compare the efficiency of various algorithms; see, for example, [4, 5].
The aim of this section is to represent the basins of attraction of the proposed algorithms by using graphical tools such as Mathematica. The colors of the basins of attraction vary depending on the number of iterations necessary to obtain the approximate solution of a polynomial with a given accuracy. Detailed studies of basins of attraction, their theoretical background and their applications can be found in [16, 17, 22, 23].
We assign a color to the basin of attraction of each root, and we make the same color lighter or darker depending on the number of iterations needed to reach the root with the fixed precision required.
We apply the iterative methods proposed in the previous sections to find the complex roots of the functions
f(z) = z^3 - 1 \qquad \text{and} \qquad f(z) = e^{\sin z / 100}\left(z^3 - 1\right).

We start by taking a rectangle D ⊂ C and apply the iterative methods starting at z_0 ∈ D. In practice, we take a grid of 1024 × 1024 points in D and use these points as z_0. All the figures have been generated using the computer program Mathematica 10.0, with a tolerance ε = 10^{-8} and a maximum of 40 iterations. We denote the three roots by ζ_k = e^{2kπi/3}, k = 0, 1, 2. Then, we take z_0 in the corresponding rectangle and iterate z_{n+1} until |z_n - ζ_k| < ε for some k ∈ {0, 1, 2}.
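The figures themselves were generated in Mathematica; the following Python outline (our own sketch, not the code used for the published figures) indicates how such basin pictures can be computed for f(z) = z^3 - 1:

import numpy as np

def basins_of_attraction(step, n=1024, extent=2.0, tol=1e-8, max_iter=40):
    # For each starting point z0 on an n x n grid over [-extent, extent]^2, record
    # which cube root of unity the iteration converges to (-1 if none) and how many
    # iterations were needed; the two arrays can then be colored as in Figures 1-2.
    # (The paper uses a 1024 x 1024 grid; reduce n for quick experiments.)
    roots = np.exp(2j * np.pi * np.arange(3) / 3)       # zeta_k = e^{2*k*pi*i/3}
    xs = np.linspace(-extent, extent, n)
    ys = np.linspace(-extent, extent, n)
    root_index = -np.ones((n, n), dtype=int)
    iterations = np.full((n, n), max_iter, dtype=int)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            z = complex(x, y)
            for k in range(1, max_iter + 1):
                z = step(z)
                dist = np.abs(roots - z)
                if dist.min() < tol:
                    root_index[i, j] = int(dist.argmin())
                    iterations[i, j] = k
                    break
    return root_index, iterations

# Newton's iteration for f(z) = z^3 - 1, used here purely as an illustration
newton_cubic = lambda z: z - (z**3 - 1.0) / (3.0 * z**2)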
Figure 1: Dynamical planes of NM, MH, CM, TM, OA1, OA2 and OA3 on f(z) = z^3 - 1.
In the first example, we have applied all the proposed algorithms to obtain the simple roots of the cubic complex polynomial. The resulting basins of attraction are presented in Figure 1. For each root of the considered polynomial there exists a unique color in the corresponding basin of attraction, as can easily be seen from Figure 1. The next example also has three roots; its basins of attraction are presented in Figure 2, where three distinct colors corresponding to the distinct roots can be seen.
Figure 2: Dynamical planes of NM, MH, CM, TM, OA1, OA2 and OA3 on f(z) = e^{\sin z / 100}(z^3 - 1).
6 Conclusion
In this paper, new optimal fourth-order root-finding algorithms for solving nonlinear equations have been proposed.
Table 2 shows the good performance of the new methods, in terms of the number of iterations and the order of convergence, as compared with other well-known existing methods. We have also presented the basins of attraction of the newly developed methods for some complex functions, which describe the fractal behavior and dynamical aspects of the proposed methods.
References:
[1] O.Y. Ababneh, New Fourth Order Iterative
Methods Second Derivative Free, Journal of Ap-
plied Mathematics and Physics. 4 (2016), 519-
523
[2] I.K. Argyros, A.A. Magreñán, A Contemporary Study of Iterative Methods, Academic Press, Cambridge, MA, USA, (2018).
[3] I.K. Argyros, A.A. Magreñán, Iterative Methods and Their Dynamics with Applications: A Contemporary Study, CRC Press, Boca Raton, FL, USA, (2017).
[4] C. Chun, B. Neta, Comparative study of methods of various orders for finding simple roots of nonlinear equations, J. Appl. Anal. Comput. 9 (2019), 400-427.
[5] C. Chun, B. Neta, Comparative study of methods of various orders for finding repeated roots of nonlinear equations, J. Comput. Appl. Math. 340 (2018), 11-42.
[6] C. Chun, Y. Ham, Some second-derivative-free variants of super-Halley method with fourth-order convergence, Appl. Math. Comput. 195 (2008), 537-541.
[7] C. Chun, Some third-order families of iterative
methods for solving non-linear equations, Appl.
Math. Comput. 188(2007), 924-933.
[8] C. Chun, Y. Ham, Some fourth-order modifica-
tions of Newtons method, Appl. Math. Comput.
197 (2008), 654-658.
[9] D. Chen, I.K. Argyros, Q.S. Qian, A local con-
vergence theorem for the Super-Halley method
in a Banach space, Appl. Math. Lett. 7(5)(1994).
[10] C. Chun, Some third-order families of iterative
methods for solving non linear equations, Appl.
Math. Comput. 188 (2007), 924-933.
[11] J.A. Ezquerro, M.A. Hernandez, A uniparametric Halley-type iteration with free second derivative, Int. J. Pure Appl. Math. 6 (2003), 103-114.
[12] J. A. Ezquerro, M. A. Hernandez, A uniparamet-
ric Halley-type iteration with free second deriva-
tive, Int. J. Pure Appl. Math. 69 (2003), 103-114.
[13] W. Gautschi, Numerical Analysis: An Introduc-
tion, Birkhauser, (1997).
[14] J.M. Gutierrez, M.A. Hernandez, A family of Chebyshev-Halley type methods in Banach spaces, Bull. Austral. Math. Soc. 55 (1997), 113-130.
[15] J.M. Gutierrez, M.A. Hernandez, An acceleration of Newton's method: Super-Halley method, Appl. Math. Comput. 117 (2001), 223-239.
[16] K. Gdawiec, Fractal patterns from the dynamics of combined polynomial root finding methods, Nonlinear Dynamics, 90 (2017), 2457-2479.
[17] K. Gdawiec, W. Kotarski, A. Lisowska, Visual analysis of the Newton's method with fractional order derivatives, Symmetry, 11 (2019).
[18] Y. Ham, C. Chun, A fifth-order iterative method for solving nonlinear equations, Appl. Math. Comput. 194 (2007), 287-290.
[19] A.S. Householder, The Numerical Treatment
of a Single Nonlinear Equation, McGraw-Hill,
New York, (1970).
[20] Ioannis K. Argyros, Santhosh George, Extended and Unified Local Convergence for Newton-Kantorovich Method under w-Conditions with Applications, WSEAS Transactions on Mathematics, 16 (2017), 248-256.
[21] Jain, Pankaj and Bahadur Chand, Prem. Deriva-
tive free iterative methods with memory hav-
ing higher R-order of convergence. Inter-
national Journal of Nonlinear Sciences and
Numerical Simulation, 21 (2020), 641–648.
https://doi.org/10.1515/ijnsns-2019-0174
[22] B. Kalantari, E. H. Lee, Newton-Ellipsoid poly-
nomiography, Journal of Mathematics and the
Arts, 13 (2019), 336-352.
[23] S. M. Kang, A. Naseem, W. Nazeer, M. Munir,
and C. Y. Jung, Polynomiography of some iter-
ative methods, International Journal of Mathe-
matical Analysis, 11 (2017), 133-149.
[24] Y. C. Kwun, Z. Majeed, A. Naseem, W. Nazeer,
and S. M. Kang, New iterative methods using
variational iteration technique and their dynam-
ical behavior, International Journal of Pure and
Applied Mathematics, 116 (2017), 1093-1113.
[25] D. Kincaid, W. Cheney, Numerical Analysis: Mathematics of Scientific Computing, AMS, (2009).
[26] R.F. King, A family of fourth order methods for nonlinear equations, SIAM J. Numer. Anal. 10 (5) (1973), 876-879.
[27] J.S. Kou, Y.T. Li, X.H. Wang, A uniparametric Chebyshev-type method free from second derivatives, Appl. Math. Comput. 179 (2006), 296-300.
[28] J.S. Kou, Y.T. Li, X.H. Wang, Modified Halley's method free from second derivative, Appl. Math. Comput. 183 (2006), 704-708.
[29] J.S. Kou, Y.T. Li, X.H. Wang, A uniparamet-
ric Chebyshev-type method free from second
derivatives, Appl. Math. Comput. 179 (2006),
296–300.
[30] J.S. Kou, Y.T. Li, X.H. Wang, Modified Halley s
method free from second derivative, Appl. Math.
Comput. 183 (2006), 704–708.
[31] S. Li, Fourth-order iterative method without calculating the higher derivatives for nonlinear equation, Journal of Algorithms & Computational Technology, (2019). https://doi.org/10.1177/1748302619887686
[32] B. Neta, A New Derivative-Free Method to Solve Nonlinear Equations, Mathematics, 9 (2021). https://doi.org/10.3390/math9060583
[33] M.A. Noor, Some iterative methods for solving
nonlinear equations using homotopy perturba-
tion method, Int. J. Comp. Math. 87 (2010), 141-
149.
[34] M.A. Noor, V. Gupta, Modified Householder it-
erative method free from second derivatives for
nonlinear equations, Appl. Math. Comput. 190
(2007), 1701-1706.
[35] M.A. Noor, W.A. Khan, New iterative methods
for solving nonlinear equation by using homo-
topy perturbation method, Appl. Math. Comput.
219 (2012), 3565–3574.
[36] M.A. Noor, Some iterative methods for solving
nonlinear equations using homotopy perturba-
tion method, Int. J. Comp. Math. 87 (2010), 141-
149.
[37] Ourida Ourahmoun, Newton Raphson method used to model organic solar cells under Matlab software, WSEAS Transactions on Circuits and Systems, 19 (2020), 181-185.
[38] A.M. Ostrowski, Solution of Equations in Euclidean and Banach Spaces, third ed., Academic Press, New York, (1973).
[39] A.M Ostrowski, Solutions of Equations and Sys-
tem of Equations, Academic Press, New York,
(1960).
[40] M.S. Petkovic, B. Neta, L.D. Petkovic, J. Dzunic, Multipoint Methods for Solving Nonlinear Equations, (2012).
[41] J.R. Sharma, A composite third order Newton-Steffensen method for solving nonlinear equations, Appl. Math. Comput. 169 (2005), 242-246.
[42] F. Soleymani, A Novel and Precise Sixth-Order Method for Solving Nonlinear Equations, International Journal of Mathematical Models and Methods in Applied Sciences, 5 (2011), 730-737.
[43] J.F. Traub, Iterative Methods for the Solution of
Equations, New York, (1977).
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the Creative
Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en_US