
ς_S = v = \frac{1}{n}\, V e ,    (8)
supplied with
α_R = 1 ,  α_E = 2 ,  α_CO = α_CI = α_S = 1/2 ;    (9)
consequently γ = 1, thus step 2 can be reduced to x_0 = (x_1 + … + x_n)/n, i.e. to x_0 = x_{n+1} + v, whereas the vector (ς_R, ς_E, ς_CO, ς_CI) is identical with (α_R, α_E, α_CO, α_CI). This case, whose original terms reflection, expansion, outer/inner contraction and shrink have their intuitive meanings, is still significant in computational practice. Indeed, by (6) we have
γ^{-1} = \frac{1}{n}\, e^{\top} V^{-1} V e = 1 .    (10)
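As a cross-check of (8) and (10), the following minimal Python sketch assumes, in line with our notation, that V is the n × n matrix whose columns are the edge vectors x_i − x_{n+1}, i = 1, …, n, and that e = (1, …, 1)^⊤; it confirms that x_0 = x_{n+1} + v reproduces the centroid (x_1 + … + x_n)/n and that the identity (10) holds.

# illustrative sketch only; V collects edge vectors, e is the vector of ones
import numpy as np

n = 3
rng = np.random.default_rng(0)
X = rng.standard_normal((n, n + 1))        # columns x_1, ..., x_{n+1}
V = X[:, :n] - X[:, [n]]                   # edge vectors x_i - x_{n+1}
e = np.ones(n)

v = V @ e / n                              # v = (1/n) V e, cf. (8)
x0 = X[:, n] + v                           # x_0 = x_{n+1} + v
centroid = X[:, :n].mean(axis=1)           # (x_1 + ... + x_n)/n
print(np.allclose(x0, centroid))           # True
print(e @ np.linalg.solve(V, V @ e) / n)   # = 1, cf. (10)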
Unfortunately, various well-documented counterexamples of stagnation or divergence of this algorithm can be found both in the classical literature, [19], and in a later extensive survey, [9]. This has motivated numerous attempts to derive some “better convergent” variants of SNMS.
To improve SNMS, especially for n ≫ 2, [20] suggested ANMS (Adaptive Nelder-Mead Simplex) with
α_R = 1 ,  α_E = 1 + 2/n ,  α_CO = α_CI = 3/4 − 1/(2n) ,  α_S = 1/2    (11)
instead of (9); thus for n = 2 ANMS coincides with SNMS. Moreover, a certain sufficient descent property is required for the expansion and contraction steps.
tential stagnation trends in both SNMS and ANMS
can be handled using the sufficient decrease moti-
vated oriented restarts, [21], reinitializing the sim-
plex to a smaller one with orthogonal edges, or, in the
more complicated way, introducing certain pseudo-
expansion on so-called “ghost simplices”, additional
to S, coupled with the sequences of quasi-minimal
frames, reshaping the simplices Susing the matrix
QR-decomposition, [22], with help of the knowledge
of properties of positive bases and frames, [23]. Such
algorithm seems to be also implemented in the MAT-
LAB function fminsearch from the toolbox optimiza-
tion.
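For illustration, a minimal Python sketch in the spirit of such an oriented restart: the simplex is rebuilt at its best vertex with small orthogonal edges whose signs follow a simplex-gradient estimate of the descent direction. The sufficient-decrease test and the exact edge scaling of [21] are omitted here; the helper oriented_restart is ours and purely illustrative.

import numpy as np

def oriented_restart(best_x, simplex_gradient, h=1e-2):
    # rebuild the simplex at best_x with orthogonal edges +-h e_i,
    # oriented opposite to the estimated simplex gradient
    signs = -np.sign(simplex_gradient)
    signs[signs == 0] = 1.0
    edges = h * np.diag(signs)
    return np.vstack([best_x, best_x + edges])   # new (n+1) x n vertex array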
Unlike ANMS, GBNM (Globalized Bounded Nelder-Mead), [24], still preserving γ = 1, relies on another generalization (so-called globalization) of the search for appropriate α_R, α_E, α_CO, α_CI and α_S satisfying (7), together with probabilistic restarts, whereas RMNM (Restarted Modified Nelder-Mead), [25], works with an advanced deterministic modification of the shrink step. Still other attempts to improve SNMS rely on its hybridization with appropriate soft computing techniques, such as differential evolution, [26], genetic algorithms, [27], [28], [29], artificial neural networks (NM-ANN), [30], particle swarm optimization (NM-PSO), [31], or machine learning, [32], Chap. 12.
The existing convergence theory for SNMS and related algorithms is far from being satisfactory and comparable with that available for gradient approaches. The following results try to handle strictly convex F with bounded level sets. Namely, SNMS is verified to converge to the minimizer for n = 1 (where, in our notation, the directions of G and S always coincide), [33]; for n = 2 only the convergence of h by (3) to zero during SNMS can be guaranteed, [33]. The strongest result for n = 2 up to now, [34], seems to be: SNMS with disabled expansion always converges to the minimizer. The tedious computer-assisted proof of this result, presented on 25 pages, needs the step-by-step elimination of all possible non-convergent cases; one can see the origin of such a proof technique, [9], in the Sherlock Holmes argument, [35], Chap. 6: “How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?” Nevertheless, none of these results, including the recommended corrections, [21], [22], guarantees the convergence even for probably the simplest example with n = 2, i.e. that of the paraboloid of revolution
F(x_1, x_2) = x_1^2 + x_2^2 ,    (12)
with an arbitrary regular initial triangle!
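For illustration, a short Python sketch applies SciPy's Nelder-Mead implementation (whose default coefficients are essentially (9)) to (12) from a regular initial triangle; in practice it typically converges to the origin, the point above being that no available proof covers an arbitrary regular initial simplex.

import numpy as np
from scipy.optimize import minimize

F = lambda x: x[0] ** 2 + x[1] ** 2            # the paraboloid (12)

# a regular (equilateral) initial triangle centred away from the origin
c, h = np.array([3.0, 2.0]), 1.0
triangle = c + h * np.array([[0.0, 1.0],
                             [np.sqrt(3) / 2, -0.5],
                             [-np.sqrt(3) / 2, -0.5]])

res = minimize(F, triangle[0], method="Nelder-Mead",
               options={"initial_simplex": triangle,
                        "xatol": 1e-10, "fatol": 1e-10})
print(res.x, res.fun)                          # close to (0, 0) and 0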
Apart from the various advanced strategies for the initial choice of S, [36], [37], [38], avoiding the divergence (not only) for (12), some results working with the approximation of derivatives, [39], [40], can be useful, too. Unfortunately, the hypothetical standard approximation of F by a quadratic polynomial, whose minimum or other stationary point could be detected easily, needs to calculate N = 1 + 1 + 1 = 3 values for n = 1, N = 1 + 2 + 3 = 6 values for n = 2, N = 1 + 3 + 6 = 10 values for n = 3, etc., in general
N = \sum_{k=0}^{2} \binom{n+k-1}{k} = \sum_{k=0}^{2} \frac{(n+k-1)!}{k!\,(n-1)!}    (13)
  = 1 + n + \frac{1}{2}\, n(n+1) = 1 + \frac{1}{2}\, n(n+3) = \binom{n+1}{1} + \binom{n+1}{2} ,
thus e.g. N = 66 values for n = 10, which is difficult to implement into any effective numerical algorithm, namely if such values must be obtained in some non-trivial way, typically as outputs from the numerical analysis of partial differential equations of evolution, as mentioned above. However, the last equality in (13) exhibits the possibility of a simple choice of evaluation points: all vertices and all centres of one-dimensional edges of S.
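A short Python check of (13) and of its decomposition into the n + 1 vertices of S plus the \binom{n+1}{2} centres of its one-dimensional edges (the helper quadratic_points is ours):

from math import comb

def quadratic_points(n: int) -> int:
    # number of coefficients of a quadratic polynomial in n variables,
    # i.e. the left-hand sum in (13)
    return sum(comb(n + k - 1, k) for k in range(3))

for n in (1, 2, 3, 10):
    N = quadratic_points(n)
    # the closed forms and the vertex/edge-midpoint decomposition from (13)
    assert N == 1 + n * (n + 3) // 2 == (n + 1) + comb(n + 1, 2)
    print(n, N)        # 1 3, 2 6, 3 10, 10 66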
For comparison, N ∈ {1, 2, 3, 4} in every step of SNMS, as well as in our generalization of SNMS, except the shrink with N = n (as the number of edges of S), which is