Modified Generalized Way for Optimization Problem
1ALEXANDER ZEMLIAK, 2CHRISTIAN SERRANO
1Dept. of Physics and Mathematics, Autonomous University of Puebla, Puebla, MEXICO
2Dept. of Electronic, Autonomous University of Puebla, Puebla, MEXICO
Abstract: The solution to the problem of circuit optimization is obtained on the basis of a combination of a genetic
algorithm (GA) and the idea of generalized optimization, developed earlier for the deterministic case. It is shown that
such a GA modification allows one to overcome premature convergence to local minima and to increase the
minimization accuracy by several orders of magnitude. In this case, the GA forms a set of populations determined by the fitness function, which is defined in different ways depending on the strategy chosen within the framework of generalized optimization. The way of setting the fitness function, as well as the length and structure of the chromosomes, is determined by a control vector artificially introduced within this framework. This vector determines the number of independent variables of the optimization problem and the method for calculating the fitness function. It allows one to build compound strategies that significantly increase the accuracy of the resulting solution. This, in turn, makes it possible to reduce the number of generations required during the operation of the GA and to minimize the processor time for solving the problem of circuit optimization.
Keywords: generalized optimization, GA, circuit optimization, control vector, set of strategies
Received: June 25, 2021. Revised: June 28, 2022. Accepted: July 25, 2022. Published: September 9, 2022.
1. Introduction
One of the major challenges in designing a large system is
the excessive computing time required to reach the optimum
point in the design process. This problem is important as it has
many applications, for example, in the design of VLSI circuits.
The design process starts with an initial approximation, obtained by analyzing the circuit at the initial point, and then continues by adjusting the system parameters until the necessary performance features defined in the specification are obtained. The process of setting parameters is usually
based on the optimization procedure. So, the process of design-
by-analysis can be realized instead of the difficult problem of
synthesis of a complex system. Mathematically, this process is
defined as the minimization of a special objective function that
includes necessary properties of the designed circuit. It means
that any circuit design strategy includes two main blocks:
analysis of the mathematical model of the circuit and an
optimization procedure that reaches the minimum point of the
objective function. The minimum value of this function can
ensure that the required circuit characteristics are obtained. The
interaction of the circuit analysis block and the optimization
procedure block forms the circuit optimization process.
Optimization methods for systems of various natures can be
divided into two main groups: deterministic optimization
algorithms and stochastic search algorithms. Some of the
weaknesses of classical deterministic optimization algorithms
are the requirement for a good starting point in the parameter
space, the difficulty of finding the global minimum, and a long
execution time. To overcome these problems, some special methods have been developed: for example, a method that determines the initial point of the optimization process by centering [1], and geometric programming methods [2] that guarantee convergence to a global minimum but require a special formulation of the design equations, which brings additional difficulties. Another approach is based on the space mapping technique [3-4], which achieves a satisfactory solution. This technology has been used successfully for the optimization of microwave systems, but there is no experience of applying it to other problems.
Alternative stochastic search algorithms that can find the global minimum, especially evolutionary computation algorithms, have been developed in recent years. An analysis of various stochastic algorithms for system optimization allows several groups to be distinguished: the simulated annealing method [5-7] and evolutionary computing techniques, which comprise several different approaches such as evolutionary algorithms [8-11], the particle swarm optimization (PSO) method, GA, differential evolution, and genetic programming. The PSO technique [12-15] is one of the evolutionary algorithms that competes with genetic algorithms; it has been successfully used to solve electromagnetic problems and to optimize microwave systems.
Separately, we highlight the GA, which is used to solve nonlinear programming problems both for optimizing systems of various natures [16-22] and, in particular, for optimizing and designing electronic systems [23-24]. GA has been used as an
optimization procedure for analog circuits due to the ability to
find a satisfactory solution. The disadvantages of these
methods include a premature convergence to a local minimum
and an increase in computer operation time when setting a
sufficiently high accuracy for obtaining the minimum. To
prevent this, we propose to use the approach underlying the
generalized optimization method defined for the deterministic
case of circuit optimization in [25]. In this formulation of the
problem, an artificially introduced control vector produces
many different optimization strategies and sets a different type
of the objective function for each new strategy.
This control vector is introduced into the system of
equations describing the optimization process of a certain
function, determines the structure of these equations and leads
to the emergence of many different optimization strategies that
differ in the number of operations and CPU time. The set of
different strategies that appear in this case depends on the
dimension of the control vector, which, in turn, is determined
by the number of circuit nodes. At the same time, this dependence is exponential, i.e., if the number of circuit nodes is M, then the number of different optimization strategies is 2^M.
With this approach, each strategy is determined by its objective
function, which depends on the structure of the control vector.
In this case, the optimization process is generalized and, in
fact, is a dynamic controlled system with a control vector.
A detailed analysis of various optimization strategies in the
deterministic case showed the prospects and advantages of this
approach when solving the problem of reducing the time spent
on the optimization and design of electronic systems. We would like to find out whether this approach remains valid when optimization problems are solved by stochastic methods. In this paper, the approach of generalized optimization is incorporated into the implementation of a standard GA. This means that one of the
most important steps is the setting of the GA fitness function,
which now includes the control vector.
2. GA and Generalized Optimization Approach
The process of circuit optimization can be defined as the problem of minimizing an objective function C(X), X ∈ R^N,
with additional conditions. It is supposed that the minimum of
the objective function C(X) corresponds to achievement of all
the necessary design goals of the circuit, and the system of
constraints is a mathematical model of the electronic circuit.
A typical formulation of a circuit optimization problem can
be defined mathematically as a constrained optimization
problem for the objective function C(X). The process to
minimize the objective function C(X) is conventionally defined
by the following equation:
X^(s+1) = Λ(X^s),   s = 1, 2, …   (1)
where Λ is the operator of transition from iteration s to
iteration s+1. The constraints are determined by the circuit
equations and can be described by a system of nonlinear
equations:
g_j(X) = 0,   j = 1, 2, …, M   (2)
We will declare some of the variables as independent and the others as dependent, with the values of the latter determined from the constraint equations (2): X = (X′, X″), where X′ ∈ R^K is the vector of independent variables, X″ ∈ R^M is the vector of dependent variables, K is the number of independent variables, M is the number of the circuit's dependent variables, and N is the total number of variables (N = K + M). Traditionally, the resistances (conductances) of resistors are defined as the independent variables of the optimization procedure (vector X′), and the other variables (currents or nodal voltages) are defined as dependent.
However, this partition is conditional, since any variable may
be considered independent or dependent.
To calculate the function C(X), it is required to solve a
system of nonlinear equations (2) at each step of the
optimization process. This approach can be called the traditional strategy of optimization (TSO).
Let us accept that there is no need to fulfil condition (2) at each step of the optimization procedure, and that it is enough to fulfil it at the final point of the optimization process. We use the approach [25] leading to a generalization of the optimization process. Let us define as independent all the variables included in the vector X″, which were previously declared dependent. In this case, the constraint
equations (2) can be removed, but to fulfill all the constraints
(2), at least at the end point of the optimization process, we
introduce a new, generalized, objective function F(X), which
can be defined as follows:
F(X) = C(X) + φ(X)   (3)
where φ(X) is an additional penalty function whose equality to zero at the end point of the optimization process ensures the fulfilment of conditions (2). This function can have, for example, the following form:
φ(X) = Σ_{j=1}^{M} g_j^2(X)   (4)
Generalizing this approach, it is possible to declare
independent only a part of the previously dependent variables,
for example, Z variables, where Zϵ[0, M]. In this case, Z
equations are removed from system (2), and the formula (4)
contains Z terms. This approach generalizes the optimization
problem by introducing a special control vector U=(u1, u2,…,
uM), that changes the structure of all equations and formulas of
the optimization procedure. In this case, system (2) is
transformed into the following:
(1 − u_j) g_j(X) = 0,   j = 1, 2, …, M   (5)
where uj is the jth component of the control vector
U = (u1, u2, …, uM), uj ∈ Ω, Ω = {0; 1}. Formulas (3) and (4) are
transformed into the following:
F(X, U) = C(X) + φ(X, U)   (6)
φ(X, U) = (1/σ) Σ_{j=1}^{M} u_j g_j^2(X)   (7)
where σ is a special adjusting parameter.
Thus, the control vector U allows one to change both the
structure of the basic equations of the constraints (5) and the
form of the generalized objective function F(X,U). Zero values
of components of the vector U determine the TSO. In this case,
the system (2) is solved at each step of the optimization
procedure, and the generalized objective function F(X,U)
coincides with the function C(X). Further, the penalty function
φ(X,U) is equal to zero. If some components of the vector U
are equal to 1, then the corresponding equations disappear from
system (5), but information about them appears in the penalty
function and in the function F(X,U). If all components of the
vector U are equal to 1, then the optimization is determined by
the modified traditional strategy (MTS). This means that
system (5) disappears, and the penalty function includes
complete information about system (5).
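For concreteness, the following minimal sketch shows how F(X, U) from (6)-(7) can be evaluated; the function names, the value of σ, and the toy cost and constraint in the usage line are illustrative assumptions rather than the authors' implementation.

```python
def generalized_objective(x, u, cost, constraints, sigma=1.0):
    """F(X, U) = C(X) + (1/sigma) * sum_j u_j * g_j(X)**2, as in (6)-(7).

    Constraints with u_j = 0 are assumed to be satisfied by a separate
    circuit-analysis step and therefore do not contribute to the penalty.
    """
    penalty = sum(uj * constraints[j](x) ** 2 for j, uj in enumerate(u))
    return cost(x) + penalty / sigma

# Toy usage (illustrative data only):
C = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
g = [lambda x: x[0] + x[1] - 1.0]
print(generalized_objective([0.3, 0.4], u=[1], cost=C, constraints=g))
```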
It is also important to note the necessary changes in the
optimization procedure. When using the deterministic
approach, the optimization procedure is specified by
differential (8) or difference (9) equations:
dx_i/dt = f_i(X, U),   i = 1, 2, …, N   (8)
X^(s+1) = X^s + t_s·H^s   (9)
where fi(X,U) or H are determined by a specific optimization
method. A change in the components of the control vector U
from 0 to 1 corresponds to a transformation of the
corresponding dependent component of the vector X into an
independent one, leading to a change in the number of
independent variables and the number of equations both in the
optimization procedure (8) or (9) and in the system of
constraints (5). The control vector U defines strategies of the
structural basis of generalized optimization. The number of
these strategies is 2^M.
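Since each component u_j takes values in Ω = {0, 1}, the structural basis of strategies can be enumerated directly; a minimal sketch (the value M = 3 is just an example):

```python
from itertools import product

M = 3                                             # number of circuit nodes (example)
structural_basis = list(product((0, 1), repeat=M))
print(len(structural_basis))                      # 2**M = 8 strategies
print(structural_basis[0], structural_basis[-1])  # (0, 0, 0) ... (1, 1, 1)
```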
Equations (5)-(9) define a set of different strategies, each of
which is determined by the corresponding value of the control
vector. In this case, as was shown in [26], there are
opportunities for a significant (by several orders of magnitude)
acceleration of the optimization process due to the different
behavior of the trajectories of different strategies and the
combination of these strategies in the process of optimization.
Let us consider the application of the idea of generalized
optimization in the case of using a GA as the main
optimization procedure. Instead of using equation (8) or (9),
the optimization procedure was carried out on the basis of a
GA. Let us consider the classic version of GA [27], in which
the selection of chromosomes is carried out by a tournament
method and two main genetic operators are used: crossover and
mutation. Variants with one-, two-, and four-point crossover
operators with a probability of 0.95 and mutation operators
with a probability of 0.05 to 0.1 were analyzed. Let's define NN
as the number of chromosomes in a population, and X as a special matrix with N rows and NN columns, in which each column corresponds to a specific value of the vector X.
Let's define the fitness function according to the following
generalized formula:
P(X, U) = 1/F(X, U)   (10)
Taking into account the concepts of generalized
optimization, the structure of GA can be represented in Fig. 1.
A new element of this algorithm is the control vector U, which
provides the implementation of various GA variants with
different objective functions (fitness functions in GA
terminology). Thus, the fitness function also depends on the
vector U.
Fig. 1. Modified GA flowchart.
The presence of the control vector U is reflected in the
corresponding blocks, since in these blocks either the fitness
function is calculated or it is used. The presence of the control
vector U in the blocks of the diagram ensures the
determination and change of the structure of both the initial
generation of chromosomes and the current generations during
the operation of the algorithm. For this stochastic algorithm, we can also introduce a vector X̄ consisting of N components, where each component is calculated as the arithmetic mean of all values of this component in the generation:
x̄_i = (1/NN) Σ_{j=1}^{NN} x_ij   (11)
where xij is the element of matrix X.
For the analyzed examples, the length of chromosomes (L)
in GA varied from 20 to 80 for each of the variables, and the
number of chromosomes (NN) in the population varied from
40 to 400 depending on the length of the chromosomes.
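A minimal sketch of such a GA variant is given below. The binary encoding, the decoding scheme, the tournament size, and all helper names are assumptions made for illustration; only the fitness definition P = 1/F(X, U) from (10), the crossover probability 0.95, the mutation probability 0.05, and the per-variable chromosome length L follow the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(bits, lo, hi, L):
    """Map each group of L bits to a real value in [lo[i], hi[i]]."""
    n_vars = bits.size // L
    x = np.empty(n_vars)
    weights = 1 << np.arange(L)[::-1]              # binary place values
    for i in range(n_vars):
        value = bits[i * L:(i + 1) * L].dot(weights)
        x[i] = lo[i] + (hi[i] - lo[i]) * value / (2 ** L - 1)
    return x

def fitness(bits, u, F, lo, hi, L):
    """Fitness (10): P = 1 / F(X, U)."""
    return 1.0 / F(decode(bits, lo, hi, L), u)

def tournament(pop, fit, k=2):
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmax(fit[idx])]].copy()

def run_ga(F, u, lo, hi, L=20, NN=60, p_cross=0.95, p_mut=0.05, generations=100):
    n_bits = L * len(lo)
    pop = rng.integers(0, 2, size=(NN, n_bits))
    for _ in range(generations):
        fit = np.array([fitness(c, u, F, lo, hi, L) for c in pop])
        children = []
        while len(children) < NN:
            a, b = tournament(pop, fit), tournament(pop, fit)
            if rng.random() < p_cross:             # one-point crossover
                cut = rng.integers(1, n_bits)
                a[cut:], b[cut:] = b[cut:].copy(), a[cut:].copy()
            for child in (a, b):
                mask = rng.random(n_bits) < p_mut  # bit-flip mutation
                child[mask] ^= 1
                children.append(child)
        pop = np.array(children[:NN])
    fit = np.array([fitness(c, u, F, lo, hi, L) for c in pop])
    best = pop[np.argmax(fit)]
    return decode(best, lo, hi, L), 1.0 / fit.max()  # best X and its F(X, U)
```

The per-generation mean vector X̄ of formula (11) can be obtained in this sketch as np.mean([decode(c, lo, hi, L) for c in pop], axis=0).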
3. Results
3.1 Example 1
Minimize C(X):

C(X) = 4x1^2 + x2^2 + x1^3   (12)
Subject to:

x1^2 + 2x2^2 − 1 = 0   (13)
In this example, the parameter M = 1 because there is only one constraint equation (13). Define the variable x1 as independent and the variable x2 as dependent, with its value determined from equation (13). Based on the generalized approach, equation (13) is transformed into the following equation:
(1 − u)(x1^2 + 2x2^2 − 1) = 0   (14)
where u is a control vector consisting, in this particular case, of
one component. Consider two basic strategies: the TSO with
control vector u = 0 and the MTS with control vector u = 1.
Here, we analyse the results of optimization by means of a GA
for these strategies. However, it was shown that in the case of a
deterministic optimization process, a combination of several
strategies can reduce both the number of steps in the
optimization procedure and the computing time of the
optimization process.
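For this example, the two basic variants of the minimized function can be sketched as follows; the closed-form elimination of x2 in the TSO branch (taking the non-negative root) and the value σ = 1 are assumptions:

```python
import numpy as np

def C(x):                        # objective (12)
    return 4 * x[0]**2 + x[1]**2 + x[0]**3

def g(x):                        # constraint (13)
    return x[0]**2 + 2 * x[1]**2 - 1.0

def F_tso(x1):
    """TSO, u = 0: x2 is obtained from (13) before evaluating C."""
    x2 = np.sqrt(max(0.0, (1.0 - x1**2) / 2.0))
    return C([x1, x2])

def F_mts(x, sigma=1.0):
    """MTS, u = 1: both variables are independent, (13) enters as a penalty."""
    return C(x) + g(x)**2 / sigma

print(F_tso(0.5), F_mts([0.5, 0.6]))
```

The GA then maximizes the fitness P = 1/F for whichever variant the control vector selects.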
Table I shows the number of generations required to achieve the minimum of the function C(X) with accuracy δ for the strategies TSO (U = (0)) and MTS (U = (1)), and for a third, combined strategy determined by the control vector (1)(0) with one switching point Sp = 2 between strategies (1) and (0). That is, the first two iterations correspond to strategy (1) and the subsequent ones correspond to strategy (0).
TABLE I. NUMBER OF GENERATIONS G FOR STRATEGIES (0), (1) AND COMBINED STRATEGY (1)(0) WITH SWITCHING POINT Sp FOR DIFFERENT PRECISION δ

Precision δ | Control vector (0) | Control vector (1) | Control vector (1)(0), Sp = 2
10^-1       | 15                 | 45                 | 17
10^-2       | 18                 | 51                 | 18
10^-3       | 21                 | 61                 | 20
10^-4       | 29                 | 74                 | 25
10^-5       | 70                 | 83                 | 40
10^-6       | -                  | 87                 | 47
10^-7       | -                  | 90                 | 49
10^-8       | -                  | 90                 | 68
10^-9       | -                  | 102                | 68
10^-10      | -                  | 114                | 69
It can be seen that when using the TSO, the number of generations is smaller than for the MTS up to a certain level of accuracy (10^-5). If the required error is reduced to 10^-6 or less, no solution based on the traditional strategy is found. The MTS with control vector (1) finds a solution up to an accuracy of 10^-10. At the same time, a combined strategy consisting of two strategies, (1) and (0), with a switching point Sp = 2 between them also finds a solution to the problem up to an accuracy of 10^-10 and, importantly, in a smaller number of generations. The final result for a combined strategy clearly depends on the switching point from one strategy to another.
Table II shows the dependence of the number of generations on the switching point for a given error δ = 10^-5.
TABLE II. NUMBER OF GENERATIONS G FOR COMBINED STRATEGY (1)(0) FOR DIFFERENT SWITCHING POINTS Sp

Switching point Sp | 2  | 3  | 4  | 5  | 6  | 7  | 8  | 9  | 10
G                  | 40 | 39 | 42 | 49 | 43 | 53 | 40 | 72 | 74
This shows that the switching point affects the number of
generations required to achieve the required accuracy. The
minimum value for this example corresponds to the switching
point Sp = 3. Fig. 2 shows the dependence of the minimized
function F on the number of generations G for three strategies
corresponding to three variants of calculating the fitness
function.
Fig. 2. Minimized function F for strategies (0), (1) and (1)(0) on the number
of generations G.
It can be seen that the best strategy for calculating the fitness function is the composite strategy (1)(0), which after the 18th generation minimizes the function F better than the other strategies.
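The combined strategy can be realized by switching the control vector after a fixed number of generations. A minimal sketch is given below; the helper ga_generation, which advances the population by one generation under a given U, is hypothetical and not part of the paper.

```python
def run_combined(pop, schedule, ga_generation, max_gen=200):
    """Run the GA with a piecewise-constant control vector.

    schedule      -- list of (U, switch_point) pairs; the last switch point may be
                     None, e.g. [((1,), 2), ((0,), None)] for strategy (1)(0), Sp = 2.
    ga_generation -- hypothetical callable advancing the population by one
                     generation with the fitness P = 1/F(X, U).
    """
    gen = 0
    for u, sp in schedule:
        while gen < (sp if sp is not None else max_gen):
            pop = ga_generation(pop, u)
            gen += 1
    return pop
```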
3.2 Example 2
We need to optimize the nonlinear circuit shown in Fig. 3.
Fig. 3. Two-node nonlinear passive circuit.
Consider a simple nonlinear voltage divider circuit. A
nonlinear element has the following dependency: yn = a + b(V1 − V2)^2. The admittances y1, y2, y3 are positive and compose a set of independent circuit parameters (K = 3). The node voltages V1, V2 are the dependent parameters (M = 2). The vector X consists of the following five components (x1, x2, x3, x4, x5): x1^2 = y1, x2^2 = y2, x3^2 = y3, x4 = V1, x5 = V2. By defining the components x1, x2,
x3 using the above formulas, we can automatically obtain
positive values of the conductance, which eliminates the issue
of positive definiteness for each conductance and allows us to
perform optimization in the full space of the values of these
variables without any restrictions.
The model of this circuit includes two equations corresponding to Kirchhoff's laws. The objective function C(X) is determined by the formula C(X) = (x5 − m1)^2 + ((x4 − x5) − m2)^2, where m1 and m2 are predetermined values of the divider voltages. This circuit is characterised by two (M = 2) dependent parameters (x4, x5) and three (K = 3) independent parameters (x1, x2, x3). The control vector has the following structure: U = (u1, u2). The structural basis of strategies includes four strategies with the following control vectors: (00), (01), (10), and (11). The mathematical model of the circuit is determined by the following equations:
g1(X) = x1^2 (x4 − 1) + x2^2 x4 + (a + b(x4 − x5)^2)(x4 − x5) = 0   (15)
g2(X) = x3^2 x5 − (a + b(x4 − x5)^2)(x4 − x5) = 0
It is from the solution of system (15) that the values of the
dependent variables x4, x5 can be determined and then the
value of the objective function C(X) can be calculated. In the
case of the transformation of the two dependent variables x4,
x5 (or at least one of them) into independent ones, it is
necessary to form a generalized objective function F(X,U)
according to the following formula:
F(X, U) = C(X) + (u1·g1^2(X) + u2·g2^2(X))/σ   (16)
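Following (15) and (16) and the parameter values used below (a = 1, b = 1, m1 = 0.2, m2 = 0.25), the constraint functions and the generalized objective for this circuit can be sketched as follows; σ = 1 and the zero-based indexing x[0], …, x[4] for x1, …, x5 are assumptions:

```python
def g1(x, a=1.0, b=1.0):
    yn = a + b * (x[3] - x[4])**2               # nonlinear element y_n
    return x[0]**2 * (x[3] - 1.0) + x[1]**2 * x[3] + yn * (x[3] - x[4])

def g2(x, a=1.0, b=1.0):
    yn = a + b * (x[3] - x[4])**2
    return x[2]**2 * x[4] - yn * (x[3] - x[4])

def C(x, m1=0.2, m2=0.25):                      # divider objective function
    return (x[4] - m1)**2 + ((x[3] - x[4]) - m2)**2

def F(x, u, sigma=1.0):                         # generalized objective (16)
    return C(x) + (u[0] * g1(x)**2 + u[1] * g2(x)**2) / sigma
```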
 
Consider the optimization problem for the circuit shown in Fig. 3. Let a = 1, b = 1, m1 = 0.2, and m2 = 0.25. For the example analyzed, the length L of the chromosomes varied from 20 to 80 for each of the five variables, and the number NN of chromosomes in the population varied from 60 to 320. Variable limits are specified in normalized form: for variables x1, x2, x3 they were set from 10^-5 to 2.0, and for variables x4, x5 from 10^-3 to 1.0. The matrix X has five rows (N = 5) and NN columns. At the same time, we introduce a vector X̄ consisting of five components, each of which is calculated by formula (11). This vector is not directly involved in the calculations, but serves as an informative object for constructing averaged trajectories of the optimization process.
In this case, we can follow the evolution of the mean values
and build graphs of the trajectories of the optimization process
based on the GA in N dimensions or different projections of
these trajectories. The calculations for each trajectory
continued until the required accuracy δ for the generalized
objective function F(X,U) was achieved. For a given accuracy
δ = 3·10^-5, two projections of the four trajectories for strategies (00), (01), (10), and (11) are shown in Fig. 4. Strategy (00) corresponds to the TSO, and strategy (11) corresponds to the MTS. These dependences are obtained by averaging the stochastic results of the GA, in contrast to the trajectories obtained by direct integration of differential equations in the analytical approach [25]. Nevertheless, they reflect the behavior of the optimization trajectory. All strategies start at one point S and end at approximately one point F, but their behavior during the optimization process is very different.
It is important to note that the required accuracy of
minimising the objective function F(X,U) has a significant
impact on the optimization process and its characteristics.
Each of the four strategies has its own convergence accuracy.
Table III shows the results reflecting the potential accuracy ε of the optimization process that each strategy can achieve and the number of generations required to obtain a solution with precision δ = 10^-5.
Fig. 4. x2-x5 and x3-x5 projections for four optimization strategies.
TABLE III. POTENTIAL ACCURACY ε AND NUMBER OF GENERATIONS

N | Control vector | Potential accuracy ε | Number of generations for δ = 10^-5
1 | (00)           | 1.69·10^-5           | No solution
2 | (01)           | 2.04·10^-5           | No solution
3 | (10)           | 6.05·10^-6           | 77
4 | (11)           | 2.87·10^-5           | No solution
Table III reveals that for an error of 1.6·10^-5 or less in obtaining the solution, only one of the strategies, namely strategy (10), allows the problem to be solved. In this case, 77 generations are required. The other strategies, including the traditional strategy (00), cannot find a solution for any number of steps in the optimization procedure. The presence of
various strategies determined by the control vector U allows
one to formulate the problem of constructing a complex
optimization strategy consisting of several different strategies.
It is possible to propose the structure of a composite strategy
consisting of two, three, or more different strategies, where
each composite strategy is determined by the control vector U.
In this case, it is important to obtain the optimal position of
the switching points from one strategy to another, which
ensures a decrease in the parameter ε, i.e., an increase in the
accuracy of solving the optimization problem. Table IV shows
the results of some composite strategies with optimal
switching point Sp that can significantly improve the accuracy
of solving the problem. This table contains data for six
composite strategies, each of which consists of two strategies
in Table III. As can be seen, the accuracy of the solution was
improved by 2 to 3 orders of magnitude. The table shows that
a decrease in the required error leads at the same time to a
slight increase in the number of required populations. We have
seen that only one strategy from Table III can solve the
problem with an accuracy of 10^-5.
TABLE IV. POTENTIAL ACCURACY ε AND NUMBER OF GENERATIONS FOR COMPLEX STRATEGIES WITH A SINGLE SWITCHING POINT

N | Control vector | Sp | ε          | G (δ=10^-5) | G (δ=10^-6) | G (δ=10^-7) | G (δ=4·10^-8)
1 | (01)(00)       | 13 | 3.94·10^-8 | 31          | 35          | 44          | 51
2 | (10)(00)       | 2  | 3.94·10^-8 | 32          | 38          | 44          | 49
3 | (11)(00)       | 8  | 3.94·10^-8 | 31          | 38          | 44          | 57
4 | (00)(01)       | 20 | 10^-7      | 34          | 46          | 62          | -
5 | (00)(10)       | 16 | 6.75·10^-7 | 35          | 57          | -           | -
6 | (00)(11)       | 20 | 7.22·10^-7 | 44          | 72          | -           | -
However, the results presented in Table IV show that complex strategies allow the problem to be solved under much more stringent requirements for the accuracy of the solution obtained. This table provides data on the potential accuracy achievable for various compound strategies at the optimum position of the switching point. The results of the optimization process are also presented in the form of the required number of GA populations at which the required accuracy δ is achieved. The composite strategies allow the optimization problem to be solved with a significantly higher accuracy than the original strategies of Table III. Some of these strategies can solve the problem up to an accuracy of 4·10^-8. In this case, as can be seen from Table IV, the number of generations increases only insignificantly with an increase in the required accuracy of solving the problem. This holds as long as the potential error of the strategy does not exceed the required precision for solving the problem.
Fig. 5 shows the dependence of the minimized function F on the number of generations G for three strategies corresponding to three variants of calculating the fitness function for precision δ = 10^-6. These dependences are plotted on two scales: large (Fig. 5(a)) and small, corresponding to the inner region of the ellipse in the first figure (Fig. 5(b)). The composite strategy includes two strategies, (01) and (00), with the switching point Sp = 13 between them. The simple strategies (00) and (11) do not achieve the required accuracy δ = 10^-6, but the composite strategy solves the problem in 35 generations.
(a) Large scale
(b) Small scale
Fig. 5. Minimized function F for strategies (00), (11) and (01)(00) on the
number of generations G.
The results of the analysis of compound strategies with two
switching points are presented in Table V. This table shows
similar results for compound strategies with three parts. Again,
a significant improvement in the accuracy of the solution by 2
to 3 orders of magnitude was obtained in comparison with the
strategies in Table III. In this case, the possible number of
compound strategies that allow solving the problem with high
accuracy also increases.
TABLE V. POTENTIAL ACCURACY ε AND NUMBER OF GENERATIONS FOR COMPLEX STRATEGIES WITH TWO SWITCHING POINTS Sp1, Sp2

N  | Control vector | Sp1, Sp2 | ε          | G (δ=10^-5) | G (δ=10^-6) | G (δ=10^-7) | G (δ=10^-8)
1  | (00)(01)(00)   | 2, 4     | 3.94·10^-8 | 30          | 38          | 45          | 57
2  | (00)(10)(00)   | 4, 6     | 7.53·10^-7 | 32          | 42          | -           | -
3  | (00)(11)(00)   | 4, 11    | 7.53·10^-7 | 33          | 44          | -           | -
4  | (00)(11)(10)   | 15, 16   | 9.03·10^-8 | 34          | 40          | 51          | -
5  | (01)(11)(00)   | 3, 6     | 3.94·10^-8 | 30          | 37          | 41          | 48
6  | (10)(11)(00)   | 3, 4     | 3.94·10^-8 | 28          | 36          | 45          | 51
7  | (11)(01)(00)   | 3, 4     | 3.94·10^-8 | 29          | 34          | 40          | 45
8  | (11)(10)(00)   | 4, 9     | 7.53·10^-7 | 29          | 42          | -           | -
9  | (11)(00)(01)   | 2, 11    | 6.84·10^-8 | 37          | 49          | 65          | -
10 | (01)(10)(00)   | 4, 7     | 3.94·10^-8 | 29          | 35          | 42          | 53
The obtained result shows that the change in the structure
of the fitness function in the course of the optimization
algorithm allows us to bypass local minima and overcome
premature convergence. Such an improvement in the accuracy
of the solution leads to a significant reduction in the number
of GA generations needed to obtain the required accuracy of
the solution to the optimization problem.
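Using the schedule wrapper sketched in Example 1, a composite strategy with two switching points, for example (11)(01)(00) with Sp1 = 3 and Sp2 = 4 from row 7 of Table V, would be written as the following schedule (illustrative only):

```python
# Strategy (11)(01)(00) with switching points Sp1 = 3, Sp2 = 4 (Table V, row 7)
schedule = [((1, 1), 3), ((0, 1), 4), ((0, 0), None)]
```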
3.3 Example 3
Let's analyze the optimization process of the nonlinear
circuit shown in Fig. 6.
Fig. 6. Single-stage amplifier.
The conductivities y1, y2, y3 are positive and compose the set of independent parameters of the circuit (K = 3). The nodal voltages V1, V2, V3 for nodes 1, 2 and 3 are the dependent parameters (M = 3). Let us define a vector of variables X ∈ R^6 with six components (x1, x2, x3, x4, x5, x6): x1^2 = y1, x2^2 = y2, x3^2 = y3, x4 = V1, x5 = V2, x6 = V3. A static Ebers-Moll model of the transistor was used [28].
The objective function C(X) of the optimization process
was determined as the sum of the squares of the differences
between the previously specified and current values of the
nodal voltages:
C(X) = Σ_{i=1}^{M} (x_{K+i} − V_{i0})^2   (17)
where V10, V20, V30 are the predefined values of the nodal voltages.
The circuit model is defined by Kirchhoff's law as:
g1(X) = I_B + x1^2 (x4 − E0) = 0
g2(X) = I_E − x2^2 x5 = 0                    (18)
g3(X) = I_C + x3^2 (x6 − E1) = 0
where IB, IE, IC are the base, emitter and collector currents,
respectively. This system serves as a system of constraints for
minimizing the objective function C(X). The control vector
includes three components U=(u1,u2,u3). Using the generalized
approach, we transform system (18) into system (19):

(1 − u_j) g_j(X) = 0,   j = 1, 2, 3   (19)
The generalized objective function is defined by the
following formula:
F(X, U) = C(X) + (1/σ) Σ_{j=1}^{3} u_j g_j^2(X)   (20)
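A minimal sketch of the constraint system (18) and the generalized objective (20) is given below. The simplified static Ebers-Moll relations, all device parameter values, and the supply values E0 and E1 are illustrative assumptions; the actual model is taken from [28].

```python
import numpy as np

def ebers_moll(vbe, vbc, Ies=1e-14, Ics=1e-14, aF=0.99, aR=0.5, Vt=0.0258):
    """Simplified static Ebers-Moll terminal currents (illustrative parameters)."""
    i_f = Ies * (np.exp(vbe / Vt) - 1.0)
    i_r = Ics * (np.exp(vbc / Vt) - 1.0)
    i_e = i_f - aR * i_r                  # emitter current
    i_c = aF * i_f - i_r                  # collector current
    return i_e - i_c, i_e, i_c            # IB, IE, IC

def g(x, E0=1.0, E1=10.0):
    """Constraint system (18); E0, E1 are placeholder supply voltages."""
    ib, ie, ic = ebers_moll(x[3] - x[4], x[3] - x[5])
    return np.array([ib + x[0]**2 * (x[3] - E0),
                     ie - x[1]**2 * x[4],
                     ic + x[2]**2 * (x[5] - E1)])

def F(x, u, C, sigma=1.0):
    """Generalized objective (20)."""
    return C(x) + np.dot(u, g(x)**2) / sigma
```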
 
Table VI shows the number of generations required to
achieve the minimum of the function F with accuracy δ for
three strategies. The first two are TSO with control vector U =
(0,0,0) and MTS with control vector U = (1,1,1). The third
strategy is a combined strategy defined by the control vector
(000)(111) with one switching point Sp = 9. That is, the first
nine iterations correspond to the traditional strategy (000), and
the next ones correspond to the strategy (111).
It can be seen that when using the traditional strategy, the
required number of generations is much larger than in the case
of the modified traditional strategy and the combined strategy
to achieve the same accuracy. In addition, the traditional
strategy does not provide a good accuracy of achieving the
minimum of the objective function. When the required error is 10^-4 or less, no solution based on the traditional strategy is found. At the same time, the modified strategy (111) and the combined strategy, consisting of the two strategies (000) and (111), find a solution with an accuracy of 5·10^-11. Note that the combined strategy finds a solution to the problem in significantly fewer generations.
TABLE VI. NUMBER OF GENERATIONS G FOR STRATEGIES (000), (111) AND (000)(111) FOR DIFFERENT PRECISION δ

Precision δ | Control vector (000) | Control vector (111) | Control vector (000)(111), Sp = 9
10^-1       | 137                  | 36                   | 27
10^-2       | 20706                | 47                   | 32
10^-3       | 348514               | 63                   | 38
10^-4       | -                    | 77                   | 42
10^-5       | -                    | 88                   | 49
10^-6       | -                    | 102                  | 59
10^-7       | -                    | 109                  | 69
10^-8       | -                    | 123                  | 76
10^-9       | -                    | 144                  | 89
10^-10      | -                    | 172                  | 99
5·10^-11    | -                    | 210                  | 108
10^-11      | -                    | -                    | -
It can be seen that the gain in the number of iterations
(number of generations) for the combined strategy and MTS is
three to four orders of magnitude compared to the traditional
one, if the traditional strategy as a whole is able to find a
solution. The reduction in CPU time is even greater, since the
time of one iteration of the modified strategy is much less than
the traditional one.
It is clear that the final result of the combined strategy
depends on the switching point from one strategy to another.
Table VII shows the dependence of the number of generations on the switching point for a given error δ = 10^-5. It can be seen that the switching point significantly affects the number of generations required to achieve the necessary accuracy. The minimum value for this combined strategy corresponds to the switching point Sp = 9.
TABLE VII. NUMBER OF GENERATIONS G FOR THE COMBINED STRATEGY (000)(111) FOR DIFFERENT SWITCHING POINTS Sp

Switching point Sp | 2  | 3  | 4  | 8  | 9  | 10 | 11 | 12 | 13
G                  | 67 | 56 | 96 | 53 | 49 | 52 | 99 | 57 | 74
Table VIII contains information for the three considered strategies, summarizing their comparative characteristics when achieving an accuracy of 10^-2 and 10^-3 for the minimized objective function. The numerical values of the number of generations and the CPU time of all strategies are given, as well as the comparative gain of the MTS and of the combined strategy, both in the number of generations and in CPU time, compared to the TSO. Note that the TSO does not allow a solution to the problem to be found with a required error smaller than 10^-3. It can be seen that both the MTS and the combined strategy provide a large gain over the TSO.
With an error of 10^-2, the gain in the number of generations is more than two orders of magnitude, and in CPU time more than three orders of magnitude. With a given error of 10^-3, the gain in the number of generations is three to four orders of magnitude, and in CPU time it is almost five orders of magnitude.
TABLE VIII. GENERALIZED COMPARATIVE CHARACTERISTICS FOR THREE DIFFERENT STRATEGIES

Precision δ | Characteristic                     | Control vector (000) | Control vector (111) | Control vector (000)(111), Sp = 9
10^-2       | Number of generations              | 20706                | 47                   | 32
            | Gain in the number of generations  | -                    | 440                  | 647
            | CPU time (s)                       | 1178.15              | 0.266                | 0.244
            | Time gain                          | -                    | 4429                 | 4828
10^-3       | Number of generations              | 348514               | 63                   | 38
            | Gain in the number of generations  | -                    | 5532                 | 9171
            | CPU time (s)                       | 20518.2              | 0.353                | 0.276
            | Time gain                          | -                    | 58124                | 74340
The information presented in this table is the main practical
result of the work. It can be stated that the use of a generalized
approach that changes the structure of the vector of basic
variables X and the shape of the fitness function makes it
possible to overcome the problem of the GA's premature
convergence to a local minimum. In this case new strategies
appear that can substantially increase the accuracy of solving
the problem and significantly speed up the optimization
procedure.
Fig. 7 shows the dependence of the function F to be minimized on the number of generations G for the three analyzed strategies at a given accuracy of 10^-5.
Fig. 7. Minimized function F for strategies (000), (111) and
(000)(111) on the number of generations G.
The traditional strategy of optimization cannot find a solution to the problem with the required accuracy. In contrast, the modified traditional strategy and the combined strategy find a solution to the problem rather quickly.
Thus, we can conclude that new optimization strategies that
appear within the framework of the presented generalized
approach have good prospects for improving the optimization
process of electronic circuits.
4. Conclusion
A generalized approach in terms of control theory to
solving the problem of optimizing electronic circuits using
deterministic optimization methods was developed earlier. The
obtained algorithms have shown high efficiency in comparison
with the traditional approach in terms of both accuracy and
speed.
This paper demonstrates the possibility of embedding the
idea of generalized optimization into the body of stochastic
optimization methods. It was shown that this approach can be
built into GA, which leads to the formation of a set of different
optimization strategies and a significant improvement in the
main characteristics of GA.
The studied examples demonstrate the practical
implementation of a modified GA based on a generalized
approach for solving the problem of optimizing electronic
circuits. The emerging new optimization strategies make it
possible to increase the accuracy of the problem solution by
several orders of magnitude. It should also be emphasized that
the real gain of these strategies in CPU time compared to the
traditional approach is much higher than the gain in the
number of GA populations. This is due to the fact that the
processor time for evaluating the fitness function for new
strategies is much less than in the traditional case.
REFERENCES
[1] G. Stehr, M. Pronath, F. Schenkel, H. Graeb, and K. Antreich, “Initial
sizing of analog integrated circuits by centering within topology-given
implicit specifications”, Proceedings of the IEEE/ACM International
Conference on Computer-Aided Design, 2003, pp. 241–246.
[2] M. Hershenson, S. Boyd, and T. Lee, “Optimal design of a CMOS op-
amp via geometric programming”, IEEE Transactions on Computer-
Aided Design of Integrated Circuits, vol. 20, no.1, 2001, pp.1–21.
[3] J. W. Bandler, R. M. Biernacki, S. H. Chen, P. A. Grobelny, and R.
Hemmers, “Space mapping technique for electromagnetic optimization”,
IEEE Transactions on Microwave Theory and Techniques, vol. 42,
1994, pp. 2536–2544.
[4] S. Koziel, J. W. Bandler, and K. Madsen, “Space-mapping-based
interpolation for engineering optimization”, IEEE Transactions on
Microwave Theory and Techniques, vol. 54, 2006, pp. 2410–2421.
[5] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by
Simulated Annealing”, Science, vol. 220, 1983, pp. 671–680.
[6] M. Moutchou, H. Mahmoudi, and A. Abbou, “Induction Machine
identification based on a new technique of simulated annealing
optimization”, Proceedings of International Conference on Renewable
and Sustainable Energy, 2014, pp. 873–878.
[7] P. Grabusts, J. Musatovs, and V. Golenkov, “The application of
simulated annealing method for optimal route detection between
objects”, Procedia Computer Science, vol.149, 2019, pp. 95–101.
[8] D. Nam, Y. Seo, L. Park, C. Park, and B. Kim, “Parameter optimization
of an on-chip voltage reference circuit using evolutionary
programming”, IEEE Transactions on Evolutionary Computations, vol.
5, no. 4, 2001, pp. 414–421.
[9] G. Alpaydin, S. Balkir, and G. Dundar, “An evolutionary approach to
automatic synthesis of high performance analog integrated circuits”,
IEEE Transactions on Evolutionary Computation, vol. 7, no. 3, 2003,
pp. 240–252.
[10] B. Liu, Y. Wang, Z. Yu, L. Liu, M. Li, Z. Wang, J. Lu, and F. V.
Fernandez, “Analog circuit optimization system based on hybrid
evolutionary algorithms”, Integration, the VLSI Journal, vol. 42, no. 2, 2009, pp. 137–148.
[11] H. Nenavath and R. Kumar Jatoth, “Hybridizing sine cosine algorithm
with differential evolution for global optimization and object tracking”,
Applied Soft Computing, vol. 62, 2018, pp. 1019–1043.
[12] J. Robinson and Y. Rahmat-Samii, “Particle swarm optimization in
electromagnetics”, IEEE Transactions on Antennas and Propagation, vol.
52, 2004, pp. 397–407.
[13] M. A. Zaman, M. Gaffar, M. M. Alam, S. A. Mamun, and M. Abdul
Matin, “Synthesis of antenna arrays using artificial bee colony
optimization algorithm”, International Journal of Microwave and Optical
Technology, vol. 6, no. 8, 2011, pp.234–241.
[14] A. Sallem, B. Benhala, M. Kotti, M. Fakhfakh, A. Ahaitouf, and M.
Loulou, “Application of swarm intelligence techniques to the design of
analog circuits: evaluation and comparison”, Analog Integrated Circuits
and Signal Processing, vol. 75, no. 3, 2013, pp. 499-516.
[15] F. Li, X. Cai, and L. Gao, “Ensemble of Surrogates Assisted Particle
Swarm Optimization of Medium Scale Expensive Problems”, Applied
Soft Computing, vol. 74, 2019, pp. 291-305.
[16] M. L. Carneiro, P. H. P. de Carvalho, N. Deltimple, L. da C Brito, L.R.
de Menezes, E. Kerherve, S. G. de Araujo, and A. S. Rocira, “Doherty
amplifier optimization using robust genetic algorithm and unscented
transform”, Proceedings of Annual IEEE Northeast Workshop CAS,
2011, pp. 77–80.
[17] M. Z. R. Khan and A. K. Bajpai, “Genetic algorithm and its application
in mechanical engineering”, International Journal of Engineering
Research & Technology, vol. 2, no. 5, 2013, pp. 677–683.
[18] M. Quiroz-Castellanos, L. Cruz-Reyes, J. Torres-Jimenez, C. Gomez, H.
J. Fraire-Huacuja, and A. C. Alvim, “A grouping genetic algorithm with
controlled gene transmission for the bin packing problem”, Computers & Operations Research, vol. 55, 2015, pp. 52–64.
[19] Y. C. Chuang, C. T. Chen, and C. Hwang, “A simple and efficient real-
coded genetic algorithm for constrained optimization”, Applied Soft Computing, vol. 38, 2016, pp. 87–105.
[20] L. L. Corso, A. L. Gasparin, and H. M. Gomes, “Reliability based
design optimization using a genetic algorithm: application to bonded
thin films areas of copper/polypropylene”, Ingeniare. Revista chilena de
ingeniería, vol. 24, no. 3, 2016, pp. 510–519.
[21] A. A. Khan and R. N. Mir, “Optimization of Constrained Function
Using Genetic Algorithm”, Computer Engineering and Intelligent
Systems, vol. 8, no. 2, 2017, pp. 11–15.
[22] A. A. R. Hosseinabadi, J. Vahidi, B. Saemi, A. K. Sangaiah, and M.
Elhoseny, “Extended Genetic Algorithm for solving open-shop
scheduling problem”, Soft Computing, vol. 23, 2019, pp. 5099–5116.
[23] C. Goh and Y. Li, “GA automated design and synthesis of analog
circuits with practical constraints”, Proceedings of IEEE Congress on
Evolutionary Computations, vol. 1, 2001, pp. 170–177.
[24] N. F. Paulino, J. Goes, and A. Steiger-Garcao, “Design methodology for
optimization of analog building blocks using genetic algorithms”,
Proceedings of Symposium on Circuits and Systems, 2001, pp. 435–438.
[25] A. Zemliak, “Analog circuit optimization on basis of control theory
approach”, COMPEL: The International Journal for Computation and
Mathematics in Electrical and Electronic Engineering, vol. 33, no. 6,
2014, pp. 2180–2204.
[26] A. Zemliak and T. Markina, “Behaviour of Lyapunov's function for different strategies of circuit optimization”, International Journal of Electronics, vol. 102, no. 4, 2015, pp. 619–634.
[27] L. Davis (ed.), Handbook of Genetic Algorithms. Van Nostrand
Reinhold: New York, 1991.
[28] G. Massobrio and P. Antognetti, Semiconductor Device Modeling with
SPICE, Mc. Graw-Hill, Inc.: New York, 1993.
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the Creative
Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en_US