Distributed optimization algorithms for heterogeneous linear
multi-agent systems with inequality constraints
ZHENGQUAN YANG1, WENJIE YU2, ZHIYUN GAO1
1College of Transportation Science and Engineering, Civil Aviation University of China,
Tianjin 300300, CHINA
2College of Science, Civil Aviation University of China, Tianjin 300300, CHINA
Abstract: - In this paper, a distributed constrained optimization problem over digraphs is studied for heterogeneous linear multi-agent systems. Each agent uses only local information and local interaction so that all agents can jointly minimize the global objective function. The state of each agent is restricted to a local inequality constraint set. First, this paper proposes a distributed continuous-time optimization algorithm that estimates the left eigenvector associated with the zero eigenvalue of the Laplacian matrix, which removes the imbalance of the communication graph. Next, the asymptotic convergence of the algorithm is established via Lyapunov stability theory. Finally, two numerical examples are given to illustrate the effectiveness of the algorithm.
Key-Words: - Asymptotic convergence, Weight-unbalanced digraphs, Multi-agent systems, Distributed optimization
Received: June 2, 2023. Revised: September 1, 2023. Accepted: October 1, 2023. Published: October 20, 2023.
1 Introduction
Distributed optimization has broad application
prospects in the fields of economic dispatch in
smart grid, [1], wireless sensor networks, [2], UAV
formation, [3], etc., so it has become a research
hotspot at present. Several types of distributed
algorithms based on the framework of multi-agent
systems have been proposed successively and are
used to solve various optimization problems.
Up to now, in most of the literature (see, [4], [5]),
many works have been derived based on the setting
of discrete-time. Owing to the fact that continuous-
time dynamic systems are more convenient to
study convergence and their applications in
physical systems are more flexible, distributed
continuous-time algorithms have also received
widespread attention from scholars. In [6], the
authors put forward the continuous-time distributed
adaptive coordination protocols to address the given
optimization problems. Aiming at the distributed
optimization problem with general constraints,
a proportional-integral (PI) consensus protocol
was proposed in [7]. However, since these
distributed strategies depend on the differentiability
of the local objective functions, they may fail on
problems whose local objective functions
are non-smooth.
was given to decouple the equality constraints and
a new initialization-free distributed continuous-
time algorithm was developed in [8]. In [9], the
authors adopted the Laplacian-gradient method
and designed an adaptive control to resolve the
resource allocation problem, and constructed a
distance-based exact penalty function to handle local
convex sets. Different from the above distributed
optimization problems based on single integrator
multi-agent systems, the Euler-Lagrange systems
were considered in [10], and the system dynamics
were second-order dynamical systems in [11].
The above studies are all based on undirected
networks, which require the information transmission
between agents to be symmetric and bidirectional.
However, in actual systems, the information
exchange between agents is often one-way and easily
disturbed by external factors such as data
packet loss and signal attenuation. Therefore, in
recent years, distributed optimization on digraphs
has attracted more and more attention. In [12],
the authors proposed a subgradient optimization
algorithm of distributed average consistency and in
[13], the authors established an improved subgradient
push algorithm by introducing push-sum protocol
and distributed subgradient algorithm for the
deterministic directed switching network. In [14],
the authors solved the constrained optimization
problems without any initialization conditions and in
[15], the authors researched the resource allocation
problems with the help of a projection operator over
weighted balanced digraphs. In [16], [17], [18], [19],
the directed network is generalized to the situation
of weight-unbalanced. To avoid knowing certain
information of both in-neighbors and out-neighbors,
the effort was made in [16] to design a continuous-time
coordination algorithm. In [17], the authors
designed an adaptive algorithm by virtue of dynamic
coupling gains, which removed the requirements on
the Lipschitz constants and on knowledge of the
network connectivity. Resorting to the gradient-tracking
WSEAS TRANSACTIONS on MATHEMATICS
DOI: 10.37394/23206.2023.22.83
Zhengquan Yang, Wenjie Yu, Zhiyun Gao
E-ISSN: 2224-2880
756
Volume 22, 2023
and PI strategies, in [18], the authors established a
distributed optimization algorithm with a fixed step-
size, which ensured that the convergence rate was
not affected by the decreasing step-size. In [19], the
authors designed an adaptive distributed continuous-
time algorithm, in which the out-Laplacian matrix
was adopted to overcome the difficulties brought
about by a weight-unbalanced communication graph.
As an important carrier of distributed information
processing, linear multi-agent systems have achieved
many research results. For homogeneous linear
multi-agent systems on general digraphs, some
relevant studies have been conducted in [20].
Actually, different multi-agents have different system
states and even diverse dimensions of state space.
Many works in [21], [22], [23], have been derived
for heterogeneous linear multi-agent systems. By
means of the KKT condition and primal-dual control
scheme, in [22], the authors proposed the state-based
and output-based adaptive control algorithms without
initialization. Based on output feedback, the PI
control technique was used to handle distributed
optimal output consensus problem in [23]. This did
not require the convexity of the local cost functions,
but the global control parameters were adopted.
For more related works on distributed optimization
algorithms, readers can refer to some literature
reviews, [24], [25], [26], [27], [28], [29].
Most of these above-mentioned studies
are limited to distributed optimization problems
under undirected graphs or unconstrained digraphs.
The design of constrained distributed algorithms
under weight-unbalanced digraphs still has some
limitations. When the balance of communication
topology is damaged, the original distributed
algorithms suitable for undirected and balanced
directed graphs may become invalid. The Laplacian
matrix for the unbalanced communication graph is
asymmetric, so the equilibrium point of the general
distributed algorithm is not equal to the optimal
solution. Hence, it is quite necessary to extend
the constrained distributed optimization algorithms
to the weight-unbalanced digraphs. Besides, most
systems are generally heterogeneous in practice, in
the meantime, notice that more and more scholars
focus on the heterogeneity of multi-agent systems.
Generally speaking, on account of the more complex
dynamics involved in systems, dealing with the
consistency of heterogeneous systems is undoubtedly
a challenge. Thus, the study of distributed constrained
optimization problems over weight-unbalanced
digraphs that accounts for the heterogeneous structure of
multi-agent systems deserves further discussion.
To handle the distributed optimization problem
with inequality constraints, in which the local
objective functions are strongly convex, this
paper aims to design a novel continuous-time
optimization algorithm for heterogeneous linear
multi-agent systems. Simultaneously, each agent
interacts information with other agents through a
communication network modeled by the weight-
unbalanced digraph. The main contributions of this
paper are given as follows.
(1) Compared with [6], [7], [8], [22], which require
the graphs to be undirected, our algorithm is
designed over weight-unbalanced digraphs. In
addition, some of the methods used to prove
convergence cannot be applied to unbalanced
graphs such as the symmetry of undirected
graphs in [6]. The distributed optimization
problems on unbalanced digraphs have also
been studied in [16], [17], [23]. However, in
contrast to [16], [17], [23], which can only deal
with unconstrained optimization problems, the
approaches they used do not enable agents to
reach consensus on the intersection of feasible
sets subject to set constraints.
(2) We devote ourselves to researching the
heterogeneous linear multi-agent systems in
this study while the works of [15], [20] are
restricted to homogeneous linear multi-agent
systems. The subsystems of this paper have
different dynamics, so the method is also
suitable for agents with identical dynamics.
We arrange the remaining parts of this paper in the
following order. Section 2 introduces some useful
preliminaries. Section 3 formulates the constrained
optimization problem and gives some indispensable
assumptions and lemmas. The main result of
this article, presented in Section 4, is to seek the
optimal output of the given problem by designing a
distributed continuous-time optimization algorithm
and to analyze its asymptotic convergence. Then in
Section 5, the effectiveness of the proposed algorithm
is illustrated via two numerical examples. Finally,
Section 6 discusses our conclusions and future work.
2 Preliminaries
2.1 Notations
Let $\mathbb{R}$, $\mathbb{R}^n$, $\mathbb{R}^{n\times m}$ stand for the sets of real numbers, $n$-dimensional real vectors, and $n\times m$ real matrices, respectively. Let $I_n\in\mathbb{R}^{n\times n}$, $\mathbf{1}_n\in\mathbb{R}^n$, $\mathbf{0}_n\in\mathbb{R}^n$ represent the identity matrix and the vectors with all entries equal to one and to zero, respectively. $A^\top$ and $\|\cdot\|$ are respectively the transpose of a matrix $A$ and the Euclidean norm. $\mathrm{col}(x_1,\dots,x_n)=(x_1^\top,\dots,x_n^\top)^\top$ is a column vector sequentially stacked from the vectors $x_1,\dots,x_n$. $(p)_+=p$ if $p>0$, and $(p)_+=0$ otherwise. $\otimes$ represents the Kronecker product.
2.2 Convex Analysis and Projection Operation
For a set $\Omega\subseteq\mathbb{R}^d$, if $\tau x_1+(1-\tau)x_2\in\Omega$ for any $x_1,x_2\in\Omega$, $x_1\neq x_2$, and $\tau\in(0,1)$, then $\Omega$ is a convex set. Let $f:\mathbb{R}^d\to\mathbb{R}$ be a continuously differentiable function. If there exists $m>0$ such that $(x_1-x_2)^\top(\nabla f(x_1)-\nabla f(x_2))\ge m\|x_1-x_2\|^2$ holds for any $x_1,x_2\in\Omega$, the function $f$ is said to be $m$-strongly convex over the convex set $\Omega\subseteq\mathbb{R}^d$. Moreover, if there exist $\epsilon_x, M>0$ such that for any $x_1,x_2\in\mathbb{B}(x,\epsilon_x)$ one has $|f(x_1)-f(x_2)|\le M\|x_1-x_2\|$, then we say $f$ is locally Lipschitz at $x\in\mathbb{R}^d$.
We denote by $P_\Omega(x)=\arg\min_{y\in\Omega}\|x-y\|$ the projection of a vector $x$ onto a closed convex set $\Omega$. A useful lemma based on the definition of $P_\Omega(x)$ is the following:
Lemma 1 ([7]) Let $\Omega\subseteq\mathbb{R}^d$ be a nonempty closed convex set. Then $x=P_\Omega(y)$ if and only if $x\in\Omega$ and $(x-y)^\top(\varrho-x)\ge 0$ holds for any $\varrho\in\Omega$.
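For intuition (an illustrative sketch, not part of the paper's analysis), the variational inequality of Lemma 1 can be verified numerically for the Euclidean ball, one of the few sets whose projection $P_\Omega$ has a closed form; `project_ball` is a hypothetical helper name:

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection of x onto the closed ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x.copy() if n <= radius else center + radius * d / n

rng = np.random.default_rng(0)
c, r = np.zeros(3), 1.0
y = rng.normal(size=3) * 5.0           # a point (almost surely) outside the ball
x = project_ball(y, c, r)

# Lemma 1: x = P_Omega(y) iff x in Omega and (x - y)^T (rho - x) >= 0
# for every rho in Omega; sample many rho inside the ball to check.
for _ in range(1000):
    rho = project_ball(rng.normal(size=3), c, r)
    assert (x - y) @ (rho - x) >= -1e-9
```
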
2.3 Graph Theory
We describe $\mathcal{G}=(\mathcal{V},\mathcal{E},A)$ as a communication graph consisting of $N$ agents, with node set $\mathcal{V}=\{1,\dots,N\}$, edge set $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$, and weighted adjacency matrix $A=[a_{ij}]\in\mathbb{R}^{N\times N}$. In particular, $a_{ij}>0$ if $(i,j)\in\mathcal{E}$, and $a_{ij}=0$ otherwise. A directed path from agent $i$ to agent $j$ is a sequence of ordered edges of the form $(i,i_1),(i_1,i_2),\dots,(i_l,j)$. If there exists a directed path connecting every pair of nodes, then $\mathcal{G}$ is called a strongly connected directed graph. Let $d_i^{\mathrm{in}}=\sum_{j=1}^N a_{ij}$ and $d_i^{\mathrm{out}}=\sum_{j=1}^N a_{ji}$ respectively be the in-degree and out-degree of agent $i$. The Laplacian matrix $L=[l_{ij}]\in\mathbb{R}^{N\times N}$ associated with $\mathcal{G}$ is defined by $L=D^{\mathrm{in}}-A$, where $D^{\mathrm{in}}=\mathrm{diag}(d_1^{\mathrm{in}},d_2^{\mathrm{in}},\dots,d_N^{\mathrm{in}})$. If $d_i^{\mathrm{in}}=d_i^{\mathrm{out}}$ for all $i\in\mathcal{V}$, then we say the directed graph is weight-balanced, and otherwise weight-unbalanced.
Lemma 2 ([16]) Let $\mathcal{G}$ be a strongly connected graph and $L\in\mathbb{R}^{N\times N}$ its Laplacian matrix. Then the following properties hold.
(1) $L\mathbf{1}_N=\mathbf{0}_N$;
(2) there is a positive left eigenvector $\xi=(\xi_1,\xi_2,\dots,\xi_N)^\top$ associated with the zero eigenvalue such that $\xi^\top L=\mathbf{0}_N^\top$ and $\sum_{i=1}^N\xi_i=1$;
(3) $\tilde{L}=\frac{\Xi L+L^\top\Xi}{2}$ is a valid Laplacian matrix of a strongly connected and weight-balanced graph, where $\Xi=\mathrm{diag}(\xi_1,\dots,\xi_N)$, and $\tilde{L}$ is positive semidefinite. Zero is a simple eigenvalue of $\tilde{L}$, and $\lambda_2(\tilde{L})=\min_{x\neq\mathbf{0}_N,\,\mathbf{1}_N^\top x=0}\frac{x^\top\tilde{L}x}{x^\top x}$ is its second smallest eigenvalue.
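The quantities in Lemma 2 are easy to compute numerically; the following sketch checks properties (1)-(3) on an illustrative 3-node weight-unbalanced digraph of our own choosing (not one from the paper):

```python
import numpy as np

# Weighted adjacency of a strongly connected, weight-unbalanced digraph.
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A            # L = D_in - A

# Left eigenvector of L for eigenvalue 0: eigenvector of L^T, sum normalized to 1.
w, V = np.linalg.eig(L.T)
xi = np.real(V[:, np.argmin(np.abs(w))])
xi = xi / xi.sum()

Xi = np.diag(xi)
Lt = (Xi @ L + L.T @ Xi) / 2              # the balanced Laplacian \tilde{L}

assert np.allclose(L @ np.ones(3), 0.0)          # (1) L 1_N = 0
assert np.allclose(xi @ L, 0.0, atol=1e-9)       # (2) xi^T L = 0, sum(xi) = 1
assert np.all(xi > 0)
assert np.allclose(Lt @ np.ones(3), 0.0)         # (3) \tilde{L} is a Laplacian
assert np.all(np.linalg.eigvalsh(Lt) > -1e-9)    #     and positive semidefinite
```
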
3 Problem Formulation
We consider a multi-agent system composed of $N$ agents, where each agent $i$ is governed by the following heterogeneous linear dynamics:
$$\dot{x}_i=A_ix_i+B_iu_i,\qquad y_i=C_ix_i,\tag{1}$$
where $x_i\in\mathbb{R}^{n_i}$ is the state variable of agent $i$, $u_i\in\mathbb{R}^{p_i}$ is the control input of agent $i$, and $y_i\in\mathbb{R}^d$ is the control output of agent $i$. $A_i\in\mathbb{R}^{n_i\times n_i}$, $B_i\in\mathbb{R}^{n_i\times p_i}$, $C_i\in\mathbb{R}^{d\times n_i}$ are the constant state, input, and output matrices, respectively.
Our goal is to design a proper controller $u_i(t)$ for each agent $i$, where each agent knows only its own information and local interactions, such that all agents reach the optimal output $y^*$ under the local inequality constraints. The optimization problem with inequality constraints is formulated as follows:
$$\min\ f(y)=\sum_{i=1}^N f_i(y),\qquad \text{s.t. } g_i(y)\le 0,\ i=1,\dots,N,\tag{2}$$
where $y\in\mathbb{R}^d$, $f_i:\mathbb{R}^d\to\mathbb{R}$, $g_i:\mathbb{R}^d\to\mathbb{R}^{r_i}$. Here, $f_i(y)$ is the local objective function of the $i$th agent, and $g_i(y)=(g_{i1}(y),\dots,g_{ir_i}(y))^\top$ collects the local inequality constraints of the $i$th agent, where $r_i$ denotes the number of local constraints. Clearly, the optimization problem can be solved in a distributed way because the local objective function $f_i(y)$ and the local constraints $g_i(y)$ are known only to agent $i$.
Before introducing our main results, we make a few assumptions about the communication graph, the local objective functions, and the local constraint functions.
Assumption 1 The communication network $\mathcal{G}$ is strongly connected.
Assumption 2 The function $f_i(y)$, $i=1,\dots,N$, is $m_i$-strongly convex. The local constraint function $g_i(y)$, $i=1,\dots,N$, is convex.
Assumption 3 The gradient of the local objective function $f_i$, $i=1,\dots,N$, as well as that of the local constraint function $g_i$, $i=1,\dots,N$, is $M_i$-Lipschitz with $M_i>0$.
Under the stacked output variable $y=\mathrm{col}(y_1,y_2,\dots,y_N)\in\mathbb{R}^{Nd}$, the global objective function incurred by all agents is $f(y)=\sum_{i=1}^N f_i(y_i)$. Under Assumption 1, we can reformulate problem (2) as
$$\min\ f(y)=\sum_{i=1}^N f_i(y_i),\qquad \text{s.t. } (L\otimes I_d)y=\mathbf{0},\ g_i(y_i)\le 0,\ i=1,\dots,N.\tag{3}$$
Since the equality constraint $(L\otimes I_d)y=\mathbf{0}$ ensures that $y_1=y_2=\dots=y_N$, problems (2) and (3) are equivalent in terms of the optimal solution set. Hence, to solve problem (2), we can solve problem (3) instead.
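The reformulation hinges on the kernel of $L\otimes I_d$ being exactly the consensus subspace when the graph is strongly connected; a quick numerical illustration (with a small Laplacian of our own choosing, not from the paper) is:

```python
import numpy as np

# (L ⊗ I_d) y = 0 holds exactly for stacked consensus vectors y = 1_N ⊗ ytilde.
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])       # strongly connected, unbalanced digraph
L = np.diag(A.sum(axis=1)) - A
d, N = 2, 3
K = np.kron(L, np.eye(d))

ytilde = np.array([0.7, -1.3])
y_cons = np.kron(np.ones(N), ytilde)       # y_i = ytilde for every agent
y_rand = np.arange(N * d, dtype=float)     # a non-consensus stacked output

assert np.allclose(K @ y_cons, 0.0)        # consensus lies in the kernel
assert np.linalg.norm(K @ y_rand) > 1.0    # disagreement is detected
```
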
Remark 1 Define $Y=\bigcap_{i=1}^N Y_i$ with $Y_i=\{y_i\in\mathbb{R}^d\mid g_i(y_i)\le 0\}$ as the global constraint set on the output variable, and denote the optimal set of the considered optimization problem by $Y^*$. Assumption 1 permits the graph to be unbalanced. Assumption 2 ensures that the optimal solution to (3) is unique and that the optimal set $Y^*$ is convex.
Assumption 4 $(A_i,B_i)$ is controllable, and
$$\mathrm{rank}(C_iB_i)=d,\quad i=1,\dots,N.\tag{4}$$
Lemma 3 ([23]) Under Assumption 4, the matrix equations
$$C_iB_i\Upsilon_i=C_iA_i,\qquad C_iB_i\Psi_i=I_d\tag{5}$$
admit solutions $\Upsilon_i\in\mathbb{R}^{p_i\times n_i}$, $\Psi_i\in\mathbb{R}^{p_i\times d}$.
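Under Assumption 4 the equations (5) are solvable explicitly; a minimal sketch using the Moore-Penrose pseudoinverse (with the agent-1 matrices of Example 1 as extracted; printed signs may differ in the original) is:

```python
import numpy as np

# Since rank(C_i B_i) = d, Upsilon_i and Psi_i in (5) can be obtained from the
# pseudoinverse of C_i B_i, which acts as a right inverse here.
A1 = np.array([[1.0, 0.0],
               [1.0, 1.0]])
B1 = np.array([[1.0],
               [2.0]])
C1 = np.array([[1.0, 1.0]])
d = C1.shape[0]

CB = C1 @ B1                       # 1x1 here, full row rank d = 1
Ups = np.linalg.pinv(CB) @ (C1 @ A1)   # solves C1 B1 Ups = C1 A1
Psi = np.linalg.pinv(CB)               # solves C1 B1 Psi = I_d

assert np.allclose(CB @ Ups, C1 @ A1)
assert np.allclose(CB @ Psi, np.eye(d))
```
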
4 Distributed General Continuous-Time Convex Optimization Algorithm
In this part, we give a novel distributed optimization algorithm to solve the problem (3) and prove its convergence property in detail. The distributed optimization algorithm is given as follows:
$$\begin{aligned}
u_i&=-\Upsilon_ix_i+\Psi_i\Big(-\frac{1}{z_i^i}\nabla f_i(y_i)-(\nabla g_i(y_i))^\top(\mu_i+\dot{\mu}_i)\\
&\qquad-\alpha\sum_{j=1}^N a_{ij}(y_i-y_j)-\sum_{j=1}^N a_{ij}(\eta_i-\eta_j)\Big),\\
\dot{\eta}_i&=\alpha\sum_{j=1}^N a_{ij}(y_i-y_j),\\
\dot{z}_i&=-\sum_{j=1}^N a_{ij}(z_i-z_j),\\
\dot{\mu}_i&=(\mu_i+g_i(y_i))_+-\mu_i,
\end{aligned}\tag{6}$$
where $\eta_i\in\mathbb{R}^d$ and $\mu_i\in\mathbb{R}^{r_i}_+$, with $r=\sum_{i=1}^N r_i$ denoting the total number of constraints, are the auxiliary states of agent $i$; $z_i\in\mathbb{R}^N$ and $z_i^i$ is the $i$th component of $z_i$; and $x_i,u_i,y_i$ are defined as in (1). $\alpha$ is a positive parameter. $\Upsilon_i,\Psi_i$ are feedback matrices based on Lemma 3. $\nabla f_i$ is the gradient of $f_i$, and $\nabla g_i$ is the gradient of $g_i$. The simulation structure diagram of the control algorithm (6) is shown in Figure 1 (Appendix).
Remark 2 In comparison, the related works [7], [8], [22] only give results on undirected graphs, while (6) is designed over unbalanced digraphs. Through the variable $z_i$ in (6), we estimate the left eigenvector associated with the zero eigenvalue of the Laplacian matrix $L$, which removes the imbalance of the communication graph and ensures that the constrained optimization algorithm converges to the optimal output $y^*$ without knowing the left eigenvector in advance. In [20], by contrast, the information of $\xi$ needs to be obtained in advance. Moreover, the variable $\mu_i$ is applied to handle the local inequality constraints. It should be noted that the algorithms in [16], [17], [23] do not take the local state constraints into account. Furthermore, for other unconstrained optimization problems over weight-balanced directed graphs or undirected graphs, the algorithm (6) we designed is still applicable.
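To make the closed loop concrete, the following sketch simulates the reduced output dynamics (8) induced by (6) on a small problem of our own choosing (not one of the paper's examples): three scalar agents with $f_i(y)=\frac12(y-b_i)^2$, $b=(0,2,4)$, a common constraint $g_i(y)=y-1\le 0$, and an unbalanced strongly connected digraph, for which the constrained optimum is $y^*=1$.

```python
import numpy as np

# Forward-Euler simulation of the output dynamics (8): d = 1, N = 3,
# f_i(y) = 0.5*(y - b_i)^2, g_i(y) = y - 1 <= 0, optimum of sum_i f_i is y* = 1.
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])       # unbalanced, strongly connected digraph
L = np.diag(A.sum(axis=1)) - A
N, b, alpha = 3, np.array([0.0, 2.0, 4.0]), 2.0

y = np.array([0.0, 1.0, 2.0])        # agent outputs
eta = np.zeros(N)                    # auxiliary consensus states
mu = np.zeros(N)                     # multipliers for the local constraints
Z = np.eye(N)                        # rows are z_i; z_i^i = Z[i, i] (Remark 3)

dt = 2e-3
for _ in range(int(100 / dt)):
    mubar = np.maximum(mu + (y - 1.0), 0.0)           # (mu + g(y))_+
    ydot = -(y - b) / np.diag(Z) - mubar - alpha * (L @ y) - L @ eta
    eta += dt * alpha * (L @ y)
    mu += dt * (mubar - mu)
    Z += dt * (-L @ Z)               # z-dynamics: zdot = -(L ⊗ I_N) z
    y += dt * ydot

xi = np.diag(Z)                      # z_i^i converges to xi_i
assert np.allclose(y, 1.0, atol=2e-2)           # all outputs reach y* = 1
assert np.allclose(xi @ L, 0.0, atol=1e-6)      # xi is the left eigenvector
assert abs(xi.sum() - 1.0) < 1e-6
```

Here $\alpha$, the step size, and the horizon are arbitrary illustrative choices; the simulation only shows that (8) drives the outputs to the constrained optimum while $z_i^i$ recovers $\xi_i$ online.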
Let $x=\mathrm{col}(x_1,\dots,x_N)$, $y=\mathrm{col}(y_1,\dots,y_N)$, $z=\mathrm{col}(z_1,\dots,z_N)$, $\eta=\mathrm{col}(\eta_1,\dots,\eta_N)$, $\mu=\mathrm{col}(\mu_1,\dots,\mu_N)$, $\Psi=\mathrm{diag}(\Psi_1,\dots,\Psi_N)$, $\Upsilon=\mathrm{diag}(\Upsilon_1,\dots,\Upsilon_N)$, $A=\mathrm{diag}(A_1,\dots,A_N)$, $B=\mathrm{diag}(B_1,\dots,B_N)$, $C=\mathrm{diag}(C_1,\dots,C_N)$, $Z_N=\mathrm{diag}(z_1^1,\dots,z_N^N)$, $g(y)=\mathrm{col}(g_1(y_1),\dots,g_N(y_N))$, $\nabla f(y)=\mathrm{col}(\nabla f_1(y_1),\dots,\nabla f_N(y_N))$, $\nabla g(y)=\mathrm{col}(\nabla g_1(y_1),\dots,\nabla g_N(y_N))$, and $\bar{\mu}=(\mu+g(y))_+$. On the basis of the above definitions, the compact form of the algorithm (6) can be written as
$$\begin{aligned}
\dot{x}&=(A-B\Upsilon)x+B\Psi\big(-(Z_N^{-1}\otimes I_d)\nabla f(y)-(\nabla g(y))^\top\bar{\mu}-\alpha(L\otimes I_d)y-(L\otimes I_d)\eta\big),&(7a)\\
\dot{\eta}&=\alpha(L\otimes I_d)y,&(7b)\\
y&=Cx,&(7c)\\
\dot{z}&=-(L\otimes I_N)z,&(7d)\\
\dot{\mu}&=\bar{\mu}-\mu.&(7e)
\end{aligned}$$
Pre-multiplying (7a) by $C$ and substituting Lemma 3 into the resulting system, we obtain
$$\begin{aligned}
\dot{y}&=-(Z_N^{-1}\otimes I_d)\nabla f(y)-(\nabla g(y))^\top\bar{\mu}-\alpha(L\otimes I_d)y-(L\otimes I_d)\eta,\\
\dot{\eta}&=\alpha(L\otimes I_d)y,\\
\dot{z}&=-(L\otimes I_N)z,\\
\dot{\mu}&=\bar{\mu}-\mu.
\end{aligned}\tag{8}$$
Remark 3 In order to guarantee the existence of $Z_N^{-1}$, it is essential that the initial value $z(0)$ satisfy $z_i^i(0)=1$ for $i=1,\dots,N$ and $z_i^j(0)=0$ for all $j\neq i$. With Assumption 1, it can be readily obtained that $e^{-Lt}$ $(t>0)$ is a nonnegative matrix with positive diagonal entries. This implies that $z_i^i>0$ for all $t\ge 0$. Therefore $(z_i^i)^{-1}$ exists, that is, $Z_N^{-1}$ is well-defined.
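The claims in Remark 3 can be checked numerically: integrating the $z$-dynamics from the prescribed initialization, the diagonal entries $z_i^i=[e^{-Lt}]_{ii}$ stay positive and converge to $\xi_i$ (illustrative digraph of our own choosing, not from the paper):

```python
import numpy as np

# zdot = -(L ⊗ I_N) z with rows of Z being the z_i, so Zdot = -L Z, Z(0) = I_N.
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A
N, dt = 3, 1e-3

Z = np.eye(N)                          # Remark 3 initialization: z_i(0) = e_i
for _ in range(int(50 / dt)):
    Z += dt * (-L @ Z)
    assert np.all(np.diag(Z) > 0.0)    # Z_N^{-1} stays well-defined

xi = np.diag(Z)                        # limit of z_i^i
assert np.allclose(xi @ L, 0.0, atol=1e-8)   # the left eigenvector is recovered
assert abs(xi.sum() - 1.0) < 1e-8
```
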
Next, the optimality condition is given in Lemma 4, which plays a crucial part in the following analysis.
Lemma 4 Let the Lagrangian function of problem (3) be $\mathcal{L}(y,\eta,\mu)=f(y)+\frac12\|(\Xi^{1/2}\otimes I_d)\bar{\mu}\|^2+y^\top(\tilde{L}\otimes I_d)\eta$, where $\eta\in\mathbb{R}^{Nd}$ and $\mu\in\mathbb{R}^{r}_+$ are Lagrange multipliers. Then, if Assumptions 1-4 hold, the point $y^*$ is an optimal solution of the optimization problem (3) if and only if there exist Lagrange multipliers $(\eta^*,\mu^*)\in\mathbb{R}^{Nd}\times\mathbb{R}^{r}_+$ such that the following KKT condition holds:
$$\begin{aligned}
&\mathbf{0}=\nabla f(y^*)+(\Xi\otimes I_d)(\nabla g(y^*))^\top\mu^*+(\tilde{L}\otimes I_d)\eta^*,\\
&\mu^*\ge 0,\quad g(y^*)\le 0,\quad (\mu^*)^\top g(y^*)=0.
\end{aligned}\tag{9}$$
Proof 1 The second line of (9) can be discussed in the following two cases.
(1) If $\mu^*\ge 0$ and $g(y^*)=0$, then $(\mu^*)^\top g(y^*)=0$, and we get $(\mu^*+g(y^*))_+=\max(\mu^*+g(y^*),0)=\mu^*+g(y^*)=\mu^*$.
(2) If $\mu^*=0$ and $g(y^*)\le 0$, then $(\mu^*)^\top g(y^*)=0$, and by a similar discussion we also have $(\mu^*+g(y^*))_+=0=\mu^*$.
And vice versa.
Combining the definition of $(\cdot)_+$ with pre-multiplication of (9) by $(\Xi^{-1}\otimes I_d)$, (9) is equivalent to
$$\begin{aligned}
&\mathbf{0}=(\Xi^{-1}\otimes I_d)\nabla f(y^*)+(\nabla g(y^*))^\top\mu^*+(L\otimes I_d)\eta^*,\\
&\mu^*=(\mu^*+g(y^*))_+.
\end{aligned}\tag{10}$$
The lemma below gives the relation between an optimal solution of the optimization problem (3) and an equilibrium point of the algorithm (8).
Lemma 5 Assume that Assumptions 1-4 hold. Given $z(0)$ as in Remark 3 and the parameter $\alpha>0$, if $(y^*,\eta^*,\mu^*,z^*)$ is an equilibrium point of the algorithm (8), then $y^*$ is an optimal solution of the distributed constrained optimization problem (3).
Proof 2 Let $(y^*,\eta^*,\mu^*,z^*)$ be an equilibrium point of the algorithm (8), so that
$$\begin{aligned}
\mathbf{0}&=-((Z_N^{-1})^*\otimes I_d)\nabla f(y^*)-(\nabla g(y^*))^\top\bar{\mu}^*-\alpha(L\otimes I_d)y^*-(L\otimes I_d)\eta^*,\\
\mathbf{0}&=\alpha(L\otimes I_d)y^*,\\
\mathbf{0}&=-(L\otimes I_N)z^*,\\
\mathbf{0}&=\bar{\mu}^*-\mu^*.
\end{aligned}\tag{11}$$
It follows from [16] that $\lim_{t\to\infty}z=\lim_{t\to\infty}e^{-(L\otimes I_N)t}z(0)=((\mathbf{1}_N\xi^\top)\otimes I_N)z(0)$, which implies that $\lim_{t\to\infty}Z_N=\Xi$ and hence $\lim_{t\to\infty}Z_N^{-1}=\Xi^{-1}$. Thus $(Z_N^{-1})^*=\Xi^{-1}$. Based on the above discussion and the definition $\bar{\mu}=(\mu+g(y))_+$, we have
$$\begin{aligned}
\mathbf{0}&=-(\Xi^{-1}\otimes I_d)\nabla f(y^*)-(\nabla g(y^*))^\top\mu^*-(L\otimes I_d)\eta^*,&(12a)\\
\mathbf{0}&=(L\otimes I_d)y^*,&(12b)\\
\bar{\mu}^*&=\mu^*=(\mu^*+g(y^*))_+.&(12c)
\end{aligned}$$
It is obvious that (12b) yields $y^*=\mathbf{1}_N\otimes\tilde{y}$ with $\tilde{y}\in\mathbb{R}^d$. Furthermore, by comparing (12a) and (12c) with (10), we obtain that the equilibrium point of (8) satisfies condition (10). By Lemma 4, $y^*$ is an optimal solution to problem (3).
Then, Theorem 1 provides the proof of convergence.
Theorem 1 Under Assumptions 1-4, given $z(0)$ as in Remark 3 and the parameter $\alpha>0$, the output variable $y(t)$ asymptotically converges to the optimal solution $y^*$ of (3) for any initial value $(y(0),\eta(0),\mu(0))\in\mathbb{R}^{Nd}\times\mathbb{R}^{Nd}\times\mathbb{R}^{r}_+$.
Proof 3 Let $\theta=\mathrm{col}(y,\eta,\mu)$. It follows from (8) that $\theta$ satisfies
$$\dot{\theta}=f(\theta)+g(t,\theta)+v(t),\tag{13}$$
where $f(\theta)=\mathrm{col}(f_1(\theta),f_2(\theta),f_3(\theta))$ with
$$\begin{aligned}
f_1(\theta)&=-(\Xi^{-1}\otimes I_d)\nabla f(y)-(\nabla g(y))^\top\bar{\mu}-\alpha(L\otimes I_d)y-(L\otimes I_d)\eta,\\
f_2(\theta)&=\alpha(L\otimes I_d)y,\qquad f_3(\theta)=\bar{\mu}-\mu,
\end{aligned}$$
$$g(t,\theta)=\begin{pmatrix}((\Xi^{-1}-Z_N^{-1})\otimes I_d)(\nabla f(y)-\nabla f(y^*))\\ \mathbf{0}\\ \mathbf{0}\end{pmatrix},\qquad
v(t)=\begin{pmatrix}((\Xi^{-1}-Z_N^{-1})\otimes I_d)\nabla f(y^*)\\ \mathbf{0}\\ \mathbf{0}\end{pmatrix}.$$
First, consider the stability of the system
$$\dot{\theta}=f(\theta).\tag{14}$$
Let $\theta^*=\mathrm{col}(y^*,\eta^*,\mu^*)$ be an equilibrium point of system (14), and define the Lyapunov candidate
$$V_1=\frac12(y-y^*)^\top(\Xi\otimes I_d)(y-y^*).\tag{15}$$
Then we have
$$\begin{aligned}
\dot{V}_1&=(y-y^*)^\top(\Xi\otimes I_d)\big(-(\Xi^{-1}\otimes I_d)\nabla f(y)-(\nabla g(y))^\top\bar{\mu}-\alpha(L\otimes I_d)y-(L\otimes I_d)\eta\big)\\
&=-(y-y^*)^\top(\nabla f(y)-\nabla f(y^*))-(y-y^*)^\top(\Xi\otimes I_d)\big((\nabla g(y))^\top\bar{\mu}-(\nabla g(y^*))^\top\mu^*\big)\\
&\quad-\frac{\alpha}{2}(y-y^*)^\top\big((\Xi L+L^\top\Xi)\otimes I_d\big)(y-y^*)-\frac12(y-y^*)^\top\big((\Xi L+L^\top\Xi)\otimes I_d\big)(\eta-\eta^*)\\
&=-(y-y^*)^\top(\nabla f(y)-\nabla f(y^*))-(y-y^*)^\top(\Xi\otimes I_d)\big((\nabla g(y))^\top\bar{\mu}-(\nabla g(y^*))^\top\mu^*\big)\\
&\quad-\alpha(y-y^*)^\top(\tilde{L}\otimes I_d)(y-y^*)-(y-y^*)^\top(\tilde{L}\otimes I_d)(\eta-\eta^*),
\end{aligned}\tag{16}$$
where $\tilde{L}=\frac{\Xi L+L^\top\Xi}{2}$ is as in Lemma 2.
Next, define $V_2=\frac12(\mu-\mu^*)^\top(\Xi\otimes I_d)(\mu-\mu^*)$. According to the definition of $(\cdot)_+$ and Lemma 1, we can conclude that
$$\big(\dot{\mu}+\mu-(\mu+g(y))\big)^\top(\mu^*-\dot{\mu}-\mu)\ge 0.\tag{17}$$
Expanding (17) gives
$$\dot{\mu}^\top(\mu^*-\mu)-\|\dot{\mu}\|^2-g(y)^\top(\mu^*-\bar{\mu})\ge 0,\tag{18}$$
and weighting (18) by $(\Xi\otimes I_d)$, it can be obtained that
$$\dot{\mu}^\top(\Xi\otimes I_d)(\mu^*-\mu)-\|(\Xi\otimes I_d)^{\frac12}\dot{\mu}\|^2-g(y)^\top(\Xi\otimes I_d)(\mu^*-\bar{\mu})\ge 0.\tag{19}$$
Combining (19) with the definition of $V_2$, this simplifies to
$$\dot{V}_2\le-\|(\Xi\otimes I_d)^{\frac12}\dot{\mu}\|^2-g(y)^\top(\Xi\otimes I_d)(\mu^*-\bar{\mu}).\tag{20}$$
Take $V_3=\frac{1}{2\alpha}(\eta-\eta^*)^\top(\Xi\otimes I_d)(\eta-\eta^*)$, where $\alpha>0$. Then the derivative of $V_3$ along (14) is
$$\dot{V}_3=\frac{1}{2\alpha}\cdot\alpha(y-y^*)^\top\big((\Xi L+L^\top\Xi)\otimes I_d\big)(\eta-\eta^*)=(y-y^*)^\top(\tilde{L}\otimes I_d)(\eta-\eta^*).\tag{21}$$
Let
$$V=V_1+V_2+V_3\tag{22}$$
be a Lyapunov function candidate, which is obviously positive semidefinite. By utilizing (16), (20), and (21) in (22), we obtain
$$\begin{aligned}
\dot{V}\le&-(y-y^*)^\top(\nabla f(y)-\nabla f(y^*))-\alpha(y-y^*)^\top(\tilde{L}\otimes I_d)(y-y^*)\\
&-\|(\Xi\otimes I_d)^{\frac12}\dot{\mu}\|^2-g(y)^\top(\Xi\otimes I_d)(\mu^*-\bar{\mu})\\
&-(y-y^*)^\top(\Xi\otimes I_d)\big((\nabla g(y))^\top\bar{\mu}-(\nabla g(y^*))^\top\mu^*\big).
\end{aligned}\tag{23}$$
Since the function $f$ is strongly convex, we have
$$(y-y^*)^\top(\nabla f(y)-\nabla f(y^*))\ge m\|y-y^*\|^2.\tag{24}$$
Using the fact that the communication graph is a strongly connected digraph, one can obtain that
$$\alpha(y-y^*)^\top(\tilde{L}\otimes I_d)(y-y^*)\ge\alpha\lambda_2(\tilde{L})\|y-y^*\|^2,\tag{25}$$
where $\lambda_2(\tilde{L})$ is the second smallest eigenvalue of $\tilde{L}$.
Next, analyze $\omega=-g(y)^\top(\Xi\otimes I_d)(\mu^*-\bar{\mu})-(y-y^*)^\top(\Xi\otimes I_d)\big((\nabla g(y))^\top\bar{\mu}-(\nabla g(y^*))^\top\mu^*\big)$. By rearranging these terms, it follows that
$$\omega=(\mu^*)^\top(\Xi\otimes I_d)\big((\nabla g(y^*))^\top(y-y^*)-g(y)\big)-\bar{\mu}^\top(\Xi\otimes I_d)\big((\nabla g(y))^\top(y-y^*)-g(y)\big).\tag{26}$$
By adding and subtracting $g(y^*)$ in (26), one has
$$\begin{aligned}
\omega=&\ (\mu^*)^\top(\Xi\otimes I_d)\big((\nabla g(y^*))^\top(y-y^*)+g(y^*)-g(y)\big)-(\mu^*)^\top(\Xi\otimes I_d)g(y^*)\\
&-\bar{\mu}^\top(\Xi\otimes I_d)\big((\nabla g(y))^\top(y-y^*)+g(y^*)-g(y)\big)+\bar{\mu}^\top(\Xi\otimes I_d)g(y^*),
\end{aligned}\tag{27}$$
where the convexity of $g(y)$ and the nonnegativity of $\bar{\mu}$ and $\mu^*$ are used in the first and third terms. It follows from Lemma 4 that $y^*$ is an optimal solution to the optimization problem (3). This makes it easy to verify that $g(y^*)\le 0$, $\bar{\mu}^\top(\Xi\otimes I_d)g(y^*)\le 0$, and $(\mu^*)^\top(\Xi\otimes I_d)g(y^*)=0$. Hence, what we can gain from the above analysis is that
$$\dot{V}\le-\|(\Xi\otimes I_d)^{\frac12}\dot{\mu}\|^2-\big(m+\alpha\lambda_2(\tilde{L})\big)\|y-y^*\|^2\le 0.\tag{28}$$
Since $\dot{V}$ is continuous and nonpositive for any $\alpha>0$, $V(t)$ is bounded, and we can conclude that $y(t)$, $\mu(t)$, and $\eta(t)$ are bounded. Together with the Lipschitz conditions on $f_i$ and $g_i$, which imply that $\nabla f_i$ and $\nabla g_i$ are bounded, this shows that $-(Z_N^{-1}\otimes I_d)\nabla f(y)-(\nabla g(y))^\top\bar{\mu}-\alpha(L\otimes I_d)y-(L\otimes I_d)\eta$ is also bounded; denote by $\sigma>0$ its upper bound. Furthermore, we have $\|x(t)\|\le\|x_0\|e^{-t}+(1-e^{-t})\sigma\le\|x_0\|+\sigma$ according to the expression of $\dot{x}(t)$, so $x(t)$ is also bounded. Since $V$ is radially unbounded, by the LaSalle invariance principle it can be found that $\lim_{t\to\infty}y_i(t)=y^*$ for $i=1,\dots,N$.
Next, we discuss the asymptotic decay of the terms $g(t,\theta)$ and $v(t)$.
From [16], there exist two positive constants $\varrho_1$ and $\iota_1$ such that $\max_i|\xi_i^{-1}-(z_i^i)^{-1}|\le\varrho_1 e^{-\iota_1 t}$ holds. Thus, $g(t,\theta)$ satisfies the inequality
$$\|g(t,\theta)\|=\|((\Xi^{-1}-Z_N^{-1})\otimes I_d)(\nabla f(y)-\nabla f(y^*))\|\le\max_i|\xi_i^{-1}-(z_i^i)^{-1}|\cdot\|\nabla f(y)-\nabla f(y^*)\|\le M\varrho_1 e^{-\iota_1 t}\|y-y^*\|,\tag{29}$$
and $v(t)$ satisfies
$$\|v(t)\|=\|((\Xi^{-1}-Z_N^{-1})\otimes I_d)\nabla f(y^*)\|\le\max_i|\xi_i^{-1}-(z_i^i)^{-1}|\cdot\|\nabla f(y^*)\|\le\varrho_1\|\nabla f(y^*)\|e^{-\iota_1 t}.\tag{30}$$
In light of (29) and (30), it is evident that $\lim_{t\to\infty}g(t,\theta)=\mathbf{0}$ and $\lim_{t\to\infty}v(t)=\mathbf{0}$. As indicated above, $\theta(t)$ asymptotically converges to $\theta^*$ as $t\to\infty$, which means that the proposed algorithm is effective for solving problem (3). This completes the proof.
5 Numerical Examples
In this section, two numerical examples are provided to demonstrate the effectiveness of the presented algorithm.
Example 1 We consider a heterogeneous multi-agent system consisting of six agents in $\mathbb{R}^2$, inspired by [23], where
$$A_1=A_2=\begin{pmatrix}1&0\\1&1\end{pmatrix},\quad A_3=A_4=\begin{pmatrix}0&1\\2&1\end{pmatrix},\quad A_5=A_6=\begin{pmatrix}1&1&0\\0&1&1\\1&0&2\end{pmatrix},$$
$$B_1=B_2=\begin{pmatrix}1\\2\end{pmatrix},\quad B_3=B_4=\begin{pmatrix}1\\1\end{pmatrix},\quad B_5=B_6=\begin{pmatrix}1\\0\\2\end{pmatrix},$$
$$C_1=C_2=\begin{pmatrix}1&1\end{pmatrix},\quad C_3=C_4=\begin{pmatrix}3&1\end{pmatrix},\quad C_5=C_6=\begin{pmatrix}\tfrac12&1&0\end{pmatrix}.$$
The local objective functions, adapted from [11], and the local convex inequality functions of the six agents are given by
$$\begin{aligned}
f_1(y_1)&=\tfrac12 y_{11}^2+\tfrac12 y_{12}^2, & f_2(y_2)&=\tfrac12(y_{21}+1)^2+\tfrac12 y_{22}^2,\\
f_3(y_3)&=\tfrac12 y_{31}^2+\tfrac12(y_{32}+1)^2, & f_4(y_4)&=\tfrac12(y_{41}+1)^2+\tfrac12(y_{42}+1)^2,\\
f_5(y_5)&=\tfrac14 y_{51}^4+\tfrac14 y_{52}^4, & f_6(y_6)&=\tfrac14(y_{61}+1)^4+\tfrac14 y_{62}^4,\\
g_i(y_i)&=(y_{i1}-2)^2+(y_{i2}-2)^2-5, & i&=1,\dots,6.
\end{aligned}\tag{31}$$
Obviously, all of the local objective functions and inequality functions are continuously differentiable. Due to the strong convexity of the function $f(y)=\sum_{i=1}^N f_i(y_i)$, the global minimizer $y^*$ is unique. The minimum value $f(y^*)=5.2347$ and the optimal solution $y^*=[0.2945,\,0.5539]^\top$ can be acquired through some simple calculations.
In addition, the feedback gain matrices of each agent can be selected as $\gamma_1=\gamma_2=[2\ \ 1]$, $\gamma_3=\gamma_4=[1\ \ 2]$, $\gamma_5=\gamma_6=[1\ \ 1\ \ 2]$, $\Psi_1=\Psi_2=[1]$, $\Psi_3=\Psi_4=[\tfrac12]$, $\Psi_5=\Psi_6=[2]$. The initial states $x_i(0),\eta_i(0)\in\mathbb{R}^2$ are chosen randomly, and the initial value $\mu_i(0)=0$ is set.
The communication network among these six agents is unbalanced and is depicted in Figure 2 (Appendix), with the weight of every edge set to 1.
Taking $\alpha=7$ and $\alpha=10$, the state trajectories of the agents obtained by executing algorithm (6) are given in Figure 3 (Appendix). It can be seen that the outputs of all agents converge to the global optimal solution $y^*$ of optimization problem (2). Meanwhile, comparing the two subplots in Figure 3 (Appendix), the curve corresponding to the gain parameter $\alpha=7$ converges around $t=20$ s, while the curve corresponding to $\alpha=10$ converges around $t=15$ s; that is, the convergence rate is improved when $\alpha$ is increased from 7 to 10. This indicates that faster convergence can be obtained by choosing larger constant control parameters.
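As an independent cross-check of the reported optimum (our own verification, not part of the paper), the centralized version of problem (2) with the data of (31) can be solved by projected gradient descent onto the disk $(y_1-2)^2+(y_2-2)^2\le 5$:

```python
import numpy as np

# Centralized check of Example 1: minimize sum_i f_i(y) over the disk
# (y1-2)^2 + (y2-2)^2 <= 5 via projected gradient descent.
def F(y):
    y1, y2 = y
    return (0.5*y1**2 + 0.5*y2**2 + 0.5*(y1+1)**2 + 0.5*y2**2      # f1, f2
            + 0.5*y1**2 + 0.5*(y2+1)**2                            # f3
            + 0.5*(y1+1)**2 + 0.5*(y2+1)**2                        # f4
            + 0.25*y1**4 + 0.25*y2**4                              # f5
            + 0.25*(y1+1)**4 + 0.25*y2**4)                         # f6

def grad_F(y):
    y1, y2 = y
    g1 = 2*y1 + 2*(y1 + 1) + y1**3 + (y1 + 1)**3
    g2 = 2*y2 + 2*(y2 + 1) + 2*y2**3
    return np.array([g1, g2])

def project_disk(y, c=np.array([2.0, 2.0]), r=np.sqrt(5.0)):
    d = y - c
    n = np.linalg.norm(d)
    return y if n <= r else c + r * d / n

y = np.zeros(2)
for _ in range(20000):
    y = project_disk(y - 0.005 * grad_F(y))

assert np.allclose(y, [0.2945, 0.5539], atol=2e-3)   # the reported y*
assert abs(F(y) - 5.2347) < 5e-3                     # the reported f(y*)
```

The iterate lands on the constraint boundary, confirming that the inequality constraint is active at the reported optimum.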
Example 2 In this example, we again consider a system consisting of six agents in $\mathbb{R}^2$. Assume that the coefficient matrices and the feedback gain matrices are the same as those chosen in Example 1, and so is the communication graph. Each agent $i$ has the following objective function $f_i$, $i=1,\dots,6$, and convex inequality function $g_i$, $i=1,\dots,6$:
$$\begin{aligned}
f_1(y_1)&=\frac{y_{11}^2}{\ln(y_{11}^2+6)}, & f_2(y_2)&=\frac{(y_{21}+1)^2}{2\ln(y_{21}^2+3)}+y_{22}^2,\\
f_3(y_3)&=\frac{y_{31}^2}{\sqrt{y_{31}^2+1}}+\frac15 y_{32}^2, & f_4(y_4)&=y_{41}^4+2y_{42}^2+5,\\
f_5(y_5)&=\frac{1}{10}y_{51}^2+e^{\frac{1}{10}y_{52}}, & f_6(y_6)&=\frac12 e^{\frac12 y_{61}}+\frac25 e^{\frac{3}{10}y_{62}},\\
g_i(y_i)&=(y_{i1}-3)^2+(y_{i2}-5)^2-4, & i&=1,\dots,6.
\end{aligned}\tag{32}$$
Besides, the initial states $x_i(0),\eta_i(0)\in\mathbb{R}^2$ are configured randomly, and the initial values are set to $\mu_i(0)=0$ with $\alpha=10$. Figure 4 (Appendix) depicts the evolution trajectories of the six agents obtained by executing algorithm (6), and it is clear to observe that the output decision of each agent also effectively converges to the exact optimal value $y^*=[1.606,\,3.566]^\top$.
6 Conclusion
To handle the distributed optimization problem with inequality constraints for heterogeneous linear multi-agent systems over weight-unbalanced digraphs, a distributed continuous-time optimization algorithm has been presented. Additionally, a variable is designed to estimate the left eigenvector associated with the zero eigenvalue of the Laplacian matrix, which removes the imbalance of the communication graph. When the local cost functions are assumed to be strongly convex with globally Lipschitz gradients and the local constraint functions are assumed to be convex, it is proved by resorting to Lyapunov stability theory that the states of the agents asymptotically converge to consensus at an optimal solution of the given optimization problem.
In the future, we will focus on nonsmooth constrained optimization problems with communication delays and on practical safety constraints protecting each agent against collision.
References:
[1] P. Yi, Y. Hong, and F. Liu, Initialization-free
distributed algorithms for optimal resource
allocation with feasibility constraints and
application to economic dispatch of power
systems, Automatica, Vol. 74, pp. 259-269,
2016.
[2] Y. Zhang, Y. Lou, Y. Hong, and L. Xie,
Distributed projection-based algorithms for
source localization in wireless sensor networks,
IEEE Transactions on Wireless Communication,
Vol. 14, No. 6, pp. 3131-3142, 2015.
[3] Y. Huang, H. Wang, and P. Yao, Energy-optimal
path planning for solar-powered UAV with tracking
moving ground target, Aerospace Science &
Technology, Vol. 53, pp. 241-251, 2016.
[4] A. Nedic, and A. Ozdaglar, Distributed
subgradient methods for multi-agent
optimization, IEEE Transactions on Automatic
Control, Vol. 54, No. 1, pp. 48-61, 2009.
[5] A. Nedic, A. Ozdaglar, and P. Parrilo, Constrained
consensus and optimization in multi-agent
networks, IEEE Transactions on Automatic
Control, Vol. 55, No. 4, pp. 922-938, 2010.
[6] A. Huang, F. Chen, and W. Lan, An Adaptive
dynamic protocol for distributed convex
optimization, in Proceedings of the 34th
Chinese Control Conference (CCC), China, pp.
1318-1322, 2015.
[7] W. Xu, and F. Yang, Projection-based dynamics
for distributed optimization subject to general
constraints, Proceedings of the 37th Chinese
Control Conference, China, pp. 2474-2478, 2018.
[8] G. Chen, Q. Yang, Y. Song, and F. Lewis,
A distributed continuous-time algorithm for
nonsmooth constrained optimization, IEEE
Transactions on Automatic Control, 2020, doi:
10.1109/TAC.2020.2965905.
[9] M. Lian, Z. Guo, X. Wang, S. Wen, and T.
Huang, Adaptive exact penalty design for optimal
resource allocation, IEEE Transactions on Neural
Networks and Learning Systems, Vol. 34, No. 3,
pp. 1430-1438, 2023.
[10] Y. Zou, B. Huang, and Z. Meng, Distributed
continuous-time algorithm for constrained
optimization of networked Euler-Lagrange
systems, IEEE Transactions on Control of
Network Systems, Vol. 8, No. 2, pp. 1034-1041,
2021.
[11] Z. Yang, Q. Zhang, and Z. Chen, Distributed
constraint optimization with flocking
behavior, Complexity, 2018,
https://doi.org/10.1155/2018/1579865.
[12] S. Ram, V. Veeravalli, and A. Nedic,
Distributed non-autonomous power control
through distributed convex optimization, in
Proceedings of the 28th conference on computer
communications, Brazil, pp. 3001-3005, 2009.
[13] A. Nedic, and A. Olshevsky, Distributed
optimization over time-varying directed graphs,
in 52nd IEEE Conference on Decision and
Control, Italy, pp. 6855-6860, 2013.
[14] Q. Yang, G. Chen, and J. Ren, Continuous-
time algorithm for distributed constrained
optimization over directed graphs, in 2019 IEEE
15th International Conference on Control and
Automation (ICCA), UK, pp. 1020-1025, 2019.
[15] Y. Zhu, W. Ren, W. Yu, and G. Wen, Distributed
resource allocation over directed graphs via
continuous-time algorithms, IEEE Transactions
on Systems, Man, and Cybernetics: Systems, Vol.
51, No. 2, pp. 1090-1106, 2021.
[16] Y. Zhu, W. Yu, G. Wen, and W. Ren, Continuous-
time coordination algorithm for distributed
convex optimization over weight-unbalanced
directed networks, IEEE Transactions on Circuits
and Systems II: Express Briefs, Vol. 66, No. 7,
pp. 1202-1206, 2019.
[17] Z. Li, Z. Ding, J. Sun, and Z. Li, Distributed
adaptive convex optimization on directed
graphs via continuous-time algorithms, IEEE
Transactions on Automatic Control, Vol. 63, No.
5, pp. 1434-1441, 2018.
[18] J. Xia, and W. Ni, Distributed optimization
algorithms over weighted unbalanced
directed networks, Journal of Nanchang
University (Natural Science), Vol. 46, No. 3, pp.
303-308, 2022.
[19] M. Lian, Z. Guo, S. Wen, and T. Huang,
Distributed adaptive algorithm for resource
allocation problem over weight-unbalanced
graphs, IEEE Transactions on Network
Science and Engineering, pp. 1-10, 2023,
doi: 10.1109/TNSE.2023.3300736.
[20] J. Zhang, L. Liu, and H. Ji, Exponential
convergence of distributed optimal coordination
for linear multi-agent systems over general
digraphs, in Proceedings of the 39th Chinese
Control Conference (CCC), China, pp. 5047-
5051, 2020.
[21] Z. Li, Z. Wu, Z. Li, and Z. Ding, Distributed
optimal coordination for heterogeneous linear
multiagent systems with event-triggered
mechanisms, IEEE Transactions on Automatic
Control, Vol. 65, No. 4, pp. 1763-1770, 2020.
[22] B. Shao, M. Li, and X. Shi, Distributed
resource allocation algorithm for general
linear multiagent systems, IEEE Access,
Vol. 10, pp. 74691-74701, 2022, doi:
10.1109/ACCESS.2022.3191909.
[23] L. Li, Y. Yu, X. Li, and L. Xie, Exponential
convergence of distributed optimization for
heterogeneous linear multi-agent systems
over unbalanced digraphs, Automatica, 2022,
https://doi.org/10.1016/j.automatica.2022.110259.
[24] Y. Hong, and Y. Zhang, Distributed
optimization: algorithm design and convergence
analysis, Control Theory & Applications, Vol.
31, No. 7, pp. 850-857, 2014.
[25] P. Yi, and Y. Hong, Distributed cooperative
optimization and its application, Scientia
Sinica Mathematica, Vol. 46, No. 10, pp. 1547-
1564, 2016.
[26] P. Xie, K. You, Y. Hong, and L. Xie, Research
progress of networked distributed convex
optimization algorithms, Control Theory
& Applications, Vol. 35, No. 7, pp. 918-927,
2018.
[27] L. Wang, K. Lu, and Y. Guan, Distributed
optimization via multi-agent systems, Control
Theory & Applications, Vol. 36, No. 11, pp. 1820-
1833, 2019.
[28] X. Jiang, X. Zeng, J. Sun, and J. Chen, Research
status and prospect of distributed optimization
for multiple aircraft, Acta Aeronautica et
Astronautica Sinica, Vol. 42, No. 4, pp. 1-16,
2021.
[29] Q. Yang, J. Yu, and Y. Gao, An adaptive
optimization algorithm for heterogeneous linear
multi-agent systems with inequality constraints,
in Proceedings of the 42nd Chinese Control
Conference (CCC), China, pp. 6123-6128, 2023.
WSEAS TRANSACTIONS on MATHEMATICS
DOI: 10.37394/23206.2023.22.83
Zhengquan Yang, Wenjie Yu, Zhiyun Gao
E-ISSN: 2224-2880
764
Volume 22, 2023
Conflict of Interest
The authors have no conflicts of interest to declare
that are relevant to the content of this article.
Creative Commons Attribution
License 4.0 (Attribution 4.0
International, CC BY 4.0)
This article is published under the terms of the
Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en_US
Acknowledgments
Zhengquan Yang proposed the idea of the method
and checked the correctness of the manuscript.
Wenjie Yu developed the method and wrote the
article. Zhiyun Gao carried out the simulation and
the optimization.
Sources of Funding
The authors acknowledge the financial support
from the Natural Science Foundation of China (Key
Project Grant No. 62173332).
Figure 1: The controller to solve the optimization problem (3).
Figure 2: Communication graph among six agents.
Figure 3: The state trajectories of the output variable yi (i = 1, ..., 6) with respect to time t for different values
of α (α = 7 and α = 10) by executing algorithm (6).
APPENDIX
Figure 4: The state trajectories of the output variable yi (i = 1, ..., 6) with respect to time t (α = 10) by executing
algorithm (6).