Chandrasekhar-type Algorithms with Gain Elimination
NICHOLAS ASSIMAKIS1, MARIA ADAM2
1Department of Digital Industry Technologies,
National and Kapodistrian University of Athens,
34400 Psachna Evias,
GREECE
2Department of Computer Science and Biomedical Informatics,
University of Thessaly,
2-4 Papasiopoulou Str., 35131, Lamia,
GREECE
Abstract: - Chandrasekhar-type algorithms are associated with the Riccati equation emanating from the Kalman
filter in linear systems which describe the relationship between the n-dimensional state and the m-dimensional
measurement. The traditional Chandrasekhar-type algorithms use the Kalman filter gain to compute the
prediction error covariance. In this paper, two variations of Chandrasekhar-type algorithms eliminating the
Kalman filter gain are proposed. The proposed Chandrasekhar-type algorithms with gain elimination may be
faster than the traditional Chandrasekhar-type algorithms, depending on the model dimensions.
Key-Words: - Discrete time, Kalman filter, Discrete algebraic Riccati equation, algebraic Lyapunov equation,
Chandrasekhar-type algorithms, Kalman filter gain, convergence theory.
Received: April 13, 2023. Revised: December 18, 2023. Accepted: December 27, 2023. Published: December 31, 2023.
1 Introduction
Consider discrete time, time invariant linear systems, which are traditionally formulated by the state space equations, [1]:

x(k+1) = F x(k) + w(k)   (1)
z(k) = H x(k) + v(k)   (2)

Here, x(k) denotes the state vector of dimension n with Gaussian noise w(k) of zero mean and covariance Q, and z(k) denotes the measurement vector of dimension m with Gaussian noise v(k) of zero mean and covariance R. In addition, F is the transition matrix and H is the output matrix. All the model parameters F, H, Q, R are constant. The initial state x(0) is Gaussian with mean x_0 and covariance P_0.
The discrete time Kalman filter, [1], [2], is the celebrated algorithm which computes the state estimation x(k/k) and the estimation error covariance P(k/k), as well as the state prediction x(k+1/k) and the prediction error covariance matrix P(k+1/k). The prediction and estimation error covariances do not depend on the measurements; thus they can be computed off-line using the equations

W(k) = H P(k/k-1) H^T + R   (3)
K(k) = P(k/k-1) H^T W^(-1)(k)   (4)
P(k/k) = [I - K(k) H] P(k/k-1)   (5)
P(k+1/k) = F P(k/k) F^T + Q   (6)

with initial condition P(0/-1) = P_0. Here, A^T denotes the transpose of a matrix A, I denotes the identity matrix, and K(k) is the Kalman filter gain. Note that the existence of the inverse of W(k) is ensured by assuming that R is positive definite (this has the reasonable meaning that no measurement is perfectly accurate).
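For illustration, a minimal NumPy sketch of the off-line covariance recursion (3)-(6); the function name riccati_step and the variable names are illustrative choices and not part of the original formulation:

import numpy as np

def riccati_step(P, F, H, Q, R):
    """One off-line covariance iteration of the Kalman filter, equations (3)-(6)."""
    W = H @ P @ H.T + R                          # (3) innovation covariance W(k)
    K = P @ H.T @ np.linalg.inv(W)               # (4) Kalman filter gain K(k)
    P_filt = (np.eye(P.shape[0]) - K @ H) @ P    # (5) estimation error covariance P(k/k)
    P_next = F @ P_filt @ F.T + Q                # (6) prediction error covariance P(k+1/k)
    return P_next, K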
It is well known, [1], that P(k+1/k) can be computed independently of the measurements, using the Riccati equation emanating from the Kalman filter:

P(k+1/k) = Q + F P(k/k-1) F^T - F P(k/k-1) H^T [H P(k/k-1) H^T + R]^(-1) H P(k/k-1) F^T   (7)

In the infinite measurement noise covariance case, where R tends to infinity, the Riccati
equation takes the form of the Lyapunov equation:

P(k+1/k) = Q + F P(k/k-1) F^T   (8)

In addition, if the model is asymptotically stable, then there is a unique steady state prediction error covariance P, which satisfies the discrete algebraic Riccati equation:

P = Q + F P F^T - F P H^T [H P H^T + R]^(-1) H P F^T   (9)

In the infinite measurement noise covariance case, if the model is asymptotically stable, then there is a unique steady state prediction error covariance matrix P, which satisfies the algebraic Lyapunov equation:

P = Q + F P F^T   (10)

Due to the importance of the Riccati equation, a significant bibliography exists on iterative and algebraic solutions, [1], [3], [4], [5], [6], [7].
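As an illustration, the steady state solution of (9) can be approximated by simply iterating the recursive form (7) until convergence; a minimal sketch, where the tolerance eps and the function name are arbitrary choices:

import numpy as np

def dare_by_iteration(F, H, Q, R, P0, eps=1e-9, max_iter=10000):
    """Approximate the steady state prediction error covariance of (9)
    by iterating the Riccati recursion (7) until convergence."""
    P = P0.copy()
    for _ in range(max_iter):
        W = H @ P @ H.T + R
        P_next = Q + F @ P @ F.T - F @ P @ H.T @ np.linalg.solve(W, H @ P @ F.T)
        if np.linalg.norm(P_next - P) <= eps:
            return P_next
        P = P_next
    return P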
Chandrasekhar-type algorithms have been part of
the folklore associated with the Riccati equation,
[1]. Methods based on the solution of so-called Chandrasekhar-type equations, rather than the classical Riccati-type equation, are described in [8]. An
important advantage of this method is the reduction
in computational burden, when the state dimension
is much greater than the measurement dimension,
[1]. Chandrasekhar-type algorithms can be used to
iteratively compute the prediction error covariance,
[1], [8] or to compute the steady state solution of the
Riccati equation. Chandrasekhar-type algorithms are
applicable to Kalman filters, [9], [10] and to time
varying as well as time invariant distributed
systems, [11].
All Chandrasekhar-type algorithms use the
Kalman filter gain. Iterative and algebraic
algorithms for the computation of the steady state
Kalman filter gain have been derived in [12]. The
basic idea of this work is to eliminate the Kalman
filter gain from the Chandrasekhar-type algorithms’
equations, to reduce the computational effort, [10].
The Kalman filter gain elimination concept and
the proposed variations of Chandrasekhar-type
algorithms can find application in steady state
Kalman filter design, where the Riccati equation
solution is required. In addition, the proposed
algorithms can be applied in control problems. The
basic problems in control theory are (a) the
controller design problem (control law design for
the dynamical system) and (b) the state estimation
problem (computation of the estimate of the states
of the dynamical system). The Linear Quadratic
Regulator (LQR) and the Kalman filter solve the
associated problems, [13]. The proposed algorithms
can be applied in the case of linear dynamical systems: to estimate the control effectiveness of an actuator in the event of an actuator stuck fault occurring on airplanes, [14]; to Kalman filter design based on measurement differencing for the case of time-correlated measurement errors, [15]; and to Global Positioning System (GPS) and Inertial Navigation System (INS) integration during GPS outages using machine learning augmented with a Kalman filter, [16].
The novelty of this work concerns: (a) the use of
the Kalman filter gain elimination concept in the
Riccati equation solution, (b) the derivation of
Chandrasekhar-type algorithms with gain
elimination, (c) the computation of the calculation
burdens of the Chandrasekhar-type algorithms, (d)
the determination of the faster Chandrasekhar-type
algorithm via the system dimensions.
The paper is organized as follows: Section 2
summarizes the traditional Chandrasekhar-type
algorithms. The Chandrasekhar-type algorithms
based on Kalman filter gain elimination are derived
in Section 3. In Section 4, the traditional and the
proposed Chandrasekhar-type algorithms are
compared concerning their calculation burdens.
Section 5 summarizes the conclusions.
2 Traditional Chandrasekhar-type
Algorithms
The basic idea in Chandrasekhar-type algorithms is to factorize the difference

δP(k) = P(k+1/k) - P(k/k-1)   (11)

as

δP(k) = Y(k) S(k) Y^T(k)   (12)

where S(k) is a square symmetric matrix of dimension α, with

α = rank[δP(0)]   (13)

and α ≤ n.
There exist various equivalent Chandrasekhar-type equation sets. In this section, we deal with two Chandrasekhar-type algorithms, which are well described in [1]; we refer to these algorithms as Chandrasekhar-type algorithm – version 1 and Chandrasekhar-type algorithm – version 2.
Chandrasekhar-type algorithm – version 1

W(k+1) = W(k) + H Y(k) S(k) Y^T(k) H^T
K(k+1) = [K(k) W(k) + Y(k) S(k) Y^T(k) H^T] W^(-1)(k+1)
Y(k+1) = F [I - K(k+1) H] Y(k)
S(k+1) = S(k) + S(k) Y^T(k) H^T W^(-1)(k) H Y(k) S(k)
P(k+1/k) = P(k/k-1) + Y(k) S(k) Y^T(k)

for k = 0, 1, 2, ..., with initial conditions

W(0) = H P_0 H^T + R
K(0) = P_0 H^T W^(-1)(0)
P(0/-1) = P_0

Y(0) and S(0) are derived by factoring

δP(0) = P(1/0) - P(0/-1) = Y(0) S(0) Y^T(0)

where P(1/0) is obtained from P_0 via (7).

The easiest initialization P(0/-1) = P_0 = 0 is proposed, as then the dimensions of Y(k) and S(k) can be helpfully low, [1]. In this case:

W(0) = R
K(0) = 0
P(0/-1) = 0

and Y(0) and S(0) are derived by factoring δP(0) = Q = Y(0) S(0) Y^T(0).

In particular, Chandrasekhar-type algorithms may be more attractive computationally when α = rank[δP(0)] is small compared to the state dimension n, [1].

In the case where P_0 = 0 and Q has full rank, we get α = n, so Y(0) = I and S(0) = Q can be used.
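For illustration, a minimal NumPy sketch of version 1, assuming the factorization of δP(0) is computed by eigendecomposition (an illustrative choice); the function and variable names are not part of the original formulation:

import numpy as np

def factor_dP0(dP0, tol=1e-12):
    """Factor dP(0) = Y(0) S(0) Y(0)^T as in (12)-(13), keeping the alpha nonzero eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(dP0)
    keep = np.abs(eigvals) > tol
    return eigvecs[:, keep], np.diag(eigvals[keep])   # Y(0): n x alpha, S(0): alpha x alpha

def chandrasekhar_v1_step(W, K, Y, S, P, F, H):
    """One iteration of the Chandrasekhar-type algorithm, version 1."""
    n = F.shape[0]
    HY = H @ Y
    W_next = W + HY @ S @ HY.T
    K_next = (K @ W + Y @ S @ HY.T) @ np.linalg.inv(W_next)
    Y_next = F @ (np.eye(n) - K_next @ H) @ Y
    S_next = S + S @ HY.T @ np.linalg.solve(W, HY @ S)
    P_next = P + Y @ S @ Y.T
    return W_next, K_next, Y_next, S_next, P_next

With the easiest initialization P_0 = 0, one would start from W = R, K = 0, P = 0 and Y, S = factor_dP0(Q).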
Chandrasekhar-type algorithm – version 2

W(k+1) = W(k) + H Y(k) S(k) Y^T(k) H^T
Y(k+1) = F [I - K(k) H] Y(k)
S(k+1) = S(k) - S(k) Y^T(k) H^T W^(-1)(k+1) H Y(k) S(k)
K(k+1) = K(k) + [I - K(k) H] Y(k) S(k) Y^T(k) H^T W^(-1)(k+1)
P(k+1/k) = P(k/k-1) + Y(k) S(k) Y^T(k)

for k = 0, 1, 2, ..., with initial conditions

W(0) = H P_0 H^T + R
K(0) = P_0 H^T W^(-1)(0)
P(0/-1) = P_0

Y(0) and S(0) are derived by factoring

δP(0) = P(1/0) - P(0/-1) = Y(0) S(0) Y^T(0)

It is known that the choice between the two versions depends on the sign of δP(0): one version is preferred when δP(0) is positive semidefinite and the other when it is negative semidefinite, [1]. In addition, if the initial condition P(0/-1) = P_0 is equal to the solution of the algebraic Lyapunov equation (10), then δP(0) is negative semidefinite and α ≤ m, [1].
Remark 1.
The Lyapunov equation is a special case of the Riccati equation in the infinite measurement noise covariance case, where R tends to infinity. Then, from both of the above versions of Chandrasekhar-type algorithms we get the Chandrasekhar-type algorithm for the Lyapunov equation:

Chandrasekhar-type algorithm – Lyapunov equation

Y(k+1) = F Y(k)
P(k+1/k) = P(k/k-1) + Y(k) Y^T(k)

for k = 0, 1, 2, ..., with initial condition P(0/-1) = P_0; Y(0) is derived by factoring

δP(0) = P(1/0) - P(0/-1) = Y(0) Y^T(0)

where P(1/0) = F P_0 F^T + Q.
Remark 2.
Chandrasekhar-type algorithms can be applied to compute the steady state limiting solution of the Riccati equation. In this case, Chandrasekhar-type algorithms are implemented for k = 0, 1, 2, ..., until ||P(k+1/k) - P(k/k-1)|| ≤ ε, where ε is the convergence criterion and ||A|| denotes the norm of the matrix A.
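For illustration, a driver loop implementing this stopping rule on top of the version 1 sketch above (it reuses factor_dP0 and chandrasekhar_v1_step; the tolerance eps is an arbitrary choice):

def chandrasekhar_v1_steady_state(F, H, Q, R, P0, eps=1e-9, max_iter=10000):
    """Iterate the version 1 recursions until ||P(k+1/k) - P(k/k-1)|| <= eps."""
    W = H @ P0 @ H.T + R
    K = P0 @ H.T @ np.linalg.inv(W)
    P1 = Q + F @ P0 @ F.T - F @ P0 @ H.T @ np.linalg.solve(W, H @ P0 @ F.T)   # P(1/0) via (7)
    Y, S = factor_dP0(P1 - P0)
    P = P0.copy()
    for _ in range(max_iter):
        W, K, Y, S, P_next = chandrasekhar_v1_step(W, K, Y, S, P, F, H)
        if np.linalg.norm(P_next - P) <= eps:
            return P_next
        P = P_next
    return P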
3 Chandrasekhar-type Algorithms
with Gain Elimination
The basic idea is to eliminate the Kalman filter gain
from the equations of Chandrasekhar-type
algorithms, working as in [17].
This can be achieved by defining the ratio λ(k) (the symbol λ corresponds to the Greek term λόγος, meaning ratio) of the prediction error covariance to the measurement noise covariance:

λ(k) = P(k/k-1) H^T R^(-1) H   (14)
In this section we are going to develop two
Chandrasekhar-type algorithms with gain
elimination that correspond to the two versions of
the traditional Chandrasekhar-type algorithms of the
previous section; we refer to these algorithms as
Chandrasekhar-type algorithm with gain elimination – version 1 and Chandrasekhar-type algorithm with gain elimination – version 2.
Chandrasekhar-type algorithm with gain elimination – version 1

W(k+1) = W(k) + H Y(k) S(k) Y^T(k) H^T
λ(k+1) = λ(k) + Y(k) S(k) Y^T(k) H^T R^(-1) H
Y(k+1) = F [I + λ(k+1)]^(-1) Y(k)
S(k+1) = S(k) + S(k) Y^T(k) H^T W^(-1)(k) H Y(k) S(k)
P(k+1/k) = P(k/k-1) + Y(k) S(k) Y^T(k)

for k = 0, 1, 2, ..., with initial conditions

W(0) = H P_0 H^T + R
λ(0) = P_0 H^T R^(-1) H
P(0/-1) = P_0

Y(0) and S(0) are derived by factoring

δP(0) = P(1/0) - P(0/-1) = Y(0) S(0) Y^T(0)
Proof.
For the Kalman filter gain, from (3) and (4) we get:

K(k) W(k) = P(k/k-1) H^T   (15)

Right-multiplying (15) by R^(-1) H and using (14) we derive:

K(k) W(k) R^(-1) H = λ(k)   (16)

Then we are able to eliminate the Kalman filter gain from the K(k+1) equation of the Chandrasekhar-type algorithm – version 1:

K(k+1) = [K(k) W(k) + Y(k) S(k) Y^T(k) H^T] W^(-1)(k+1)
K(k+1) W(k+1) = K(k) W(k) + Y(k) S(k) Y^T(k) H^T
K(k+1) W(k+1) R^(-1) H = K(k) W(k) R^(-1) H + Y(k) S(k) Y^T(k) H^T R^(-1) H

Hence, using (16),

λ(k+1) = λ(k) + Y(k) S(k) Y^T(k) H^T R^(-1) H   (17)

In addition, using (3), (4) and the Matrix Inversion Lemma (for nonsingular A and C, [A + B C D]^(-1) = A^(-1) - A^(-1) B [D A^(-1) B + C^(-1)]^(-1) D A^(-1)), we get:

I - K(k) H = I - P(k/k-1) H^T [H P(k/k-1) H^T + R]^(-1) H = [I + P(k/k-1) H^T R^(-1) H]^(-1)

Then using (14) we derive:

I - K(k) H = [I + λ(k)]^(-1)   (18)

Hence, we are able to eliminate the Kalman filter gain from the Y(k+1) equation of the Chandrasekhar-type algorithm – version 1:

Y(k+1) = F [I - K(k+1) H] Y(k) = F [I + λ(k+1)]^(-1) Y(k)   (19)

It is obvious that equations (17) and (19) substitute the equations for K(k+1) and Y(k+1) of the Chandrasekhar-type algorithm – version 1, eliminating the use of the Kalman filter gain.
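A minimal NumPy sketch of the gain elimination idea for version 1, following equations (17) and (19); the function and variable names (lam for λ, Rinv for R^(-1)) are illustrative choices:

def chandrasekhar_v1_gain_elim_step(W, lam, Y, S, P, F, H, Rinv):
    """One iteration of the gain elimination variant of version 1, using (17) and (19);
    lam stands for the ratio lambda(k) = P(k/k-1) H^T R^(-1) H."""
    n = F.shape[0]
    HY = H @ Y
    W_next = W + HY @ S @ HY.T
    lam_next = lam + Y @ S @ HY.T @ Rinv @ H                   # (17)
    Y_next = F @ np.linalg.solve(np.eye(n) + lam_next, Y)      # (19)
    S_next = S + S @ HY.T @ np.linalg.solve(W, HY @ S)
    P_next = P + Y @ S @ Y.T
    return W_next, lam_next, Y_next, S_next, P_next

Starting from W = H P_0 H^T + R, lam = P_0 H^T R^(-1) H, P = P_0 and Y, S from factor_dP0, this should reproduce the same sequence P(k+1/k) as the traditional version 1 sketch.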
Chandrasekhar-type algorithm with gain elimination – version 2

W(k+1) = W(k) + H Y(k) S(k) Y^T(k) H^T
Y(k+1) = F [I + λ(k)]^(-1) Y(k)
S(k+1) = S(k) - S(k) Y^T(k) H^T W^(-1)(k+1) H Y(k) S(k)
λ(k+1) = λ(k) + Y(k) S(k) Y^T(k) H^T R^(-1) H
P(k+1/k) = P(k/k-1) + Y(k) S(k) Y^T(k)

for k = 0, 1, 2, ..., with initial conditions

W(0) = H P_0 H^T + R
λ(0) = P_0 H^T R^(-1) H
P(0/-1) = P_0

Y(0) and S(0) are derived by factoring

δP(0) = P(1/0) - P(0/-1) = Y(0) S(0) Y^T(0)
Proof.
We are able to eliminate the Kalman filter gain from the Y(k+1) equation of the Chandrasekhar-type algorithm – version 2, by using (18):

Y(k+1) = F [I - K(k) H] Y(k) = F [I + λ(k)]^(-1) Y(k)   (20)

In addition, we are able to eliminate the Kalman filter gain from the K(k+1) equation of the Chandrasekhar-type algorithm – version 2. Right-multiplying the equation

K(k+1) = K(k) + [I - K(k) H] Y(k) S(k) Y^T(k) H^T W^(-1)(k+1)

by W(k+1) R^(-1) H, and using W(k+1) = W(k) + H Y(k) S(k) Y^T(k) H^T together with (16), we get:

K(k+1) W(k+1) R^(-1) H = K(k) W(k+1) R^(-1) H + [I - K(k) H] Y(k) S(k) Y^T(k) H^T R^(-1) H
λ(k+1) = K(k) W(k) R^(-1) H + K(k) H Y(k) S(k) Y^T(k) H^T R^(-1) H + [I - K(k) H] Y(k) S(k) Y^T(k) H^T R^(-1) H
λ(k+1) = λ(k) + Y(k) S(k) Y^T(k) H^T R^(-1) H

Hence

λ(k+1) = λ(k) + Y(k) S(k) Y^T(k) H^T R^(-1) H   (21)

It is obvious that equations (20) and (21) substitute the equations for Y(k+1) and K(k+1) of the Chandrasekhar-type algorithm – version 2, eliminating the use of the Kalman filter gain.
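As a quick numerical illustration, iterating the version 1 sketch and the gain elimination sketch above on a random stable model should produce the same prediction error covariance sequence; the random model, the dimensions and the number of iterations below are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
F = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]   # a stable transition matrix
H = rng.standard_normal((m, n))
Q, R, P0 = np.eye(n), np.eye(m), np.zeros((n, n))
Rinv = np.linalg.inv(R)

# easiest initialization: P0 = 0, so dP(0) = Q
Y0, S0 = factor_dP0(Q)
Wa, Ka, Ya, Sa, Pa = R.copy(), np.zeros((n, m)), Y0.copy(), S0.copy(), P0.copy()
Wb, lb, Yb, Sb, Pb = R.copy(), np.zeros((n, n)), Y0.copy(), S0.copy(), P0.copy()
for _ in range(20):
    Wa, Ka, Ya, Sa, Pa = chandrasekhar_v1_step(Wa, Ka, Ya, Sa, Pa, F, H)
    Wb, lb, Yb, Sb, Pb = chandrasekhar_v1_gain_elim_step(Wb, lb, Yb, Sb, Pb, F, H, Rinv)
print(np.allclose(Pa, Pb))   # expected: True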
4 Comparison of the Algorithms
It is established that the Chandrasekhar-type
algorithms with gain elimination have been derived
from the traditional Chandrasekhar-type algorithms.
Thus the traditional as well as the proposed
Chandrasekhar-type algorithms are equivalent
algorithms concerning their behavior, since they
compute theoretically the same prediction error
covariances. Since all the algorithms are iterative, it
is reasonable to compare the algorithms concerning
their per iteration calculation burdens.
Scalar operations are involved in the matrix manipulations needed for the implementation of the Chandrasekhar-type algorithms. Table 1 summarizes the calculation burden of the required matrix operations; the case of a symmetric matrix is distinguished, since symmetry reduces the number of required scalar operations. The details for the general multidimensional model are given in [17].
Table 1. Calculation Burden of Matrix Operations
(columns: Matrix Operation, Matrix Dimensions, Calculation Burden)
The per iteration calculation burdens of the Chandrasekhar-type algorithms for the general multidimensional case are analytically calculated in the Appendix and summarized in Table 2; each burden is a polynomial function of the state dimension n, the measurement dimension m and the rank α.

Table 2. Calculation Burden of Chandrasekhar-type algorithms
(columns: Chandrasekhar-type algorithm, Calculation Burden; rows: traditional (gain use) version 1, traditional (gain use) version 2, proposed (gain elimination) version 1, proposed (gain elimination) version 2; the per iteration burdens are derived in the Appendix, Tables 4-7)
From Table 2 we are able to determine which Chandrasekhar-type algorithm is faster.

1. Chandrasekhar-type algorithms – version 1
The comparison of the per iteration calculation burdens of the traditional algorithm and of the proposed gain elimination algorithm leads to a condition relating the state dimension n and the measurement dimension m. The areas (with respect to the model dimensions) where the proposed gain elimination algorithm or the traditional algorithm is faster are shown in Figure 1.
The following Rule of Thumb is derived: the proposed Chandrasekhar-type algorithm with gain elimination is faster than the traditional Chandrasekhar-type algorithm when the dimensions n and m lie in the corresponding region of Figure 1.

Fig. 1: The faster Chandrasekhar-type algorithm – version 1
2. Chandrasekhar-type algorithms – version 2
The comparison of the per iteration calculation burdens again leads to a condition relating n and m. The areas (with respect to the model dimensions) where the proposed gain elimination algorithm or the traditional algorithm is faster are shown in Figure 2.

Fig. 2: The faster Chandrasekhar-type algorithm – version 2
The following Rule of Thumb is derived: the proposed Chandrasekhar-type algorithm with gain elimination is faster than the traditional Chandrasekhar-type algorithm when the dimensions n and m lie in the corresponding region of Figure 2.
3. Traditional Chandrasekhar-type algorithms
The comparison of the per iteration calculation burdens of the two traditional versions leads to a condition relating n and m. The areas (with respect to the model dimensions) where version 1 or version 2 is faster are shown in Figure 3. The following Rule of Thumb is derived: version 1 is faster than version 2 when the dimensions n and m lie in the corresponding region of Figure 3.

Fig. 3: The faster traditional Chandrasekhar-type algorithm
4. Proposed Chandrasekhar-type algorithms
The proposed version 1 has the same per iteration calculation burden as the proposed version 2.

Thus, we conclude that which algorithm is faster depends on the state dimension n and the measurement dimension m, and not on the dimension α defined in (13). Hence, knowledge of the system dimensions n and m suffices to determine which Chandrasekhar-type algorithm is faster.
Finally, the per iteration calculation burdens of the traditional Lyapunov equation iteration and of the Chandrasekhar-type algorithm for the Lyapunov equation are analytically calculated in the Appendix and summarized in Table 3.

Table 3. Calculation Burden of Algorithms for the Lyapunov equation solution
(columns: Algorithm, Calculation Burden; rows: traditional, Chandrasekhar-type; the per iteration burdens are derived in the Appendix, Tables 8-9)
As Table 3 shows, the Chandrasekhar-type algorithm for the Lyapunov equation is faster than the traditional Lyapunov equation iteration when the rank α of δP(0) is sufficiently small compared to the state dimension n.

Example. Consider the system dimensions of the three-dimensional radar tracking model in [18]. For these values of the state dimension n and the measurement dimension m, the proposed Chandrasekhar-type algorithm – version 2 is faster than the traditional one.
5 Conclusions
In this paper, new variations of Chandrasekhar-type
algorithms eliminating the Kalman filter gain are
proposed. The calculation burdens of the
Chandrasekhar-type algorithms are derived. The
proposed Chandrasekhar-type algorithms may be
faster than the traditional ones, depending on the
model dimensions. It has been shown that the
determination of the faster Chandrasekhar-type
algorithm can be achieved via the system
dimensions.
A subject of future research is to investigate the
application of corresponding Chandrasekhar-type
algorithms to dynamical continuous-time systems,
[19], [20], [21], and to discrete-time anti-linear
systems, [22]. Another area of future research may
be the use of the derived Chandrasekhar-type
algorithms with gain elimination in the derivation of
time varying, time invariant, and steady state
Kalman filters.
References:
[1] B. D. O. Anderson, J. B. Moore, Optimal
Filtering, Dover Publications, New York,
2005.
[2] R. E. Kalman, A new approach to linear
filtering and prediction problems, Journal of
Basic Engineering, Trans. ASME, Ser. D, vol. 82(1), 1960,
pp. 35-45.
[3] N. Komaroff, Iterative matrix bounds and
computational solutions to the discrete
algebraic Riccati equation, IEEE Trans.
Autom. Control, vol. 39, 1994, pp. 1676–
1678.
[4] L. Wang, An improved iterative method for
solving the discrete algebraic Riccati
equation, Mathematical Problems in
Engineering, vol. 2020, Article ID 3283157, 6
pages, https://doi.org/10.1155/2020/3283157.
[5] J. Zhang and J. Liu, New upper and lower
bounds, the iteration algorithm for the
solution of the discrete algebraic Riccati
equation, Advances in Difference Equations,
vol. 313, 2015, pp. 1-17.
[6] B. Zhou, On Linear Quadratic Optimal
Control of Discrete-Time Complex-Valued
Linear Systems, Optimal Control Applications
and Methods, DOI: 10.1002/oca.2554, 2017.
[7] J. Liu, Z. Wang, and Z. Xie, Iterative
algorithms for reducing inversion of discrete
algebraic Riccati matrix equation, IMA
Journal of Mathematical Control and
Information, vol. 39, 2022, pp. 985–1007.
[8] M. Morf, G. S. Sidhu, T. Kailath, Some New
Algorithms for Recursive Estimation in
Constant, Linear, Discrete-time Systems,
IEEE Trans. Automatic Control, vol. AC-19,
no. 4, 1974, pp. 315–323.
[9] N. Assimakis, A. Kechriniotis, S. Voliotis, F.
Tassis, M. Kousteri, Analysis of the time
invariant Kalman filter implementation via
general Chandrasekhar algorithm,
International Journal of Signal and Imaging
Systems Engineering, vol. 1(1), 2008, pp. 51-
57.
[10] S. Nakamori, A. Hermoso-Carazo, J. Jiménez-
López and J. Linares-Pérez, Chandrasekhar-
type filter for a wide-sense stationary signal
from uncertain observations using covariance
information, Applied Mathematics and
Computation, vol. 151(2), 2004, pp. 315-325,
https://doi.org/10.1016/S0096-
3003(03)00343-6.
[11] J.S. Baras and D.G. Lainiotis, Chandrasekhar
algorithms for linear time varying distributed
systems, Information Sciences, vol. 17(2),
1979, pp. 153-167,
https://doi.org/10.1016/0020-0255(79)90037-
9.
[12] J.U. Sevinov, S.O. Zaripova, A stable
iterative algorithm for estimating the elements
of the matrix gain of a Kalman filter,
Electrical and Computer Engineering,
Technical science and innovation, no 3, 2023,
pp. 99-103,
https://scienceweb.uz/publication/15804.
[13] M.T. Augustine, A note on linear quadratic
regulator and Kalman filter, 2023,
DOI: 10.48550/arXiv.2308.15798.
[14] A. Guven and C. Hajiyev, Two-Stage Kalman
Filter Based Estimation of Boeing 747
Actuator/Control Surface Stuck Faults,
WSEAS Transactions on Signal Processing,
vol. 19, 2023, pp. 32-40,
https://doi.org/10.37394/232014.2023.19.4.
[15] C. Hajiyev and U. Hacizade, A Covariance
Matching-Based Adaptive Measurement
Differencing Kalman Filter for INS’s Error
Compensation, WSEAS Transactions on
Systems and Control, vol. 18, 2023, pp. 478-
486,
https://doi.org/10.37394/23203.2023.18.51.
[16] R. Verma, L. Shrinivasan and K.
Shreedarshan, GPS/INS integration during
GPS outages using machine learning
augmented with Kalman filter, WSEAS
Transactions on Systems and Control, vol. 16,
2021, pp. 294-301, doi:
10.37394/23203.2021.16.25
[17] N. Assimakis, Kalman Filter Gain Elimination
in Linear Estimation, International Journal of
Computer and Information Engineering, vol.
14(7), 2020, pp. 236-241.
[18] P. Aditya, E. Apriliani, D. K. Arif and K.
Baihaqi, Estimation of three-dimensional
radar tracking using modified extended
Kalman filter, Journal of Physics: Conf.
Series 974, 2018, doi :10.1088/1742-
6596/974/1/012071
[19] Z.-P. Jiang, T. Bian and W. Gao, Learning-
Based Control: A Tutorial and Some Recent
Results, Foundations and Trends in Systems
and Control, vol. 8(3), 2022, pp. 176-284.
[20] J. Liu, Li Wang and Y. Bai, New estimates of
upper bounds for the solutions of the
continuous algebraic Riccati equation and the
redundant control inputs problems,
Automatica, vol. 116, 2020, 108936,
https://doi.org/10.1016/j.automatica.2020.108
936.
[21] T. Simos, V. Katsikis, S. Mourtas and P.
Stanimirović, Unique non-negative definite
solution of the time-varying algebraic Riccati
equations with applications to stabilization of
LTV systems, Mathematics and Computers in
Simulation, vol. 202, 2022, pp. 164-180,
https://doi.org/10.1016/j.matcom.2022.05.033
[22] C.-Y. Chiang and H.-Y. Fan, Inheritance
properties of the conjugate discrete-time
algebraic Riccati equation, Linear Algebra
and its Applications,
vol. 683, 2024, pp. 71-97,
https://doi.org/10.1016/j.laa.2023.11.011.
Contribution of Individual Authors to the
Creation of a Scientific Article (Ghostwriting
Policy)
The authors contributed equally to the present research, at all stages from the formulation of the problem to the final findings and solution.
Sources of Funding for Research Presented in a
Scientific Article or Scientific Article Itself
No funding was received for conducting this study.
Conflict of Interest
The authors have no conflicts of interest to declare.
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the
Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en
_US
APPENDIX
The per iteration calculation burdens of the Chandrasekhar-type algorithms for the general multidimensional case are analytically calculated in Table 4, Table 5, Table 6 and Table 7. The per iteration calculation burdens of the traditional Lyapunov equation iteration and of the Chandrasekhar-type algorithm for the Lyapunov equation are analytically calculated in Table 8 and Table 9.
Table 4. Chandrasekhar-type algorithm – version 1
(columns: Matrix Operation, Calculation Burden; the per iteration total is summarized in Table 2)
Table 5. Chandrasekhar-type algorithm with gain elimination – version 1
(columns: Matrix Operation, Calculation Burden; the per iteration total is summarized in Table 2)
Table 6. Chandrasekhar-type algorithm – version 2
(columns: Matrix Operation, Calculation Burden; the per iteration total is summarized in Table 2)
Table 7. Chandrasekhar-type algorithm with gain elimination – version 2
(columns: Matrix Operation, Calculation Burden; the per iteration total is summarized in Table 2)
Table 8. Lyapunov equation
(columns: Matrix Operation, Calculation Burden; the per iteration total is summarized in Table 3)
Table 9. Chandrasekhar-type algorithm – Lyapunov equation
(columns: Matrix Operation, Calculation Burden; the per iteration total is summarized in Table 3)