Robust Recursive Least-Squares Fixed-Point Smoother and Filter using
Covariance Information in Linear Continuous-Time Stochastic Systems
with Uncertainties
SEIICHI NAKAMORI
Professor Emeritus, Faculty of Education,
Kagoshima University,
1-20-6, Korimoto, Kagoshima, 890-0065,
JAPAN
Abstract: - This study develops robust recursive least-squares (RLS) fixed-point smoothing and filtering
algorithms for signals in linear continuous-time stochastic systems with uncertainties. The algorithms use
covariance information, such as the cross-covariance function of the signal with the observed value and the
autocovariance function of the degraded signal. A finite Fourier cosine series expansion approximates these
functions. Additive white Gaussian noise is present in the observation of the degraded signal. A numerical
simulation compares the estimation accuracy of the proposed robust RLS filter with the robust RLS Wiener
filter, showing similar mean square values (MSVs) of the filtering errors. The MSVs of the proposed robust
RLS fixed-point smoother are also compared to those of the proposed robust RLS filter.
Key-Words: - Robust RLS fixed-point smoother, robust RLS filter, degraded signal, stochastic systems with
uncertainties, continuous-time stochastic systems, Fourier series expansion.
Received: March 5, 2023. Revised: March 9, 2024. Accepted: April 11, 2024. Published: May 13, 2024.
1 Introduction
Over the past two decades, researchers have
extensively studied robust estimation in continuous-
time stochastic systems with uncertainties, covering
both linear and nonlinear scenarios. The following is one possible classification of robust estimation problems.
(1) Norm-bounded parameter uncertainty, [1], [2],
[3], [4], [5], [6]. (2) Polytope uncertainty, [7], [8],
[9], [10], [11], [12]. (3) Markovian jumps in the
parameters [13], [14]. (4) In the presence of both
parameter uncertainty and a known input signal, [2].
(5) Systems with finite frequency specifications,
[15]. (6) Uncertain nonlinear systems with
multiplicative observation noise, [16]. (7) Nonlinear
systems via Takagi–Sugeno (T–S) fuzzy affine
dynamic models, [17], [18]. (8) Robust finite
impulse response (FIR) estimators, [5], [6]. (9)
Recursive least-squares (RLS) Wiener filter, [19].
The book, [20], mainly discusses identification
techniques for linear discrete-time stochastic
systems. It also presents a method for estimating the
parameters of continuous-time linear systems by
using differential equations to define the input-
output relationship of the system. Recently, the
author developed a robust recursive least-squares
(RLS) Wiener filter for linear continuous-time
uncertain stochastic systems by estimating the
system matrix for the degraded signal, [19]. The
system matrix elements estimated in [19] are
unreliable: negative entries on the order of 10^3 and
10^7 appear in the third- and fourth-order matrices,
respectively, caused by large values of the higher-
order derivatives of the autocovariance function.
To address this issue, an alternative approach that
does not involve estimating the system matrix is
preferable.
Based on the preceding discussion, this paper
suggests a novel robust estimation method for
continuous-time uncertain stochastic systems. The
observation of the degraded signal includes additive
white Gaussian noise. Instead of estimating the
system matrix for the degraded signal as in [19], the
robust RLS fixed-point smoothing and filtering
algorithms of Theorem 1 are characterized by their
use of covariance information. The estimation
algorithms described in Theorem 1 utilize the cross-
covariance function of the signal with the observed
value, along with the autocovariance function of the
degraded signal. The finite Fourier cosine series
expansion approximates the cross-covariance
function between the signal and the observed value,
as well as the autocovariance function of the
degraded signal.
WSEAS TRANSACTIONS on SIGNAL PROCESSING
DOI: 10.37394/232014.2024.20.2
Seiichi Nakamori
E-ISSN: 2224-3488
Volume 20, 2024
Section 2 introduces the state-space model for
the signal and its degraded counterpart. In the
degraded state-space model, uncertain parameters
are present in both the observation vector and the
system matrix. Section 3 presents a robust fixed-
point smoothing problem in linear least-squares
estimation. Theorem 1 in Section 4 presents the
robust RLS fixed-point smoothing and filtering
algorithms. Section 5 explains the finite Fourier
cosine series approximation of the cross-covariance
function between the signal and the observed value,
as well as the autocovariance function of the
degraded signal. In Section 6, we compare the
estimation accuracy of the proposed robust RLS
filter with the robust RLS Wiener filter, [19], in the
first simulation example. The mean square value
(MSV) of the filtering errors of the robust RLS filter in
Theorem 1 is smaller than that of the robust RLS
Wiener filter, [19], for one of the white Gaussian
observation noises examined. The proposed robust RLS fixed-
point smoother is compared with the proposed
robust RLS filter in terms of estimation properties.
2 State-Space Model and its Degraded
State-Space Model with
Uncertainties
Consider a state-space model (1) that satisfies the
observability condition in linear continuous-time
stochastic systems.
y(t) = z(t) + v(t),  z(t) = H x(t),
dx(t)/dt = A x(t) + \Gamma w(t),
E[w(t)] = 0,  E[v(t)] = 0,
E[w(t) w^T(s)] = Q \delta(t - s),
E[v(t) v(s)] = R \delta(t - s),
E[w(t) v(s)] = 0
(1)
x(t) is an n-dimensional state vector, and z(t) is a scalar
signal that needs to be estimated. The input noise
w(t) and the observation noise v(t) are
mutually uncorrelated white Gaussian noises with
zero means. \Gamma is an input matrix, and H is a
1 x n observation vector. The autocovariance
functions of the input noise w(t) and the
observation noise v(t) are expressed in (1) using the
Dirac delta function \delta(.). This paper examines the state
and observation equations with uncertain parameters
in the state-space model.
󰇛󰇜󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜

󰇛󰇜󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
(2)
󰇟󰇛󰇜󰇛󰇜󰇠
󰇟󰇛󰇜󰇛󰇜󰇠󰇟󰇛󰇜󰇛󰇜󰇠
󰇟󰇛󰇜󰇛󰇜󰇠, 
In equation (2), the system matrix A and the
observation vector H from equation (1) are
replaced with the degraded versions
\bar A(t) and \bar H(t), respectively. The matrix elements of \bar A(t)
and the vector components of \bar H(t) contain
uncertain variables. The initial state vector x(0) is
randomly generated and independent of the input and
measurement noises.
The robust RLS Wiener filter, [19], utilizes the
estimates of the degraded system and observation
matrices. Estimating the matrices in linear
continuous-time stochastic systems is more
challenging than in linear discrete-time stochastic
systems. Section 3 introduces linear least-squares
estimation using covariance information without
explicitly identifying the degraded system matrix
and measurement vector.
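As a point of reference for the models (1) and (2), the nominal and degraded systems can be simulated side by side. The following is a minimal Euler-Maruyama sketch; the matrices A, Gamma, H, the noise intensities Q and R, and the form of the uncertainty Delta_A(t) are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Euler-Maruyama sketch of the nominal model (1) and the degraded model (2).
# A, Gamma, H, Q, R, and Delta_A(t) are illustrative assumptions.
rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 2000
A = np.array([[0.0, 1.0], [-4.0, -1.0]])   # assumed stable system matrix
Gamma = np.array([0.0, 1.0])               # assumed input matrix
H = np.array([1.0, 0.0])                   # assumed observation vector
Q, R = 1.0, 0.01                           # assumed noise intensities

x = np.zeros(2)    # nominal state x(t)
xb = np.zeros(2)   # degraded state
z, zb = [], []
for _ in range(n_steps):
    w = rng.normal(0.0, np.sqrt(Q / dt))              # white input noise sample
    delta = rng.uniform(-0.1, 0.1)                    # scalar uncertain parameter
    Delta_A = np.array([[0.0, 0.0], [delta, delta]])  # assumed uncertainty form
    x = x + (A @ x + Gamma * w) * dt                  # model (1)
    xb = xb + ((A + Delta_A) @ xb + Gamma * w) * dt   # model (2)
    z.append(H @ x)                                   # signal z(t)
    zb.append(H @ xb)                                 # degraded signal
z, zb = np.array(z), np.array(zb)
v = rng.normal(0.0, np.sqrt(R / dt), n_steps)         # observation noise
yb = zb + v                                           # degraded observation
```

Both states are driven by the same input noise, so the gap between z(t) and the degraded signal isolates the effect of the uncertain parameters.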
3 Robust Least-Squares Fixed-Point
Smoothing Problem
Let the fixed-point smoothing estimate \hat z(t, T) of the
signal z(t) be given by

\hat z(t, T) = \int_0^T h(t, s, T) \breve y(s) ds
(3)

as a linear transformation of the observed values
\breve y(s), 0 ≤ s ≤ T. Here, h(t, s, T) represents an
impulse response function. Let us consider
minimizing the mean square value

J = E[(z(t) - \hat z(t, T))^2]
(4)

of the fixed-point smoothing error z(t) - \hat z(t, T).
The fixed-point smoothing estimate \hat z(t, T) that
minimizes the cost function satisfies the
relationship

z(t) - \hat z(t, T) \perp \breve y(s),  0 ≤ s ≤ T,
(5)

from the orthogonal projection lemma, [21]. The
optimal impulse response function satisfies the
Wiener-Hopf integral equation

E[z(t) \breve y(s)] = \int_0^T h(t, \tau, T) E[\breve y(\tau) \breve y(s)] d\tau,  0 ≤ s ≤ T.
(6)
Substituting the degraded observation equation in
(2) into (6), (6) is transformed into:
K_{z \breve y}(t, s) = h(t, s, T) R + \int_0^T h(t, \tau, T) K_{\breve z}(\tau, s) d\tau,
K_{z \breve y}(t, s) = E[z(t) \breve y(s)],  K_{\breve z}(t, s) = E[\breve z(t) \breve z(s)].
(7)

K_{z \breve y}(t, s) is the cross-covariance function between
the signal z(t) and the observed value \breve y(s). Assume
that the cross-covariance function K_{z \breve y}(t, s) is
expressed as:

K_{z \breve y}(t, s) = \alpha(t) \beta^T(s),  0 ≤ s ≤ t.
(8)

K_{\breve z}(t, s) is the autocovariance function of the
degraded signal \breve z(t), expressed by:

K_{\breve z}(t, s) = \breve A(t) \breve B^T(s),  0 ≤ s ≤ t,
K_{\breve z}(t, s) = \breve B(t) \breve A^T(s),  0 ≤ t ≤ s.
(9)

In wide-sense stationary stochastic systems,
K_{z \breve y}(t, s) and K_{\breve z}(t, s) are represented as K_{z \breve y}(\tau) and
K_{\breve z}(\tau), respectively, with \tau = t - s. K_{\breve z}(\tau) is an even
function for every \tau in its domain. Starting from (7), Section
4 introduces Theorem 1 and proposes the robust
RLS fixed-point smoothing and filtering algorithms
using the covariance information provided by (8)
and (9).
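The structure of the Wiener-Hopf equation (7) can be checked numerically by discretizing it on a grid and solving the resulting linear system for the impulse response. The covariance kernels and the noise variance R below are illustrative stand-ins, not the paper's covariance information.

```python
import numpy as np

# Discretized Wiener-Hopf equation (7):
#   K_zy(t, s) = h(t, s, T) R + integral_0^T h(t, tau, T) K_zb(tau, s) d tau.
# The exponential kernels and R are illustrative assumptions.
T, n = 1.0, 200
ds = T / n
s = (np.arange(n) + 0.5) * ds                    # midpoint grid on [0, T]
R = 0.1                                          # assumed observation-noise variance
K_zb = np.exp(-np.abs(s[:, None] - s[None, :]))  # assumed autocovariance kernel
K_zy = np.exp(-np.abs(T - s))                    # assumed cross-covariance at t = T

# Linear system: (R I + K_zb ds) h = K_zy
Mmat = R * np.eye(n) + K_zb * ds
h = np.linalg.solve(Mmat, K_zy)

# The residual of the discretized Wiener-Hopf equation vanishes,
# which mirrors the orthogonality condition (5).
residual = Mmat @ h - K_zy
```

The recursive algorithms of Section 4 avoid solving such a linear system at every T by propagating auxiliary functions instead.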
4 Robust RLS Fixed-Point Smoothing
and Filtering Algorithms
Theorem 1 proposes the robust RLS fixed-point
smoothing and filtering algorithms for the signal
z(t) using the covariance information K_{z \breve y}(t, s) and
K_{\breve z}(t, s) defined by (8) and (9).

Theorem 1 Let the state-space model for the signal
z(t) be given by (1). Let the state-space model for
the degraded signal \breve z(t) be given by (2). Let the
cross-covariance function K_{z \breve y}(t, s) of the signal
z(t) with the observed value \breve y(s) be represented as
(8). Let the autocovariance function K_{\breve z}(t, s) of the
degraded signal \breve z(t) be expressed as (9). Then, the
robust RLS fixed-point smoothing and filtering
algorithms for the signal z(t) from the degraded
observation \breve y(s) in (2) using the covariance
information consist of (10)-(19).

Fixed-point smoothing estimate of the signal z(t) at
the fixed point t: \hat z(t, T)

d\hat z(t, T)/dT = h(t, T, T)(\breve y(T) - \breve A(T) e(T)),
\hat z(t, t) = \hat z(t)
(10)

Smoother gain: h(t, T, T)

h(t, T, T) = (K_{z \breve y}(t, T) - q(t, T) \breve A^T(T)) R^{-1}
(11)

\partial q(t, T)/\partial T = h(t, T, T)(\breve B(T) - \breve A(T) r(T)),
q(t, t) = \alpha(t) r_1(t)
(12)

Filtering estimate of the signal z(t): \hat z(t)

\hat z(t) = \alpha(t) e_1(t)
(13)

de_1(t)/dt = J_1(t, t)(\breve y(t) - \breve A(t) e(t)),  e_1(0) = 0
(14)

de(t)/dt = J(t, t)(\breve y(t) - \breve A(t) e(t)),  e(0) = 0
(15)

J(t, t) = (\breve B^T(t) - r(t) \breve A^T(t)) R^{-1}
(16)

J_1(t, t) = (\beta^T(t) - r_1(t) \breve A^T(t)) R^{-1}
(17)

dr(t)/dt = J(t, t)(\breve B(t) - \breve A(t) r(t)),  r(0) = 0
(18)

Function r_1(t) associated with the filtering estimate of
the degraded signal \breve z(t):

dr_1(t)/dt = J_1(t, t)(\breve B(t) - \breve A(t) r(t)),  r_1(0) = 0
(19)

In (10)-(19), e(T) = \int_0^T J(T, s) \breve y(s) ds, and e_1(t),
r(t), r_1(t), and q(t, T) are auxiliary functions
introduced in the Appendix. In (11), K_{z \breve y}(t, T) represents the cross-
covariance function of the signal z(t) with the
observed value \breve y(T), K_{z \breve y}(t, T) = \alpha(t) \beta^T(T).
Theorem 1 is derived based on the invariant
imbedding method for integral equations, [22], [23].
The proof of Theorem 1 is deferred to the Appendix.
The robust RLS fixed-point smoother and filter
are designed by minimizing the cost function (4) in
the linear least-squares sense. In the combined
Kalman filter and neural network estimation method,
[24], [25], [26], [27], [28], the neural network
weights are computed iteratively using a large
amount of high-quality training data.
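To illustrate the recursive structure of the filtering part of Theorem 1, the ordinary differential equations (13)-(19) can be integrated by forward Euler. The Fourier-type basis, the coefficients a_n and b_n, the noise variance R, and the observation sequence below are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Forward-Euler sketch of the filtering recursions (13)-(19).
# All numerical values are illustrative assumptions.
T0, M = 2.0 * np.pi, 1
a = np.array([1.0, 0.4])   # assumed cosine coefficients of K_zbreve
b = np.array([0.8, 0.3])   # assumed cosine coefficients of K_zy
w_n = 2.0 * np.pi * np.arange(1, M + 1) / T0

def basis(t):              # components of breve-A(t) and alpha(t)
    return np.concatenate(([1.0], np.cos(w_n * t), np.sin(w_n * t)))

def weighted(c, t):        # components of breve-B(t) and beta(t)
    return np.concatenate(([c[0] / 2.0],
                           c[1:] * np.cos(w_n * t), c[1:] * np.sin(w_n * t)))

R, dt, n_steps = 0.5, 1e-3, 1000
dim = 2 * M + 1
e, e1 = np.zeros(dim), np.zeros(dim)                # e(t), e_1(t)
r, r1 = np.zeros((dim, dim)), np.zeros((dim, dim))  # r(t), r_1(t)
rng = np.random.default_rng(1)
z_hat = []
for k in range(n_steps):
    t = k * dt
    A_t, B_t = basis(t), weighted(a, t)
    alpha_t, beta_t = basis(t), weighted(b, t)
    y = np.sin(t) + rng.normal(0.0, np.sqrt(R / dt))  # assumed observation
    J = (B_t - r @ A_t) / R                   # (16)
    J1 = (beta_t - r1 @ A_t) / R              # (17)
    innov = y - A_t @ e                       # innovation breve-y - breve-A e
    e1 = e1 + J1 * innov * dt                 # (14), e_1(0) = 0
    e = e + J * innov * dt                    # (15), e(0) = 0
    v = B_t - A_t @ r                         # shared term of (18) and (19)
    r = r + np.outer(J, v) * dt               # (18), r(0) = 0
    r1 = r1 + np.outer(J1, v) * dt            # (19), r_1(0) = 0
    z_hat.append(alpha_t @ e1)                # (13): filtering estimate
z_hat = np.array(z_hat)
```

The sketch shows the key property of the algorithm: each update uses only the current observation and the propagated auxiliary variables, with no explicit system-matrix estimate.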
5 Finite Fourier Series
Approximation of Autocovariance
Function of Degraded Signal and
Cross-Covariance Function of
Signal with Observed Value
The autocovariance function K_{\breve z}(t, s) of the
degraded signal \breve z(t) is represented as K_{\breve z}(\tau) in
wide-sense stationary stochastic systems, with
\tau = t - s. K_{\breve z}(\tau) is an even function for every \tau in its
domain. Let K_{\breve z}(\tau) be approximated by the finite
Fourier cosine series expansion given in (20). \hat K_{\breve z}(\tau)
represents a function that approximates K_{\breve z}(\tau) using
M + 1 terms.

\hat K_{\breve z}(\tau) = a_0 / 2 + \sum_{n=1}^{M} a_n cos(2 \pi n \tau / T_0)
(20)

Here, T_0 represents the fundamental period of K_{\breve z}(\tau).
The finite Fourier cosine coefficients are calculated
by:

a_n = (4 / T_0) \int_0^{T_0 / 2} K_{\breve z}(\tau) cos(2 \pi n \tau / T_0) d\tau,  0 ≤ n ≤ M.
(21)
After comparing (9) and (20), and using the identity
cos(\omega_n (t - s)) = cos(\omega_n t) cos(\omega_n s) + sin(\omega_n t) sin(\omega_n s)
with \omega_n = 2 \pi n / T_0, we can represent
the vector components of \breve A(t) and \breve B(s) as follows:

\breve A(t) = [1, cos(\omega_1 t), ..., cos(\omega_M t), sin(\omega_1 t), ..., sin(\omega_M t)],
\breve B(s) = [a_0 / 2, a_1 cos(\omega_1 s), ..., a_M cos(\omega_M s), a_1 sin(\omega_1 s), ..., a_M sin(\omega_M s)].

With these components, \breve A(t) \breve B^T(s) = \hat K_{\breve z}(t - s) = \breve B(t) \breve A^T(s),
consistent with (9).
The cross-covariance function K_{z \breve y}(t, s) of z(t)
with \breve y(s) is given by (8). Let K_{z \breve y}(\tau) be
approximated by the finite Fourier cosine series
expansion given in (22). \hat K_{z \breve y}(\tau) represents a
function that approximates K_{z \breve y}(\tau) using M + 1
terms.

\hat K_{z \breve y}(\tau) = b_0 / 2 + \sum_{n=1}^{M} b_n cos(2 \pi n \tau / T_0),  0 ≤ \tau.
(22)

Here, T_0 represents the fundamental period of
K_{z \breve y}(\tau). The finite Fourier cosine coefficients are
calculated by:

b_n = (4 / T_0) \int_0^{T_0 / 2} K_{z \breve y}(\tau) cos(2 \pi n \tau / T_0) d\tau,  0 ≤ n ≤ M.
(23)
After comparing (8) and (22), we can represent
the vector components of \alpha(t) and \beta(s) as follows:

\alpha(t) = [1, cos(\omega_1 t), ..., cos(\omega_M t), sin(\omega_1 t), ..., sin(\omega_M t)],
\beta(s) = [b_0 / 2, b_1 cos(\omega_1 s), ..., b_M cos(\omega_M s), b_1 sin(\omega_1 s), ..., b_M sin(\omega_M s)],

so that \alpha(t) \beta^T(s) = \hat K_{z \breve y}(t - s), 0 ≤ s ≤ t.
By substituting the functions \breve A(t), \breve B(t), \alpha(t),
\beta(t), and the value of R into the robust
RLS fixed-point smoothing and filtering algorithms
of Theorem 1, we can recursively compute the
fixed-point smoothing and filtering estimates of the signal z(t).
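The finite Fourier cosine series approximation of (20)-(21), with the coefficients computed numerically by the midpoint rule as in Section 6, can be sketched as follows. The covariance function K, the period T_0, the number of terms, and the evaluation grid are illustrative assumptions, not the paper's.

```python
import numpy as np

# Finite Fourier cosine series approximation (20)-(21) of an even covariance
# function, with coefficients computed by the midpoint rule.
# K, T0, M, and the grid are illustrative assumptions.
T0, M, n_sub = 8.0, 25, 2000
half = T0 / 2.0
K = lambda tau: np.exp(-np.abs(tau)) * np.cos(tau)   # assumed even covariance

tau_mid = (np.arange(n_sub) + 0.5) * (half / n_sub)  # midpoints on [0, T0/2]
def coeff(n):
    # a_n = (4 / T0) * integral_0^{T0/2} K(tau) cos(2 pi n tau / T0) d tau
    return (4.0 / T0) * np.sum(K(tau_mid)
                               * np.cos(2.0 * np.pi * n * tau_mid / T0)) * (half / n_sub)
a = np.array([coeff(n) for n in range(M + 1)])

def K_hat(tau):
    n = np.arange(1, M + 1)
    return a[0] / 2.0 + np.sum(a[1:] * np.cos(2.0 * np.pi * n * tau / T0))

# Mean square value of the approximation errors over a grid, as in Section 6
grid = np.linspace(0.0, half, 201)
msv = float(np.mean([(K(t) - K_hat(t)) ** 2 for t in grid]))
```

For a smooth even covariance, the MSV of the approximation errors decreases as the number of terms grows, which is the behavior reported for the simulation examples.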
6 Numerical Simulation
Examples
Example 1
Let the observation equation for the signal z(t) and
the state differential equations for the state x(t) be given by

y(t) = z(t) + v(t),  z(t) = H x(t),
dx(t)/dt = A x(t) + \Gamma w(t),  x(t) = [x_1(t)  x_2(t)]^T,
E[w(t)] = 0,  E[v(t)] = 0,
E[w(t) w(s)] = Q \delta(t - s),  E[v(t) v(s)] = R \delta(t - s),
E[w(t) v(s)] = 0,
(24)

where A is the 2 x 2 system matrix, \Gamma is the 2 x 1
input matrix, and H is the 1 x 2 observation vector.
Let the observation equation for the degraded
signal \breve z(t) and the state differential equations for
the degraded state \breve x(t) be given by:

\breve y(t) = \breve z(t) + v(t),  \breve z(t) = H \breve x(t),
d\breve x(t)/dt = (A + \Delta A(t)) \breve x(t) + \Gamma w(t),
E[w(t) w(s)] = Q \delta(t - s),  E[v(t) v(s)] = R \delta(t - s).
(25)
\Delta A(t) represents an uncertain matrix that is
added to the system matrix A. Its nonzero entries are
a scalar random number drawn from a uniform distribution.
in 󰇛󰇜. The value of for the finite Fourier cosine
series approximations in (20) and (22) is  in this
simulation. Figure 1 illustrates the autocovariance
function
󰇛󰇜 of the degraded signal 󰇛󰇜 vs. ,
, . The MSV of the finite Fourier
cosine series approximation errors for
󰇛󰇜 is
evaluated as
󰇛
󰇛󰇜
󰇛󰇜󰇜


 
Figure 2 illustrates the cross-covariance
function K_{z \breve y}(\tau) of the signal z(t) with the observed
value vs. \tau. Here, T_0 is the common fundamental
period used for \hat K_{\breve z}(\tau) and \hat K_{z \breve y}(\tau). The MSV of
the finite Fourier cosine series approximation errors
for K_{z \breve y}(\tau) is evaluated as
(1/N) \sum_{i=1}^{N} (K_{z \breve y}(\tau_i) - \hat K_{z \breve y}(\tau_i))^2 = 8.103093306645037 x 10^(-...). From these
MSVs, the finite Fourier cosine series expansions
accurately approximate K_{\breve z}(\tau) and K_{z \breve y}(\tau).
Here, the midpoint rule is used to calculate the
numerical integration of (21) and (23) for the finite
Fourier cosine series coefficients a_n and b_n,
0 ≤ n ≤ M, with equal subintervals of [0, T_0/2].
By substituting the autocovariance information \breve A(t)
and \breve B(t), the cross-covariance information \alpha(t) and
\beta(t), and the value of R into the robust
RLS fixed-point smoothing and filtering algorithms
of Theorem 1, the fixed-point smoothing and
filtering estimates are computed recursively.
Figure 3 illustrates the signal z(t) and its
filtering estimate \hat z(t) vs. t for the first white Gaussian
observation noise.
Figure 4 illustrates the signal z(t) and its
filtering estimate \hat z(t) vs. t for the second white Gaussian
observation noise. From Figure 3 and
Figure 4, the filtering estimate for the observation noise of
Figure 3 is closer to the signal process than that for the
observation noise of Figure 4.
Figure 5 illustrates the signal z(t) and its fixed-
point smoothing estimate \hat z(t, t + Lag) vs. t for
the first white Gaussian observation noise.
Figure 3 and Figure 5 show that the fixed-point
smoothing and filtering estimates have nearly
identical waveforms. Table 1 shows the MSVs of
the filtering errors z(t) - \hat z(t) by the robust RLS
filter in Theorem 1 and the robust RLS Wiener filter,
[19], and those of the fixed-point smoothing errors
z(t) - \hat z(t, t + Lag) by the robust RLS fixed-
point smoother in Theorem 1 for the three white Gaussian
observation noises. The MSV by the robust RLS filter in
Theorem 1 is smaller than that by the robust RLS
Wiener filter, [19], for one of the white Gaussian
observation noises. The MSV of the
filtering errors by the robust RLS filter in Theorem
1 is almost the same as that of the fixed-point
smoothing errors by the robust RLS fixed-point
smoother in Theorem 1 for each white Gaussian
observation noise.
Figure 6 illustrates the MSVs of the filtering
and fixed-point smoothing errors by the robust RLS
filter and the robust RLS fixed-point smoother in
Theorem 1 vs. Lag for the three white Gaussian
observation noises. Here, the MSV of the filtering errors is
evaluated by (1/N) \sum_{i=1}^{N} (z(t_i) - \hat z(t_i))^2. The
MSV of the fixed-point smoothing errors is evaluated by
(1/N) \sum_{i=1}^{N} (z(t_i) - \hat z(t_i, t_i + Lag))^2.
From Figure 6, for one of the white
Gaussian observation noises, the MSV of
the fixed-point smoothing errors z(t) - \hat z(t, t + Lag)
is slightly smaller than that of the filtering
errors. For the other two white Gaussian observation noises,
the MSVs of the filtering
errors are almost the same as those of the fixed-
point smoothing errors over the range of Lag values
examined.
In the simulation example, a fixed number of differential
equations, determined by the dimensions of \breve A(t) and
\alpha(t), run simultaneously for each filtering-estimate
update. Updating the fixed-point smoothing estimate
requires additional differential equations, which are
computed recursively.
Fig. 1: Autocovariance function K_{\breve z}(\tau) of the
degraded signal \breve z(t) vs. \tau
Fig. 2: Cross-covariance function K_{z \breve y}(\tau) of the
signal z(t) with the observed value vs. \tau

Fig. 3: Signal z(t) and its filtering estimate \hat z(t)
vs. t for the first white Gaussian observation noise

Fig. 4: Signal z(t) and its filtering estimate \hat z(t)
vs. t for the second white Gaussian observation noise

Fig. 5: Signal z(t) and its fixed-point smoothing
estimate \hat z(t, t + Lag) vs. t for the first white Gaussian
observation noise

Table 1. MSVs of the filtering errors z(t) - \hat z(t) by
the robust RLS filter in Theorem 1 and the robust
RLS Wiener filter, [19], and of the fixed-point
smoothing errors z(t) - \hat z(t, t + Lag) by the
robust RLS fixed-point smoother in Theorem 1 for
the three white Gaussian observation noises.

Fig. 6: MSVs of the filtering and fixed-point
smoothing errors by the robust RLS filter and the
robust RLS fixed-point smoother in Theorem 1 vs.
Lag for the three white Gaussian observation noises
White Gaussian observation noise | MSV of z(t) - \hat z(t) by filter in [19] | MSV of z(t) - \hat z(t) by filter in Theorem 1 | MSV of z(t) - \hat z(t, t + Lag) by fixed-point smoother in Theorem 1
First noise  | 6.707662 x 10^(...) | 1.131457 x 10^(...) | 1.035210 x 10^(...)
Second noise | 4.526449 x 10^(...) | 8.441589 x 10^(...) | 8.337765 x 10^(...)
Third noise  | 8.558077 x 10^(...) | 1.350774 x 10^(...) | 1.344245 x 10^(...)

Example 2
Let us consider the second-order mass-spring
system driven by zero-mean white Gaussian noise,
[29], [30]:

y(t) = z(t) + v(t),  z(t) = H x(t),
dx(t)/dt = A x(t) + \Gamma w(t),  x(t) = [x_1(t)  x_2(t)]^T,
E[w(t)] = 0,  E[v(t)] = 0,
E[w(t) w(s)] = Q \delta(t - s),  E[v(t) v(s)] = R \delta(t - s),
E[w(t) v(s)] = 0.
(26)
Fig. 7: Signal z(t) and its fixed-point smoothing
estimate \hat z(t, t + Lag) vs. t for the first white Gaussian
observation noise

Fig. 8: MSVs of the filtering and fixed-point
smoothing errors by the robust RLS filter and the
robust RLS fixed-point smoother in Theorem 1 vs.
Lag for the three white Gaussian observation noises
The state-space model is equivalently expressed
by a series electrical circuit, [29].
Figure 7 illustrates the signal z(t) and its fixed-
point smoothing estimate \hat z(t, t + Lag) vs. t for
the first white Gaussian observation noise.
Figure 7 shows that \hat z(t, t + Lag) estimates z(t)
feasibly.
Figure 8 illustrates the MSVs of the filtering
and fixed-point smoothing errors by the robust RLS
filter and the robust RLS fixed-point smoother in
Theorem 1 vs. Lag for the three white Gaussian
observation noises. From Figure 8, for one of the white Gaussian
observation noises, the MSV of the fixed-
point smoothing errors z(t) - \hat z(t, t + Lag) is
slightly smaller than that of the filtering errors. For
the other two white Gaussian observation noises,
the MSVs of the filtering errors are
almost the same as those of the fixed-point
smoothing errors over the range of Lag values
examined.
7 Conclusion
This paper has proposed a novel robust estimation
technique for continuous-time uncertain stochastic
systems. In the degraded state-space model, the
observation vector and the system matrix include
uncertain parameters. Additive white Gaussian noise
is present in the observation of the degraded signal.
The feature of utilizing covariance information is
present in the robust RLS fixed-point smoothing and
filtering algorithms in Theorem 1. The finite Fourier
cosine series expansion approximates the cross-
covariance function of the signal with the observed
value, as well as the autocovariance function of the
degraded signal.
In the first simulation example, the MSV by the
robust RLS filter in Theorem 1 is smaller than that
by the robust RLS Wiener filter for one of the white
Gaussian observation noises. In the two
simulation examples, using the robust RLS fixed-
point smoother and filter in Theorem 1, the
MSV of the fixed-point smoothing errors
z(t) - \hat z(t, t + Lag) is slightly smaller than that of the
filtering errors for one of the white Gaussian observation
noises. For the other two white Gaussian observation noises,
the MSVs of the
filtering errors are nearly identical to those of the
fixed-point smoothing errors z(t) - \hat z(t, t + Lag)
over the range of Lag values examined. Based on
these results, the proposed fixed-point smoothing
and filtering method utilizing covariance
information is valid.
The proposed robust estimation method using
covariance information can lead to the development of
new robust estimators for continuous-time
stochastic systems.
References:
[1] U. Shaked and C. E. de Souza, Robust
minimum variance filtering, IEEE
Transactions on Signal Processing, Vol. 43,
No. 11, 1995, pp. 2474-2483, DOI:
10.1109/78.482099.
[2] C. E. de Souza, U. Shaked, M. Fu, Robust
filtering for continuous time varying uncertain
systems with deterministic input signals, IEEE
Transactions on Signal Processing, Vol. 43,
No. 3, 1995, pp. 709–719, DOI:
10.1109/78.370625.
[3] F. L. Lewis, L. Xie, D. Popa, Optimal and
Robust Estimation With an Introduction to
Stochastic Control Theory, Second Edition,
CRC Press, 2008.
[4] S. O. R. Moheimani, A. V. Savkin, I. R.
Petersen, Robust filtering, prediction,
smoothing and observability of uncertain
systems, IEEE Transactions on Circuits and
Systems I: Fundamental Theory and
Applications, Vol. 45, No.4, 1998, pp. 446–
457, DOI: 10.1109/81.669068.
[5] Z. Quan, S. Han, J. H. Park, W. H. Kwon,
Robust FIR Filters for Linear Continuous-
Time State-Space Models With Uncertainties,
IEEE Signal Processing Letters, Vol. 15,
2008, pp. 621–624,
DOI: 10.1109/LSP.2008.2004515.
[6] Y. Shmaliy, S. Zhao, Optimal and Robust
State Estimation: Finite Impulse Response
(FIR) and Kalman Approaches, IEEE Press,
Piscataway, NJ, 2022.
[7] Q. Cheng, B. Cui, Improved results on robust
energy-to-peak filtering for continuous-time
uncertain linear systems, Circuits, Systems,
and Signal Processing, Vol. 38, No. 5, 2019,
pp. 2335–2350, DOI: 10.1007/s00034-018-
0965-7.
[8] D.-W. Ding, G.-H. Yang, Robust filtering
for uncertain continuous-time switched linear
systems, Proceedings of the IEEE
International Conference on Control
Applications, CCA 2009 and of the
International Symposium on Intelligent
Control, ISIC 2009, Saint Petersburg, Russia,
July 8-10, 2009, IEEE, 2009, pp. 1110–1115,
DOI: 10.1109/CCA.2009.5281161.
[9] E. Gershon, D. J. N. Limebeer, U. Shaked, I.
Yaesh, Robust filtering of stationary
continuous-time linear systems with stochastic
uncertainties, IEEE Transactions on
Automatic Control, Vol. 46, No. 11, 2001, pp.
1788–1793, DOI: 10.1109/9.964692.
[10] K. H. Lee, B. Huang, Robust optimal
filtering for continuous-time stochastic
systems with polytopic parameter uncertainty,
Automatica, Vol. 44, No. 10, 2008, pp. 2686–
2690, DOI:
10.1016/j.automatica.2008.02.025.
[11] X. Li, H. Gao, A delay-dependent approach to
robust generalized filtering for uncertain
continuous-time systems with interval delay,
Signal Processing, Vol. 91, No. 10, 2011, pp.
2371–2378, DOI:
10.1016/j.sigpro.2011.04.032.
[12] J. Qiu, G. Feng, J. Yang, Improved delay-
dependent robust filtering of continuous-
time polytopic linear systems with time-
varying delay, 10th International Conference
on Control, Automation, Robotics and Vision,
ICARCV 2008, Hanoi, Vietnam, 17-20
December 2008, Proceedings, IEEE, 2008,
pp. 53–58, DOI:
10.1109/ICARCV.2008.4795491.
[13] O. L. V. Costa, M. D. Fragoso, Robust linear
filtering for continuous-time hybrid Markov
linear systems, Proceedings of the 47th IEEE
Conference on Decision and Control, CDC
2008, December 9-11, 2008, Cancún, Mexico,
IEEE, 2008, pp. 5098–5103, DOI:
10.1109/CDC.2008.4739044.
[14] X. Xiao, H. Xi, J. Zhu, H. Ji, Robust Kalman
filter of continuous-time Markov jump linear
systems based on state estimation
performance, International Journal of Systems
Science, Vol. 39, No. 1, 2008, pp. 9–16, DOI:
10.1080/00207720701597456.
[15] A. El-Amrani, B. Boukili, A. E. Hajjaji, A.
Hmamed, Robust H-infinity filter for uncertain
continuous-time systems with finite frequency
ranges, 26th Mediterranean Conference on
Control and Automation, MED 2018, Zadar,
Croatia, June 19-22, 2018, IEEE, 2018, pp.
807–812, DOI: 10.1109/MED.2018.8443021.
[16] A. G. Kallapur, I. G. Vladimirov, I. R.
Petersen, Robust filtering for continuous-time
uncertain nonlinear systems with an integral
quadratic constraint, American Control
Conference, ACC 2012, Montreal, QC,
Canada, June 27-29, 2012, IEEE, 2012, pp.
4807–4812, DOI:
10.1109/ACC.2012.6314612.
[17] J. Qiu, H. Tian, Q. Lu, H. Gao,
Nonsynchronized Robust Filtering Design for
Continuous-Time T-S Fuzzy Affine Dynamic
Systems Based on Piecewise Lyapunov
Functions, IEEE Transactions on Cybernetics,
Vol. 43, No. 6, 2013, pp. 1755–1766, DOI:
10.1109/TSMCB.2012.2229389.
[18] H. Tian, J. Qiu, H. Gao, Q. Lu, New results on
robust filtering design for continuous-time
nonlinear systems via T-S fuzzy affine
dynamic models, 2012 12th International
Conference on Control Automation Robotics
& Vision (ICARCV), Guangzhou, China, Dec.
5-7, 2012, pp. 1220–1225, DOI:
10.1109/ICARCV.2012.6485349.
[19] S. Nakamori, Robust recursive least-squares
Wiener filter for linear continuous-time
uncertain stochastic systems, WSEAS
Transactions on Signal Processing, Vol. 19,
No. 12, 2023, pp. 108–117, DOI:
10.37394/232014.2023.19.12.
[20] T. C. Hsia, System Identification: Least-
Squares Methods, Lexington Books, 1977.
[21] A. P. Sage, J. L. Melsa, Estimation Theory
with Applications to Communications and
Control, McGraw-Hill, 1971.
[22] R. Bellman, G. M.
Wing, An Introduction to
Invariant Imbedding: Classics in Applied
Mathematics, Society for Industrial and
Applied Mathematics, 1992.
[23] H. Kagiwada, R. Kalaba, Imbedding Methods
for Integral Equations with Applications,
Solution Methods for Integral Equations:
Theory and Applications, M. A. Golberg
(Ed.), Mathematical Concepts and Methods in
Science and Engineering, Springer, Boston,
1979, pp. 195–223.
[24] B. Millidge, A. Tschantz, A. Seth, C.
Buckley, Neural Kalman Filtering, arXiv:
2102.10021, 2021, pp. 1-12, DOI:
10.48550/arXiv.2102.10021.
[25] S. Kim, I. Petrunin, H.-S. Shin, A Review
of Kalman Filter with Artificial Intelligence
Techniques, 2022 Integrated
Communication, Navigation and Surveillance
Conference (ICNS), Dulles, VA, USA, 2022,
pp. 1-12, DOI:
10.1109/ICNS54818.2022.9771520.
[26] A. Juárez-Lora, L. M. García-Sebastián, V. H.
Ponce-Ponce, E. Rubio-Espino, H. Molina-
Lozano, H. Sossa, Implementation of Kalman
Filtering with Spiking Neural Networks,
Sensors, Vol. 22, No. 22, 2022, pp. 1-16,
DOI: 10.3390/s22228845.
[27] Z. Cui, J. Dai, J. Sun, D. Li, L. Wan, K.
Wang, Hybrid Methods Using Neural
Network and Kalman Filter for the State of
Charge Estimation of Lithium-Ion Battery,
Mathematical Problems in Engineering, Vol.
2022, Article ID 9616124, 2022, pp. 1-11,
DOI: 10.1155/2022/9616124.
[28] Y. Bai, B. Yan, C. Zhou, T. Su, X. Jin, State
of art on state estimation: Kalman filter driven
by machine learning, Annual Reviews in
Control, Vol. 56, 2023, p. 100909, DOI:
10.1016/j.arcontrol.2023.100909.
[29] S. Nakamori, Design of linear continuous-
time stochastic estimators using covariance
information in Krein spaces, IEICE
Transactions on Fundamentals, Vol. E84A,
No. 9, 2001, pp. 2261-2271.
[30] M. S. Grewal, A. P. Andrews, Kalman
Filtering: Theory and Practice Using Matlab,
Third Edition, John Wiley & Sons, Inc., 2008.
APPENDIX
Proof of Theorem 1
Differentiating (7) with respect to T, we have

0 = \partial h(t, s, T)/\partial T R + \int_0^T \partial h(t, \tau, T)/\partial T K_{\breve z}(\tau, s) d\tau + h(t, T, T) K_{\breve z}(T, s).
(A-1)

Introducing a function J(T, s) satisfying

\breve B^T(s) = J(T, s) R + \int_0^T J(T, \tau) K_{\breve z}(\tau, s) d\tau,
(A-2)

we obtain, with K_{\breve z}(T, s) = \breve A(T) \breve B^T(s),

\partial h(t, s, T)/\partial T = - h(t, T, T) \breve A(T) J(T, s).
(A-3)

From (7), h(t, T, T) satisfies

K_{z \breve y}(t, T) = h(t, T, T) R + \int_0^T h(t, \tau, T) K_{\breve z}(\tau, T) d\tau.
(A-4)

From (9), (A-4) is transformed into

K_{z \breve y}(t, T) = h(t, T, T) R + \int_0^T h(t, \tau, T) \breve B(\tau) d\tau \breve A^T(T).
(A-5)

Introducing a function

q(t, T) = \int_0^T h(t, \tau, T) \breve B(\tau) d\tau,
(A-6)

h(t, T, T) is given by

h(t, T, T) = (K_{z \breve y}(t, T) - q(t, T) \breve A^T(T)) R^{-1}.
(A-7)

Differentiating (A-6) with respect to T, we have

\partial q(t, T)/\partial T = h(t, T, T) \breve B(T) + \int_0^T \partial h(t, \tau, T)/\partial T \breve B(\tau) d\tau.
(A-8)

Substituting (A-3) into (A-8) and introducing a function

r(T) = \int_0^T J(T, \tau) \breve B(\tau) d\tau,
(A-9)

we have

\partial q(t, T)/\partial T = h(t, T, T)(\breve B(T) - \breve A(T) r(T)).
(A-10)

The fixed-point smoothing estimate \hat z(t, T) of the
signal z(t) is given by (3). Differentiating (3) with
respect to T, we have

d\hat z(t, T)/dT = h(t, T, T) \breve y(T) + \int_0^T \partial h(t, s, T)/\partial T \breve y(s) ds.
(A-11)

Substituting (A-3) into (A-11), we have

d\hat z(t, T)/dT = h(t, T, T) \breve y(T) - h(t, T, T) \breve A(T) \int_0^T J(T, s) \breve y(s) ds.
(A-12)

Introducing e(T) given by

e(T) = \int_0^T J(T, s) \breve y(s) ds,
(A-13)

(A-12) is transformed into

d\hat z(t, T)/dT = h(t, T, T)(\breve y(T) - \breve A(T) e(T)),  \hat z(t, t) = \hat z(t).
(A-14)

From (7), the impulse response function g(t, s) = h(t, s, t)
for the filtering estimate \hat z(t) of z(t) satisfies

K_{z \breve y}(t, s) = g(t, s) R + \int_0^t g(t, \tau) K_{\breve z}(\tau, s) d\tau,  0 ≤ s ≤ t.
(A-15)

Introducing a function J_1(t, s) satisfying

\beta^T(s) = J_1(t, s) R + \int_0^t J_1(t, \tau) K_{\breve z}(\tau, s) d\tau,
(A-16)

g(t, s) is given, with K_{z \breve y}(t, s) = \alpha(t) \beta^T(s), by

g(t, s) = \alpha(t) J_1(t, s).
(A-17)

Differentiating (A-16) with respect to t, we have

0 = \partial J_1(t, s)/\partial t R + \int_0^t \partial J_1(t, \tau)/\partial t K_{\breve z}(\tau, s) d\tau + J_1(t, t) K_{\breve z}(t, s).
(A-18)

From (A-2) and (A-18), \partial J_1(t, s)/\partial t satisfies

\partial J_1(t, s)/\partial t = - J_1(t, t) \breve A(t) J(t, s).
(A-19)

From (A-16), J_1(t, t) satisfies

\beta^T(t) = J_1(t, t) R + \int_0^t J_1(t, \tau) K_{\breve z}(\tau, t) d\tau.
(A-20)

From (9), (A-20) is rewritten as

\beta^T(t) = J_1(t, t) R + \int_0^t J_1(t, \tau) \breve B(\tau) d\tau \breve A^T(t).
(A-21)

Introducing a function

r_1(t) = \int_0^t J_1(t, \tau) \breve B(\tau) d\tau,
(A-22)

J_1(t, t) is given by

J_1(t, t) = (\beta^T(t) - r_1(t) \breve A^T(t)) R^{-1}.
(A-23)

Differentiating (A-22) with respect to t, we have

dr_1(t)/dt = J_1(t, t) \breve B(t) + \int_0^t \partial J_1(t, \tau)/\partial t \breve B(\tau) d\tau.
(A-24)

Substituting (A-19) into (A-24), we have

dr_1(t)/dt = J_1(t, t) \breve B(t) - J_1(t, t) \breve A(t) \int_0^t J(t, \tau) \breve B(\tau) d\tau.
(A-25)

Introducing the function

r(t) = \int_0^t J(t, \tau) \breve B(\tau) d\tau
(A-26)

of (A-9) with T = t, we obtain

dr_1(t)/dt = J_1(t, t)(\breve B(t) - \breve A(t) r(t)),  r_1(0) = 0.
(A-27)

From (A-2), J(t, s) satisfies

\breve B^T(s) = J(t, s) R + \int_0^t J(t, \tau) K_{\breve z}(\tau, s) d\tau.
(A-28)

Differentiating (A-28) with respect to t, we have

0 = \partial J(t, s)/\partial t R + \int_0^t \partial J(t, \tau)/\partial t K_{\breve z}(\tau, s) d\tau + J(t, t) K_{\breve z}(t, s).
(A-29)

From (9), (A-29) is transformed into

0 = \partial J(t, s)/\partial t R + \int_0^t \partial J(t, \tau)/\partial t K_{\breve z}(\tau, s) d\tau + J(t, t) \breve A(t) \breve B^T(s).
(A-30)

From (A-28), we obtain

\partial J(t, s)/\partial t = - J(t, t) \breve A(t) J(t, s).
(A-31)

From (A-28), J(t, t) satisfies

\breve B^T(t) = J(t, t) R + \int_0^t J(t, \tau) K_{\breve z}(\tau, t) d\tau.
(A-32)

From (9), (A-32) is rewritten as

\breve B^T(t) = J(t, t) R + r(t) \breve A^T(t).
(A-33)

From (A-26), J(t, t) is given by

J(t, t) = (\breve B^T(t) - r(t) \breve A^T(t)) R^{-1}.
(A-34)

The filtering estimate \hat z(t) of the signal z(t) is
given by

\hat z(t) = \int_0^t g(t, s) \breve y(s) ds.
(A-35)

Substituting (A-17) into (A-35), we have

\hat z(t) = \alpha(t) \int_0^t J_1(t, s) \breve y(s) ds.
(A-36)

Introducing a function e_1(t) given by

e_1(t) = \int_0^t J_1(t, s) \breve y(s) ds,
(A-37)

\hat z(t) is given by

\hat z(t) = \alpha(t) e_1(t).
(A-38)

Differentiating (A-37) with respect to t, we have

de_1(t)/dt = J_1(t, t) \breve y(t) + \int_0^t \partial J_1(t, s)/\partial t \breve y(s) ds.
(A-39)

Substituting (A-19) into (A-39), we have

de_1(t)/dt = J_1(t, t) \breve y(t) - J_1(t, t) \breve A(t) \int_0^t J(t, s) \breve y(s) ds.
(A-40)

From (A-13), we obtain

de_1(t)/dt = J_1(t, t)(\breve y(t) - \breve A(t) e(t)),  e_1(0) = 0.
(A-41)

From (A-13), differentiating e(t) with respect to t,
we have

de(t)/dt = J(t, t) \breve y(t) + \int_0^t \partial J(t, s)/\partial t \breve y(s) ds.
(A-42)

Substituting (A-31) into (A-42), we have

de(t)/dt = J(t, t) \breve y(t) - J(t, t) \breve A(t) \int_0^t J(t, s) \breve y(s) ds.
(A-43)

From (A-13), we obtain

de(t)/dt = J(t, t)(\breve y(t) - \breve A(t) e(t)),  e(0) = 0.
(A-44)

Differentiating (A-26) with respect to t, we have

dr(t)/dt = J(t, t) \breve B(t) + \int_0^t \partial J(t, \tau)/\partial t \breve B(\tau) d\tau.
(A-45)

Substituting (A-31) into (A-45), we have

dr(t)/dt = J(t, t) \breve B(t) - J(t, t) \breve A(t) \int_0^t J(t, \tau) \breve B(\tau) d\tau.
(A-46)

From (A-26), we obtain

dr(t)/dt = J(t, t)(\breve B(t) - \breve A(t) r(t)),  r(0) = 0.
(A-47)

From (A-6), the initial condition q(t, t) of (A-10) at T = t is
given by

q(t, t) = \int_0^t h(t, s, t) \breve B(s) ds.
(A-48)

Substituting (A-17) into (A-48), we have

q(t, t) = \alpha(t) \int_0^t J_1(t, s) \breve B(s) ds.
(A-49)

From (A-22), q(t, t) is given by

q(t, t) = \alpha(t) r_1(t).
(A-50)

(Q.E.D.)
Contribution of Individual Authors to the
Creation of a Scientific Article (Ghostwriting
Policy)
The author contributed to the present research, at all
stages from the formulation of the problem to the
final findings and solution.
Sources of Funding for Research Presented in a
Scientific Article or Scientific Article Itself
No funding was received for conducting this study.
Conflict of Interest
The author has no conflicts of interest to declare.
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the
Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en
_US