Robust Estimators for Missing Observations in Linear Discrete-Time
Stochastic Systems with Uncertainties
SEIICHI NAKAMORI
Professor Emeritus, Faculty of Education,
Kagoshima University,
1-20-6, Korimoto, Kagoshima, 890-0065,
JAPAN
Abstract: - As a first approach to estimating the signal and the state, Theorem 1 proposes recursive least-
squares (RLS) Wiener fixed-point smoothing and filtering algorithms that are robust to missing measurements
in linear discrete-time stochastic systems with uncertainties. The degraded quantity is given by multiplying the
Bernoulli random variable by the degraded signal caused by the uncertainties in the system and observation
matrices. The degraded quantity is observed with additional white observation noise. The probability that the
degraded signal is present in the observation equation is assumed to be known. The design feature of the
proposed robust estimators is the fitting of the degraded signal to a finite-order autoregressive (AR) model.
Theorem 1 is transformed into Corollary 1, which expresses the covariance information in a semi-degenerate
kernel form. The autocovariance function of the degraded state and the cross-covariance function between the
nominal state and the degraded state are expressed in semi-degenerate kernel forms. Theorem 2 shows the robust
RLS Wiener fixed-point smoothing and filtering algorithms for estimating the signal and state from degraded observations
in the second method. The robust estimation algorithm of Theorem 2 has the advantage that, unlike Theorem 1
and conventional studies, it does not use information on the existence probability of the degraded signal. This is a
unique feature of Theorem 2.
Key-Words: - Robust RLS Wiener fixed-point smoother, robust RLS Wiener filter, missing observations,
discrete-time stochastic systems, uncertain parameters, degraded signal.
Received: July 12, 2022. Revised: October 19, 2023. Accepted: November 22, 2023. Published: December 29, 2023.
1 Introduction
1.1 Brief Literature Review
In sensor network systems, missing measurements
are often due to the limited bandwidth of the
network. Missing measurements occur at random
rates. The presence of the degraded signal in the
observation equation is described by the Bernoulli
random variable. It takes the value 1 or 0 with a
known probability. This study aims to develop a
new robust estimation technique for missing
measurements.
A variety of estimation problems for systems
with missing measurements have been studied in
detail, [1], [2], [3], [4], [5], [6], [7], [8], [9], [10],
[11], [12], [13], [14], [15], [16], [17], [18]. For
nonlinear stochastic systems without uncertainties,
estimators for missing measurements have been
developed, [10], [11]. A robust filter for nonlinear
time-delayed stochastic systems with uncertainties
was developed in, [12]. In, [13], a robust finite-
horizon Kalman filter was presented for systems
with norm-bounded parameter uncertainty and
missing measurements with a random N-step
observation delay. Missing probabilities are used for
the robust Kalman filter. Robust fusion estimation
problems with missing measurements have been
studied in multisensor network systems, [2], [7],
[14], [15], [16], [17], [18]. In, [7], [16], robust
centralized fusion (CF) and weighted measurement
fusion (WMF) Kalman estimators were designed for
missing measurements in linear stochastic systems
with uncertainties. In, [2], robust fusion Kalman
estimators were proposed for stochastic systems
with uncertainties in the system and input matrices.
Observations are delayed by one randomly
occurring step and missing observations occur.
In linear discrete-time stochastic systems with
uncertainties, robust recursive least-squares (RLS)
Wiener estimators are developed as follows: (1)
Robust RLS Wiener fixed-point smoother and filter
for signal estimation, [19], (2) Robust RLS Wiener
finite impulse response (FIR) predictor, [20], (3)
Centralized multisensor robust Chandrasekhar-type
RLS Wiener filter, [21]. In, [22], robust RLS
Wiener estimators, [19], were applied to the
estimation problem for random delays, packet
dropouts, and out-of-order packets in observed data
when the system matrix and observation vector have
uncertain parameters.
1.2 Current Study
As the first approach for estimating the signal and
state, Theorem 1 proposes RLS Wiener fixed-point
smoothing and filtering algorithms that are robust to
missing measurements in linear discrete-time
stochastic systems with uncertainties. The Bernoulli
random variable is multiplied by the degraded signal
due to the uncertainties in the system and
observation matrices. It is assumed that the
probability of the degraded signal being present in
the observation equation is known. Theorem 1
presents the RLS Wiener fixed-point smoothing and
filtering algorithms that estimate the signal and the
state from the missing measurements using
information on the probability and without using
any information on the uncertainties. The design
feature of the proposed robust estimators is to fit the
degraded signal to an autoregressive (AR) model of
a finite order. Theorem 1 is transformed into
Corollary 1, which expresses the covariance
information in a semi-degenerate kernel form. The
autocovariance function of the degraded state and
the cross-covariance function between the nominal
state and the degraded state are expressed in semi-
degenerate kernel forms. Theorem 2 shows the
robust RLS Wiener fixed-point smoothing and
filtering algorithms, [19], for estimating the signal
and the state from degraded observations in the
second method. A degraded quantity is given by
multiplying the Bernoulli random variable by the
degraded signal caused by the uncertainties in the
system and observation matrices. The degraded
quantity is observed with additive white
observation noise. In contrast to Theorem 1 and
other studies, e.g., [10], [11], [12], [13], the robust
estimation algorithms of Theorem 2 have the
advantage of not using information on the existence
probability of the degraded signal. This is a unique
feature of Theorem 2.
By the way, a combination of an unscented
Kalman filter and a back-propagation neural
network has been applied to GPS/SINS integrated
navigation, [23]. In order to predict the traffic state
of the entire network by modeling the dependencies
of individual self and neighbors, a deep learning
framework called Deep Kalman Filtering Network
has been studied, [24]. In, [25], extended
estimators using covariance information are
presented. Their estimation accuracy is superior to
that of the Kalman filter, neuro-computing, and
maximum a posteriori (MAP) estimation methods.
The numerical simulation example in Section 5
compares the estimation accuracies of the robust
RLS Wiener estimators in Theorem 1 with those of
the robust RLS Wiener estimators in Theorem 2.
The estimation accuracies of the robust RLS Wiener
estimators in Theorem 2 are superior to those of the
robust RLS Wiener estimators in Theorem 1.
2 Recursive Least-Squares Fixed-Point
Smoothing Problem
Let the state-space model for the signal $z(k)$ be given by (1).

$$y(k) = z(k) + v(k), \quad z(k) = H x(k),$$
$$x(k+1) = \Phi x(k) + \Gamma w(k),$$
$$E[v(k)v^T(s)] = R\,\delta_K(k-s), \quad E[w(k)w^T(s)] = Q\,\delta_K(k-s), \quad E[v(k)w^T(s)] = 0. \qquad (1)$$

$y(k)$: observation vector; $z(k)$: signal vector; $x(k)$: state vector; $v(k)$: white observation noise with mean zero; $w(k)$: input noise vector with mean zero; $\Phi$: system matrix; $H$: observation matrix.
$z(k)$ is the signal to be estimated. Here, the following assumptions are introduced.
(1) $v(k)$ is the white observation noise with the variance $R$. $w(k)$ is the white input noise with the variance $Q$. $\delta_K(\cdot)$ denotes the Kronecker delta function. $v(k)$ and $w(k)$ have zero means.
(2) The state $x(k)$, the observation noise $v(k)$, and the input noise $w(k)$ are mutually independent.
Consider the degraded state-space model (2) with uncertainties in the system and observation matrices for the system (1).

$$\breve{y}(k) = \gamma(k)\breve{z}(k) + v(k),$$
$$\breve{z}(k) = \breve{H}(k)\breve{x}(k), \quad \breve{H}(k) = H + \Delta H(k),$$
$$\breve{x}(k+1) = \breve{\Phi}(k)\breve{x}(k) + \Gamma w(k), \quad \breve{\Phi}(k) = \Phi + \Delta\Phi(k),$$
$$E[\gamma(k)] = p(k). \qquad (2)$$

$\breve{y}(k)$: degraded observation vector; $\breve{z}(k)$: degraded signal vector; $\breve{x}(k)$: degraded state vector; $\breve{\Phi}(k)$: degraded system matrix; $\breve{H}(k)$: degraded observation matrix;
󰇛󰇜: uncertain matrix; 󰇛󰇜:
uncertain matrix
The degraded observed value 󰇛󰇜 is given as
the sum of the degraded quantity 󰇛󰇜󰇛󰇜 and the
observation noise 󰇛󰇜. The Bernoulli random
variable 󰇛󰇜 in the observation equation has the
probabilities 󰇟󰇛󰇜󰇠󰇛󰇜 and 󰇟󰇛󰇜
󰇠󰇛󰇜. The probability that 󰇛󰇜 is
󰇛󰇜. For 󰇛󰇜, the observation equation is
given by 󰇛󰇜󰇛󰇜󰇛󰇜. The probability that
󰇛󰇜 is 󰇛󰇜. For󰇛󰇜, the observed
value 󰇛󰇜 consists only of the observation noise
󰇛󰇜. The degraded system matrix
󰆽󰇛󰇜 is given as
a sum of the system matrix and the uncertain
matrix 󰇛󰇜. The degraded observation matrix
󰆽󰇛󰇜 is given as a sum of the system matrix and
the uncertain matrix 󰇛󰇜. Assume that 󰇛󰇜
and 󰇛󰇜 contain uncertain parameters,
respectively.
The first objective of this study is to design the RLS Wiener fixed-point smoothing and filtering algorithms that estimate the signal from the observed value $\breve{y}(k)$ using information such as the probability $p(k)$ and without using any information on the uncertain matrices $\Delta\Phi(k)$ and $\Delta H(k)$.
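The following is a minimal simulation sketch of the models (1) and (2), written in Python. All numerical values (the matrices $\Phi$, $\Gamma$, $H$, the perturbation forms $\Delta\Phi(k)$ and $\Delta H(k)$, the noise variances, and the probability $p$) are illustrative assumptions, not values taken from this paper; the sketch only shows how the Bernoulli variable $\gamma(k)$ removes the degraded signal from the observation at random times.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nominal model (1): x(k+1) = Phi x(k) + Gamma w(k), z(k) = H x(k)
Phi = np.array([[0.0, 1.0], [-0.7, 1.5]])   # assumed system matrix
Gamma = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])                  # observation (signal) matrix
Q, R = 0.5 ** 2, 0.3 ** 2                   # assumed input / observation noise variances
p = 0.9                                     # assumed probability P{gamma(k) = 1}

n_steps = 500
x = np.zeros((2, 1))        # nominal state x(k)
x_deg = np.zeros((2, 1))    # degraded state with uncertain system matrix
z_nom = np.zeros(n_steps)   # nominal signal z(k)
z_deg = np.zeros(n_steps)   # degraded signal
y_deg = np.zeros(n_steps)   # degraded observations

for k in range(n_steps):
    # hypothetical uncertain perturbations Delta_Phi(k), Delta_H(k)
    dPhi = np.array([[0.0, 0.0], [0.05 * rng.uniform(), 0.0]])
    dH = np.array([[0.05 * rng.uniform(), 0.0]])
    w = np.sqrt(Q) * rng.standard_normal()
    v = np.sqrt(R) * rng.standard_normal()
    gamma = rng.binomial(1, p)              # Bernoulli indicator of signal presence

    z_nom[k] = (H @ x).item()
    z_deg[k] = ((H + dH) @ x_deg).item()
    y_deg[k] = gamma * z_deg[k] + v         # observation with randomly missing signal

    x = Phi @ x + Gamma * w
    x_deg = (Phi + dPhi) @ x_deg + Gamma * w
```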
Let the sequence of the degraded signal $\breve{z}(k)$ be fitted to an $N$-th order AR model.

$$\breve{z}(k) = -a_1\breve{z}(k-1) - a_2\breve{z}(k-2) - \cdots - a_N\breve{z}(k-N) + \breve{e}(k),$$
$$E[\breve{e}(k)\breve{e}^T(s)] = \breve{Q}\,\delta_K(k-s). \qquad (3)$$

Let $\breve{x}(k)$ be expressed as

$$\breve{x}(k) = \begin{bmatrix} \breve{x}_1(k) \\ \breve{x}_2(k) \\ \vdots \\ \breve{x}_N(k) \end{bmatrix}, \quad \breve{x}_1(k) = \breve{z}(k),\ \breve{x}_2(k) = \breve{z}(k+1),\ \ldots,\ \breve{x}_N(k) = \breve{z}(k+N-1),$$
$$\breve{z}(k) = \breve{H}\breve{x}(k), \quad \breve{H} = [\,1\ 0\ \cdots\ 0\,]. \qquad (4)$$
From (3) and (4), the state equation for $\breve{x}(k)$ is given by

$$\begin{bmatrix} \breve{x}_1(k+1) \\ \breve{x}_2(k+1) \\ \vdots \\ \breve{x}_{N-1}(k+1) \\ \breve{x}_N(k+1) \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_N & -a_{N-1} & -a_{N-2} & \cdots & -a_1 \end{bmatrix} \begin{bmatrix} \breve{x}_1(k) \\ \breve{x}_2(k) \\ \vdots \\ \breve{x}_{N-1}(k) \\ \breve{x}_N(k) \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \breve{e}(k+N),$$
$$E[\breve{e}(k)\breve{e}^T(s)] = \breve{Q}\,\delta_K(k-s). \qquad (5)$$
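As a small illustration of the construction in (4) and (5), the following sketch builds the companion-form matrices from given AR coefficients; the coefficient values in the usage line are hypothetical.

```python
import numpy as np

def ar_to_state_space(a):
    """Companion-form (Phi, H) for the AR model
    z(k) = -a[0] z(k-1) - ... - a[N-1] z(k-N) + e(k), as in (3)-(5)."""
    a = np.asarray(a, dtype=float)
    N = len(a)
    Phi_ar = np.zeros((N, N))
    Phi_ar[:-1, 1:] = np.eye(N - 1)     # shift structure: x_i(k+1) = x_{i+1}(k)
    Phi_ar[-1, :] = -a[::-1]            # last row: [-a_N, ..., -a_1]
    H_ar = np.zeros((1, N))
    H_ar[0, 0] = 1.0                    # z(k) is the first state component
    return Phi_ar, H_ar

# usage with hypothetical AR(3) coefficients
Phi_ar, H_ar = ar_to_state_space([-1.5, 0.7, -0.1])
```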
Let $\breve{K}(k,s)$ be the autocovariance function of the state $\breve{x}(k)$. $\breve{K}(k,s)$ has the wide-sense stationarity (WSS) property $\breve{K}(k,s)=\breve{K}(k-s)$, [26]. $\breve{K}(k,s)$ is expressed in the semi-degenerate functional form as follows:

$$\breve{K}(k,s) = \begin{cases} \breve{A}(k)\breve{B}^T(s), & 0 \le s \le k, \\ \breve{B}(k)\breve{A}^T(s), & 0 \le k \le s, \end{cases}$$
$$\breve{A}(k) = \breve{\Phi}^k, \quad \breve{B}^T(s) = \breve{\Phi}^{-s}\breve{K}(s,s). \qquad (6)$$

Here $\breve{\Phi}$ denotes the system matrix of the state equation (5). $\breve{\Phi}$ is given by

$$\breve{\Phi} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_N & -a_{N-1} & -a_{N-2} & \cdots & -a_1 \end{bmatrix}. \qquad (7)$$
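The semi-degenerate factorization in (6) can be checked numerically: for a stable $\breve{\Phi}$ and its stationary state covariance, $\breve{A}(k)\breve{B}^T(s)=\breve{\Phi}^{k-s}\breve{K}(s,s)$ for $k \ge s$. The matrices in the sketch below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from numpy.linalg import matrix_power, inv

Phi_ar = np.array([[0.0, 1.0], [-0.7, 1.5]])   # hypothetical stable companion matrix
Q_in = np.array([[0.0, 0.0], [0.0, 0.1]])      # driving-noise covariance contribution

# stationary covariance K = Phi K Phi^T + Q by fixed-point iteration (spectral radius < 1)
K = np.zeros((2, 2))
for _ in range(5000):
    K = Phi_ar @ K @ Phi_ar.T + Q_in

k, s = 7, 3
A_k = matrix_power(Phi_ar, k)                  # A(k) = Phi^k
B_sT = matrix_power(inv(Phi_ar), s) @ K        # B^T(s) = Phi^{-s} K(s,s)
print(np.allclose(A_k @ B_sT, matrix_power(Phi_ar, k - s) @ K))   # True for k >= s
```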
From the relation $\breve{x}_i(k) = \breve{z}(k+i-1)$, $1 \le i \le N$, the wide-sense stationarity of $E[\breve{z}(k)\breve{z}^T(s)]$ in wide-sense stationary stochastic systems, [26], and (4), the autocovariance function $\breve{K}(k,k)$ of the state $\breve{x}(k)$ becomes

$$\breve{K}(k,k) = E[\breve{x}(k)\breve{x}^T(k)] = \begin{bmatrix} \breve{K}_{\breve{z}}(0) & \breve{K}_{\breve{z}}(1) & \cdots & \breve{K}_{\breve{z}}(N-1) \\ \breve{K}_{\breve{z}}(1) & \breve{K}_{\breve{z}}(0) & \cdots & \breve{K}_{\breve{z}}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ \breve{K}_{\breve{z}}(N-1) & \breve{K}_{\breve{z}}(N-2) & \cdots & \breve{K}_{\breve{z}}(0) \end{bmatrix},$$
$$\breve{K}_{\breve{z}}(m) = E[\breve{z}(k)\breve{z}^T(k-m)]. \qquad (8)$$
The Yule-Walker equations for the AR parameters $a_1, a_2, \ldots, a_N$ are the following:

$$\begin{bmatrix} \breve{K}_{\breve{z}}(0) & \breve{K}_{\breve{z}}(1) & \cdots & \breve{K}_{\breve{z}}(N-1) \\ \breve{K}_{\breve{z}}(1) & \breve{K}_{\breve{z}}(0) & \cdots & \breve{K}_{\breve{z}}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ \breve{K}_{\breve{z}}(N-1) & \breve{K}_{\breve{z}}(N-2) & \cdots & \breve{K}_{\breve{z}}(0) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} = - \begin{bmatrix} \breve{K}_{\breve{z}}(1) \\ \breve{K}_{\breve{z}}(2) \\ \vdots \\ \breve{K}_{\breve{z}}(N) \end{bmatrix}. \qquad (9)$$

Here, $\breve{K}_{\breve{z}}(m) = E[\breve{z}(k)\breve{z}^T(k-m)]$, $0 \le m \le N$, denotes the autocovariance function of the degraded signal $\breve{z}(k)$.
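A minimal sketch of solving the Yule-Walker equations (9) from sample autocovariances of a degraded-signal record follows; the generating AR(2) coefficients are hypothetical and are used only to verify that the estimates come out near the true values under the sign convention of (3).

```python
import numpy as np

def yule_walker(z, N):
    """AR coefficients a_1..a_N of z(k) = -a_1 z(k-1) - ... - a_N z(k-N) + e(k),
    estimated from biased sample autocovariances, as in the Yule-Walker equations (9)."""
    z = np.asarray(z, dtype=float) - np.mean(z)
    n = len(z)
    cov = np.array([np.dot(z[: n - m], z[m:]) / n for m in range(N + 1)])
    R_toep = np.array([[cov[abs(i - j)] for j in range(N)] for i in range(N)])
    return np.linalg.solve(R_toep, -cov[1 : N + 1])

# hypothetical degraded-signal record generated by an AR(2) model with a_1 = -1.5, a_2 = 0.7
rng = np.random.default_rng(1)
z = np.zeros(5000)
for k in range(2, 5000):
    z[k] = 1.5 * z[k - 1] - 0.7 * z[k - 2] + rng.standard_normal()
print(yule_walker(z, 2))   # approximately [-1.5, 0.7]
```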
Let $K_{x\breve{y}}(k,s) = E[x(k)\breve{y}^T(s)]$ denote the cross-covariance function between the state $x(k)$ and the observed value $\breve{y}(s)$. $K_{x\breve{y}}(k,s)$ satisfies the wide-sense stationarity relation $K_{x\breve{y}}(k,s)=K_{x\breve{y}}(k-s)$ in wide-sense stationary stochastic systems, [26]. Let $K_{x\breve{x}}(k,s)$ denote the cross-covariance function between the state $x(k)$ and the degraded state $\breve{x}(s)$. $K_{x\breve{x}}(k,s)$ is expressed in the following functional form:

$$K_{x\breve{x}}(k,s) = \alpha(k)\breve{\beta}^T(s), \quad 0 \le s \le k,$$
$$\alpha(k) = \Phi^k, \quad \breve{\beta}^T(s) = \Phi^{-s}K_{x\breve{x}}(s,s). \qquad (10)$$

From (1), $\Phi$ represents the system matrix for the state $x(k)$.
Let the fixed-point smoothing estimate $\hat{x}(k,L)$ of the state $x(k)$ at the fixed point $k$ be given by

$$\hat{x}(k,L) = \sum_{i=1}^{L} h(k,i,L)\,\breve{y}(i), \qquad (11)$$

using the observed values $\{\breve{y}(i),\ 1 \le i \le L\}$. In (11), $h(k,i,L)$ denotes a time-varying impulse response function. Consider the estimation problem of minimizing the mean square value (MSV)

$$J = E[\|x(k)-\hat{x}(k,L)\|^2] \qquad (12)$$

of the fixed-point smoothing errors. By the orthogonal projection lemma, [26],

$$x(k) - \sum_{i=1}^{L} h(k,i,L)\,\breve{y}(i) \perp \breve{y}(s), \quad 1 \le s \le L, \qquad (13)$$

the impulse response function satisfies the Wiener-Hopf equation

$$E[x(k)\breve{y}^T(s)] = \sum_{i=1}^{L} h(k,i,L)\,E[\breve{y}(i)\breve{y}^T(s)]. \qquad (14)$$
Here, '$\perp$' denotes the orthogonality notation. From (1), (2), (4), (8), and the relation $E[\breve{y}(i)\breve{y}^T(s)] = p(i)p(s)\breve{H}\breve{K}(i,s)\breve{H}^T + [\,p(s)(1-p(s))\breve{H}\breve{K}(s,s)\breve{H}^T + R\,]\,\delta_K(i-s)$, we get

$$p(s)K_{x\breve{z}}(k,s) = \sum_{i=1}^{L} h(k,i,L)\,p(i)p(s)\breve{H}\breve{K}(i,s)\breve{H}^T + h(k,s,L)[\,p(s)(1-p(s))\breve{H}\breve{K}(s,s)\breve{H}^T + R\,]. \qquad (15)$$

Here, $K_{x\breve{z}}(k,s) = E[x(k)\breve{z}^T(s)]$ is the cross-covariance function between the state $x(k)$ and the degraded signal $\breve{z}(s)$. Clearly, $E[x(k)\breve{z}^T(s)] = E[x(k)\breve{x}^T(s)]\breve{H}^T$.
3 Robust RLS Wiener Fixed-Point
Smoothing and Filtering
Algorithms
In (2), the degraded observation $\breve{y}(k)$ is given as the sum of the degraded quantity $\gamma(k)\breve{z}(k)$ and the observation noise $v(k)$. The Bernoulli random variable $\gamma(k)$ in the observation equation has the probabilities $P\{\gamma(k)=1\}=p(k)$ and $P\{\gamma(k)=0\}=1-p(k)$. Theorem 1 assumes that $p(k)$ is known. The degraded signal $\breve{z}(k)$ in (2) is affected by the uncertain matrices $\Delta\Phi(k)$ and $\Delta H(k)$. The sequence of $\breve{z}(k)$ is fitted to the $N$-th order AR model (3). The AR model corresponds to the state-space model (5) for $\breve{x}(k)$. The model parameters are calculated using the Yule-Walker equation (9). The observation matrix $\breve{H}$ and the system matrix $\breve{\Phi}$ do not use any information about $\Delta\Phi(k)$ and $\Delta H(k)$. $\breve{H}$ and $\breve{\Phi}$ are used in the robust RLS Wiener algorithms for the fixed-point smoothing estimate $\hat{z}(k,L)$ at the fixed point $k$ and the filtering estimate $\hat{z}(k,k)$ of the signal $z(k)$ in Theorem 1.
Based on the linear estimation problem for the state $x(k)$ in Section 2, Theorem 1 proposes the robust RLS Wiener fixed-point smoothing and filtering algorithms.
Theorem 1 Let $\Phi$ and $H$ denote the system and observation matrices, respectively, for the signal $z(k)$ in the state-space model (1). In the state-space model (2), the system and observation matrices
$\breve{\Phi}(k)$ and $\breve{H}(k)$ contain the uncertain matrices $\Delta\Phi(k)$ and $\Delta H(k)$, respectively. $\breve{\Phi}$ and $\breve{H}$ denote the system and observation matrices, respectively, when the degraded signal process in $\breve{y}(k)$ is fitted to the AR model (3) of order $N$. Let $\breve{K}(k,k)$ be the variance of the state $\breve{x}(k)$ for the degraded signal $\breve{z}(k)$ and $K_{x\breve{x}}(k,k)$ the cross-variance function between the state $x(k)$ and the degraded state $\breve{x}(k)$. In the observation equation (2) for $\breve{y}(k)$, the presence of the degraded signal $\breve{z}(k)$ depends on the values of the Bernoulli random variable $\gamma(k)$. Let $R$ be the variance of the white observation noise $v(k)$. Then the robust RLS Wiener algorithms for the fixed-point smoothing estimate $\hat{z}(k,L)$ at the fixed point $k$ and the filtering estimate $\hat{z}(k,k)$ of the signal $z(k)$ consist of (16)-(26) in linear discrete-time stochastic systems with uncertainties.
Fixed-point smoothing estimate of the signal $z(k)$ at the fixed point $k$:
$$\hat{z}(k,L) = H\hat{x}(k,L) \qquad (16)$$
Fixed-point smoothing estimate of the state 󰇛󰇜 at
the fixed point : 󰇛󰇜
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜󰇜
󰇛󰇜 󰇛󰇜
(17)
Smoother gain for 󰇛󰇜 in (17): 󰇛󰇜
󰇛󰇜
󰇟
󰇛󰇜󰇛
󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇠
󰇝󰇛󰇜
󰇟
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇠
󰇞
(18)
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
(19)
Filtering estimate of the signal $z(k)$:
$$\hat{z}(k,k) = H\hat{x}(k,k) \qquad (20)$$
Filtering estimate of the state 󰇛󰇜: 󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜󰇜
󰇛󰇜
(21)
Filter gain for 󰇛󰇜 in (21): 󰇛󰇜
󰇛󰇜󰇟
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇠
󰇝󰇛󰇜
󰇟
󰇛󰇜

󰇛󰇜
󰇠
󰇛󰇜󰇞

󰇛󰇜
󰇛󰇜
󰇛󰇜

󰇛󰇜󰇛󰇜
(22)
Filtering estimate of the degraded state 󰇛󰇜:
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇡󰇛󰇜󰇛󰇜
󰇛󰇜󰇢
󰇛󰇜
(23)
Filter gain for
󰇛󰇜 in (23): 󰇛󰇜
󰇛󰇜󰇟
󰇛󰇜
󰇛󰇜

󰇛󰇜
󰇛󰇜󰇠
󰇝󰇛󰇜
󰇟
󰇛󰇜

󰇛󰇜
󰇠
󰇛󰇜󰇞
(24)
Autovariance function of
󰇛󰇜 : 󰇛󰇜
󰇟
󰇛󰇜
󰇛󰇜󰇠
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
(25)
Cross-variance function between 󰇛󰇜 and
󰇛󰇜: 󰇛󰇜󰇟󰇛󰇜
󰇛󰇜󰇠
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
(26)
Proof of Theorem 1 is deferred to the Appendix.
The conditions for the stability of the fixed-point smoothing and filtering algorithms of Theorem 1 are as follows:
1. All the eigenvalues of the system matrix $\breve{\Phi}$ lie within the unit circle.
2. All the eigenvalues of the closed-loop matrix of the filtering recursion for the degraded state lie within the unit circle.
3. The innovation covariance matrix appearing in the gain expressions (18), (22), and (24) is a positive definite matrix, and its inverse exists.
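Theorem 1's full recursions are given in (16)-(26). As a structural illustration only, the following sketch implements the standard (non-robust) RLS Wiener filtering recursion for a scalar observation; it is not the robust algorithm of Theorem 1, and the quantities passed in (system matrix, observation matrix, stationary state covariance, observation-noise variance) are assumed to be available.

```python
import numpy as np

def rls_wiener_filter(y, Phi, H, K, R):
    """Sketch of the standard RLS Wiener filtering recursion (scalar observation):
    inputs are the system matrix Phi, the observation matrix H, the stationary
    state covariance K = E[x(k)x^T(k)], and the observation-noise variance R."""
    n = Phi.shape[0]
    x_hat = np.zeros((n, 1))          # filtering estimate of the state
    S = np.zeros((n, n))              # variance of the filtering estimate
    z_hat = np.zeros(len(y))
    for k, yk in enumerate(y):
        M = K - Phi @ S @ Phi.T                    # covariance of the one-step prediction error
        innov_var = R + (H @ M @ H.T).item()       # innovation variance
        G = M @ H.T / innov_var                    # filter gain
        x_pred = Phi @ x_hat
        x_hat = x_pred + G * (yk - (H @ x_pred).item())
        S = Phi @ S @ Phi.T + G @ (H @ M)          # update the variance of the estimate
        z_hat[k] = (H @ x_hat).item()              # filtering estimate of the signal
    return z_hat
```

In Theorem 1, two such recursions are coupled: one for the degraded state driven by $\breve{\Phi}$, $\breve{H}$, and $\breve{K}(k,k)$, and one for the nominal state through $K_{x\breve{x}}(k,k)$, with the probability $p(k)$ incorporated; in Theorem 2 only the AR-model quantities of the degraded quantity are used.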
Instead of Theorem 1, Corollary 1 presents
robust RLS Wiener fixed-point smoothing and
filtering algorithms using covariance information.
Corollary 1 Let the autocovariance function $\breve{K}(k,s)$ of the state $\breve{x}(k)$ be given by (6). Let the cross-
covariance function between the state $x(k)$ and the degraded state $\breve{x}(k)$ be given by (10). Let the state-space model for the signal $z(k)$ be given by (1). Let the degraded state and observation equations containing the uncertain matrices $\Delta\Phi(k)$ and $\Delta H(k)$ be given by (2). In the observation equation (2) for $\breve{y}(k)$, the presence of the degraded signal $\breve{z}(k)$ depends on the values of the Bernoulli random variable $\gamma(k)$. Let the variance of the white observation noise $v(k)$ be $R$. Using the covariance information, the robust RLS Wiener algorithms for the fixed-point smoothing estimate $\hat{z}(k,L)$ at the fixed point $k$ and the filtering estimate $\hat{z}(k,k)$ of the signal $z(k)$ consist of (27)-(39) in linear discrete-time stochastic systems with uncertainties.
Fixed-point smoothing estimate of the signal $z(k)$ at the fixed point $k$:
$$\hat{z}(k,L) = H\hat{x}(k,L) \qquad (27)$$
Fixed-point smoothing estimate of the state 󰇛󰇜 at
the fixed point : 󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜󰇜
(28)
Smoother gain for 󰇛󰇜 in (28): 󰇛󰇜
󰇛󰇜
󰇟󰇛󰇜󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇠
󰇟󰇛󰇜
󰇟󰇛󰇜
󰇛󰇜󰇛󰇜󰇠󰇛󰇜
󰇛󰇜󰇠
(29)
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜󰇟
󰇛󰇜
󰇛󰇜󰇛󰇜󰇠
󰇛󰇜󰇛󰇜󰇛󰇜
(30)
Filtering estimate of $z(k)$:
$$\hat{z}(k,k) = H\hat{x}(k,k) \qquad (31)$$
Filtering estimate of 󰇛󰇜: 󰇛󰇜
󰇛󰇜󰇛󰇜󰇛󰇜
(32)
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜󰇜
󰇛󰇜
(33)
󰇛󰇜󰇟󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇠
󰇝󰇛󰇜
󰇟󰇛󰇜
󰇛󰇜󰇠󰇛󰇜
󰇛󰇜󰇞
(34)
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇜
󰇛󰇛󰇜
󰇛󰇜󰇛󰇜󰇜
󰇛󰇜
(35)
Filtering estimate of 󰇛󰇜:
󰇛󰇜
󰇛󰇜󰇛󰇜󰇛󰇜
(36)
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜󰇜
󰇛󰇜
(37)
󰇛󰇜󰇟󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇠
󰇝󰇛󰇜
󰇟󰇛󰇜
󰇛󰇜󰇠󰇛󰇜
󰇛󰇜󰇞
(38)
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇛󰇜
󰇛󰇜󰇛󰇜󰇜
󰇛󰇜
(39)
Proof
(28) is obtained from (6), (A-30), and (A-48). (29) is
obtained from (6), (10), (A-16), (A-41), and (A-42).
(30) is obtained from (A-42). From (10), (A-9), (A-
13), and (A-40), 󰇛󰇜 is obtained. From (A-27)
and (A-28), (32) is obtained. From (6), (A-30), and
(A-31) we get (33). The initial condition 󰇛󰇜 is
clear from (A-28). From (6), (10), and (A-18) we
get (34). From (6) and (A-17) we get (35). From (A-
13) the initial condition 󰇛󰇜 is clear. From (6)
and (A-32) we get (36). From (6), (A-32), and (A-
33) we get (37). From (6), (A-19), and (A-23), (38)
is obtained. (A-21) is equivalent to (39). The initial
condition 󰇛󰇜 is clear from (A-16).
(Q.E.D.)
Note that the robust fixed-point smoother and the filter in Theorem 1 use information about the existence probability $p(k)$ of $\gamma(k)$ and the degraded signal $\breve{z}(k)$. Suppose that the degraded quantity $\bar{z}(k)$ is defined as the multiplication of the Bernoulli random variable $\gamma(k)$ by the degraded signal $\breve{z}(k)$. The observation equation for $\breve{y}(k)$ and the state equation in (2) are rewritten as
$$\breve{y}(k) = \bar{z}(k) + v(k), \quad \bar{z}(k) = \gamma(k)\breve{z}(k),$$
$$\breve{z}(k) = \breve{H}(k)\breve{x}(k), \quad \breve{H}(k) = H + \Delta H(k),$$
$$\breve{x}(k+1) = \breve{\Phi}(k)\breve{x}(k) + \Gamma w(k), \quad \breve{\Phi}(k) = \Phi + \Delta\Phi(k),$$
$$E[v(k)v^T(s)] = R\,\delta_K(k-s), \quad E[w(k)w^T(s)] = Q\,\delta_K(k-s), \qquad (40)$$
in linear discrete-time stochastic systems with uncertainties. Assume that the sequence of the degraded signal $\bar{z}(k)$ is fitted to the $N$-th order AR model.

$$\bar{z}(k) = -\bar{a}_1\bar{z}(k-1) - \bar{a}_2\bar{z}(k-2) - \cdots - \bar{a}_N\bar{z}(k-N) + \bar{e}(k),$$
$$E[\bar{e}(k)\bar{e}^T(s)] = \bar{Q}\,\delta_K(k-s). \qquad (41)$$
Suppose that $\bar{x}(k)$ is represented by

$$\bar{x}(k) = \begin{bmatrix} \bar{x}_1(k) \\ \bar{x}_2(k) \\ \vdots \\ \bar{x}_N(k) \end{bmatrix}, \quad \bar{x}_1(k) = \bar{z}(k),\ \bar{x}_2(k) = \bar{z}(k+1),\ \ldots,\ \bar{x}_N(k) = \bar{z}(k+N-1),$$
$$\bar{z}(k) = \bar{H}\bar{x}(k), \quad \bar{H} = [\,1\ 0\ \cdots\ 0\,]. \qquad (42)$$
From (41) and (42), the state equation for the degraded state $\bar{x}(k)$ becomes

$$\begin{bmatrix} \bar{x}_1(k+1) \\ \bar{x}_2(k+1) \\ \vdots \\ \bar{x}_{N-1}(k+1) \\ \bar{x}_N(k+1) \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -\bar{a}_N & -\bar{a}_{N-1} & -\bar{a}_{N-2} & \cdots & -\bar{a}_1 \end{bmatrix} \begin{bmatrix} \bar{x}_1(k) \\ \bar{x}_2(k) \\ \vdots \\ \bar{x}_{N-1}(k) \\ \bar{x}_N(k) \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \bar{e}(k+N). \qquad (43)$$
The relation $\bar{K}(k,s) = \bar{K}(k-s)$ holds for the autocovariance function of the state $\bar{x}(k)$ in wide-sense stationary stochastic systems, [26]. Let $\bar{K}(k,s)$ be expressed in the following semi-degenerate functional form:

$$\bar{K}(k,s) = \begin{cases} \bar{A}(k)\bar{B}^T(s), & 0 \le s \le k, \\ \bar{B}(k)\bar{A}^T(s), & 0 \le k \le s, \end{cases}$$
$$\bar{A}(k) = \bar{\Phi}^k, \quad \bar{B}^T(s) = \bar{\Phi}^{-s}\bar{K}(s,s). \qquad (44)$$
Here, $\bar{\Phi}$ is the state transition matrix for the state $\bar{x}(k)$. The system matrix $\bar{\Phi}$ in the state equation (43) is given by

$$\bar{\Phi} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -\bar{a}_N & -\bar{a}_{N-1} & -\bar{a}_{N-2} & \cdots & -\bar{a}_1 \end{bmatrix}. \qquad (45)$$
By letting $\bar{K}_{\bar{z}}(m) = E[\bar{z}(k)\bar{z}^T(k-m)]$, the autovariance function $\bar{K}(k,k)$ of the state $\bar{x}(k)$ is given by

$$\bar{K}(k,k) = E[\bar{x}(k)\bar{x}^T(k)] = \begin{bmatrix} \bar{K}_{\bar{z}}(0) & \bar{K}_{\bar{z}}(1) & \cdots & \bar{K}_{\bar{z}}(N-1) \\ \bar{K}_{\bar{z}}(1) & \bar{K}_{\bar{z}}(0) & \cdots & \bar{K}_{\bar{z}}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ \bar{K}_{\bar{z}}(N-1) & \bar{K}_{\bar{z}}(N-2) & \cdots & \bar{K}_{\bar{z}}(0) \end{bmatrix}. \qquad (46)$$
Using $\bar{K}_{\bar{z}}(m)$, the Yule-Walker equations for the AR parameters $\bar{a}_1, \bar{a}_2, \ldots, \bar{a}_N$ are given by

$$\begin{bmatrix} \bar{K}_{\bar{z}}(0) & \bar{K}_{\bar{z}}(1) & \cdots & \bar{K}_{\bar{z}}(N-1) \\ \bar{K}_{\bar{z}}(1) & \bar{K}_{\bar{z}}(0) & \cdots & \bar{K}_{\bar{z}}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ \bar{K}_{\bar{z}}(N-1) & \bar{K}_{\bar{z}}(N-2) & \cdots & \bar{K}_{\bar{z}}(0) \end{bmatrix} \begin{bmatrix} \bar{a}_1 \\ \bar{a}_2 \\ \vdots \\ \bar{a}_N \end{bmatrix} = - \begin{bmatrix} \bar{K}_{\bar{z}}(1) \\ \bar{K}_{\bar{z}}(2) \\ \vdots \\ \bar{K}_{\bar{z}}(N) \end{bmatrix}. \qquad (47)$$
Here, $\bar{K}_{\bar{z}}(m) = E[\bar{z}(k)\bar{z}^T(k-m)]$, $0 \le m \le N$, denotes the autocovariance function of the degraded quantity $\bar{z}(k)$.
Let $K_{x\bar{x}}(k,s)$ represent the cross-covariance function between the state $x(k)$ and the degraded state $\bar{x}(s)$ in wide-sense stationary stochastic systems. Assume that $K_{x\bar{x}}(k,s)$ has the following functional form:

$$K_{x\bar{x}}(k,s) = \alpha(k)\bar{\beta}^T(s), \quad 0 \le s \le k,$$
$$\alpha(k) = \Phi^k, \quad \bar{\beta}^T(s) = \Phi^{-s}K_{x\bar{x}}(s,s). \qquad (48)$$

Here, $\Phi$ is the system matrix for the state $x(k)$.
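Because the AR model (41) is fitted to the degraded quantity $\bar{z}(k)=\gamma(k)\breve{z}(k)$ itself, the existence probability of the degraded signal is not needed at this stage. The following sketch fits such a model to a synthetic record; the AR(2) generator and the Bernoulli rate used to create the record are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical record of the degraded signal (an AR(2) sequence here)
n = 5000
z_breve = np.zeros(n)
for k in range(2, n):
    z_breve[k] = 1.4 * z_breve[k - 1] - 0.6 * z_breve[k - 2] + rng.standard_normal()

gamma = rng.binomial(1, 0.8, size=n)   # Bernoulli presence indicator; its probability
z_bar = gamma * z_breve                # is never passed to the estimator in this method

# fit the AR model (41) of order N to z_bar(k) via the Yule-Walker equations (47)
N = 10
z0 = z_bar - z_bar.mean()
cov = np.array([np.dot(z0[: n - m], z0[m:]) / n for m in range(N + 1)])
R_toep = np.array([[cov[abs(i - j)] for j in range(N)] for i in range(N)])
a_bar = np.linalg.solve(R_toep, -cov[1 : N + 1])   # AR coefficients for z_bar(k)
```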
Theorem 2 Let the state-space model containing the uncertain matrices $\Delta\Phi(k)$ and $\Delta H(k)$ be given by (40). Let $\Phi$ and $H$ be the system and observation matrices for the signal process in (1) for $z(k)$, respectively. When the sequence of the degraded signal $\bar{z}(k)$ is fitted to the AR model (41) of order $N$ and represented in the state-space model, $\bar{\Phi}$ and $\bar{H}$ stand for the system matrix and the observation matrix, respectively. Let the variance $\bar{K}(k,k)$ of the state $\bar{x}(k)$ for the degraded signal $\bar{z}(k)$ and the cross-variance function $K_{x\bar{x}}(k,k)$ between the state $x(k)$ for the signal $z(k)$ and the state $\bar{x}(k)$ for the degraded signal $\bar{z}(k)$ be given. Let the variance of the white observation noise $v(k)$ be $R$. Then, the robust RLS Wiener algorithms for the fixed-point smoothing estimate $\hat{z}(k,L)$ at the fixed point $k$ and the filtering estimate $\hat{z}(k,k)$ of the signal $z(k)$ consist of (49)-(59) in linear discrete-time stochastic systems with uncertainties.
Fixed-point smoothing estimate of the signal $z(k)$ at the fixed point $k$:
$$\hat{z}(k,L) = H\hat{x}(k,L) \qquad (49)$$
Fixed-point smoothing estimate of the state 󰇛󰇜 at
the fixed point : 󰇛󰇜
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇛󰇜

󰇛󰇜󰇜
󰇛󰇜 󰇛󰇜
(50)
Smoother gain for 󰇛󰇜 in (50): 󰇛󰇜
󰇛󰇜󰇟
󰇛󰇜󰇛
󰇜
󰇛󰇜
󰇠
󰇝
󰇟
󰇛󰇜
󰇛󰇜
󰇠
󰇞
(51)
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇣
󰇛󰇜
󰇛󰇜
󰇤
󰇛󰇜󰇛󰇜
(52)
Filtering estimate of the signal $z(k)$:
$$\hat{z}(k,k) = H\hat{x}(k,k) \qquad (53)$$
Filtering estimate of the state 󰇛󰇜: 󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
(54)
Filter gain for 󰇛󰇜 in (54): 󰇛󰇜
󰇛󰇜󰇟
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇠
󰇝
󰇟
󰇛󰇜
󰇛󰇜
󰇠
󰇞

󰇛󰇜
󰇛󰇜
󰇛󰇜
(55)
$K_{x\bar{z}}(k,k)$ represents the cross-variance function between $x(k)$ and the degraded signal $\bar{z}(k)$.
Filtering estimate of
󰇛󰇜:
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
(56)
Filter gain for
󰇛󰇜 in (56): 󰇛󰇜
󰇛󰇜󰇟
󰇛󰇜
󰇛󰇜
󰇠
󰇝
󰇟
󰇛󰇜
󰇛󰇜
󰇠
󰇞
(57)
Autovariance function of
󰇛󰇜 : 󰇛󰇜
󰇟
󰇛󰇜
󰇛󰇜󰇠
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇟
󰇛󰇜
󰇛󰇜
󰇠
󰇛󰇜
(58)
Cross-variance function of 󰇛󰇜 with
󰇛󰇜:
󰇛󰇜󰇟󰇛󰇜
󰇛󰇜󰇠
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇟
󰇛󰇜
󰇛󰇜
󰇠
󰇛󰇜
(59)
See, [19], for the proof of Theorem 2.
The conditions for the stability of the fixed-point smoothing and filtering algorithms of Theorem 2 are as follows:
1. All the eigenvalues of the system matrix $\bar{\Phi}$ lie within the unit circle.
2. All the eigenvalues of the closed-loop matrix of the filtering recursion for the degraded state are inside the unit circle.
3. The innovation covariance matrix appearing in the gain expressions (51), (55), and (57) is a positive definite matrix, and its inverse exists.
4 Filtering Error Variance Function
of Signal in Theorem 1
This section presents the filtering error variance function $\tilde{P}(k)$ for the filtering estimate $\hat{z}(k,k)$ in Theorem 1. Let the autocovariance function $K(k,s)$ of the state $x(k)$ be expressed by

$$K(k,s) = \begin{cases} A(k)B^T(s), & 0 \le s \le k, \\ B(k)A^T(s), & 0 \le k \le s, \end{cases} \quad A(k) = \Phi^k, \quad B^T(s) = \Phi^{-s}K(s,s). \qquad (60)$$
The filtering error variance function for the filtering estimate $\hat{z}(k,k)$ is given by

$$\tilde{P}(k) = E[(z(k)-\hat{z}(k,k))(z(k)-\hat{z}(k,k))^T]. \qquad (61)$$
From (10) and (A-27), and introducing a function,
󰇛󰇜
 󰇛󰇜󰇛󰇜
(62)
(61) is rewritten as
󰇛󰇜
󰇛󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇜󰇜
(63)
Subtracting 󰇛󰇜 from 󰇛󰇜, using (A-11) and
introducing a function
󰇛󰇜
 󰇛󰇜󰇛󰇜
(64)
we have
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜󰇜
󰇛󰇜
(65)
Subtracting 󰇛󰇜 from 󰇛󰇜 and using (A-5),
we have
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜󰇜
(66)
󰇛󰇜
Therefore, the filtering error variance function
󰇛󰇜
is calculated by (63) with (34), (35), (38), (39), (65),
and (66) recursively.
Since $\tilde{P}(k)$ is a positive semi-definite function, the filtering variance function $E[\hat{z}(k,k)\hat{z}^T(k,k)]$ is upper bounded by $HK(k,k)H^T$ and lower bounded by the zero matrix as follows:

$$0 \le E[\hat{z}(k,k)\hat{z}^T(k,k)] \le HK(k,k)H^T. \qquad (67)$$
This shows the existence of a robust filtering estimate $\hat{z}(k,k)$ of the signal $z(k)$.
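As a quick numerical sanity check of the bound (67), the sample second moment of the filtering estimate can be compared with $HK(k,k)H^T$; the array and matrix values below are hypothetical placeholders for quantities produced by a simulation.

```python
import numpy as np

# hypothetical placeholders: z_filt is a record of filtering estimates z_hat(k, k),
# K_xx is the stationary state covariance K(k,k), and H is the observation matrix of (1)
H = np.array([[1.0, 0.0]])
K_xx = np.array([[2.0, 1.2], [1.2, 2.5]])          # assumed stationary covariance
z_filt = np.sqrt(1.5) * np.random.default_rng(3).standard_normal(2000)

second_moment = np.mean(z_filt ** 2)               # sample E[z_hat(k,k)^2]
upper_bound = (H @ K_xx @ H.T).item()              # H K(k,k) H^T from (67)
print(0.0 <= second_moment <= upper_bound)         # the bound (67) should hold
```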
5 A Numerical Simulation Example
Suppose that the scalar observation and state
equations for the state 󰇛󰇜 are given by
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇜
󰇟 󰇠󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜󰇛󰇜

󰇣
󰇤
󰇟󰇛󰇜󰇛󰇜󰇠󰇛󰇜
󰇟󰇛󰇜󰇛󰇜󰇠󰇛󰇜
(68)
In (68), the signal process for $z(k)$ is represented by a second-order AR model. Suppose that the state-space model containing the uncertain matrices $\Delta\Phi(k)$ and $\Delta H(k)$ is given by
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇜
󰇛󰇜
󰆽󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇜
󰇛󰇜
󰆽󰇛󰇜󰇛󰇜󰇟󰇛󰇜󰇠
󰇛󰇜󰇟󰇛󰇜󰇠󰇛󰇜󰇛󰇜
󰇛󰇜
󰆽󰇛󰇜󰇛󰇜󰇛󰇜
󰆽󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇜
󰇝󰇛󰇜󰇞
(69)
in linear discrete-time stochastic systems. The observed value $\breve{y}(k)$ is given by the sum of the degraded quantity $\gamma(k)\breve{z}(k)$ and the observation noise $v(k)$. It should be noted that the matrices $\Delta\Phi(k)$ and $\Delta H(k)$ are uncertain. $\zeta(k)$ in (69) denotes the random variable generated by the "rand" command in MATLAB or Octave. Let the
probability that $\gamma(k)=1$ be $p(k)$. $\Delta\Phi(k)$, $\Delta H(k)$, and $\zeta(k)$ consist of the mean values and zero-mean stochastic variables, respectively. The task is to recursively estimate the signal $z(k)$ from the observed value $\breve{y}(k)$. Suppose that $\breve{z}(k)$ is fitted to the $N$-th order AR model:

$$\breve{z}(k) = -a_1\breve{z}(k-1) - a_2\breve{z}(k-2) - \cdots - a_N\breve{z}(k-N) + \breve{e}(k),$$
$$E[\breve{e}(k)\breve{e}^T(s)] = \breve{Q}\,\delta_K(k-s). \qquad (70)$$
From (4), for the scalar observation equation in (69), $\breve{H}$ is given by

$$\breve{z}(k) = \breve{H}\breve{x}(k), \quad \breve{H} = [\,1\ 0\ \cdots\ 0\,]:\ 1 \times N\ \text{vector}. \qquad (71)$$
The state equation for $\breve{x}(k)$ is given by (5). In this example, this equation corresponds to the case of the AR model order $N$ used in the simulation. The autocovariance function $\breve{K}(k,s)$ of the state $\breve{x}(k)$ is expressed in the form of the semi-degenerate function in (6) and has the property $\breve{K}(k,s)=\breve{K}(k-s)$ in wide-sense stationary stochastic systems. In (6), $\breve{\Phi}$ is the system matrix for the state $\breve{x}(k)$. $\breve{\Phi}$ is given by (7).
$\breve{K}(k,k)$ of the state $\breve{x}(k)$ is described as follows:

$$\breve{K}(k,k) = \begin{bmatrix} \breve{K}_{\breve{z}}(0) & \breve{K}_{\breve{z}}(1) & \cdots & \breve{K}_{\breve{z}}(N-1) \\ \breve{K}_{\breve{z}}(1) & \breve{K}_{\breve{z}}(0) & \cdots & \breve{K}_{\breve{z}}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ \breve{K}_{\breve{z}}(N-1) & \breve{K}_{\breve{z}}(N-2) & \cdots & \breve{K}_{\breve{z}}(0) \end{bmatrix}, \quad \breve{K}_{\breve{z}}(m) = E[\breve{z}(k)\breve{z}(k-m)]. \qquad (72)$$
Suppose that $K_{z\breve{z}}(k,k) = E[z(k)\breve{z}(k)]$ represents the cross-covariance function between the signal $z(k)$ and the degraded signal $\breve{z}(k)$. From (4) and (68), the cross-covariance function $K_{x\breve{x}}(k,k)$ is given by

$$K_{x\breve{x}}(k,k) = E[x(k)\breve{x}^T(k)] = \begin{bmatrix} E[z(k)\breve{z}(k)] & \cdots & E[z(k)\breve{z}(k+N-1)] \\ E[z(k+1)\breve{z}(k)] & \cdots & E[z(k+1)\breve{z}(k+N-1)] \end{bmatrix}. \qquad (73)$$
The Yule-Walker equations (9) calculate the AR parameters $a_i$, $1 \le i \le N$, in (70). Substituting $\Phi$, $H$, $\breve{\Phi}$, $\breve{H}$, $K(k,k)$, $\breve{K}(k,k)$, and $K_{x\breve{x}}(k,k)$ into the robust RLS Wiener estimation algorithms of Theorem 1, the fixed-point smoothing and filtering estimates are recursively computed. In evaluating $\breve{\Phi}$ in (7), $\breve{K}(k,k)$ in (72), and $K_{x\breve{x}}(k,k)$ in (73), 2,000 data sets of the signal and the degraded signal are used. The computation of $K_{x\breve{z}}(k,k)$ in (22) uses 2,000 data sets of the signal and the observation. Figure 1 illustrates the
fixed-point smoothing estimate 󰇛󰇜 and the
filtering estimate 󰇛󰇜 of the signal 󰇛󰇜 by
Theorem 1 vs. for the white Gaussian observation
noise 󰇛󰇜 in the case of the AR model order
. Figure 2 illustrates the mean square values
of the filtering errors 󰇛󰇜󰇛󰇜 and the fixed-
point smoothing errors 󰇛󰇜󰇛󰇜 vs.
, , by Theorem 1 for the white
Gaussian observation noises 󰇛󰇜, 󰇛󰇜,
󰇛󰇜, and 󰇛󰇜 in the case of the AR
model order  Figure 3 illustrates the fixed-
point smoothing estimate 󰇛󰇜 and the
filtering estimate 󰇛󰇜 of the signal 󰇛󰇜 by
Theorem 2, [19], vs. for the white Gaussian
observation noise 󰇛󰇜 in the case of the AR
model order . Figure 4 illustrates the mean
square values of the filtering errors 󰇛󰇜󰇛󰇜
and the fixed-point smoothing errors 󰇛󰇜
󰇛󰇜 vs. , , by Theorem
2 for the white Gaussian observation noises
󰇛󰇜, 󰇛󰇜, 󰇛󰇜, and 󰇛󰇜
in the case of the AR model order  As
shown in Figure 2 and Figure 4, the estimation
accuracies of the RLS Wiener filter and fixed-point
smoother of Theorem 2 are superior to those of
Theorem 1 for each observation noise. In Figure 4,
the MSV decreases as  increases. This shows
the smoothing effect of the RLS Wiener fixed-point
smoother by Theorem 2. Figure 2 shows the
smoothing effect by Theorem 1 only for the
observation noise 󰇛󰇜. In Figure 2 and Figure
4, the MSVs of the fixed-point smoothing and
filtering errors are evaluated by
$\sum_{k=1}^{2000}\big(z(k)-\hat{z}(k,k+\mathrm{Lag})\big)^2/2000$ and $\sum_{k=1}^{2000}\big(z(k)-\hat{z}(k,k)\big)^2/2000$, respectively.
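The MSVs reported in Figure 2 and Figure 4 can be computed as in the sketch below; the array names and the record length of 2,000 samples are assumptions based on the simulation description above.

```python
import numpy as np

def msv(z_true, z_est):
    """Mean square value of the estimation errors z(k) - z_hat(k, .) over the record."""
    z_true = np.asarray(z_true, dtype=float)
    z_est = np.asarray(z_est, dtype=float)
    return np.mean((z_true - z_est) ** 2)

# hypothetical usage: z_sig, z_filt, z_smooth are length-2000 arrays from the simulation
# msv_filter = msv(z_sig, z_filt)       # filtering errors  z(k) - z_hat(k, k)
# msv_smoother = msv(z_sig, z_smooth)   # smoothing errors  z(k) - z_hat(k, k + Lag)
```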
Fig. 1: Fixed-point smoothing estimate 󰇛󰇜
and filtering estimate 󰇛󰇜 of the signal 󰇛󰇜 by
Theorem 1 vs. for the white Gaussian observation
noise 󰇛󰇜 in the case of the AR model order
.
Fig. 2: MSVs of the filtering errors 󰇛󰇜󰇛󰇜
and the fixed-point smoothing errors 󰇛󰇜
󰇛󰇜 vs. , , by Theorem
1 for the white Gaussian observation noises
󰇛󰇜, 󰇛󰇜, 󰇛󰇜, and 󰇛󰇜
in the case of the AR model order 
Fig. 3: Fixed-point smoothing estimate 󰇛󰇜
and filtering estimate 󰇛󰇜 of the signal 󰇛󰇜 by
Theorem 2, [19], vs. for the white Gaussian
observation noise 󰇛󰇜 in the case of the AR
model order .
Fig. 4: MSVs of the filtering errors 󰇛󰇜󰇛󰇜
and the fixed-point smoothing errors 󰇛󰇜
󰇛󰇜 vs. , , by Theorem
2, [19], for the white Gaussian observation noises
󰇛󰇜, 󰇛󰇜, 󰇛󰇜 and 󰇛󰇜 in
the case of the AR model order 
6 Conclusion
Theorem 1 proposed the robust RLS Wiener fixed-
point smoother and filter for missing measurements
in linear discrete-time stochastic systems with
uncertainties. In (2), the degraded observation $\breve{y}(k)$ is given as the sum of the degraded quantity $\gamma(k)\breve{z}(k)$ and the observation noise $v(k)$. The Bernoulli random variable $\gamma(k)$ in the observation equation has the probabilities $P\{\gamma(k)=1\}=p(k)$ and $P\{\gamma(k)=0\}=1-p(k)$. Theorem 1 assumes
that $p(k)$ is known. The degraded signal $\breve{z}(k)$ in (2) is affected by the uncertain matrices $\Delta\Phi(k)$ and $\Delta H(k)$. The sequence of $\breve{z}(k)$ is fitted to the $N$-th order AR model (3). The AR model corresponds to the state-space model (5) for $\breve{x}(k)$. The model parameters are calculated using the Yule-Walker equation (9). The observation matrix $\breve{H}$ and the system matrix $\breve{\Phi}$ do not use any information about $\Delta\Phi(k)$ and $\Delta H(k)$. $\breve{H}$ and $\breve{\Phi}$ are used in the robust RLS Wiener algorithms for the fixed-point smoothing estimate $\hat{z}(k,L)$ at the fixed point $k$ and the filtering estimate $\hat{z}(k,k)$ of the signal $z(k)$ in Theorem 1. The design feature of the proposed robust estimators is to fit the degraded signal to a finite-order AR model. Theorem 1 is transformed into Corollary 1, which expresses the covariance information in the semi-degenerate kernel form.
Second, Theorem 2 showed the robust RLS Wiener fixed-point smoother and filter, [19]. The robust estimation algorithm of Theorem 2 has the advantage that, unlike Theorem 1 and conventional studies, it does not use information on the existence probability $p(k)$ of the degraded signal.
As shown in Figure 2 and Figure 4, the estimation accuracies of the RLS Wiener filter and fixed-point smoother of Theorem 2 are superior to those of Theorem 1 for each observation noise. As shown in Figure 4, the MSV decreases as the Lag increases. This shows the smoothing effect of the RLS Wiener fixed-point smoother by Theorem 2.
Extending this study to robust fusion estimation problems with missing measurements in multisensor network systems is future work. The current study is based on the least-squares estimation method. Neural network-aided Kalman filters are also known. A combination of the current study with neural networks is likewise left for future work.
APPENDIX
Proof of Theorem 1
The impulse response function 󰇛󰇜 satisfies
(15). Subtracting 󰇛󰇜 from 󰇛󰇜,
we have
󰇛󰇛󰇜󰇛󰇜󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛

 󰇛󰇜󰇛󰇜󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
(A-1)
Introducing
󰇛󰇜

󰇛󰇜
󰇛󰇜
 󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
(A-2)
we obtain
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
(A-3)
Subtracting 󰇛󰇜 from 󰇛󰇜, we get
󰇛󰇛󰇜󰇛󰇜󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛

 󰇛󰇜󰇛󰇜󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
(A-4)
From (A-2) and (A-4), we obtain
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
(A-5)
The filtering estimate is given by
󰇛󰇜
 󰇛󰇜󰇛󰇜
(A-6)
From (15), the impulse response function 󰇛󰇜
satisfies
󰇛󰇜

󰇛󰇜
󰇛󰇜
 󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
(A-7)
Introducing
󰇛󰇜

󰇛󰇜
󰇛󰇜
 󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
(A-8)
we obtain
󰇛󰇜󰇛󰇜
(A-9)
Subtracting 󰇛󰇜 from 󰇛󰇜, we have
󰇛󰇛󰇜󰇛󰇜󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛

 󰇛󰇜󰇛󰇜󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
(A-10)
From (A-2) and (A-10), we obtain
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
(A-11)
From (A-8), 󰇛󰇜 satisfies
󰇛󰇜

󰇛󰇜
󰇛󰇜
 󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
(A-12)
Using (6) and introducing
󰇛󰇜
 󰇛󰇜󰇛󰇜
󰇛󰇜
(A-13)
we obtain
󰇛󰇜

󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
(A-14)
Subtracting 󰇛󰇜 from 󰇛󰇜, we have
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛

 󰇛󰇜󰇛󰇜󰇜󰇛󰇜
󰇛󰇜
(A-15)
Introducing
󰇛󰇜
 󰇛󰇜󰇛󰇜
󰇛󰇜
(A-16)
from (A-11), we obtain
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
(A-17)
Substituting (A-17) into (A-14), we have
󰇛󰇜󰇟
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇠
󰇟󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛
󰇜󰇛
󰇜
󰇛󰇜󰇠
(A-18)
Introducing a function
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇜
󰇛󰇜
(A-19)
we rewrite (A-18) as
󰇛󰇜󰇟
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇠
󰇟󰇛󰇜
󰇛
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇠
(A-20)
Subtracting 󰇛󰇜 from 󰇛󰇜 and using (A-5),
we obtain
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛

 󰇛󰇜󰇛󰇜󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜󰇛󰇜
󰇛󰇜
(A-21)
From (A-2), 󰇛󰇜 satisfies
󰇛󰇜

󰇛󰇜
󰇛󰇜
 󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
From (6) and (A-16), it follows that
󰇛󰇜

󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
(A-22)
Substituting (A-21) into (A-22), we obtain an
expression for 󰇛󰇜 as
󰇛󰇜󰇟󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇠󰇟
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇠
(A-23)
From (A-19) and (A-21), it follows that
󰇛󰇜󰇛󰇜󰇟󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜󰇛󰇜󰇠󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
(A-24)
Let us introduce a function
󰇛󰇜󰇛󰇜󰇛󰇜
(A-25)
From (A-23) and (A-25), it follows that
󰇛󰇜󰇟
󰇛󰇜
󰇛󰇜

󰇛󰇜
󰇛󰇜
󰇛󰇜󰇠
󰇟󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇠
(A-26)
Now, from (A-6) and (A-9), the filtering estimate
󰇛󰇜 of 󰇛󰇜 is given by
󰇛󰇜
 󰇛󰇜󰇛󰇜
(A-27)
Introducing a function
󰇛󰇜
 󰇛󰇜󰇛󰇜
(A-28)
the filtering estimate is expressed as
󰇛󰇜󰇛󰇜
(A-29)
Subtracting 󰇛󰇜 from 󰇛󰇜, using (A-5) and
(A-11), and introducing a function
󰇛󰇜
 󰇛󰇜󰇛󰇜
(A-30)
we obtain
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇛󰇜
󰇛󰇜

 󰇛󰇜󰇛󰇜󰇜
󰇛󰇜󰇛󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜
(A-31)
Let us introduce a function
󰇛󰇜
󰇛󰇜
(A-32)
which represents the filtering estimate of 󰇛󰇜.
Subtracting 󰇛󰇜 from 󰇛󰇜 and using (A-5),
we obtain
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜󰇜
󰇛󰇜
(A-33)
Substituting (A-31) into (A-29), we have
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇛󰇜󰇛󰇜
󰇛󰇜󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇛󰇜󰇛󰇜
󰇛󰇜󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
(A-34)
From (A-18), and by introducing a function
󰇛󰇜󰇛󰇜󰇛
󰇜
(A-35)
󰇛󰇜 is expressed as
󰇛󰇜󰇟
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇠
󰇟󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇠
(A-36)
From (A-32) and (A-33), it follows that
󰇛󰇜
󰇛󰇜

󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜󰇜
󰇛󰇜󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜󰇜
󰇛󰇜
(A-37)
From (A-17) and (A-35), it follows that
󰇛󰇜󰇛󰇜󰇛
󰇜
󰇛󰇜󰇛󰇜󰇛
󰇛󰇜

󰇛󰇜󰇜󰇛
󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛
󰇛󰇜

󰇛󰇜
󰇜
󰇛󰇜
(A-38)
From (15), 󰇛󰇜 satisfies
󰇛󰇜
󰇛󰇜
󰇛󰇜
 󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜

󰇛󰇜󰇛
󰇜
󰇛󰇜
 󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
(A-39)
Introducing a function
󰇛󰇜
 󰇛󰇜󰇛󰇜
󰇛󰇜
(A-40)
we have an expression for 󰇛󰇜 as
󰇛󰇜

󰇛󰇜󰇛
󰇜
󰇛󰇜
󰇛󰇜󰇛
󰇜
󰇛󰇜
(A-41)
Subtracting 󰇛󰇜 from 󰇛󰇜, from (A-3)
and (A-16), we have
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛

 󰇛󰇜󰇛󰇜󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜

 󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇜
󰇛󰇜
(A-42)
Let us introduce a function
󰇛󰇜󰇛󰇜󰇛
󰇜
(A-43)
From (A-19), (A-42), and (A-43), it follows that
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
󰇜
(A-44)
From (A-41), it is clear that
󰇛󰇜

󰇛󰇜󰇛
󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜
(A-45)
Substituting (A-44) into (A-45), we have
󰇛󰇜
󰇟
󰇛󰇜󰇛
󰇜
󰇛󰇜
󰇛󰇜
󰇛󰇜󰇠
󰇟󰇛󰇜
󰇛
󰇛󰇜

󰇛󰇜
󰇜
󰇛󰇜󰇠
(A-46)
From (A-9), (A-13), (A-35), (A-40), and (A-43), the
initial condition 󰇛󰇜 in (A-44) for 󰇛󰇜 at
is given by
󰇛󰇜󰇛󰇜󰇛
󰇜
 󰇛󰇜󰇛󰇜
󰇛󰇜󰇛
󰇜
 󰇛󰇜󰇛󰇜
󰇛󰇜󰇛
󰇜
󰇛󰇜󰇛
󰇜
󰇛󰇜
(A-47)
The fixed-point smoothing estimate is given by (11).
Subtracting 󰇛󰇜 from 󰇛󰇜 and using (A-
3), (A-30), and (A-32), we have
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇛󰇜
󰇛󰇜

 󰇛󰇜󰇛󰇜󰇜
󰇛󰇜󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜󰇜
󰇛󰇜󰇛󰇜󰇛󰇛󰇜
󰇛󰇜
󰇛󰇜󰇜
󰇛󰇜 󰇛󰇜
(A-48)
(Q.E.D.)
References:
[1] D. Lou, L. Liu, S. Fang, J. Hu, D. Zhang, H.
Liang, An adaptive unscented Kalman filter
for needle steering with missing
measurements, 2023 IEEE International
Conference on Advanced Robotics and
Mechatronics, ICARM 2023, 2023, pp.1089-
1095.
[2] C. Ran, Z. Deng, Robust fusion Kalman
estimators for networked mixed uncertain
systems with random one-step measurement
delays, missing measurements, multiplicative
noises and uncertain noise variances, Inf. Sci.,
Vol.534, 2020, pp. 27-52.
[3] G. Tao, W. Liu, X. Zhang, Robust centralized
fusion Kalman predictor for uncertain
descriptor system with missing measurements,
16th IEEE International Conference on
Control & Automation, ICCA 2020, pp. 253-
259.
[4] K. Ma, L. Xu, H. Fan, Hybrid Kalman
filtering algorithm with stochastic
nonlinearities and multiple missing
measurements, IEEE Access, Vol.7, 2019, pp.
84717-84726.
[5] Y. Xu, T. Shen, X. Chen, L. Bu, N. Feng,
Predictive adaptive Kalman filter and Its
application to INS/UWB-integrated human
localization with missing UWB-based
measurements, Int. J. Autom. Comput., 2019,
Vol.16, No.5, pp. 604-613.
[6] Y. Zhao, C. Yang, Information fusion robust
guaranteed cost Kalman estimators with
uncertain noise variances and missing
measurements, Int. J. Syst. Sci., 2019, Vol. 50,
No.15, pp. 2853-2869.
[7] W. Liu, X. Wang, Z. Deng, Robust
centralized and weighted measurement fusion
Kalman predictors with multiplicative noises,
uncertain noise variances, and missing
measurements, Circuits Syst. Signal Process.,
2018, Vol.37, No.2, pp. 770-809.
[8] Y. Sun, Y. Wang, X. Wu, Y. Hu, Robust
extended fractional Kalman filter for
nonlinear fractional system with missing
measurements, J. Frankl. Inst., 2018, Vol.355,
No.1, pp. 361-380.
[9] Z. Deng, Z. Yang, Robust weighted fusion
Kalman estimators for systems with
uncertain-variance multiplicative and additive
noises and missing measurements, 20th IEEE
International Conference on Information
Fusion, FUSION 2017, 2017, pp. 1-8.
[10] J. Hu, Z. Wang, S. Liu, H. Gao, A variance-
constrained approach to recursive state
estimation for time-varying complex networks
with missing measurements, Automatica,
Vol.64, 2016, pp. 155–162.
[11] J. Hu, Z. Wang, F. E. Alsaadi, T. Hayat,
Event-based filtering for time-varying
nonlinear systems subject to multiple missing
measurements with uncertain missing
probabilities, Inf. Fusion, Vol.38, 2017, pp.
74-83.
[12] Y. Liu, F. E. Alsaadi, X. Yin, Y. Wang,
Robust H∞ filtering for discrete nonlinear
delayed stochastic systems with missing
measurements and randomly occurring
nonlinearities, International Journal of
General Systems, Vol.44, No.2, 2015, pp.
169-181.
[13] H. Rezaei, R. M. Esfanjani, M. H. Sedaaghi,
Improved robust finite-horizon Kalman
filtering for uncertain networked time-varying
systems, Inf. Sci., Vol.293, 2015, pp. 263–274.
[14] C. Pang, S. Sun, Fusion predictors for
multisensor stochastic uncertain systems with
missing measurements and unknown
measurement disturbances, IEEE Sens. J.,
Vol.15, No.8, 2015, pp. 4346-4354.
[15] X. Wang, W. Liu, Z. Deng, Robust weighted
fusion Kalman estimators for systems with
multiplicative noises, missing measurements
and uncertain-variance linearly correlated
white noises, Aerosp. Sci. Technol., Vol. 68,
2017, pp. 331–344.
[16] W. Liu, X. Wang, Z. Deng, Robust
centralized and weighted measurement fusion
Kalman estimators for uncertain multisensor
systems with linearly correlated white noises,
Inf. Fusion, Vol.35, 2017, pp. 11–25.
[17] W. Liu, X. Wang, Z. Deng, Robust
centralized and weighted measurement fusion
white noise deconvolution estimators for
multisensor systems with mixed uncertainties,
Int. J. Adapt. Control Signal Process, Vol.32,
2018, pp. 185–212.
[18] C. Yang, Z. Yang, Z. Deng, Robust weighted
state fusion Kalman estimators for networked
systems with mixed uncertainties, Inf. Fusion,
Vol.45, 2019, pp. 246-265.
[19] S. Nakamori, Robust RLS Wiener signal
estimators for discrete-time stochastic systems
with uncertain parameters, Frontiers in Signal
Processing, Vol.3, No.1, 2019, pp. 1–18.
[20] S. Nakamori, Robust recursive least-squares
finite impulse response predictor in linear
discrete-time stochastic systems with
uncertain parameters, WSEAS Transactions on
Systems, Vol.19, 2020, pp. 86–101.
[21] S. Nakamori, Centralized robust multi-sensor
Chandrasekhar-type recursive least-squares
Wiener filter in linear discrete-time stochastic
systems with uncertain parameters, Jordan
Journal of Electrical Engineering, Vol.7,
No.3, 2021, pp. 289-303.
[22] S. Nakamori, Numerical simulation of robust
recursive least-squares Wiener estimators for
observations with random delays and packet
dropouts in systems with uncertainties,
Computer Reviews Journal, Vol.7, 2020, pp.
29–40.
[23] S. Li, C. Cai, Research on strong tracking
UKF algorithm of integrated navigation based
on BP neural network, Proceedings of the 4th
International Conference on Computer
Science and Application Engineering, CSAE
'20, Association for Computing Machinery,
2020, pp. 1-5.
[24] F. Chen, Z. Chen, S. Biswas, S. Lei, N.
Ramakrishnan, C. Lu, Graph Convolutional
Networks with Kalman Filtering for Traffic
Prediction, Proceedings of the 28th
International Conference on Advances in
Geographic Information Systems,
SIGSPATIAL '20, Association for Computing
Machinery, 2020, p. 135–138.
[25] S. Nakamori, Design of estimators using
covariance information in discrete-time
stochastic systems with nonlinear observation
mechanism, IEICE Trans. Fundamentals,
Vol.E-82, 1999, pp. 1292–1304.
[26] A. P. Sage and J. L. Melsa, Estimation Theory
with Applications to Communications and
Control. New York: McGraw-Hill, 1971.
Contribution of Individual Authors to the
Creation of a Scientific Article (Ghostwriting
Policy)
The author contributed to the present research, at all
stages from the formulation of the problem to the
final findings and solution.
Sources of Funding for Research Presented in a
Scientific Article or Scientific Article Itself
No funding was received for conducting this study.
Conflict of Interest
The author has no conflicts of interest to declare.
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the
Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en
_US