The KF CMN filter was the second-best estimator. However, its accuracy and F-score were lower at high IoU values, where it was outperformed by the standard UFIR filter. This may be because the initial conditions of the motion model are not exactly known. As in the "Car4" test, the standard Kalman filter performed poorly when estimating states under colored noise with a high color factor Ψ = 0.95.
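For reference, colored measurement noise of this kind is commonly modeled as a first-order Gauss-Markov process, v_k = Ψ v_{k-1} + ξ_k, driven by white noise ξ_k. The following minimal Python sketch (the function name and parameters are illustrative, not taken from our implementation) generates such a sequence; with Ψ = 0.95 the noise varies slowly from frame to frame, which violates the white-noise assumption underlying the standard Kalman filter.

import numpy as np

def colored_noise(n, psi=0.95, sigma=1.0, seed=0):
    # First-order Gauss-Markov sequence: v[k] = psi * v[k-1] + xi[k],
    # where xi[k] ~ N(0, sigma^2) is white driving noise.
    rng = np.random.default_rng(seed)
    v = np.zeros(n)
    for k in range(1, n):
        v[k] = psi * v[k - 1] + rng.normal(0.0, sigma)
    return v

# A color factor near unity (e.g., psi = 0.95) yields strongly
# correlated noise that drifts slowly across frames.
v = colored_noise(500, psi=0.95)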
Fig. 5. Precision of the "SUV" benchmark (N_opt = 70), with Ψ = 0.95.
Fig. 6. F-score of the "SUV" benchmark (N_opt = 110), with Ψ = 0.95.
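The precision and F-score reported in Figs. 5 and 6 are functions of the IoU between estimated and ground-truth bounding boxes. The sketch below shows one common way to compute these quantities; the per-frame counting convention (a frame whose estimated box overlaps the ground truth with IoU at or above the threshold counts as a true positive) is an assumption for illustration, not a prescription of the exact protocol behind the figures.

def iou(box_a, box_b):
    # Intersection over Union of two axis-aligned boxes (x, y, w, h).
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def precision_and_f_score(tp, fp, fn):
    # Precision and F-score from true-positive, false-positive,
    # and false-negative frame counts.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return precision, 0.0
    return precision, 2 * precision * recall / (precision + recall)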
8. Conclusions

Based on the results of this work, we find that, in general, the standard UFIR and Kalman filters performed poorly when estimating states from data corrupted by colored measurement noise with a high color factor Ψ, yielding low precision and accuracy (F-score) values. The KF CMN filter performs well on simulated data under ideal conditions with known measurement and process noise statistics. It likewise proved to be a good estimator on real data under non-ideal conditions, but it is highly dependent on a correct formulation of the motion model, as was demonstrated in the estimation with simulated data with incomplete or unknown noise.
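To illustrate how a KF can be adapted to colored measurement noise, one classical route is measurement differencing: the differenced observation y_k = z_k − Ψ z_{k-1} carries approximately white noise. The Python sketch below follows this idea under a simplifying assumption (the cross-correlation between the transformed measurement noise and the process noise is neglected); it is an illustrative sketch, not the exact formulation used in our experiments.

import numpy as np

def kf_cmn_step(x, P, z_prev, z, F, H, Q, R, psi):
    # Time update.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Differenced measurement y[k] = z[k] - psi * z[k-1] and its
    # effective observation matrix H - psi * H * F^{-1}; the noise
    # cross-correlation introduced by differencing is ignored
    # here for brevity.
    y = z - psi * z_prev
    H_eff = H - psi * H @ np.linalg.inv(F)
    # Measurement update with the differenced observation.
    S = H_eff @ P_pred @ H_eff.T + R
    K = P_pred @ H_eff.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H_eff @ x_pred)
    P_new = (np.eye(len(x)) - K @ H_eff) @ P_pred
    return x_new, P_new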
The UFIR CMN algorithm generally obtained better results and proved to be more robust, since it requires knowledge of neither the noise statistics nor the initial conditions. Its performance was good under non-ideal conditions and with a high color factor value. These characteristics make it an estimation algorithm with broad applicability to motion models for which the information is not well known or complete information is not available.
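This robustness is visible in the structure of the standard iterative UFIR recursion, which uses only the model matrices and the horizon length N (with N_opt minimizing the estimation error); the CMN variant additionally differences the measurements, which is omitted in the minimal Python sketch below.

import numpy as np

def ufir_estimate(zs, F, H, N):
    # Iterative UFIR estimate at the last point of an N-point horizon;
    # requires len(zs) >= N but no noise covariances (Q, R) and no
    # initial state statistics.
    n = F.shape[0]                         # state dimension
    m = len(zs) - N                        # horizon start index
    # Short batch initialization over the first n horizon points.
    Fi = np.linalg.inv(F)
    C = np.vstack([H @ np.linalg.matrix_power(Fi, n - 1 - j)
                   for j in range(n)])
    z0 = np.concatenate([np.atleast_1d(zs[m + j]) for j in range(n)])
    G = np.linalg.inv(C.T @ C)             # generalized noise power gain
    x = G @ C.T @ z0
    # Iterative updates over the remaining horizon points.
    for s in range(m + n, m + N):
        G = np.linalg.inv(H.T @ H + np.linalg.inv(F @ G @ F.T))
        x = F @ x + G @ H.T @ (np.atleast_1d(zs[s]) - H @ F @ x)
    return x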
Therefore, we conclude that incorporating UFIR CMN state estimation algorithms would contribute to the development and improvement of applications and research in the field of object tracking.
We are currently working on modifications of the UFIR CMN algorithm: although we consider the results obtained to be good, we are focused on improving the estimates when the values of Ψ and the IoU threshold are at their highest, in order to develop a more robust and efficient algorithm.