
4 Conclusion
Leveraging its extensive expertise in developing specialized hardware for unmanned vehicle platforms (such as the fully autonomous UUV Synoris [18], [19], [20]), together with its research in computer vision, shape analysis, and machine learning, Infili Technologies S.A. proposes a use-case scenario built on 5G-enabled recognition and formation technology. The proposed system recognizes disinfection targets in hospitals primarily from visual sensor data and then carries out the disinfection by positioning a fleet of UGVs in controlled formations, each vehicle applying its onboard adjustable UV lamp. The fleet operates in a distributed and optimal manner, coordinated through 5G connectivity between the system's nodes.
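To make the formation-positioning idea concrete, the sketch below illustrates one possible coverage formulation: given surface points flagged for disinfection (e.g., by the visual recognition stage) and a set of reachable UGV poses, a greedy max-coverage rule selects fleet positions whose UV lamps jointly cover the most targets. This is a minimal illustration under assumed simplifications (a 2D floor plan and a fixed-radius UV footprint); all function and variable names are hypothetical and not part of the proposed system.

```python
# Hypothetical sketch: greedy placement of a small UGV fleet so that their
# UV lamps jointly cover the surfaces flagged for disinfection. Assumes
# each lamp disinfects a disc of fixed radius on a 2D floor-plan; names
# and parameters are illustrative, not the authors' implementation.

import numpy as np

def covered(targets, pos, radius):
    """Boolean mask of target points within `radius` of position `pos`."""
    return np.linalg.norm(targets - pos, axis=1) <= radius

def place_fleet(targets, candidates, n_ugvs, radius):
    """Greedily pick UGV positions that maximize newly covered targets."""
    remaining = np.ones(len(targets), dtype=bool)
    chosen = []
    for _ in range(n_ugvs):
        # Marginal gain of each candidate pose: newly covered targets only.
        gains = [covered(targets[remaining], c, radius).sum()
                 for c in candidates]
        best = int(np.argmax(gains))
        chosen.append(candidates[best])
        remaining[covered(targets, candidates[best], radius)] = False
    return np.array(chosen), remaining

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    targets = rng.uniform(0, 10, size=(200, 2))    # surfaces to disinfect
    candidates = rng.uniform(0, 10, size=(50, 2))  # reachable UGV poses
    fleet, uncovered = place_fleet(targets, candidates, n_ugvs=4, radius=2.0)
    print(f"UGV positions:\n{fleet}")
    print(f"uncovered targets: {uncovered.sum()} / {len(targets)}")
```

Greedy selection is a standard baseline for such submodular coverage objectives; a deployed system would additionally have to account for UV dose accumulated over exposure time, occlusions, and navigation constraints.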
References:
[1] J. Redmon and A. Farhadi. YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018.
[2] O. Ronneberger, P. Fischer, and T. Brox. U-net:
Convolutional networks for biomedical image
segmentation. In International Conference on
Medical Image Computing and
Computer-Assisted Intervention (MICCAI),
pages 234–241. Springer, 2015.
[3] S. Yu, B. Park, and J. Jeong. Deep iterative
down-up CNN for image denoising. In CVPR
Workshops, 2019.
[4] B. Park, S. Yu, and J. Jeong. Densely connected
hierarchical network for image denoising. In
CVPR Workshops, 2019.
[5] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In CVPR, pages 4700–4708, 2017.
[6] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
[7] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu. Residual dense network for image super-resolution. In CVPR, pages 2472–2481, 2018.
[8] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu.
Residual dense network for image restoration.
arXiv preprint arXiv:1812.10477, 2019.
[9] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and
Y. Fu. Image super-resolution using very deep
residual channel attention networks. In ECCV,
pages 286–301, 2018.
[10] M. Haris, G. Shakhnarovich, and N. Ukita. Deep back-projection networks for super-resolution. In CVPR, 2018.
[11] K. A. Raftopoulos and S. D. Kollias. The Global–Local transformation for noise resistant shape representation. Computer Vision and Image Understanding, vol. 115, no. 8, pages 1170–1186, 2011.
[12] K. A. Raftopoulos, S. D. Kollias, D. D. Sourlas, and M. Ferecatu. On the beneficial effect of noise in vertex localization. International Journal of Computer Vision, vol. 126, no. 1, pages 111–139, 2018. doi: 10.1007/s11263-017-1034-6.
[13] T. Chen and C. Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794, 2016. doi: 10.1145/2939672.2939785.
[14] A. Artusi and K. A. Raftopoulos. A framework for objective evaluation of single image de-hazing techniques. IEEE Access, vol. 9, pages 76564–76575, 2021. doi: 10.1109/ACCESS.2021.3082207.
[15] C. Chen et al. Deep boosting for image denoising. In ECCV, 2018.
[16] J. Xu et al. External prior guided internal prior learning for real-world noisy image denoising. IEEE Transactions on Image Processing, vol. 27, pages 2996–3010, 2017.
[17] K. Zhang, W. Zuo, and L. Zhang. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Transactions on Image Processing, vol. 27, no. 9, pages 4608–4622, 2018. doi: 10.1109/TIP.2018.2839891.
[18] N. Papadakis and D. Paraschos. Synoris: An unmanned underwater platform based on hydrophone arrays for detection and tracking from sound signatures. International Journal of Circuits, Systems and Signal Processing, vol. 16, pages 780–786, 2022. doi: 10.46300/9106.2022.16.96.