[31] Gallego, A.J., Calvo-Zaragoza, J., Valero-Mas, J.J., Rico-Juan, J.R. (2018). Clustering-based k-nearest neighbor classification for large-scale data with neural codes representation. Pattern Recognition, 74, 531-543. https://doi.org/10.1016/j.patcog.2017.09.038
[32] Nerurkar, P., Shirke, A., Chandane, M., Bhirud, S. (2018). Empirical analysis of data clustering algorithms. Procedia Computer Science, 125, 770-779. https://doi.org/10.1016/j.procs.2017.12.099
[33] Singh, A., Yadav, A., Rana, A. (2013). K-means with three different distance metrics. International Journal of Computer Applications, 67(10). https://doi.org/10.5120/11430-6785
[34] Mughnyanti, M., Efendi, S., Zarlis, M. (2020). Analysis of determining centroid clustering X-means algorithm with Davies-Bouldin index evaluation. In IOP Conference Series: Materials Science and Engineering, 725(1), 012128. https://doi.org/10.1088/1757-899X/725/1/012128
[35] Cortes, C., Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273-297. https://doi.org/10.1007/BF00994018
[36] Evgeniou, T., Pontil, M. (2001). Support Vector Machines: Theory and Applications. In: Paliouras, G., Karkaletsis, V., Spyropoulos, C.D. (eds) Machine Learning and Its Applications. ACAI 1999. Lecture Notes in Computer Science, 2049. https://doi.org/10.1007/3-540-44673-7_12
[37] Tsekouras, G.E., Trygonis, V., Maniatopoulos, A., Rigos, A., Chatzipavlis, A., Tsimikas, J., Velegrakis, A.F. (2018). A Hermite neural network incorporating artificial bee colony optimization to model shoreline realignment at a reef-fronted beach. Neurocomputing, 280, 32-45. https://doi.org/10.1016/j.neucom.2017.07.070
[38] Zhang, X., Wang, J., Wang, T., Jiang, R., Xu, J., Zhao, L. (2021). Robust feature learning for adversarial defense via hierarchical feature alignment. Information Sciences, 560, 256-270. https://doi.org/10.1016/j.ins.2020.12.042
[39] Liu, S., Deng, W. (2015). Very deep convolutional neural network based image classification using small training sample size. In IAPR Asian Conference on Pattern Recognition (ACPR), 730-734. https://arxiv.org/abs/1409.1556
[40] Timofte, R., Zimmermann, K., Van Gool, L. (2014). Multi-view traffic sign detection, recognition, and 3D localisation. Machine Vision and Applications, 25(3), 633-647. https://doi.org/10.1007/s00138-011-0391-3
[41] Misra, D. (2019). Mish: A self regularized non-monotonic neural activation function. arXiv preprint arXiv:1908.08681. https://arxiv.org/abs/1908.08681
[42] Goodfellow, I., Bengio, Y., Courville, A. (2017). Deep Learning (Adaptive Computation and Machine Learning series). MIT Press, Cambridge, Massachusetts, 321-359.
[43] Zhai, S., Cheng, Y., Lu, W., Zhang, Z. (2016). Doubly convolutional neural networks. arXiv preprint arXiv:1610.09716. https://arxiv.org/abs/1610.09716
[44] Graham, B. (2014). Fractional max-pooling. arXiv preprint arXiv:1412.6071. https://arxiv.org/abs/1412.6071
[45] Nair, V., Hinton, G.E. (2010). Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML), 807-814. https://openreview.net/forum
[46] Glorot, X., Bordes, A., Bengio, Y. (2011). Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference Proceedings, 315-323. https://proceedings.mlr.press/v15/glorot11a.html
[47] Maniatopoulos, A., Mitianoudis, N. (2021). Learnable Leaky ReLU (LeLeLU): An Alternative Accuracy-Optimized Activation Function. Information, 12(12), 513. https://doi.org/10.3390/info12120513
[48] Ramachandran, P., Zoph, B., Le, Q.V. (2017). Swish: a self-gated activation function. arXiv preprint arXiv:1710.05941. https://arxiv.org/abs/1710.05941v1
[49] Misra, D. (2020). Mish: A self regularized non-monotonic activation function. arXiv preprint arXiv:1908.08681. https://arxiv.org/abs/1908.08681
[50] Kingma, D.P., Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. https://arxiv.org/abs/1412.6980
[51] Ioffe, S., Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 448-456. http://proceedings.mlr.press/v37/ioffe15.html
[52] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 1929-1958. https://jmlr.org/papers/v15/srivastava14a.html
[53] Aggarwal, C.C., Hinneburg, A., Keim, D.A. (2001). On the surprising behavior of distance metrics in high dimensional space. In International Conference on Database Theory, 420-434. https://doi.org/10.1007/3-540-44503-X_27
[54] Hou, J., Kang, J., Qi, N. (2010). On Vocabulary Size in Bag-of-Visual-Words Representation. In: Qiu, G., Lam, K.M., Kiya, H., Xue, X.Y., Kuo, C.C.J., Lew, M.S. (eds) Advances in Multimedia Information Processing - PCM 2010. Lecture Notes in Computer Science, 6297. https://doi.org/10.1007/978-3-642-15702-8_38
[55] Kuru, K., Khan, W. (2020). A framework for the synergistic integration of fully autonomous ground vehicles with smart city. IEEE Access, 9, 923-948. https://doi.org/10.1109/ACCESS.2020.3046999
[56] Butt, F.A., Chattha, J.N., Ahmad, J., Zia, M.U., Rizwan, M., Naqvi, I.H. (2022). On the Integration of Enabling Wireless Technologies and Sensor Fusion for Next-Generation Connected and Autonomous Vehicles. IEEE Access, 10, 14643-14668. https://doi.org/10.1109/ACCESS.2022.3145972
[57] Yeomans, J.S. (2021). A Multicriteria, Bat Algorithm Approach for Computing the Range Limited Routing Problem for Electric Trucks. WSEAS Transactions on Circuits and Systems, 20, 96-106. https://doi.org/10.37394/23201.2021.20.13
[58] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, 5998-6008. https://papers.nips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
[59] Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A. (2021). Do vision transformers see like convolutional neural networks?. Advances in Neural Information Processing Systems, 34. https://proceedings.neurips.cc/paper/2021/hash/652cf38361a209088302ba2b8b7f51e0-Abstract.html
[60] Li, S., Chen, X., He, D., Hsieh, C.J. (2021). Can Vision Transformers Perform Convolution?. arXiv preprint arXiv:2111.01353. https://arxiv.org/abs/2111.01353
[61] Newell, A., Deng, J. (2020). How useful is self-supervised pretraining for visual tasks? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7345-7354. https://arxiv.org/abs/2003.14323
Creative Commons Attribution License 4.0 (Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the Creative Commons Attribution License 4.0: https://creativecommons.org/licenses/by/4.0/deed.en_US
WSEAS TRANSACTIONS on MATHEMATICS
DOI: 10.37394/23206.2022.21.19
Efstathios Karypidis, Stylianos G. Mouslech,
Kassiani Skoulariki, Alexandros Gazis