associative classifier in [7] achieved 96% and 99.8% accuracy on these datasets, respectively. Some accuracy therefore appears to be lost when moving from pixel-related associations to larger shapes. The system in [3], for example, scored only about 55% accuracy on the Chars74K dataset, although that figure was for all of the symbols and not just the numbers.
5 Conclusions
This paper has described a new image-processing algorithm that is intended to be more human-like. It processes and stores images individually, but these can then be clustered into exemplars. The process uses something resembling an eye-scan that moves in angular directions. It is conjectured that the resulting image parts are more 'intelligent', because they are more explainable. The process can also include information about the relative positions of each part.
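As an illustration of this part-based representation, the following minimal Python sketch shows one way that scanned parts and their relative positions could be stored and then merged into an exemplar. The field names, the part descriptor and the averaging rule are assumptions made for illustration and are not taken from the algorithm itself.

# Hypothetical sketch only: the descriptor, position fields and the
# averaging rule are assumptions, not the paper's actual data structures.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ImagePart:
    descriptor: Tuple[float, ...]   # summary of the pixels covered by one scan step
    position: Tuple[int, int]       # position of the part relative to the image centre

@dataclass
class ImageRecord:
    label: str
    parts: List[ImagePart]          # parts produced by the angular eye-scan

def merge_into_exemplar(records: List[ImageRecord]) -> ImageRecord:
    """Cluster individually stored images of one class into a single exemplar
    by averaging parts that share the same index (one possible rule)."""
    n_parts = min(len(r.parts) for r in records)
    merged = []
    for i in range(n_parts):
        descriptors = [r.parts[i].descriptor for r in records]
        positions = [r.parts[i].position for r in records]
        avg_descriptor = tuple(sum(v) / len(v) for v in zip(*descriptors))
        avg_position = (round(sum(p[0] for p in positions) / len(positions)),
                        round(sum(p[1] for p in positions) / len(positions)))
        merged.append(ImagePart(avg_descriptor, avg_position))
    return ImageRecord(records[0].label, merged)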
The method is shown to be very quick for small image sets, but it requires an exhaustive search over all saved exemplars, so some form of heuristic search would be needed if the database were to grow very large. However, if it cannot be as accurate as cell-based or neural networks, for example, then either part of the human learning process is missing or some refinement is still required. A second, distributed test did not fare quite as well as using exemplars, and so the conclusion here is that the holistic view is still more important, but that finer details are also required.
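The exhaustive matching step can be made concrete with a short sketch: every saved exemplar is compared with the query image and the closest one decides the class, so the cost grows linearly with the size of the database. The distance function is an assumption for illustration only, reusing the hypothetical ImageRecord type sketched above.

# Hypothetical sketch of exhaustive exemplar matching; the distance function
# is assumed and is not the paper's actual comparison measure.
from typing import Callable, List, Tuple

def classify_by_exhaustive_search(query: ImageRecord,
                                  exemplars: List[ImageRecord],
                                  distance: Callable[[ImageRecord, ImageRecord], float]
                                  ) -> Tuple[str, float]:
    best_label, best_distance = None, float("inf")
    for exemplar in exemplars:              # one comparison per saved exemplar
        d = distance(query, exemplar)
        if d < best_distance:
            best_label, best_distance = exemplar.label, d
    return best_label, best_distance

For a very large database, a heuristic pre-filter, for example comparing a coarse image summary first and only running the full comparison on the best few candidates, would reduce the number of exhaustive comparisons.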
The main advantage of the method is that it is explainable. The image parts can be used at a symbolic level, for example, where they could be integrated with other types of data. This might be a false goal if an AI system ultimately needs to process at a neural level and the parts are not always the most meaningful, but it would at least allow the symbolic processes to be studied across data types. The system is also deterministic and always returns the same result, which may be an interesting property of a symbolic system over a neural one, in that it can add some stability.
References
[1] Buscema, M. (1998). MetaNet: The Theory of
Independent Judges, Substance Use & Misuse,
33(2), pp. 439-461.
[2] Chen, K., Choi, H.J., and Bren, D.D. (2008).
Visual Attention and Eye Movements.
[3] de Campos, T.E., Babu, B.R. and Varma, M.
(2009). Character recognition in natural images,
In Proceedings of the International Conference
on Computer Vision Theory and Applications
(VISAPP), Lisbon, Portugal.
[4] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K.
and Fei-Fei, L. (2009). Imagenet: A large-scale
hierarchical image database. In 2009 IEEE
conference on computer vision and pattern
recognition, pp. 248-255.
[5] Fukushima, K. (1988). A Neural Network for Visual Pattern Recognition, IEEE Computer, 21(3), pp. 65-75.
[6] Gatto, B.B., dos Santos, E.M., Fukui, K., Junior,
W.S.S. and dos Santos, K.V. (2020). Fukunaga–
Koontz Convolutional Network with
Applications on Character Classification,
Neural Processing Letters, 52, pp. 443-465.
https://doi.org/10.1007/s11063-020-10244-5.
[7] Greer, K. (2022). Image Recognition using
Region Creep, 10th International Conference on
Advanced Technologies (ICAT'22), pp. 43-46,
November 25-27, Van, Turkey. Virtual
Conference.
[8] Greer, K. (2018). New Ideas for Brain
Modelling 4, BRAIN. Broad Research in
Artificial Intelligence and Neuroscience, 9(2),
pp. 155-167. ISSN 2067-3957.
[9] Hinton, G.E., Osindero, S. and Teh, Y.-W.
(2006). A fast learning algorithm for deep belief
nets, Neural computation, 18(7), pp. 1527-1554.
[10] Krizhevsky, A., Sutskever, I. and Hinton,
G.E. (2012). Imagenet classification with deep
convolutional neural networks. In Advances in
neural information processing systems, pp.
1097-1105.
[11] Kuefler, A. (2016). Attentional Scene
Classification with Human Eye Movements,
http://cs231n.stanford.edu/reports/2016/pdfs/000_Report.pdf (last accessed 18/7/23).
[12] LeCun, Y. (2015). What’s Wrong with
Deep Learning? In IEEE Conference on
Computer Vision and Pattern Recognition.
[13] Rule, J.S. and Riesenhuber, M. (2021).
Leveraging Prior Concept Learning Improves
Generalization From Few Examples in
Computational Models of Human Object
Recognition, Frontiers in Computational
Neuroscience, 14, Article 586671, doi:
10.3389/fncom.2020.586671.
[14] Semeion Research Center of Sciences of
Communication, via Sersale 117, 00128 Rome,
Italy, and Tattile Via Gaetano Donizetti, 1-3-
5,25030 Mairano (Brescia), Italy.
[15] The Chars74K dataset,
http://www.ee.surrey.ac.uk/CVSSP/demos/chars74k/ (last accessed 18/7/23).