
Contribution of Individual Authors to the
Creation of a Scientific Article (Ghostwriting
Policy)
The authors contributed equally to the present research, at all stages from the formulation of the problem to the final findings and solution.
Sources of Funding for Research Presented in a
Scientific Article or Scientific Article Itself
No funding was received for conducting this study.
Conflicts of Interest
The authors have no conflicts of interest to
declare that are relevant to the content of this
article.
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en_US
WSEAS TRANSACTIONS on SYSTEMS
DOI: 10.37394/23202.2023.22.67
Atul Kachare, Mukesh Kalla, Ashutosh Gupta