2023, pp. 197-205,
https://doi.org/10.37394/23209.2023.20.23.
[9] K. Pathak, A. Arya, P. Hatti, V. Handragal, and K. Lee, A study of different disease detection and classification techniques using deep learning for the cannabis plant, International Journal of Computing and Digital Systems, Vol.10, No.1, 2021, pp. 54-62.
[10] S. Kumar, M. Jailia, and S. Varshney,
Improved YOLOv4 approach: A real time
occluded vehicle detection, International
Journal of Computing and Digital Systems,
Vol.12, No.1, 2022, pp. 489-497.
[11] F. Mushtaq, K. Ramesh, S. Deshmukh, T.
Ray, C. Parimi, P. Tandon, and P.K. Jha,
Nuts&bolts: YOLO-v5 and image processing
based component identification system,
Engineering Applications of Artificial
Intelligence, Vol.118, 2023.
[12] C. Gupta, N.S. Gill, P. Gulia, and J.
Chatterjee, A novel finetuned YOLOv6
transfer learning model for real-time object
detection, Journal of Real-Time Image
Processing, Vol.20, 2023, pp. 1-19.
[13] J. Zhou, Y. Zhang, and J. Wang, A dragon
fruit picking detection method based on
YOLOv7 and PSP-Ellipse, Intelligent Sensing
and Machine Vision in Precision Agriculture,
Vol.23, No.8, 2023.
[14] R. Bawankule, V. Gaikwad, I. Kulkarni, S.
Kulkarni, A. Jadhav, and N. Ranjan, Visual
detection of waste using YOLOv8,
Proceedings of the International Conference
on Sustainable Computing and Smart Systems
(ICSCSS 2023), IEEE Xplore Part Number:
CFP23DJ3-ART.
[15] F. Li, and L. Wang, Application of deep
learning based on garbage image
classification, WSEAS Transactions on
Computers, Vol.21, 2022, pp. 277-282,
https://doi.org/10.37394/23205.2022.21.34.
[16] Y. Liu, Y. Wang, Y. Li, Q. Li, and J. Wang, SETR-YOLOv5n: A lightweight low-light lane curvature detection method based on fractional-order fusion model, IEEE Access, Vol.10, 2022, pp. 93003-93016.
[17] C. Zheng, Stack-YOLO: A friendly-hardware
real-time object detection algorithm, IEEE
Access, Vol.11, 2023, pp. 62522-62534.
[18] L. Yang, Investigation of You Only Look
Once Networks for Vision-based Small
Object Detection, International Journal of
Advanced Computer Science and Applications
(IJACSA), Vol.14, No.4, 2023, pp. 69-82.
[19] Z. Wang, Z. Hua, Y. Wen, S. Zhang, X. Xu, and H. Song, E-YOLO: Recognition of oestrus cow based on improved YOLOv8n model, Expert Systems with Applications, Vol.238, Part E, 2024, pp. 1-17.
[20] A. Malta, M. Mendes, and T. Farinha, Augmented reality maintenance assistant using YOLOv5, Applied Sciences, Vol.11, No.11, 2021, p. 475.
[21] B. De Carolis, F. Ladogana, and N. Macchiarulo, YOLO TrashNet: garbage detection in video streams, 2020 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS), Bari, Italy.
[22] A. Ye, B. Pang, Y. Jin, and J. Cui, A YOLO-based neural network with VAE for intelligent garbage detection and classification, ACAI '20: Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence, December 2020, Article No. 73, pp. 1-7.
[23] N. A. Zailan, A. S. M. Khairuddin, K.
Hasikin, M. H. Junos, and U. Khairuddin, An
automatic garbage detection using optimized
YOLO model, Signal, Image and Video
Processing, 2023.
Contribution of Individual Authors to the
Creation of a Scientific Article (Ghostwriting
Policy)
The authors equally contributed to the present
research, at all stages from the formulation of the
problem to the final findings and solution.
Sources of Funding for Research Presented in a
Scientific Article or Scientific Article Itself
This research was funded by the Faculty of Applied
Science, King Mongkut’s University of Technology
North Bangkok. Contract no. 652115.
Conflict of Interest
The authors have no conflicts of interest to declare
that are relevant to the content of this article.
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the
Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en_US