Weed Identification Technique in Basil Crops using Computer Vision
RICARDO YAURI1,2, BRYAN GUZMAN2, ALAN HINOSTROZA2, VANESSA GAMERO3
1Universidad Nacional Mayor de San Marcos, Lima, PERU
2Facultad de Ingeniería, Universidad Tecnológica del Perú, Lima, PERU
3Departamento de Engenharia de Sistemas Eletrônicos, Universidade de São Paulo, São Paulo,
BRAZIL
Abstract: - The promotion of organic and ecological production seeks the sustainable and competitive growth of organic crops in countries like Peru. In this context, agro-exports are characterized by products such as fruits and vegetables, which must comply with organic certification regulations to enter markets like the US, where it is necessary to certify that weed control is carried out using biodegradable materials, flames, heat, electrical means, or manual weeding; this is a problem for some productive organizations. The problem is the need to differentiate between the crop and the weed, as described above, using image recognition tools based on Deep Learning. Therefore, the objective of this article is to demonstrate how an artificial intelligence model based on computer vision can contribute to the identification of weeds in basil plots. An iterative and incremental development methodology is used to build the system. This is complemented by the Cross Industry Standard Process for Data Mining methodology for the evaluation of computer vision models, using tools such as YOLO and the Python language for weed identification in basil crops. As a result of the work, various Artificial Intelligence algorithms based on neural networks have been identified, considering the use of the YOLO tool; the trained models showed an efficiency of 69.70% with 3 hours of training, and it was observed that a longer training time yields better results from the neural network.
Key-Words: - Crops, Farm, Agriculture, Basil, Machine Learning, Classification Algorithms, YOLO, Weeds
Received: September 26, 2022. Revised: May 11, 2023. Accepted: May 27, 2023. Published: June 21, 2023.
1 Introduction
In 2008, Peru presented a proposal for the promotion of organic and ecological production, which seeks the sustainable and competitive growth of organic crops in the country, [1], [2]. During 2019, agricultural exports were characterized by products such as fruits and vegetables, with the United States, Germany, the Netherlands, and Spain as the main clients. These markets require compliance with organic certification regulations; to enter the US, it is necessary to be accredited under the USDA Organic Standards 7 CFR 205, [3]. Its Operational Standard for Pests and Weeds indicates that weed control must be carried out with coverings of biodegradable materials, mowing, flames, heat, electrical means, or manual weeding.
From a nutritional point of view, vegetables are valued for properties such as a minimal presence of lipids and a high iron and calcium content, which make them products in great demand for local and foreign consumption, [1]. These products are affected by weeds due to competition for water, light, nutrients, CO2, and space. This is why it is important to distinguish weeds from the crop, since doing so helps to increase the yield per hectare.
From the technological field, the problem of weeds in crops can be addressed with solutions such as mechanical removal of the weeds using a robot that moves through the crop, [4], although this generates stress that affects the flavor and nutrients of basil. Another type of solution is the use of a multispectral imaging system, which can enhance certain nutritional properties of the plant. Before carrying out any weed control task, it is therefore necessary to differentiate between the crop and the weed, for which the most widely used technique is image processing. In the near future this may allow staff to be replaced by robots, using Deep Learning image recognition techniques such as those applied to blueberries during the rooting stage, for efficient use of resources, [4], [5], [6].
One of the solutions to improve cultivation processes is identifying diseases in plant leaves through image processing, using artificial intelligence and machine learning computational tools, [7]. One of the technologies positioned to
contribute a solution is Edge Computing, [8], which reduces latency and brings computing power closer to the client, working together with cloud computing.
In the case of this research, the area of study is basil fields with weed presence problems, located in rural areas with limited access to wired internet. Based on what was previously described, the problem has been identified and this paper poses the following research question: How is it possible to identify weeds to contribute to the care of basil crops in rural areas?
Therefore, this research presents the evaluation of an object identification model to detect weeds in a basil plot, for which the following specific objectives are met: select an object identification algorithm for the classification of weeds in a basil plot; implement the weed identification algorithm in a basil plot; and evaluate the effectiveness of the identification algorithm. As a first step, a research methodology based on literature review techniques is used to evaluate the most appropriate technologies. Subsequently, a methodology based on CRISP-DM (Cross Industry Standard Process for Data Mining) is selected to implement the computer vision algorithms. In addition, a development methodology that integrates computer vision techniques is used, based on an iterative and incremental approach.
The present investigation contributes to the identification of weeds in crops, enabling the production of the greatest amount of product per hectare. In addition, the efficiency of the artificial intelligence tool in differentiating weeds from basil crops is described, which can then be applied to a mechatronic device or used to identify areas with many weeds.
The remainder of this paper is organized as follows. Section 2 reviews the most relevant papers. The most important concepts about the technologies used are presented in Section 3. Subsequently, Section 4 describes the proposed system and Section 5 presents the results. The conclusions are given in Section 6.
2 Related Works
This section describes existing research on the use of identification and classification algorithms based on deep learning and image processing in crop fields, and their importance for a country's development.
In [9], the author describes how globalization has driven development in many third-world countries. It is mentioned that there has been a growing demand for vegetables and fruits since 1986, with a great leap since 2000 thanks to agro-export agriculture. This is why many companies in this area manage resources to optimize production, including technical irrigation and access-to-water projects, for products such as quinoa, grapes, asparagus, and avocados. The results of the study indicate that the export sectors use development strategies that are valid for low-income populations.
Image processing techniques can be used to detect crops with small products at different stages of their growth, where the difficulty lies in their size and the change of color during growth, [10]. These investigations describe the use of tools such as YOLO, which uses convolutional processes to extract semantic details based on the DSE-V (vertical) and DSE-H (horizontal) detail-semantics-enhancement modules, [11], [12]. The results in [10] show an effectiveness of 85% in the detection of the crop in a natural environment. In addition, YOLO version 3 was used, which employs the Binary Cross Entropy (BCE) function to estimate the classification loss and mean square error (MSE) to estimate the confidence loss efficiently.
In other research, the YOLO v5 tool is used to identify weeds in crops in real time, [13]. The objective of that article is to perform early detection to avoid damage to the crops. For this, a convolutional neural network is developed with YOLO_CBAM (Convolutional Block Attention Module) software, which improves feature extraction by correcting and suppressing irrelevant features, improving the performance of the network. A Jetson AGX was used for the training and deployment of the solution, verifying a performance improvement from 0.90 to 0.92.
Automation in agriculture is an emerging topic that integrates artificial intelligence methods, [14], or the use of drones, [15]. This makes it possible to protect crops against aspects such as climate change, population growth, and food security. The paper describes weeding, irrigation, and fumigation processes supported by sensor information and the use of robots. In addition, for the weeding application, precision techniques with Artificial Intelligence and Computer Vision are used.
Other research describes how the use of Internet of Things technologies in agriculture enables early detection of rice diseases, [16]. That paper reports that farmers lose between 15% and 20% of their profits due to bacterial, viral, or fungal diseases that attack rice. In addition, it seeks to identify the "brown spot" disease using convolutional neural networks and real-time image
recognition. For the training and tests with images
of the rice paddy, software technologies such as
Keras and Tensorflow are used, [17], [18].
3 Weed Image Processing
3.1 Tools for Algorithm Generation
Some of the most used tools for the development of artificial intelligence include Python, C++, R, Java, Prolog, and Matlab. Python, in particular, is a programming language with broad support across different platforms. Among its advantages are the handling of large volumes of data (ideal for Big Data), simplicity and ease of learning, and dynamic typing, [19], [20]. The most used libraries and frameworks for the creation of artificial intelligence algorithms include:
OpenCV. An open-source library specialized in computer vision applications. It provides algorithms for image processing, [21].
Matplotlib. Allows the creation of static and interactive visualizations. It transforms data in lists into graphs, and offers an API reminiscent of Matlab, projections, and image mapping, [22].
PIL. An open-source Python image library with features for image processing. It supports various image formats.
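As a minimal sketch of how these libraries combine in practice, the following Python fragment loads a field photo with OpenCV and displays it with Matplotlib (the file name is a hypothetical example, not one from the dataset described later):

import cv2
import matplotlib.pyplot as plt

# OpenCV loads images as BGR arrays; convert to RGB for Matplotlib
img = cv2.imread("weed_0.jpg")
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

plt.imshow(img_rgb)
plt.title("Basil plot sample")
plt.axis("off")
plt.show()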
3.2 Framework for Digital Image Processing
(DIP)
The YOLO (You Only Look Once) algorithm is a powerful tool for object detection in applications such as autonomous driving and image and video processing, among others, [23]. This algorithm is based on neural network techniques that detect objects by dividing the image into a grid of cells of NxN dimensions. During training, the network learns to recognize patterns and relevant features of objects in images, so that it can generalize and detect objects in never-before-seen images.
In each cell, bounding boxes indicating the
presence of objects are generated, and a confidence
score is calculated for each detection (Fig. 1). These
boxes provide estimates of the location of detected
objects relative to the cell boundaries within the
grid. These estimates are based on the width (W)
and height (H) of the detected objects (Fig. 2), while each box is associated with a confidence score, which indicates the certainty that an object is
present in that box. The advantage of this procedure
is that it allows the detection of objects in a single
pass over the entire image, instead of using sliding
windows or multi-stage detection methods. This
makes YOLO faster and more efficient compared to
other approaches.
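The localization quality behind such confidence scores is commonly measured with the Intersection over Union (IoU) between a predicted box and a reference box. As an illustrative sketch in Python (not part of the YOLO implementation itself), IoU can be computed as follows:

def iou(box_a, box_b):
    # Boxes are given as (x_min, y_min, x_max, y_max)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0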
3.3 Weed Identification
Weeds interfere with the various stages of the agricultural process: they compete for water, light, and nutrients, and produce chemical compounds that alter the development of other plants, [24]. They are also agents that allow the proliferation of other pathogenic elements and have a negative impact on the main crop.
Fig. 1: NxN grid
Fig. 2: Class probability map
To eliminate them, the following techniques can be used:
Mechanical. Soil preparation before planting, taking into account the geographical, physical, and biological characteristics of the terrain, can efficiently contribute to weed control.
Herbicides. The chemicals used to control weeds are called herbicides. It is necessary to identify the weed in order to select the appropriate herbicide, through a selective application process.
Thermal. Consists of applying fire, which is effective and economical. Young weeds die at a temperature of approximately 50°C, and in some cases it is necessary to increase this temperature.
Technological. Among the technologies applied as a proactive method is the use
of lasers, which eliminate unwanted plants. Another alternative is the application of radiation (X-rays, gamma rays, etc.).
4 Proposed Solution
This project is focused on detecting weeds in basil crops, using object detection algorithms and integrating artificial intelligence techniques with the YOLO software package. Fig. 3 shows the flowchart of the project, with the following stages:
Image acquisition. The camera of a Motorola G9 cell phone is used to take a total of 72 photos in the open field with natural lighting. Priority is given to natural lighting during image capture to recreate the real conditions in which basil crops are found, which contributes to more accurate and applicable results in real weed detection situations.
Preparation of the dataset. It is necessary to rename the images, reduce their resolution to 1200 x 1600 pixels, manually label the weeds in the images, and separate the images into 3 folders: “Train”, “test”, and “valid”. The manual tagging process involves visually highlighting the areas where weeds are found using bounding boxes.
Training. YOLO v4 is used, through the Google Colab interface, to train the neural network with the images. During training, the neural network learns to recognize and detect weeds in the images, adjusting its internal parameters and weights.
Validation. YOLO makes it easy to identify weeds by displaying fuchsia boxes with a Weed class mark.
Fig. 3: Project flow chart
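For reference, the folder layout assumed by these stages is sketched below (folder names as described in this section; the split percentages are those reported in Section 5.2):

imagenesMaleza_rename/   # renamed source images: weed_0.jpg, weed_1.jpg, ...
Train/                   # ~80% of the images plus their .txt label files
valid/                   # ~15%
test/                    # ~5%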
4.1 Selection of Object Detection Algorithm
For the selection of the object detection algorithm, a systematic review of the literature was carried out, finding scientific papers with real applications. For this, a methodology based on [25] was followed to select documents in the Scopus, Google Scholar, and ScienceDirect databases, covering the last 5 years and restricted to English. Based on this search, the most important techniques for identifying objects and processing images, focused on agriculture, were selected. Algorithms based on Viola-Jones, one-stage YOLO, and two-stage convolutional neural networks were identified.
The Viola-Jones technique extracts features from objects using cascade classifiers. These classifiers are combined to form a strong classifier that can detect specific objects. YOLO, on the other hand, applies convolutional neural networks by dividing the image into a grid and using bounding boxes for object classification. In the inference stage, the neural network processes the image and predicts the classes of the detected objects. Both approaches require a training stage with labeled data sets before the inference stage can be applied.
4.2 Implementation of the Object
Identification Algorithm
To implement the algorithm, it is necessary to carry out training, validation, and deployment processes using the previously prepared images (Fig. 4). The following steps are performed:
The training and validation procedure begins with the preparation of the images, considering different types of weeds, which are used in the training and validation stages.
Subsequently, the images are divided into training and validation sets, which allow the performance of the model to be evaluated and adjusted during training.
The Google Colab platform is used, and the YOLO software packages are installed to implement the classification algorithm.
Then, training parameters such as the number of iterations, the batch size, and the learning rate are configured to evaluate the behavior of the model (a sketch of typical values is shown after this list).
During training, the model learns to recognize and classify weeds in the basil images, performing multiple iterations to gradually improve its performance on the validation set.
Finally, the model is deployed, marking the areas of weeds in the resulting images with the "weed" class mark.
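The exact parameter values used in the experiments are not reported here; as an illustrative sketch, a single-class Darknet configuration file for YOLO v4 typically adjusts the following fields (the values follow the usual custom-training recipe and are assumptions, not the ones used in this work):

[net]
batch=64              # images processed per training batch
subdivisions=16       # mini-batch split to fit GPU memory
width=416
height=416
learning_rate=0.001
max_batches=6000      # number of training iterations
steps=4800,5400       # 80% and 90% of max_batches

[convolutional]
filters=18            # (classes + 5) * 3 before each [yolo] layer

[yolo]
classes=1             # a single "Weed" class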
Fig. 4: Implementation process (Colab setup and image preparation; algorithm implementation; installation of framework and Makefile; YOLO weight preparation; training data settings; training execution)
5 Results
5.1 Object Detection Algorithm Selection
In the field of object detection, there are different
approaches and algorithms used. These include two-
stage algorithms such as Region-based
Convolutional Neural Networks (R-CNN) and their
variants such as Fast R-CNN, Faster R-CNN, and R-
FCN (Region-Based Fully Convolutional
Networks).
These algorithms have proven to be effective for accurate object detection in a variety of applications. However, in this research we chose YOLO (You Only Look Once), a single-stage approach that uses a 24-layer convolutional neural network based on GoogleNet.
The choice of YOLO v4 was based on its ability to run on different operating systems, such as Linux and Windows, and to work with both static and real-time images. By using YOLO v4, accurate object identification was achieved in the context of this project.
5.2 Dataset Preparation
The acquired images had a resolution of 3472 x 4624 pixels and an average size of 6 MB. They were subsequently transformed to a resolution of 1200 x 1600 pixels and an average size of 700 KB (Fig. 5).
Fig. 5: Weed and Basil Photos Uploaded
The collected images were renamed by a script so that each photo is named weed_0, weed_1, and so on. To facilitate the training of the neural network, these renamed files were stored in the “imagenesMaleza_rename” folder. The labeling of these images was then carried out, manually creating and locating the name of the class: "Weed".
Weeds were manually selected in the 72 renamed images, using a rectbox (bounding box) that serves for training (Fig. 6). The file created for each image is a text file with a “.txt” extension, storing the location coordinates of each manually recognized weed (Fig. 7). Once the labeling was finished, the dataset was divided into 3 subfolders: Train (80%), Validation (15%), and Test (5%). This was done with another script called “Split Files” (Fig. 8), sketched after the figure.
Fig. 6: Rectbox creation and weed class
Fig. 7: Files with the location coordinates of the
weed
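Each “.txt” label file follows the standard YOLO/Darknet convention: one line per box, containing the class index followed by the box center and size, all normalized to the image dimensions. The coordinate values below are illustrative, not taken from the actual dataset:

0 0.413281 0.527344 0.082812 0.121875
0 0.712500 0.308594 0.056250 0.089844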
Fig. 8: Training, testing, and validation folders
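The “Split Files” script itself is not published; a minimal Python sketch of the 80/15/5 split, assuming the label files sit next to the images, is:

import os
import random
import shutil

SRC = "imagenesMaleza_rename"
files = sorted(f for f in os.listdir(SRC) if f.endswith(".jpg"))
random.seed(42)          # fixed seed so the split is reproducible
random.shuffle(files)

n = len(files)
splits = {"Train": files[:int(0.80 * n)],
          "valid": files[int(0.80 * n):int(0.95 * n)],
          "test":  files[int(0.95 * n):]}

for folder, subset in splits.items():
    os.makedirs(folder, exist_ok=True)
    for f in subset:
        # copy each image together with its YOLO label file
        shutil.copy(os.path.join(SRC, f), folder)
        shutil.copy(os.path.join(SRC, f.replace(".jpg", ".txt")), folder)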
Analyzing the results described above, the data preparation and labeling steps were crucial to ensure the quality and accuracy of the neural network training for weed recognition in basil crops. The successful implementation of these processes contributed to an effectively labeled and organized data set for the training and evaluation of the weed detection algorithm. This, in turn, laid the foundation for the results obtained in terms of the ability to identify weeds using computer vision.
5.3 Deployment of the Object Identification
Algorithm
To deploy the algorithm, the following steps are performed in Google Colab:
The Darknet framework is installed to create a virtual workspace.
A Makefile is created, which gives instructions to the make utility to perform all tasks.
The default YOLO weights are downloaded.
Custom data is configured so that the train, validation, and test folders are used in the workspace.
YOLO settings for image processing (saturation, exposure, angle, rotation, etc.) are adjusted for the training process.
The configuration script is executed.
The training process is run.
Inference then proceeds, using the images from the test folder.
To carry out the prediction process, the newly obtained weights are used through the "darknet" command, to which a series of options are passed as parameters to display the results (Fig. 9). Once this process is concluded, the images are shown with the class mark identifying the weeds. Fig. 10 shows an example image with the identification. Once the identifications are made, the percentage of correct detections is determined, considering different training times (Fig. 11).
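As a sketch, the prediction invocation described above can be scripted from a Colab cell roughly as follows; the detector test sub-command and -thresh option are standard in the Darknet framework, while the file names are illustrative assumptions, not the exact ones used in this work:

import subprocess

# Run Darknet inference with the newly trained weights on one test image;
# Darknet writes the annotated result to predictions.jpg by default.
subprocess.run(
    ["./darknet", "detector", "test",
     "data/obj.data",                      # dataset description (classes, paths)
     "cfg/yolov4-custom.cfg",              # network configuration
     "backup/yolov4-custom_best.weights",  # weights produced by training
     "test/weed_70.jpg",                   # image to run inference on
     "-thresh", "0.3"],                    # minimum confidence to report a box
    check=True)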
The analysis of these results shows that, with longer training times, the classification results on images improve significantly. A series of images with the weed identifier boxes is attached for the results shown above. Fig. 12 shows the classification results with one hour of training, while Fig. 13 shows the results with three hours of training. Finally, it was verified that the model correctly detects weeds in the test images, as shown in Fig. 14.
Fig. 9: Prediction process
Fig. 10: Weed identification
Fig. 11: Summary of success vs training time
Fig. 12: Identification with 1 hour of training
Fig. 13: Identification with 3 hours of training
Fig. 14: Identification with an improved dataset
6 Conclusions
Various Artificial Intelligence algorithms based on neural networks have been identified, considering the use of the YOLO tool. This tool is useful due to its processing speed and easy implementation, and because it allows training automation by adjusting parameters depending on the type of analysis of interest. In addition, collaborative development was carried out using the Python language and the Google Colab tool, accessing the GPU available online to optimize the training processes.
During the evaluation of the algorithm, an efficiency of 69.70% was obtained with 3 hours of training, and it was observed that a longer training time yields better results from the neural network. On the other hand, a larger number of photos should be considered for training, to increase the efficiency of the algorithm.
As recommendations for future work, we can indicate that reducing the image resolution decreases the processing time when using YOLO v4. On the other hand, GPU use in Google Colab sometimes limits access to the Web resource for some hours. Alternatives that were not used were a paid Colab account or a computer with the software installed and a local GPU. Among other online alternatives, the use of Amazon Elastic Compute Cloud instead of Google Colab can be compared in the future.
Furthermore, a greater number of photos with labels of different types of weeds could be used to identify the type of weed. In addition, other algorithms such as EfficientDet or RetinaNet, which, like YOLO, are one-stage, could be compared.
References:
[1] J. I. Sanca Mendoza, “Informe por servicios profesionales: Manejo del cultivo de albahaca (Ocimum basilicum) Var. Genovessa para la planta procesadora agroindustrial La Joya S.A.C. - Arequipa,” Universidad Nacional de San Agustín de Arequipa, 2018.
[2] PromPerú, “Departamento de Inteligencia de
Mercados - PromPerú,” 2019. Accessed: Jan.
23, 2023. [Online]. Available:
http://bit.ly/2pqMcUa
[3] L. T. Krasny, “USDA’s National Organic Program: The Final Rule,” Food and Drug Law Institute FDLI Update, 2002. [Online]. Available: https://heinonline.org/HOL/LandingPage?handle=hein.journals/fdliup2002&div=12&id=&page=
[4] R. Moreira, L. F. Rodrigues Moreira, P. L. A.
Munhoz, E. A. Lopes, and R. A. A. Ruas,
“AgroLens: A low-cost and green-friendly
Smart Farm Architecture to support real-time
leaf disease diagnostics,” Internet of Things,
vol. 19, p. 100570, Aug. 2022, doi:
10.1016/J.IOT.2022.100570.
[5] S. Kallapur, M. Hegde, A. D. Sanil, R. Pai,
and S. NS, “Identification of aromatic
coconuts using image processing and machine
learning techniques,” Glob. Transitions Proc.,
vol. 2, no. 2, pp. 441-447, Nov. 2021, doi:
10.1016/J.GLTP.2021.08.037.
[6] B. Jia et al., “Essential processing methods of
hyperspectral images of agricultural and food
products,” Chemom. Intell. Lab. Syst., vol.
198, p. 103936, Mar. 2020,
doi: 10.1016/J.CHEMOLAB.2020.103936.
[7] I. A. Quiroz and G. H. Alférez, “Image
recognition of Legacy blueberries in a Chilean
smart farm through deep learning,” Comput.
Electron. Agric., vol. 168, p. 105044, Jan.
2020, doi: 10.1016/J.COMPAG.2019.105044.
[8] R. Yauri and R. Espino, “Edge device for
movement pattern classification using neural
network algorithms,” Indones. J. Electr. Eng.
Comput. Sci., vol. 30, no. 1, p. 229, 2023,
doi: 10.11591/ijeecs.v30.i1.pp229-236.
[9] J. Schwarz, E. Mathijs, and M. Maertens, “A
dynamic view on agricultural trade patterns
and virtual water flows in Peru,” Sci. Total
Environ., vol. 683, pp. 719–728, Sep. 2019,
doi: 10.1016/J.SCITOTENV.2019.05.118.
[10] Y. Wang, G. Yan, Q. Meng, T. Yao, J. Han,
and B. Zhang, “DSE-YOLO: Detail semantics
enhancement YOLO for multi-stage
strawberry detection,” Comput. Electron.
Agric., vol. 198, p. 107057, Jul. 2022,
doi: 10.1016/J.COMPAG.2022.107057.
[11] R. Punithavathi et al., “Computer Vision and
Deep Learning-enabled Weed Detection
Model for Precision Agriculture,” Comput.
Syst. Sci. Eng., vol. 44, no. 3, pp. 2759-2774,
2023,
doi: 10.32604/CSSE.2023.027647.
[12] A. Wang, T. Peng, H. Cao, Y. Xu, X. Wei,
and B. Cui, “TIA-YOLOv5: An improved
YOLOv5 network for real-time detection of
crop and weed in the field,” Front. Plant Sci.,
vol. 13, Dec. 2022,
doi: 10.3389/FPLS.2022.1091655.
[13] Q. Wang, M. Cheng, S. Huang, Z. Cai, J.
Zhang, and H. Yuan, “A deep learning
approach incorporating YOLO v5 and
attention mechanisms for field real-time
detection of the invasive weed Solanum
rostratum Dunal seedlings,” Comput.
Electron. Agric., vol. 199, Aug. 2022,
doi: 10.1016/J.COMPAG.2022.107194.
[14] T. Talaviya, D. Shah, N. Patel, H. Yagnik, and
M. Shah, “Implementation of artificial
intelligence in agriculture for optimisation of
irrigation and application of pesticides and
herbicides,” Artif. Intell. Agric., vol. 4, p. 58-
73, Jan. 2020,
doi: 10.1016/J.AIIA.2020.04.002.
[15] N. Genze, R. Ajekwe, Z. Güreli, F.
Haselbeck, M. Grieb, and D. G. Grimm,
“Deep learning-based early weed
segmentation using motion blurred UAV
images of sorghum fields,” Comput. Electron.
Agric., vol. 202, Nov. 2022, doi:
10.1016/J.COMPAG.2022.107388.
[16] O. Debnath and H. N. Saha, “An IoT-based
intelligent farming using CNN for early
disease detection in rice paddy,”
Microprocess. Microsyst., vol. 94, p. 104631,
Oct. 2022,
doi: 10.1016/J.MICPRO.2022.104631.
[17] N. Aherwadi, U. Mittal, J. Singla, N. Z.
Jhanjhi, A. Yassine, and M. S. Hossain,
“Prediction of Fruit Maturity, Quality, and Its
Life Using Deep Learning Algorithms,”
Electron., vol. 11, no. 24, Dec. 2022,
doi: 10.3390/ELECTRONICS11244100.
[18] A. A. Albraikan, M. Aljebreen, J. S.
Alzahrani, M. Othman, G. P. Mohammed, and
M. Ibrahim Alsaid, “Modified Barnacles
Mating Optimization with Deep Learning
Based Weed Detection Model for Smart
Agriculture,” Appl. Sci., vol. 12, no. 24, Dec.
2022,
doi: 10.3390/APP122412828.
[19] R. Johansson, Numerical python: Scientific
computing and data science applications with
numpy, SciPy and matplotlib, Second edition.
Apress Media LLC, 2018.
doi: 10.1007/978-1-4842-4246-9/COVER.
[20] N. Ketkar and J. Moolayil, Deep Learning
with Python. Apress, 2021,
doi: 10.1007/978-1-4842-5364-9.
[21] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd Edition. 2012. doi: 10.1017/S0269888900007724.
[22] S. Tosi, Matplotlib for Python Developers. 2009. Accessed: Jan. 21, 2023. [Online]. Available: https://www.packtpub.com/product/matplotlib-for-python-developers/9781847197900
[23] O. Masurekar, O. Jadhav, P. Kulkarni, and S.
Patil, “Real Time Object Detection Using
YOLOv3,” Int. Res. J. Eng. Technol., 2020,
Accessed: Jan. 21, 2023. [Online]. Available:
www.irjet.net
[24] M. Bustamante, R. Martínez, R. Suazo N., and D. Sharma, Manual de control de malezas. Escuela Agricola Panamericana, 2014. Accessed: Jan. 21, 2023. [Online]. Available: http://hdl.handle.net/11036/2931
[25] B. Kitchenham, “Guidelines for performing systematic literature reviews in software engineering,” Durham, 2007. [Online]. Available: https://www.elsevier.com/__data/promis_misc/525444systematicreviewsguide.pdf
Contribution of Individual Authors to the
Creation of a Scientific Article (Ghostwriting
Policy)
All authors have contributed equally to the creation
of this article.
Sources of Funding for Research Presented in a
Scientific Article or Scientific Article Itself
No funding was received for conducting this study.
Conflict of Interest
The authors have no conflict of interest to declare.
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the
Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en_US