Classification of Guava Leaf Disease using Deep Learning
ASSAD S. DOUTOUM, RECEP ERYIGIT, BULENT TUGRUL
Department of Computer Engineering,
Ankara University, Golbasi, Ankara,
TURKEY
*Corresponding Author
Abstract: - A significant share of crops is affected by diseases, which poses a challenge to agricultural
production. Detecting and forecasting diseases early makes it possible to increase productivity. Guava is a fruit
grown in tropical and subtropical countries such as Chad, Pakistan, India, and the nations of South America. Guava
trees can suffer from a variety of ailments, including Canker, Dot, Mummification, and Rust. A diagnosis
based only on visual observation is unreliable and time-consuming. To help farmers identify plant diseases in
their early stages, an automated diagnosis and prediction system is necessary. Therefore, we developed a deep
learning method for classifying and forecasting guava leaf diseases. We investigated a dataset composed of
1,834 leaf images separated into five categories. We trained four widely used pre-trained CNN architectures on
this dataset. The EfficientNet-B3 architecture outperformed the other three architectures, achieving 94.93%
accuracy on the test data. The results show that deep learning methods are more successful and reliable than
traditional methods.
Key-Words: Deep Learning, Convolutional Neural Networks, Guava, Leaf Diseases
Received: August 18, 2022. Revised: September 19, 2023. Accepted: October 12, 2023. Published: October 23, 2023.
1 Introduction
Guava has a high nutritional content, including
vitamins A, B-complex, C, E, and K as well as
copper, iron, magnesium, manganese, potassium,
sodium, and zinc, and it is also high in fiber. It
offers many health benefits, including the
prevention and treatment of cancers, skin
conditions, and heart diseases, [1]. Guava is also
used against other ailments: besides lowering blood
pressure and normalizing blood sugar, it is good for
diarrhea, [2]. Many developing countries,
including Chad, rely on guava fruit as a source of
food.
Guava plants are susceptible to various illnesses,
such as canker, mummification, dot, and rust,
which can reduce overall production, [3]. Although
pesticides are used on guava plants to control these
diseases, they harm the environment and cause
economic damage to the country. It is therefore
important to diagnose these diseases accurately to
reduce their negative impact. Guava diseases can
be identified and classified by experts through
manual observation, but this requires constant
monitoring and testing, lacks accuracy, and is quite
expensive, [4]. Guava farmers constantly lose a
large portion of their production due to a lack of
prevention techniques, even though many farmers
are interested in learning how to prevent guava
diseases and increase fruit production, [5]. Without
such techniques, farmers cannot increase guava
production or improve its quality. Diagnosing
guava diseases in their early stages and classifying
them correctly is therefore of great importance,
[2].
Guava-producing countries must implement an
automated system to detect and diagnose diseases.
Such a system is considered the most accurate,
fastest, and most cost-effective way to detect guava
diseases, [2]. Deep learning approaches applied to
guava leaf datasets can identify guava diseases and
assist farmers in diagnosing them early and
choosing the most suitable pesticides, [5].
Researchers have developed computer vision
systems that identify and classify plant diseases
and offer accurate solutions to plant disease
problems. One proposed system uses RGB images
to differentiate healthy leaves from unhealthy ones,
[3].
This study aims to identify four guava leaf
diseases, mummification, dot, canker, and rust,
using a deep learning model with a higher accuracy
rate than any currently available model. These
diseases will be identified quickly and accurately by
the model. The model will also be used for
developing prevention and control strategies for
diseases, as well as providing recommendations for
future studies. Finally, this study will provide
agricultural scientists with valuable knowledge to
further improve guava production by providing
insight into guava leaf diseases.
These are the main contributions of this article:
- Four popular CNN models are trained to detect and classify guava leaf diseases.
- Each model’s performance is evaluated and compared in terms of accuracy, precision, and recall values.
- Future developments in guava leaf disease classification are discussed.
2 Related Works
A recent trend in smart agriculture has been the use
of deep learning algorithms to diagnose and detect
plant diseases automatically. A large number of
studies have been conducted in this area over the
years, [6], [7], [8], [9].
A recent study proposed a system that used five
CNN architectures, AlexNet, SqueezeNet,
GoogLeNet, ResNet-50, and ResNet-101, along
with a Bagged Tree (BT) classifier, to recognize
guava diseases, [2]. Based on the results,
ResNet-101 achieved a notable classification
performance, producing an accuracy rate of
97.74%. Another study introduced a model to
identify guava plant diseases by applying machine
learning classifiers, [3]. The Bagged Tree classifier
achieved the best accuracy of 99% in identifying
four guava diseases, namely Canker,
Mummification, Rust, and Dot. A further study
proposed three CNN-based models to discover
guava diseases, [5]. The experimental results show
that the best of the three models achieved an
accuracy of 95.61%. As suggested in [10], image
segmentation and clustering can be achieved using
K-means; the authors then applied an SVM
classifier to recognize guava diseases at an early
stage, with an accuracy rate of 98.17%. Another
study introduced an automated system to aid
farmers in identifying guava diseases and
differentiating between healthy and diseased
leaves, [11]. The authors tuned machine learning
algorithms including F-KNN, C-SVM, Bagged
Tree, and RUSBoost Tree. C-SVM achieved the
highest accuracy of 100% compared to the other
classifiers.
A recent study applied deep learning to
recognize guava plant diseases such as algal leaf
spot, whitefly damage, and rust, [12]. The
experimental results on the dataset show an average
accuracy of 98.74%. Another study developed an
automated system for detecting guava diseases
from plant leaves at an early stage, [13]. Based on
the experimental results, it achieved an accuracy of
98.96% using ResNet. A further study proposed a
deep learning-based mobile application to detect
plant diseases using a phone’s camera, [14]. The
system uses the VGG architecture to classify the
major grape diseases, and the VGG models
achieved an accuracy rate of 98% on the test data.
Additionally, many studies using the EfficientNet
model have achieved significant accuracy results;
these studies used a variety of PlantVillage datasets
to identify plant diseases, [15].
3 Materials and Methods
3.1 Dataset
The guava plant dataset is publicly available on
Kaggle. The dataset contains five classes: Canker,
Mummification, Dot, Rust, and Healthy. In total,
1,834 images are included. The dataset is split into
training, validation, and testing directories with
1,239, 457, and 138 images, respectively. The
training images are distributed across the classes as
follows: 696 canker, 150 mummification, 149 dot,
149 rust, and 95 healthy. The distribution of the
data is presented in Figure 1.
Fig. 1: Distribution of the data
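As a concrete illustration of how such a directory-based split can be loaded, the sketch below uses tf.keras; the folder names ("guava/train", "guava/val", "guava/test") and the batch size are assumptions rather than details reported in the paper.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)   # target size used in Section 3.2
BATCH_SIZE = 32         # assumed; the paper does not report a batch size

def load_split(path):
    # Labels are inferred from the sub-directory names
    # (Canker, Dot, Mummification, Rust, Healthy).
    return tf.keras.utils.image_dataset_from_directory(
        path,
        image_size=IMG_SIZE,
        batch_size=BATCH_SIZE,
        label_mode="categorical",
    )

train_ds = load_split("guava/train")   # 1,239 images
val_ds = load_split("guava/val")       # 457 images
test_ds = load_split("guava/test")     # 138 images
```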
3.2 Data Processing
Image pre-processing is an important step for
reducing noise and improving differential shading
in images. In addition, it is necessary to resize the
images to fit the input size of the deep learning
architectures and to increase
their accuracy. The images in the dataset are 512
x 512 pixels in size. We resized the images to 224
x 224.
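A minimal sketch of this resizing step, assuming Keras preprocessing layers are used; the rescaling to [0, 1] is an additional assumption and not stated in the paper.

```python
import tensorflow as tf

preprocess = tf.keras.Sequential([
    tf.keras.layers.Resizing(224, 224),    # 512 x 512 -> 224 x 224
    tf.keras.layers.Rescaling(1.0 / 255),  # map pixel values to [0, 1]
])

# Example: preprocess one synthetic batch of 512 x 512 RGB images.
raw_batch = tf.random.uniform((8, 512, 512, 3), maxval=255.0)
resized_batch = preprocess(raw_batch)      # shape: (8, 224, 224, 3)
```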
3.3 Data Augmentation
Data augmentation is a technique used to increase
the size and diversity of a dataset by generating
new images from existing ones. This is done by
applying a variety of transformations to the
images, such as rotation, translation, cropping, and
adding noise. Increasing the size and diversity of
the training dataset can prevent overfitting and
improve the model’s generalization ability, [16].
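The snippet below sketches such an augmentation pipeline with Keras preprocessing layers; the specific transformations and their ranges are illustrative assumptions, and RandomZoom stands in for random cropping.

```python
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.1),          # small random rotations
    tf.keras.layers.RandomTranslation(0.1, 0.1),  # random height/width shifts
    tf.keras.layers.RandomZoom(0.1),              # zoom as a stand-in for cropping
    tf.keras.layers.GaussianNoise(0.01),          # mild additive noise
])

# Augmentation is applied on the fly to training batches only (train_ds is the
# training split from the loading sketch in Section 3.1).
train_ds_aug = train_ds.map(lambda x, y: (augment(x, training=True), y))
```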
3.4 CNN Architectures
3.4.1 Visual Geometry Group (VGG)
The VGG architecture is a straightforward and
efficient CNN design, which has been applied to a
variety of image recognition tasks, including
object detection, segmentation, and object
classification, [17]. The structure comprises
stacked convolutional layers interleaved with
max-pooling layers, followed by fully connected
layers. The convolutional layers use 3 x 3 filters,
preserving spatial information in the images. The
max-pooling layers downsample the feature maps
to reduce the network’s parameter count and avoid
overfitting. Fully connected layers then classify the
images into distinct categories. The VGG architecture
has two principal variations, namely VGG-16,
comprising 16 layers, and VGG-19, featuring 19
layers. It is the number of convolutional layers
that distinguishes the two architectures, with 13 in
VGG-16 and 16 in VGG-19. The sample images
from the guava leaf disease dataset after
augmentation are presented in Figure 2.
Fig. 2: Sample images from the guava leaf disease
dataset after augmentation
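As one plausible way to adapt VGG-16 to the five guava classes, the sketch below uses transfer learning with a frozen ImageNet backbone; the classification head and freezing strategy are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(
    include_top=False,           # drop the original 1000-class ImageNet head
    weights="imagenet",
    input_shape=(224, 224, 3),
)
base.trainable = False           # keep the pre-trained convolutional filters fixed

vgg_model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(5, activation="softmax"),  # Canker, Dot, Mummification, Rust, Healthy
])
```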
3.4.2 Residual Network (ResNet)
The ResNet architecture is designed to handle the
issue of vanishing gradients, which arise in very
deep networks as gradients shrink while being
propagated back through many layers, making
learning difficult, [18]. Residual connections are
proposed to resolve this issue. A residual
connection is a path that skips one or more layers
of the network, allowing the output of earlier layers
to be added directly to the output of later layers.
This helps the network learn well even when it is
very deep. Depending on the application, the
number and layout of residual blocks differ.
ResNet-50, for instance, has 50 weight layers
organized into 16 residual blocks, each consisting
of three convolutional layers.
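The function below illustrates a bottleneck-style residual block in the spirit of this design (three convolutions plus a skip connection); the filter counts and the omission of batch normalisation are simplifying assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    shortcut = x
    y = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(y)
    y = layers.Conv2D(4 * filters, 1, padding="same")(y)
    # Project the shortcut so its channel count matches the block output.
    if shortcut.shape[-1] != 4 * filters:
        shortcut = layers.Conv2D(4 * filters, 1, padding="same")(shortcut)
    y = layers.Add()([y, shortcut])        # the residual (skip) connection
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(56, 56, 64))
outputs = residual_block(inputs, filters=64)   # output shape: (None, 56, 56, 256)
```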
3.4.3 Inception V3
The Inception structure contains repeated blocks
entitled inception modules, [19]. Each inception
module works with a convolutional feature map as
input and generates several smaller feature maps
of varying sizes. Various sizes of feature maps
permit the inception module to capture diverse
image details. These inception modules are
organized hierarchically, with each module
constructed upon the preceding module’s output.
This permits the architecture to acquire more
intricate representations of the input image as it
moves deeper.
The Inception V3 model is approximately 48
layers deep and is built from a sequence of
inception modules of several types. The ultimate
layer is a softmax layer that gives the probability
of every class.
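A simplified inception-style module is sketched below: parallel 1x1, 3x3 and 5x5 convolutions plus a pooled branch are concatenated along the channel axis. The filter counts are illustrative assumptions and do not reproduce the exact Inception V3 blocks.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_module(x, f1, f3, f5, fpool):
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(x)
    bp = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    bp = layers.Conv2D(fpool, 1, padding="same", activation="relu")(bp)
    # Each branch sees the same input at a different receptive-field size.
    return layers.Concatenate()([b1, b3, b5, bp])

inputs = tf.keras.Input(shape=(56, 56, 192))
outputs = inception_module(inputs, f1=64, f3=128, f5=32, fpool=32)  # 256 channels out
```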
3.4.4 EfficientNet
The main idea behind EfficientNet is to achieve
better performance by balancing three different
scaling dimensions: depth, width, and resolution,
[20]. In traditional CNN architectures, these
dimensions are often scaled together, which can
lead to excessive computational requirements
without significant performance gains. The
EfficientNet approach, however, uses a compound
scaling method that carefully determines the
optimal balance between these dimensions,
resulting in networks that are both efficient and
accurate.
The EfficientNet family includes several
models with different scaling coefficients,
spanning EfficientNet-B0 through EfficientNet-B7;
the most commonly used are EfficientNet-B0
through EfficientNet-B5. Of all these models,
EfficientNet-B0 is
the smallest and most efficient, with a width
coefficient of 1.0, a depth coefficient of 1.0, and an
input resolution of 224 x 224 pixels. The larger
variants scale these factors up; EfficientNet-B5,
for instance, uses a width coefficient of 1.6, a depth
coefficient of 2.2, and an input resolution of
456 x 456 pixels.
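A sketch of one way the EfficientNet-B3 classifier used here could be assembled with Keras; the frozen ImageNet backbone, the 224 x 224 input from Section 3.2, and the pooling/dropout head are assumptions rather than the authors' reported configuration.

```python
import tensorflow as tf

base = tf.keras.applications.EfficientNetB3(
    include_top=False,
    weights="imagenet",
    input_shape=(224, 224, 3),   # the resized images from Section 3.2
)
base.trainable = False           # optionally unfreeze later for fine-tuning

effnet_model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(5, activation="softmax"),  # five guava leaf classes
])
```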
The characteristics of the four architectures are
compared in Table 1 and the architectures are
shown in Figure 3.
3.5 Performance Metrics
Classification algorithms are assessed using a
range of performance metrics to determine their
efficacy in making precise predictions on labeled
datasets. The selection of metrics depends on the
particular issues at hand and the trade-offs that are
deemed necessary, [23]. Classification algorithms
are evaluated using the following performance
metrics:
Accuracy: This is the most basic metric, which
shows the proportion of accurately predicted
instances to the total number of instances.
Nevertheless, it may not be appropriate when
datasets have an imbalanced distribution.
\[ \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (1) \]
Precision: It concentrates on the number of true
positive predictions made by the model out of all
positive predictions. It aids in evaluating the
model’s ability to prevent false positives.
Precision is calculated as:
\[ \text{Precision} = \frac{TP}{TP + FP} \qquad (2) \]
Table 1. Comparison of VGG16, ResNet50, InceptionV3 and EfficientNet-B3

                     VGG16    ResNet50    InceptionV3    EfficientNet-B3
  Year                 —        2015         2015            2019
  Top-1 Accuracy       —       74.90%       77.90%          81.60%
  Top-5 Accuracy       —       92.10%       93.70%          95.70%
  Parameters (M)       —       25.60        23.90           12.30
  Depth                —       107          189             210
Fig. 3: Architectures of VGG16, ResNet50, InceptionV3 and EfficientNet-B3, [21], [22]
Recall: It is a useful metric for understanding how
well the model captures all instances of the
positive class. It measures the number of true
positive predictions out of all actual positive
instances and is calculated as:
\[ \text{Recall} = \frac{TP}{TP + FN} \qquad (3) \]
F1-Score: It is the harmonic mean of precision
and recall. It offers equilibrium between the two
metrics and proves to be particularly valuable when
handling datasets that lack balance. The F1-Score
is determined by using the formula:
\[ F_1\text{-Score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \qquad (4) \]
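The snippet below shows how Eqs. (1)-(4) can be computed with scikit-learn; the macro averaging over the five classes and the toy labels are assumptions made only for illustration.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def report(y_true, y_pred):
    # Macro averaging weights the five classes equally despite their imbalance.
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "f1": f1_score(y_true, y_pred, average="macro"),
    }

# Toy example with integer labels 0-4 standing for the five leaf classes.
print(report([0, 1, 2, 3, 4, 0], [0, 1, 2, 3, 4, 2]))
```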
4 Results and Discussions
It takes a lot of human labor to detect and classify
diseases in guava leaves using traditional
methods. It is difficult to differentiate between
disease types due to their similar shape, texture,
and color. Recently, many studies have been
published showing that both traditional and deep
learning methods successfully classify different
leaf disease types, [24], [25], [26]. This section
presents the results of deep-learning models for
the classification of guava leaf diseases. We
trained four different and commonly used CNN
architectures on the Guava image dataset for this
purpose. Graphs showing the accuracy and loss of
each CNN architecture for training and validation
data can be found in Figure 4, Figure 5, Figure 6,
and Figure 7.
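A hedged sketch of how each architecture could be compiled, trained, and evaluated is given below; the optimizer, learning rate, epoch budget, and early-stopping callback are assumptions, since these hyper-parameters are not reported in this section.

```python
import tensorflow as tf

def train_and_evaluate(model, train_ds, val_ds, test_ds, epochs=30):
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    model.fit(
        train_ds,
        validation_data=val_ds,
        epochs=epochs,
        callbacks=[tf.keras.callbacks.EarlyStopping(patience=5,
                                                    restore_best_weights=True)],
    )
    # Final evaluation on the held-out test split (138 images).
    test_loss, test_acc = model.evaluate(test_ds)
    return test_acc

# e.g. test_acc = train_and_evaluate(effnet_model, train_ds_aug, val_ds, test_ds)
```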
The confusion matrix and overall performance
of the EfficientNet architecture, which produces
the most accurate results, are presented in Figure 8
and Figure 9.
Fig. 4: Training accuracy for all architectures
Fig. 5: Training loss for all architectures
Fig. 6: Validation accuracy for all architectures
Fig. 7: Validation loss for all architectures
Fig. 8: Confusion matrix of EfficientNet-B3 on the
test dataset.
According to the graphs, EfficientNet-B3
produces the most accurate results, while ResNet
produces the least accurate results. In this study,
EfficientNet-B3 achieved 97.83 % accuracy on the
training set, 92.73 % accuracy on the validation set,
and 94.93 % accuracy on the test set. Therefore,
the results obtained by applying deep learning
methods to the detection of guava leaf diseases
are very successful and reliable. These results
demonstrate the high classification performance of
the deep learning model, suggesting its suitability
for practical applications in guava leaf disease
detection. The model, therefore, can be used for
disease surveillance and control in guava plants.
The confusion matrix of EfficientNet-B3 on the
test dataset is presented in Figure 8. Similarly, the
Precision, Recall, and F1-Score values for all
classes are presented in Figure 9.
Fig. 9: Precision, Recall, and F1-Score values for
all classes
5 Conclusions and Future Works
Plant diseases should be detected in a timely and
accurate manner so that agricultural products can
be produced more efficiently and with higher quality.
Therefore, early detection of guava diseases is a
widely applicable measure to reduce economic
loss. In this study, we propose a deep-learning
solution using VGG-16, Inception V3, ResNet50,
and EfficientNet-B3 classifiers to detect guava
diseases. The models showed successful results in
performance metrics such as accuracy, precision,
recall, and F1-score. Based on the results of the
four CNN architectures trained, EfficientNet
produced the most reliable and accurate results.
Farmers will be able to detect guava leaf disease in
real-time on their smartphones without needing any
cloud services by integrating the proposed model
into mobile devices. Our future goal is to enable
consumers and producers to access healthier
products by designing systems to detect diseases on
the leaves of fruits and plants. Despite its
beneficial and influential contributions, this study
does have some limitations. The quality and
diversity of the training dataset are key factors
influencing the model’s performance. Future work
should involve the collection of a larger dataset that
spans a wide range of guava varieties, geographical
regions, and seasons to further improve the
performance of the algorithm.
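As a pointer toward the on-device deployment mentioned above, the sketch below converts a trained Keras model to TensorFlow Lite; the model object and file name are placeholders carried over from the earlier sketches, not artefacts released with this paper.

```python
import tensorflow as tf

# Convert the trained Keras model (e.g. the EfficientNet-B3 sketch above) to a
# compact TensorFlow Lite file that can run offline on a smartphone.
converter = tf.lite.TFLiteConverter.from_keras_model(effnet_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantisation
tflite_model = converter.convert()

with open("guava_leaf_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```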
References:
[1] N. N. Kurniawati, S. N. H. S. Abdullah, S. Abdullah, S. Abdullah, "Investigation on image processing techniques for diagnosing paddy diseases," in 2009 International Conference of Soft Computing and Pattern Recognition, pp. 272–277, 2009.
[2] A. M. Mostafa, S. A. Kumar, T. Meraj, H. T. Rauf, A. A. Alnuaim, M. A. Alkhayyal, "Guava disease detection using deep convolutional neural networks: A case study of guava plants," Applied Sciences, vol. 12, no. 1, p. 239, 2021.
[3] A. Almadhor, H. T. Rauf, M. I. U. Lali, R. Damaševičius, B. Alouffi, A. Alharbi, "AI-driven framework for recognition of guava plant diseases through machine learning from DSLR camera sensor based high resolution imagery," Sensors, vol. 21, no. 11, p. 3830, 2021.
[4] P. Shukla, T. Fatima, S. Rajan, "Research on fusarium wilt disease of guava," Indian Phytopathology, vol. 72, pp. 629–636, 2019.
[5] A. S. M. Farhan Al Haque, R. Hafiz, M. A. Hakim, G. M. Rasiqul Islam, "A computer vision system for guava disease detection and recommend curative solution using deep learning approach," in 22nd International Conference on Computer and Information Technology (ICCIT), pp. 1–6, 2019.
[6] A. Rajbongshi, S. Sazzad, R. Shakil, B. Akter, U. Sara, "A comprehensive guava leaves and fruits dataset for guava disease recognition," Data in Brief, vol. 42, p. 108174, 2022.
[7] B. Tugrul, E. Elfatimi, R. Eryigit, "Convolutional neural networks in detection of plant leaf diseases: A review," Agriculture, vol. 12, no. 8, p. 1192, 2022.
[8] M. M. U. Nobi, M. Rifat, M. Mridha, S. Alfarhood, M. Safran, D. Che, "GLD-Det: Guava leaf disease detection in real-time using lightweight deep learning approach based on MobileNet," Agronomy, vol. 13, no. 9, p. 2240, 2023.
[9] M. Asim, S. Ullah, A. Razzaq, S. Qadri, "Varietal discrimination of guava (Psidium guajava) leaves using multi features analysis,"
International Journal of Food Properties, vol. 26, no. 1, pp. 179–196, 2023.
[10] P. Perumal, K. Sellamuthu, K. Vanitha, V. Manavalasundaram, "Guava leaf disease classification using support vector machine," Turkish Journal of Computer and Mathematics Education (TURCOMAT), vol. 12, no. 7, pp. 1177–1183, 2021.
[11] O. Almutiry, M. Ayaz, T. Sadad, I. U. Lali, A. Mahmood, N. U. Hassan, H. Dhahri, "A novel framework for multi-classification of guava disease," Computers, Materials & Continua, vol. 69, no. 2, 2021.
[12] M. R. Howlader, U. Habiba, R. H. Faisal, M. M. Rahman, "Automatic recognition of guava leaf diseases using deep convolution neural network," in 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), pp. 1–5, 2019.
[13] F. Marzougui, M. Elleuch, M. Kherallah, "A deep CNN approach for plant disease detection," in 21st International Arab Conference on Information Technology (ACIT), pp. 1–6, 2020.
[14] S. A. P. N. Kavala, R. Pothuraju, "Detection of grape leaf disease using transfer learning methods: VGG16 & VGG19," in 2022 6th International Conference on Computing Methodologies and Communication (ICCMC), pp. 1205–1208, 2022.
[15] H. Phan, A. Ahmad, D. Saraswat, "Identification of foliar disease regions on corn leaves using SLIC segmentation and deep learning under uniform background and field conditions," IEEE Access, vol. 10, pp. 111985–111995, 2022.
[16] I. Z. Mukti, D. Biswas, "Transfer learning based plant diseases detection using ResNet50," in 4th International Conference on Electrical Information and Communication Technology (EICT), pp. 1–6, 2019.
[17] K. Simonyan, A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[18] K. He, X. Zhang, S. Ren, J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, June 2016.
[19] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, "Rethinking the inception architecture for computer vision," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818–2826, June 2016.
[20] M. Tan, Q. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," in Proceedings of the 36th International Conference on Machine Learning, pp. 6105–6114, 2019.
[21] M. M. Leonardo, T. J. Carvalho, E. Rezende, R. Zucchi, F. A. Faria, "Deep feature-based classifiers for fruit fly identification (Diptera: Tephritidae)," in 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), pp. 41–47, 2018.
[22] H. Alhichri, A. S. Alswayed, Y. Bazi, N. Ammour, N. A. Alajlan, "Classification of remote sensing images using EfficientNet-B3 CNN model with attention," IEEE Access, vol. 9, pp. 14078–14094, 2021.
[23] T. Saito, M. Rehmsmeier, "The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets," PLOS ONE, vol. 10, no. 3, pp. 1–21, 2015.
[24] A. S. Zamani, L. Anand, K. P. Rane, P. Prabhu, A. M. Buttar, H. Pallathadka, A. Raghuvanshi, B. N. Dugbakie, "Performance of machine learning and image processing in plant leaf disease detection," Journal of Food Quality, vol. 2022, pp. 1–7, 2022.
[25] V. K. Trivedi, P. K. Shukla, A. Pandey, "Automatic segmentation of plant leaves disease using min-max hue histogram and k-mean clustering," Multimedia Tools and Applications, vol. 81, no. 14, pp. 20201–20228, 2022.
[26] R. Eryigit, Y. Ar, B. Tugrul, "Classification of trifolium seeds by computer vision methods," WSEAS Transactions on Systems, vol. 22, pp. 313–320, 2023. https://doi.org/10.37394/23202.2023.22.34
Contribution of Individual Authors to the
Creation of a Scientific Article (Ghostwriting
Policy)
In this study, all authors contributed equally,
from formulation of the problem to solution and
analysis.
Sources of Funding for Research Presented in a
Scientific Article or Scientific Article Itself
The authors did not receive support from any
organization for the submitted work.
Conflict of Interest
The authors have no conflict of interest to declare.
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the
Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en_US