COVID Pneumonia Severity Detection of Chest CT-Scan Images based
on Robust Semantic Segmentation
BAYAN ALSAAIDAH
Department of Computer Science,
Al-Balqa Applied University,
JORDAN
Abstract: - Image segmentation has steadily grown in importance, especially for clinical use and disease recognition in radiological research. This procedure, aimed at acquiring quantitative measurements, strives to distinguish regions or objects of interest from adjacent body tissues. More specifically, it entails measuring the area and volume of segmented structures to extract more refined diagnostic information. The main hurdles encountered by segmentation algorithms originate from challenges such as variations in intensity, artifacts, and the close juxtaposition of diverse soft tissues in the grayscale. In this paper, a robust semantic segmentation model is proposed to delineate the infected regions in lung images and estimate the severity degree of the pneumonia caused by COVID-19. The proposed model provides an accurate diagnosis of chest CT scan images with satisfactory performance, achieving 93% accuracy, while the second most important metric, the Jaccard Index, reaches 0.746 ± 0.09, a higher prediction performance than most existing systems in the literature.
Key-Words: - CT-Scan, Chest images, COVID-19, Segmentation, Severity, Pneumonia.
Received: September; Revised: February; Accepted: March; Published: May
1 Introduction
Since 2019, COVID-19 has caused a global public health emergency across the whole world, [1]. Coronavirus disease (COVID-19) is an infectious disease that attacks the lung cells; it spread from animals and now spreads easily among humans, [2].
The COVID-19 pandemic has posed unprecedented risks to public health, the global economy, and more, [3]. In this extreme situation, rapid control of the spread of COVID-19 is especially crucial. Detection of the infected cells indicates the degree of severity of the infection.
In order to stop the COVID-19 virus from
spreading, early diagnosis is essential. Chest
computed tomography (CT) imaging is easier to use
in a clinical context and provides more sensitive
COVID-19 screening than RT-PCR, [4].
Additionally, the research community has been
paying more and more attention to CT imaging, [5],
with efforts being made to investigate the
pathological alterations caused by COVID-19 from
a radiological perspective.
Conventional manual or semi-automatic
segmentation methods take a lot of time and need
the assistance of medical professionals.
Furthermore, the segmentation results tend to depend on the expertise of the individual annotator. Thus, in a clinical setting, automatic segmentation of lung infections is highly desired.
Considerable efforts have been made in this
direction, [6].
In order to determine the source, location, and severity of the illness, patients with pneumonia or breathing problems caused by a respiratory tract infection are treated at hospitals, where they undergo a variety of diagnostic procedures, including laboratory and non-laboratory tests.
The laboratory tests include common procedures such as the CBC test, pleural effusion analysis, and blood gas analysis, [7], which require hospital and laboratory facilities. On the other hand, non-laboratory testing consists of computer-aided image analysis methods applied to CT scans or digital chest radiographs, which are used to examine the lung regions.
The benefit of a CT scan, a non-destructive scanning method, is that it offers a highly detailed picture of the lung's fine tissues, bone, and blood vessels, [7]. Cost-effectiveness, wide availability, rapid and repeatable scanning, excellent spatial resolution with contemporary multi-slice scanners, and high sensitivity are the most significant advantages of CT imaging. The drawbacks include poorer soft tissue contrast compared with MRI and higher radiation exposure than X-rays, [8].
Image segmentation has generally gained
importance in radiological and medical research for
diagnosis purposes. Segmentation attempts to isolate areas or objects of interest from the other parts of the image so that quantitative measurements can be taken. More specifically, it quantifies the volume and area of segmented structures to obtain more diagnostic information. The main challenges in using segmentation algorithms are the close proximity of different soft tissues, artifacts, and intensity heterogeneity in the greyscale.
Numerous methods have been used to segment
the lung, including hybrid, rule-based, atlas-based,
manual, and machine learning-based approaches,
[9], [10], [11]. It is tedious work and takes a long
time to segment data manually, especially when the
health system is overburdened.
Integrating AI with medical expertise can assist
the healthcare system by offering automated
COVID-19 diagnosis solutions capable of handling
numerous cases more quickly. Additionally,
utilizing AI in COVID-19 diagnosis can decrease
the need for human involvement, thereby enhancing
social distancing measures crucial for limiting the
spread of infection, [12]. Deep learning has recently been used actively to segment COVID-19 lung infections with different network architectures, including U-Net, which is the most popular in many works, [13], [14], [15], [16].
In addition, a noise-robust framework was
presented for COVID-19 pneumonia lesion
segmentation, utilizing a mean absolute error loss in
addition to a noise-robust Dice loss. Deep learning
models require a substantial quantity of labeled
training data, which is difficult to ensure for the
segmentation of COVID-19 infections, particularly
in the early stages of the illness. The outcomes of
this study demonstrated that the COPLE-Net
outperformed the most advanced CNNs in medical
image segmentation, and the suggested LNR-Dice
outperformed existing noise-robust loss functions,
[17].
Authors in [18], proposed a segmentation
algorithm for lung infection known as LobeNet with
the goal of predicting the regions of the left and
right lungs in the event of consolidations and diffuse
abnormalities. A Dice coefficient of 0.985 was
found to be the average for the model evaluation
performance of LobeNet on 87 patients.
One of the most popular image segmentation
architectures is the U-Net architecture, which was
used for COVID-19 lung segmentation in different
studies such as 3D U-Net for lung areas
segmentation combined with a cross-validation
scheme which is applied to avoid any expected
overfitting, [19]. The study results indicate that the proposed deep learning algorithms performed satisfactorily in terms of the false-positive and false-negative ratios compared with other published results in the literature.
The obtained segmentation performance was 0.956 and 0.761 for the normal and infected regions, respectively. A similar analysis was carried out by [20], where the Dice Similarity Coefficient (DSC), sensitivity, and specificity were 0.950, 0.920, and 0.875, respectively. According to [21], SegNet and U-Net exceeded 90% accuracy.
Researchers in [22] demonstrated the usefulness of an automated tool for measuring and segmenting COVID-19 lung infections from chest CT scan data, utilizing a combination of linear and logarithmic parametric stitching algorithms. The Dice, sensitivity, specificity, and precision of the proposed methodology were 0.714, 0.733, 0.994, and 0.739, respectively.
COVID-19 was automatically detected using the
segmented CT slices by [23] using the Infection
Segmentation Deep Network (Inf-Net). To increase
COVID-19 prediction sensitivity, authors in [24]
created a dual-branch network that combined
segmentation and classification for defective areas
of CT chest images. A highly accurate CT image segmentation network (COVID-SegNet), which combines enhanced features at several scales, was presented by [25] to segment COVID-19 lesions. To segment infected regions from CT images and identify COVID-19, authors in [26] developed a CNN model called Anam-Net, which has fewer parameters and is simpler than U-Net.
Another study looked into automated lung segmentation from chest CT images of COVID-19 cases using deep learning. For the normal and COVID-19 datasets, the suggested method produced very promising results, with a DSC of 0.980. Dependable lung segmentation will facilitate accurate diagnosis of COVID-19 patients with measurable values, such as severity based on the affected area, and will also help with lesion segmentation, [27].
A novel, two-stage cross-domain transfer
learning framework was proposed by [28] to
accurately segment lung infections of lung CT-scan
images. The proposed framework includes two main innovations: a new two-stage transfer learning strategy and an effective deep learning model for infection segmentation called nCoVSegNet.
particular, nCoVSegNet aims to address the
problems associated with poor contrast at
boundaries and high variance of the infected tissues
by conducting efficient infection segmentation while
utilizing large receptive fields and attention-aware
feature fusion.
Due to the severe abnormalities of the infected lung, it becomes difficult to distinguish between the lung cells and the chest bone, which is especially evident when comparing infected patients to normal ones. For accurate diagnosis and monitoring of COVID-19 pneumonia infection in CT-scan images, acquiring a substantial set of high-quality annotations proves challenging. Addressing this challenge, we propose a noise-robust framework designed to learn from more easily attainable, albeit noisy, training labels for the segmentation task.
2 Materials
CT scans show great potential for offering precise, rapid, and cost-effective screening and testing for COVID-19. CT-based methods for COVID-19 prediction have primarily used different methods to extract and combine features, while, unlike in chest X-ray research, only a few studies have used transfer learning for classifying CT images, [29].
COVID-19 dataset employed in this study was
sourced from [30] and has been confirmed by a
senior radiologist in Tongji Hospital, Wuhan, China,
encompassing a comprehensive collection of 216
patients. Data were diligently gathered over a period
spanning from January to April 2020 to capture the
evolving dynamics of the COVID-19 pandemic,
ensuring a representative snapshot during the study
duration as shown in Figure 1. Various data types
were included in the dataset, covering COVID CT,
Non-COVID CT, data split, and Clinical
information (such as patient ID, patient information,
and others).
In total, the dataset comprises 349 images, each
representing a distinct case of COVID-19, providing
a robust foundation for statistical analysis and model
development. Prior to analysis, the dataset
underwent preprocessing steps, addressing
challenges such as missing values and
standardization of formats. Key features encompass
patient demographics, comorbidities, symptom
onset dates, and regional attributes, offering a
comprehensive view of the COVID-19 cases under
consideration.
Fig. 1: Examples of lung CT images with COVID-
19
Ethical considerations were rigorously
observed, with all patient data anonymized, and the
study protocol received approval from the Tongji
Hospital, Wuhan, China. Acknowledging potential
limitations, the dataset may be subject to reporting
biases, variations in testing rates, and incomplete
demographic information in certain cases. For
machine learning model development, the dataset
was randomly partitioned into training (80%),
validation (10%), and test (10%) sets.
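As a rough illustration of this partitioning, the following Python sketch splits a list of image files 80/10/10 with scikit-learn; the file location, file format, and random seed are assumptions made for illustration and are not taken from the paper.

```python
import glob
from sklearn.model_selection import train_test_split

# Hypothetical location of the CT slices; the real dataset layout is not specified here.
image_paths = sorted(glob.glob("covid_ct/*.png"))

# 80% for training, then split the remaining 20% evenly into validation and test (10% each).
train_files, rest = train_test_split(image_paths, test_size=0.20, random_state=42)
val_files, test_files = train_test_split(rest, test_size=0.50, random_state=42)
```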
3 Method
Semantic segmentation of lung images involves
classifying each pixel in the image into predefined
categories, such as normal lung tissue,
abnormalities, or specific structures. In this work, a
semantic segmentation procedure is used to segment
the infected tissues and then identify the severity
degree based on the size of the segmented area.
Figure 2 shows the general structure of the proposed
methodology.
Fig. 2: General structure of the system
Semantic segmentation is a computer vision task
that involves classifying each pixel in an image into
a category. Unlike classification, which assigns a
single label to the entire image, semantic
segmentation labels each pixel with a corresponding
class label. This technique is commonly used in
various applications, such as object recognition,
medical image analysis, and image editing.
Semantic segmentation is typically performed
using deep learning models, especially
convolutional neural networks (CNNs). These
models learn to map input images to pixel-wise
class labels through a process called convolution,
where the network learns features at different spatial
hierarchies.
The output of a semantic segmentation model is
a segmented image, where each pixel is assigned a
color corresponding to its predicted class. This can
be useful in various applications. For example, in
autonomous driving, semantic segmentation can
help a vehicle understand the layout of the road and
identify obstacles and pedestrians. In medical
imaging, it can help in the detection and analysis of
tumors or other abnormalities in scans.
The training set of CT images is labeled using the image labeler toolbox, where every pixel in an image is given a class label for semantic segmentation. Only the infected cells are labeled, in order to compute the infected area and identify the severity degree of pneumonia.
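As an illustration of how such pixel-level labels can be turned into the binary masks a segmentation network consumes, the short sketch below (using Pillow and NumPy, with a hypothetical file name) maps every labeled infected pixel to 1 and everything else to 0; the infected-pixel count is what later feeds the severity calculation.

```python
import numpy as np
from PIL import Image

# Hypothetical label image exported by the labeling tool: infected pixels carry a non-zero value.
label = np.array(Image.open("labels/case_001.png"))

# Binary training mask: 1 = infected tissue, 0 = background / healthy tissue.
mask = (label > 0).astype(np.uint8)
infected_area = int(mask.sum())  # pixel count of the infected region
```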
The input images with their labels are provided to a convolutional neural network based on dilated convolutions. U-Net, [31], is an important deep-learning network with a broad range of applications that was primarily created for understanding and classifying medical images. Semantic segmentation is a further application of this sort of network, which is called U-Net because of its U-shaped encoder-decoder structure.
The U-Net encoder and decoder layers carry out down-sampling and up-sampling, respectively. The corresponding decoding layer receives the full feature map from the encoder. By combining the transferred feature map and the up-sampled feature map, the final feature map is generated. The U-Net architecture used in this study is shown in Figure 3, which shows the multi-channel feature maps, the number of channels, and their sizes.
Fig. 3: U-net architecture, [29]
In this type of network, the encoder and decoder stages are applied repeatedly and are followed by a softmax layer. ReLU activation and 2x2 max-pooling2D are used two times in this network. The U-Net encoder downsampling process results in low-resolution feature maps, which are restored to full-resolution feature maps by the U-Net decoder upsampling process.
Ten convolutional-2D layers with 3x3 filters and four maxPooling2D layers with 2x2 windows are included in the U-Net encoder. ReLU (Rectified Linear Unit) activations are also present: a ReLU layer applies the function f(x) = max(0, x) element-by-element to each convolutional layer's output. By downsampling the input by a factor of two, the max-pooling process provides translation invariance to small spatial changes. Eight convolutional 2D layers with 3x3 filters, each followed by a ReLU, four convolutional 2D-transpose layers with 3x3 filters, and one convolutional 2D layer with a 1x1 filter make up U-Net's decoder. For pixel-wise categorization, the decoder ends with a softmax layer.
In U-Net, the upsampling is provided by a combination of convolutional 2D-transpose layers and convolutional 2D layers. The appropriately cropped feature map from the encoder is concatenated with the upsampled feature map. The last 1x1 convolutional-2D layer maps each 32-component feature vector to the required number of classes.
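To make this encoder-decoder structure concrete, the following reduced PyTorch sketch mirrors the main ingredients described above (3x3 convolutions with ReLU, 2x2 max pooling, transpose-convolution upsampling, skip concatenations from the encoder, and a final 1x1 convolution). The depth, channel counts, and framework choice are illustrative assumptions and do not reproduce the paper's exact configuration.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions, each followed by ReLU, as in the encoder/decoder blocks.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_channels, 64)
        self.enc2 = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)                            # 2x2 max pooling (downsampling)
        self.bottleneck = double_conv(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)   # learned upsampling
        self.dec2 = double_conv(256, 128)                      # 256 = 128 (skip) + 128 (upsampled)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)  # final 1x1 convolution

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection from encoder
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                                   # per-pixel class scores

logits = MiniUNet()(torch.randn(1, 1, 256, 256))               # one grayscale 256x256 slice
probs = torch.softmax(logits, dim=1)                           # pixel-wise class probabilities
```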
The trainable soft-max classifier receives a high-dimensional representation of all features provided by the decoder output, and classifies each pixel independently. For pixel-wise classification, the decoder's high-dimensional dense feature maps are sent to the softmax layer.
Softmax layer creates probabilities for every
class and classifies each pixel independently. The
predicted segmentation corresponds to the class that
has the highest probability at each pixel as shown
below:

$$p_k(x) = \frac{\exp(a_k(x))}{\sum_{k'=1}^{K} \exp(a_{k'}(x))}$$

where $a_k(x)$ denotes the activation in feature channel $k$ at pixel position $x$ inside the image, and $K$ is the number of classes.
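Training such a network with the pixel-wise softmax amounts to minimizing a per-pixel cross-entropy loss. The sketch below reuses the MiniUNet sketch from the earlier block and a synthetic batch; the optimizer, learning rate, and batch contents are assumptions rather than the paper's actual training setup.

```python
import torch
import torch.nn as nn

model = MiniUNet(in_channels=1, num_classes=2)        # sketch model defined in the earlier block
criterion = nn.CrossEntropyLoss()                     # applies log-softmax per pixel internally
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(2, 1, 256, 256)                  # two grayscale CT slices (synthetic)
masks = torch.randint(0, 2, (2, 256, 256))            # per-pixel labels: 0 = background, 1 = infected

logits = model(images)                                # (N, 2, H, W) per-pixel class scores
loss = criterion(logits, masks)                       # pixel-wise cross-entropy
optimizer.zero_grad()
loss.backward()
optimizer.step()
```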
After identifying the infected and the non-infected cells in the image, a simple calculation is carried out to recognize the severity degree of the lung pneumonia infection, where the severity is classified into three levels: mild, moderate, and severe.
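A minimal sketch of such a calculation is shown below: it grades severity from the fraction of lung pixels predicted as infected. The lung mask and the two cut-off ratios (5% and 25%) are illustrative assumptions; the paper does not state the exact thresholds used.

```python
import numpy as np

def severity_from_mask(infected_mask: np.ndarray, lung_mask: np.ndarray) -> str:
    """Grade severity from the fraction of lung pixels predicted as infected."""
    lung_pixels = max(int(np.count_nonzero(lung_mask)), 1)
    infected_pixels = int(np.count_nonzero(infected_mask.astype(bool) & lung_mask.astype(bool)))
    ratio = infected_pixels / lung_pixels
    if ratio < 0.05:        # illustrative cut-off, not from the paper
        return "mild"
    if ratio < 0.25:        # illustrative cut-off, not from the paper
        return "moderate"
    return "severe"
```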
4 Results and Discussion
In order to evaluate the proposed segmentation and severity recognition model, several metrics have been measured, such as accuracy, sensitivity, specificity, and the Dice coefficient (F1). In addition, the most important metric for segmentation, the Jaccard Index or Mean Intersection-Over-Union (mIOU), is reported.
The following equations (1-6) present the
metrics formulas that have been used for model
evaluation:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (1)$$

$$\text{Sensitivity} = \frac{TP}{TP + FN} \quad (2)$$

$$\text{Specificity} = \frac{TN}{TN + FP} \quad (3)$$

$$\text{Precision} = \frac{TP}{TP + FP} \quad (4)$$

$$F1 = \frac{2\,TP}{2\,TP + FP + FN} \quad (5)$$

$$\text{IOU} = \frac{TP}{TP + FP + FN} \quad (6)$$
Where TP: True Positive, TN: True Negative, FP:
False Positive, FN: False Negative. Table 1
summarizes the evaluation metrics of the proposed
model.
Table 1. Evaluation metrics of the proposed model: Accuracy, Sensitivity, Specificity, Precision, F1, and IOU.
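For reference, the metrics in Eqs. (1)-(6) can be computed from a pair of binary masks as in the following sketch; this is an illustrative implementation, not the author's evaluation code.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Compute Eqs. (1)-(6) from binary prediction and ground-truth masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
        "f1_dice":     2 * tp / (2 * tp + fp + fn),
        "iou_jaccard": tp / (tp + fp + fn),
    }
```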
As shown in Table 1, the proposed segmentation shows high performance in detecting the infected cells and recognizing the level of the COVID-19 infection. The model proposed in [30] reports U-Net performance of 0.678, 0.836, 0.265, and 0.308 for sensitivity, specificity, precision, and IOU, respectively. The higher results of the proposed method may be related to the preprocessing steps applied before training, including cleaning the images, using high-quality images, and tuning the network parameters.
Fig. 4: Visual illustration of the proposed
segmentation model (a): CT Image (b): Ground
Truth (c): Segmentation and recognition result
Figure 4 shows some cases of lung infection recognition, including the ground truth, which was annotated by an expert clinical biologist.
Figure 4 demonstrates a strong match between the ground truth and the proposed method's results for infected lung tissue segmentation and recognition in COVID-19 CT-scan images, testifying to the deep learning model's promising performance in infected cell detection and segmentation.
As shown in the lung images, there is a high similarity between the infected cells and the blood vessels; however, the model succeeded in distinguishing them using a robust deep learning network. Figure 4 also depicts two cases where the model failed to highlight the infected cells exactly as in the
ground truth; this did not affect the severity degree of the diagnosis.
Because the blood vessels and the infection regions have comparable intensities, it is more difficult to accurately identify the infected cells, which makes the mis-segmentation more noticeable in mild cases of COVID-19 patients.
Recently, an AI system was developed to classify chest X-ray images as infected or normal, aiming to detect and identify the COVID-19 virus using machine learning, [32]. The present work, in contrast, concentrates on infected images only.
5 Conclusions
Segmentation is necessary for an accurate diagnosis
and tracking of pneumonia lesions caused by
COVID-19 in CT images. Although deep learning
holds tremendous potential for automating this
procedure, a substantial quantity of high-quality
annotations is hard to come across. To overcome
this challenge, a novel semantic segmentation and
severity recognition framework is proposed to detect
the severity level of COVID-19 after diagnosis of
the infection. Overall, semantic segmentation is a
powerful technique in computer vision that enables
a detailed understanding of images at the pixel level,
with numerous applications across different
domains.
Adequate segmentation of the lungs is essential
for determining the infected tissues and severity of
COVID-19. Deep learning provides several types of
networks that can be trained using labeled images in
which the model detects the infected tissues to
recognize the severity degree based on the infected
area.
The proposed framework revealed encouraging findings in the segmentation of infected lung tissue in COVID-19 images. Nonetheless, there are some challenges related to this research that need to be addressed in further studies. One of these limitations is the nature of CT-scan imaging, which is carried out several times at several angles as slices; in some slices the image is recognized as mild, while in other slices of the same patient it is recognized as moderate or severe.
Moreover, to develop this work for use in clinical practice, the model should be integrated with another model that identifies the image as infected or normal before the proposed criteria are applied to distinguish the level of the infection.
Acknowledgments:
Sincere gratitude to Dr. Haitham Azzam, the head of the radiology department at Issra Hospital in Jordan, for his help in lung CT-scan image labeling and in recognizing the severity level of the COVID-19 infection.
References:
[1] Lancet, T., 2020. Emerging understandings of
2019-nCoV. Lancet (London,
England), 395(10221), p.311,
DOI: 10.1016/S0140-6736(20)30186-0.
[2] Xu, J., Zhao, S., Teng, T., Abdalla, A.E., Zhu,
W., Xie, L., Wang, Y. and Guo, X., 2020.
Systematic comparison of two animal-to-
human transmitted human coronaviruses:
SARS-CoV-2 and SARS-CoV, Viruses, 12
(2), p.244,
DOI: https://doi.org/10.3390/v12020244.
[3] Xiong, J., Lipsitz, O., Nasri, F., Lui, L.M.,
Gill, H., Phan, L., Chen-Li, D., Iacobucci, M.,
Ho, R., Majeed, A. and McIntyre, R.S., 2020.
Impact of COVID-19 pandemic on mental
health in the general population: A systematic
review. Journal of Affective Disorders, 277,
pp.55-64,
https://doi.org/10.1016/j.jad.2020.08.001.
[4] Xie, X., Zhong, Z., Zhao, W., Zheng, C.,
Wang, F. and Liu, J., 2020. Chest CT for
typical coronavirus disease 2019 (COVID-19)
pneumonia: relationship to negative RT-PCR
testing, Radiology, 296(2), pp. E41-E45,
https://doi.org/10.1148/radiol.2020200343.
[5] Phelan, A.L., Katz, R. and Gostin, L.O., 2020.
The novel coronavirus originating in Wuhan,
China: challenges for global health
governance. Jama, 323(8), pp.709-710,
doi:10.1001/jama.2020.1097.
[6] Vaishya, R., Javaid, M., Khan, I.H. and
Haleem, A., 2020. Artificial Intelligence (AI)
applications for COVID-19
pandemic. Diabetes & Metabolic Syndrome:
Clinical Research & Reviews, 14(4), pp.337-
339,
https://doi.org/10.1016/j.dsx.2020.04.012.
[7] Bhandary, A., Prabhu, G.A., Rajinikanth, V.,
Thanaraj, K.P., Satapathy, S.C., Robbins,
D.E., Shasky, C., Zhang, Y.D., Tavares,
J.M.R. and Raja, N.S.M., 2020. Deep-learning
framework to detect lung abnormality–A
study with chest X-Ray and lung CT scan
images. Pattern Recognition Letters, 129,
pp.271-278,
https://doi.org/10.1016/j.patrec.2019.11.013.
Gu, Y., Kumar, V., Hall, L.O., Goldgof, D.B., Li,
C.Y., Korn, R., Bendtsen, C., Velazquez,
E.R., Dekker, A., Aerts, H. and Lambin, P.,
2013. Automated delineation of lung tumors
from CT images using a single click ensemble
segmentation approach. Pattern
Recognition, 46(3), pp.692-702,
https://doi.org/10.1016/j.patcog.2012.10.005.
[8] Hofmanninger, J., Prayer, F., Pan, J., Röhrich,
S., Prosch, H. and Langs, G., 2020. Automatic
lung segmentation in routine imaging is
primarily a data diversity problem, not a
methodology problem. European Radiology
Experimental, 4(1), pp.1-13,
https://doi.org/10.1186/s41747-020-00173-2.
[9] Arabi, H. and Zaidi, H., 2017. Comparison of
atlas-based techniques for whole-body bone
segmentation. Medical Image Analysis, 36,
pp.98-112,
https://doi.org/10.1016/j.media.2016.11.003.
[10] Arabi, H. and Zaidi, H., 2016. Whole-body
bone segmentation from MRI for PET/MRI
attenuation correction using shape-based
averaging. Medical Physics, 43(11), pp.5848-
5861, https://doi.org/10.1118/1.4963809.
[11] Alsaaidah, B., Al-Hadidi, M.D.R., Al-Nsour,
H., Masadeh, R. and AlZubi, N., 2022.
Comprehensive survey of machine learning
systems for COVID-19 detection. Journal of
Imaging, 8(10), p.267,
https://doi.org/10.3390/jimaging8100267.
[12] Zheng, C., Deng, X., Fu, Q., Zhou, Q., Feng,
J., Ma, H., Liu, W. and Wang, X., 2020. Deep
learning-based detection for COVID-19 from
chest CT using weak label. MedRxiv,
pp.2020-03,
https://doi.org/10.1101/2020.03.12.20027185.
[13] Cao, Y., Xu, Z., Feng, J., Jin, C., Han, X.,
Wu, H. and Shi, H., 2020. Longitudinal
assessment of COVID-19 using a deep
learning–based quantitative CT pipeline:
illustration of two cases. Radiology:
Cardiothoracic Imaging, 2(2), p.e200082,
https://doi.org/10.1148/ryct.2020200082.
[14] Jin, S., Wang, B., Xu, H., Luo, C., Wei, L.,
Zhao, W., Hou, X., Ma, W., Xu, Z., Zheng, Z.
and Sun, W., 2020. AI-assisted CT imaging
analysis for COVID-19 screening: Building
and deploying a medical AI system in four
weeks. MedRxiv, pp.2020-
03, https://doi.org/10.1101/2020.03.19.20039
354.
[15] Ronneberger, O., Fischer, P. and Brox, T.,
2015. U-net: Convolutional networks for
biomedical image segmentation. In Medical
Image Computing and Computer-Assisted
Intervention–MICCAI 2015: 18th
International Conference, Munich, Germany,
October 5-9, 2015, Proceedings, Part III
18(pp. 234-241). Springer International
Publishing. DOI: 10.1007/978-3-319-24574-
4_28.
[16] Wang, G., Liu, X., Li, C., Xu, Z., Ruan, J.,
Zhu, H., Meng, T., Li, K., Huang, N. and
Zhang, S., 2020. A noise-robust framework
for automatic segmentation of COVID-19
pneumonia lesions from CT images. IEEE
Transactions on Medical Imaging, 39(8),
pp.2653-2663,
DOI: 10.1109/TMI.2020.3000314.
[17] Gerard, S.E., Herrmann, J., Xin, Y., Martin,
K.T., Rezoagli, E., Ippolito, D., Bellani, G.,
Cereda, M., Guo, J., Hoffman, E.A. and
Kaczka, D.W., 2021. CT image segmentation
for inflamed and fibrotic lungs using a multi-
resolution convolutional neural
network. Scientific Reports, 11(1), p.1455.
[18] Müller, D., Rey, I.S. and Kramer, F., 2020.
Automated chest ct image segmentation of
covid-19 lung infection based on 3d u-
net. Informatics in Medicine Unlocked, (25)
2020,
https://doi.org/10.1016/j.imu.2021.100681.
[19] Trivizakis, E., Tsiknakis, N., Vassalou, E.E.,
Papadakis, G.Z., Spandidos, D.A.,
Sarigiannis, D., Tsatsakis, A., Papanikolaou,
N., Karantanas, A.H. and Marias, K., 2020.
Advancing COVID-19 differentiation with a
robust preprocessing and integration of
multi-institutional open-repository computer
tomography datasets for deep learning
analysis. Experimental and Therapeutic
Medicine, 20(5),
https://doi.org/10.3892/etm.2020.9210.
[20] Saood, A. and Hatem, I., 2021. COVID-19
lung CT image segmentation using deep
learning methods: U-Net versus SegNet. BMC
Medical Imaging, 21(1), pp.1-10.
[21] Oulefki, A., Agaian, S., Trongtirakul, T. and
Laouar, A.K., 2021. Automatic COVID-19
lung infected region segmentation and
measurement using CT-scans images. Pattern
Recognition, 114, p.107747,
https://doi.org/10.1016/j.patcog.2020.107747.
[22] Fan, D.P., Zhou, T., Ji, G.P., Zhou, Y., Chen,
G., Fu, H., Shen, J. and Shao, L., 2020. Inf-
net: Automatic covid-19 lung infection
segmentation from ct images. IEEE
Transactions on Medical Imaging, 39(8),
pp.2626-2637,
DOI: 10.1109/TMI.2020.2996645.
[23] Gao, K., Su, J., Jiang, Z., Zeng, L.L., Feng,
Z., Shen, H., Rong, P., Xu, X., Qin, J., Yang,
Y. and Wang, W., 2021. Dual-branch
combination network (DCN): Towards
accurate diagnosis and lesion segmentation of
COVID-19 using CT images. Medical Image
Analysis, 67, p.101836,
https://doi.org/10.1016/j.media.2020.101836.
[24] Yan, Q., Wang, B., Gong, D., Luo, C., Zhao,
W., Shen, J., Ai, J., Shi, Q., Zhang, Y., Jin, S.
and Zhang, L., 2021. COVID-19 chest CT
image segmentation network by multi-scale
fusion and enhancement operations. IEEE
Transactions on Big Data, 7(1), pp.13-24,
DOI: 10.1109/TBDATA.2021.3056564.
[25] Paluru, N., Dayal, A., Jenssen, H.B., Sakinis,
T., Cenkeramaddi, L.R., Prakash, J. and
Yalavarthy, P.K., 2021. Anam-Net:
Anamorphic depth embedding-based
lightweight CNN for segmentation of
anomalies in COVID-19 chest CT
images. IEEE Transactions on Neural
Networks and Learning Systems, 32(3),
pp.932-946,
DOI: 10.1109/TNNLS.2021.3054746.
[26] Gholamiankhah, F., Mostafapour, S.,
Goushbolagh, N.A., Shojaerazavi, S., Layegh,
P., Tabatabaei, S.M. and Arabi, H., 2021.
Automated lung segmentation from CT images
of normal and COVID-19 pneumonia
patients. arXiv preprint arXiv:2104.02042,
https://doi.org/10.48550/arXiv.2104.02042.
[27] Liu, J., Dong, B., Wang, S., Cui, H., Fan,
D.P., Ma, J. and Chen, G., 2021. COVID-19
lung infection segmentation with a novel two-
stage cross-domain transfer learning
framework. Medical Image Analysis, 74,
p.102205,
https://doi.org/10.1016/j.media.2021.102205.
[28] Karthik, R., Menaka, R., Hariharan, M. and
Kathiresan, G.S., 2022. Ai for COVID-19
detection from radiographs: Incisive analysis
of state of the art techniques, key challenges
and future directions. IRBM, 43(5), pp.486-
510,
https://doi.org/10.1016/j.irbm.2021.07.002.
[29] Zhao, J., Zhang, Y., He, X. and Xie, P., 2020.
COVID-CT-Dataset: a CT scan dataset about COVID-19. arXiv preprint arXiv:2003.13865, https://arxiv.org/abs/2003.13865v1.
[30] Ronneberger, O., Fischer, P. and Brox, T.,
2015. U-net: Convolutional networks for
biomedical image segmentation. In Medical
Image Computing and Computer-Assisted
Intervention-MICCAI 2015: 18th
International Conference, Munich, Germany,
October 5-9, 2015, Proceedings, Part III
18(pp. 234-241). Springer International
Publishing, https://doi.org/10.1007/978-3-
319-24574-4_28.
[31] Alsaaidah, B., Mustafa, Z., Al-Hadidi, M.D.R.
and Alharbi, L.A., 2023. Automated
Identification and Categorization of COVID-
19 via X-Ray Imagery Leveraging ROI
Segmentation and CART Model. Traitement
du Signal, 40(5), pp.2259-2265,
https://doi.org/10.18280/ts.400543.
Contribution of Individual Authors to the
Creation of a Scientific Article (Ghostwriting
Policy)
Bayan Alsaaidah contributed the presented research,
at all stages from the formulation of the problem to
the final findings and solution.
Sources of Funding for Research Presented in a
Scientific Article or Scientific Article Itself
No funding was received for conducting this study.
Conflict of Interest
The authors have no conflicts of interest to declare.
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the
Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en
_US