Journal of information and communication convergence engineering 2022; 20(3): 219-225
Published online September 30, 2022
https://doi.org/10.56977/jicce.2022.20.3.219
© Korea Institute of Information and Communication Engineering
Thanh-Nghi Do1,4*, Van-Thanh Le2, and Thi-Huong Doan3
1Department of Computer Networks, Can Tho University, Can Tho 94000, Viet Nam
2Tam Anh Hospital, Ha Noi 100000, Viet Nam
3Healthcare Center, National Assembly, Ha Noi 100000, Viet Nam
4UMI UMMISCO 209, IRD/UPMC, Paris 75000, France
Correspondence to: *Thanh-Nghi Do (E-mail: dtnghi@ctu.edu.vn, Tel: +84-2923-734-720), Department of Computer Networks, Can Tho University, Can Tho 94000, Viet Nam
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
In this study, we propose training a support vector machine (SVM) model on top of deep networks for detecting Covid-19 from chest X-ray images. We started by gathering a real chest X-ray image dataset that includes positive Covid-19 cases, normal cases, and other lung diseases not caused by Covid-19. Instead of training deep networks from scratch, we fine-tuned recent pre-trained deep network models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, to classify chest X-ray images into one of three classes (Covid-19, normal, and other lung). We propose training an SVM model on top of the deep networks to perform a nonlinear combination of the deep network outputs, improving classification over any single deep network. The empirical results on the real chest X-ray image dataset show that the deep network models, with the exception of ResNet50 (82.44%), provide an accuracy of at least 92% on the test set. The proposed SVM on top of the deep networks achieved the highest accuracy of 96.16%.
Keywords: Covid-19, X-ray image, Deep learning, Support vector machines
The first case of coronavirus disease (Covid-19) was identified in Wuhan, China, at the end of December 2019 [1-4]. Covid-19 has rapidly spread to 225 countries and territories and has become a worldwide epidemic, with more than six million deaths (6,184,360) and 494,287,348 cases of infection as of April 6, 2022 (World Health Organization, https://www.who.int). Currently, real-time polymerase chain reaction (RT-PCR) is an effective method for coronavirus diagnosis. However, the major disadvantages of RT-PCR [5-9] are that it is time-consuming and can produce false-negative results when confirming Covid-19 patients. In contrast, diagnostic imaging techniques such as chest radiography (CXR) or computed tomography (CT) can play a crucial role in rapidly identifying positive Covid-19 patients.
Deep learning techniques have been widely used in automatic medical image analysis and have shown promising results [10-14]. Instead of using handcrafted features (SIFT [15], HoG [16], GIST [17]) and training a support vector machine (SVM) [18], as in the classical framework [19-22] for image classification, the deep learning approach [23] simultaneously trains a visual feature extractor and a softmax classifier in a unified framework.
Researchers have proposed training deep neural networks to detect diseases using chest X-ray images. Kesim et al. [24] used convolutional neural networks (CNNs) to classify chest X-ray images into one of 12 classes. In a previous study [25], tuberculosis images were recognized using deep and transfer learning. Chouhan et al. [26] proposed a combination of five deep-learning models using transfer learning for the detection of pneumonia from X-ray images. Rajpurkar et al. [27] developed a CNN architecture called CheXNeXt for classifying X-ray images into 14 different pathologies. A modified AlexNet was proposed by Bhandary et al. [28] to recognize lung abnormalities in X-ray images. Wozniak et al. [29] combined local variance analysis and probabilistic neural networks for the classification of lung carcinomas. The studies in [30,31] proposed fine-tuning Inception v3 [32], Xception [33], and VGG16 [34] to identify pneumonia images.
Recently, deep learning approaches have been applied to recognize Covid-19 X-ray images. Capsule networks, namely COVID-CAPS, were proposed by Afshar et al. [35] for detecting Covid-19 images. COVIDX-Net [36] used deep network models, such as VGG19 [34], DenseNet121 [37], Inception v3 [32], ResNet v2 [38], Inception-ResNet v2, Xception [33], and MobileNet v2 [39], to classify Covid-19 X-ray images. In [40], AlexNet [41] and a modified AlexNet were applied to detect Covid-19 from X-ray and CT images. COVID-Net was designed by Wang et al. [42] for identifying Covid-19 from X-ray images. COVIDiagnosis-Net [43] combines SqueezeNet [44] and Bayesian optimization to classify Covid-19 images. Other studies [45,46] proposed the use of ResNet [38], Inception v3 [32], Inception-ResNet v2, MobileNet v2 [39], and SqueezeNet [44] to recognize Covid-19 X-ray images. Akkus et al. [47] evaluated the effectiveness of deep learning architectures for the automatic diagnosis of Covid-19. Enireddy et al. [48] trained a linear SVM classifier on deep features extracted by ResNet50; in other words, the linear SVM substitutes for the softmax classifier of the deep network. In contrast, this study proposes training an SVM model on top of the whole deep networks.
We are interested in training an SVM [18] model on top of deep networks to detect Covid-19 from chest X-ray images. For this purpose, we gathered a real chest X-ray image dataset from public data sources [49-53] tagged with one of three classes (positive Covid-19, normal cases, and lung diseases not caused by Covid-19). Subsequently, we propose fine-tuning different pre-trained deep network models, such as DenseNet121 [37], MobileNet v2 [39], Inception v3 [32], Xception [33], ResNet50 [38], VGG16, and VGG19 [34], to classify chest X-ray images. Then, we propose training an SVM [18] on top of these deep network models to improve chest X-ray image classification. The numerical test results show that the deep network models achieve an accuracy of at least 92% on the test set of real chest X-ray images (except ResNet50 with 82.44%). The proposed SVM on top of the deep networks yielded the highest accuracy of 96.16%.
The remainder of this paper is organized as follows. Section II illustrates the proposed SVM on top of the deep network models for Covid-19 detection from chest X-ray images. Section III shows the experimental results. The conclusions and future work are presented in Section IV.
We started by gathering a real chest X-ray image dataset from public data sources [49-53]. The X-ray images were tagged with one of three classes (positive Covid-19 infected patients, normal cases, and lung diseases not caused by Covid-19, such as lung opacity and viral pneumonia). Examples of chest X-ray images from the three classes are shown in Fig. 1.
We obtained a dataset with 19,282 chest X-ray images in the PNG format, as shown in Table 1. The full dataset was randomly split into a training set (15,427 images) and a test set (3,855 images).
Table 1. Description of the chest X-ray image dataset

| Dataset | Covid-19 | Normal | Other lung | Total |
|---|---|---|---|---|
| Full dataset | 6,722 | 6,719 | 5,841 | 19,282 |
| Training set | 5,366 | 5,301 | 4,760 | 15,427 |
| Test set | 1,356 | 1,418 | 1,081 | 3,855 |
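The random split of the full dataset into training and test sets can be reproduced along the following lines. This is a minimal sketch assuming the images are stored in one folder per class; the folder names, file layout, and the roughly 80/20 ratio are illustrative assumptions rather than details stated in the paper.

```python
# Minimal sketch of the random train/test split of the chest X-ray dataset.
# Folder names and the 80/20 ratio are assumptions for illustration; the paper
# reports 15,427 training images and 3,855 test images.
from pathlib import Path
from sklearn.model_selection import train_test_split

CLASSES = ["covid19", "normal", "other_lung"]   # hypothetical folder names

paths, labels = [], []
for label, name in enumerate(CLASSES):
    for p in sorted(Path("chest_xray_png", name).glob("*.png")):  # hypothetical root folder
        paths.append(str(p))
        labels.append(label)

# Random split of the full dataset into training and test sets.
train_paths, test_paths, y_train, y_test = train_test_split(
    paths, labels, test_size=0.2, random_state=42
)
print(len(train_paths), "training images,", len(test_paths), "test images")
```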
In recent years, deep networks have been widely used for image classification owing to their high accuracy. Recent deep networks, such as DenseNet121 [37], MobileNet v2 [39], Inception v3 [32], Xception [33], ResNet50 [38], and VGG [34], have achieved high classification accuracy on the ImageNet dataset [54]. Therefore, we studied these deep networks for the classification of chest X-ray images.
Instead of training deep networks from scratch, we propose fine-tuning pre-trained deep network models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, to recognize Covid-19 from chest X-ray images. This approach, referred to as transfer learning [55,56], re-uses knowledge from deep network models pre-trained on the ImageNet dataset to perform a similar task, namely chest X-ray image classification on a new dataset. We used the training set to fine-tune the weights of the last layers while freezing the weights of the first layers in the deep networks. We identified the best fine-tuned configurations for classifying chest X-ray images, as listed in Table 2.
Table 2. Best fine-tuned configurations for the pre-trained deep networks

| No | Deep network | Number of fine-tuned last layers |
|---|---|---|
| 1 | DenseNet121 | 20 |
| 2 | MobileNet v2 | 8 |
| 3 | Inception v3 | 143 |
| 4 | Xception | 39 |
| 5 | ResNet50 | 14 |
| 6 | VGG16 | 15 |
| 7 | VGG19 | 17 |
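As a minimal sketch of this fine-tuning strategy, the snippet below loads a pre-trained DenseNet121, freezes all but its last 20 layers (the value reported in Table 2), and attaches a new three-class softmax head. The input size, pooling layer, optimizer, and loss function are assumptions for illustration; the other networks in Table 2 follow the same pattern.

```python
# Minimal sketch of fine-tuning a pre-trained network (DenseNet121 as an example).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3           # Covid-19, normal, other lung
NUM_TRAINABLE_LAST = 20   # number of fine-tuned last layers for DenseNet121 (Table 2)

# Load the ImageNet-pre-trained backbone without its original classification head.
base = tf.keras.applications.DenseNet121(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)  # input size assumed
)

# Freeze the first layers and fine-tune only the last NUM_TRAINABLE_LAST layers.
for layer in base.layers[:-NUM_TRAINABLE_LAST]:
    layer.trainable = False
for layer in base.layers[-NUM_TRAINABLE_LAST:]:
    layer.trainable = True

# New three-class softmax head for chest X-ray classification.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```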
Any single deep network model incurs errors, in terms of bias and variance, in image classification. Therefore, our aim was to combine the strengths of the deep networks to improve the classification of chest X-ray images. We propose training a nonlinear support vector machine (SVM [18]) model on top of the seven deep network models (DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19), thereby complementing these single models. The proposed SVM on top of deep networks (illustrated in Fig. 2) performs a nonlinear combination of the deep network outputs.
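A minimal sketch of this combination is given below: each image is passed through the seven fine-tuned networks, their softmax outputs are concatenated into one feature vector, and a nonlinear SVM is trained on these vectors. The variable names and the data preparation are assumptions for illustration.

```python
# Minimal sketch of the proposed SVM-on-Top combination of deep network outputs.
# `fine_tuned_models` (the seven fine-tuned Keras models) and the preprocessed
# image arrays are assumed to be available.
import numpy as np
from sklearn.svm import SVC

def network_outputs(fine_tuned_models, images):
    """Concatenate the softmax outputs of all deep networks for each image."""
    outputs = [m.predict(images, verbose=0) for m in fine_tuned_models]  # each (n, 3)
    return np.concatenate(outputs, axis=1)                               # shape (n, 7 * 3)

# Hypothetical usage with training/test images and labels prepared beforehand:
# X_meta_train = network_outputs(fine_tuned_models, x_train_images)
# svm_on_top = SVC(kernel="rbf").fit(X_meta_train, y_train)
# y_pred = svm_on_top.predict(network_outputs(fine_tuned_models, x_test_images))
```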
We are interested in a numerical test to assess the classification performance of the proposed SVM on top of deep networks (denoted by SVM-on-Top) in the classification of chest X-ray images.
First, we implemented the training programs in Python using libraries such as Keras [57] with the TensorFlow backend [58], Scikit-learn [59], and OpenCV [60].
All experiments were run on a Linux Fedora 34 machine with an Intel Core i7-4790 CPU (3.6 GHz, 4 cores), 16 GB of main memory, and a Gigabyte GeForce RTX 2080 Ti GPU (11 GB of GDDR6 memory, 4,352 CUDA cores).
As described in Section II.A, the full X-ray image dataset was randomly divided into training (15,427 images) and test sets (3,855 images).
We used the training set to fine-tune the pre-trained DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19 with the best configurations listed in Table 2, a learning rate of 0.001, and 200 training epochs. The training set was also used to determine the best hyper-parameters for the SVM model on top of the deep networks (γ = 0.2 for the nonlinear RBF kernel function and cost C = 10^5 for the trade-off between the margin size and the errors).
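One way the SVM hyper-parameters could be selected on the training set is sketched below. The candidate grids and the use of cross-validated grid search are assumptions for illustration; the paper reports only the selected values (γ = 0.2 and C = 10^5).

```python
# Minimal sketch of selecting the RBF-SVM hyper-parameters on the training set.
# The candidate grids and cross-validated search are illustrative assumptions.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "gamma": [0.05, 0.1, 0.2, 0.5, 1.0],
    "C": [1e3, 1e4, 1e5, 1e6],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3, n_jobs=-1)

# X_meta_train holds the concatenated deep-network outputs for the training
# images (see the earlier sketch), y_train the corresponding class labels.
# search.fit(X_meta_train, y_train)
# print(search.best_params_)   # e.g., {"C": 100000.0, "gamma": 0.2}
```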
On the training set, DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, VGG19, and SVM-on-Top achieved overall accuracies of 96.88, 95.31, 96.88, 96.88, 87.50, 96.88, 96.88, and 100%, respectively. The resulting models were then used to report the classification results on the test set.
The classification results of the deep network models and SVM on top of the deep networks are presented in Table 3 and Fig. 3. In Table 3, the highest accuracy is bold-faced, and the second highest is in italics.
Table 3. Classification accuracy on the test set (%)

| No | Model | Covid-19 | Normal | Other lung | Overall |
|---|---|---|---|---|---|
| 1 | DenseNet121 | 96.97 | 91.05 | 94.10 | 94.03 |
| 2 | MobileNet v2 | 95.70 | 88.71 | 92.64 | 92.32 |
| 3 | Inception v3 | 98.97 | 91.61 | *96.48* | *95.56* |
| 4 | Xception | *99.04* | **92.45** | 93.23 | 94.99 |
| 5 | ResNet50 | 91.67 | 82.34 | 74.46 | 82.44 |
| 6 | VGG16 | 98.53 | *92.17* | 93.39 | 94.92 |
| 7 | VGG19 | 97.39 | 90.24 | 96.07 | 94.37 |
| 8 | SVM-on-Top | **99.93** | 91.10 | **96.57** | **96.16** |
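Per-class and overall accuracies of the kind reported in Table 3 can be computed from a confusion matrix. The sketch below assumes that per-class accuracy corresponds to per-class recall, which is an interpretation rather than a detail stated in the paper.

```python
# Minimal sketch of computing per-class and overall accuracy from predictions.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

CLASS_NAMES = ["Covid-19", "Normal", "Other lung"]

def report_accuracy(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred, labels=range(len(CLASS_NAMES)))
    per_class = cm.diagonal() / cm.sum(axis=1)   # correct predictions / actual cases per class
    for name, acc in zip(CLASS_NAMES, per_class):
        print(f"{name}: {100 * acc:.2f}%")
    print(f"Overall: {100 * accuracy_score(y_true, y_pred):.2f}%")

# Hypothetical usage with integer labels (0 = Covid-19, 1 = Normal, 2 = Other lung):
# report_accuracy(y_test, svm_on_top.predict(network_outputs(fine_tuned_models, x_test_images)))
report_accuracy([0, 0, 1, 2, 2], [0, 1, 1, 2, 2])
```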
The last column of Table 3 and Fig. 3(d) present the overall accuracy of the classification models. The comparison among the models shows that our proposed SVM-on-Top achieves the highest classification accuracy of 96.16%. SVM-on-Top improves on the fine-tuned ResNet50, MobileNet v2, DenseNet121, VGG19, VGG16, Xception, and Inception v3 by 13.72, 3.84, 2.13, 1.79, 1.24, 1.17, and 0.60 percentage points, respectively (e.g., 96.16% − 82.44% = 13.72% for ResNet50).
The Covid-19 classification results are shown in more detail in the third column of Table 3 and Fig. 3(a). SVM-on-Top yielded the highest accuracy of 99.93%. Xception ranked second with 99.04%, followed by Inception v3, VGG16, VGG19, DenseNet121, MobileNet v2, and ResNet50.
The classification results for healthy lungs (Normal) in the fourth column of Table 3 and Fig. 3(b) show that the highest accuracy, 92.45%, was achieved by Xception, and the second highest, 92.17%, by VGG16. ResNet50 exhibited the lowest accuracy (82.34%).
The classification results for other lung diseases (lung opacity and viral pneumonia), presented in the fifth column of Table 3 and Fig. 3(c), show that SVM-on-Top achieved the highest classification accuracy of 96.57%. Inception v3 was the second most accurate model (96.48%), followed by VGG19, DenseNet121, VGG16, Xception, MobileNet v2, and ResNet50.
The computational complexity of training the SVM in our proposed SVM-on-Top scales quadratically with the number of training data points. On this training set, learning the SVM takes 0.25 s.
We have proposed a nonlinear combination of deep network outputs, obtained by training an SVM model on top of deep networks, for detecting Covid-19 from chest X-ray images. We gathered a chest X-ray image dataset from public data sources, including positive Covid-19 cases, normal cases, and other lung diseases not caused by Covid-19. Recent pre-trained deep network models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, were fine-tuned to classify chest X-ray images. We proposed training an SVM model on top of these fine-tuned deep networks to improve chest X-ray image classification. The proposed SVM-on-Top improved classification accuracy by 13.72, 3.84, 2.13, 1.79, 1.24, 1.17, and 0.60 percentage points over the fine-tuned ResNet50, MobileNet v2, DenseNet121, VGG19, VGG16, Xception, and Inception v3, respectively.
In the near future, we intend to enlarge the chest X-ray image dataset to improve the training of deep networks. A promising research aim is to select good network models for use in combination with SVM.
Thanh-Nghi Do was born in Can Tho in 1974. He received his Ph.D. degree in Informatics from the University of Nantes in 2004. He is currently an associate professor at the College of Information Technology, Can Tho University, Vietnam. He is also an associate researcher at UMI UMMISCO 209 (IRD/UPMC), Sorbonne University (Pierre and Marie Curie University), France. His research interests include data mining with support vector machines, kernel-based methods, decision tree algorithms, ensemble-based learning, and information visualization. He has served on the program committees of international conferences and is a reviewer for journals in his fields.
Van-Thanh Le was born in Hai Duong in 1980. He received his M.D. degree from Thai Nguyen University of Medicine and Pharmacy in 2005. He is currently a specialist doctor in diagnostic imaging at Tam Anh Hospital, Hanoi, Vietnam. His research interests focus on techniques for medical image analysis.
Thi-Huong Doan was born in Lai Chau in 1981. She received her M.D. degree from Thai Nguyen University of Medicine and Pharmacy in 2005. She is currently a specialist doctor in preventive medicine at the Healthcare Center, National Assembly, Hanoi, Vietnam. Her research interests focus on techniques for medical image analysis.