
Journal of information and communication convergence engineering 2022; 20(3): 219-225

Published online September 30, 2022

https://doi.org/10.56977/jicce.2022.20.3.219

© Korea Institute of Information and Communication Engineering

SVM on Top of Deep Networks for Covid-19 Detection from Chest X-ray Images

Thanh-Nghi Do 1,4*, Van-Thanh Le 2, and Thi-Huong Doan3*

1Department of Computer Networks, Can Tho University, Can Tho 94000, Viet Nam
2Tam Anh Hospital, Ha Noi 100000, Viet Nam
3Healthcare Center, National Assembly, Ha Noi 100000, Viet Nam
4UMI UMMISCO 209, IRD/UPMC, Paris 75000, France

Correspondence to : *Thanh-Nghi Do (E-mail: dtnghi@ctu.edu.vn, Tel: +84-2923-734-720)
Department of Computer Networks, Can Tho University, Can Tho 94000, Viet Nam

Received: April 6, 2022; Revised: September 4, 2022; Accepted: September 8, 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this study, we propose training a support vector machine (SVM) model on top of deep networks for detecting Covid-19 from chest X-ray images. We started by gathering a real chest X-ray image dataset comprising positive Covid-19 cases, normal cases, and other lung diseases not caused by Covid-19. Instead of training deep networks from scratch, we fine-tuned recent pre-trained deep network models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, to classify chest X-ray images into one of three classes (Covid-19, normal, and other lung). We propose training an SVM model on top of the deep networks to perform a nonlinear combination of their outputs, improving classification over any single deep network. The empirical test results on the real chest X-ray image dataset show that the deep network models, with the exception of ResNet50 (82.44%), provide an accuracy of at least 92% on the test set. The proposed SVM on top of the deep networks achieved the highest accuracy of 96.16%.

Keywords Covid-19, X-ray image, Deep learning, Support vector machines

The first case of coronavirus disease (Covid-19) was identified in Wuhan, China, at the end of December 2019 [1-4]. Covid-19 rapidly spread to 225 countries and territories and became a worldwide pandemic, with more than six million deaths (6,184,360) and 494,287,348 cases of infection as of April 6, 2022 (World Health Organization, https://www.who.int). Currently, reverse transcription polymerase chain reaction (RT-PCR) is an effective method for coronavirus diagnosis. However, the major disadvantages of RT-PCR [5-9] are its time-consuming nature and its false-negative results when confirming Covid-19 patients. In contrast, diagnostic imaging techniques such as chest radiography (CXR) or computed tomography (CT) can play a crucial role in rapidly identifying positive Covid-19 patients.

Deep learning techniques have been widely used in automatic medical image analysis and have shown promising results [10-14]. Instead of using handcrafted features (SIFT [15], HoG [16], GIST [17]) and training a support vector machine (SVM) [18], as in the classical framework [19-22] for image classification, the deep learning approach [23] simultaneously trains a visual feature extractor and a softmax classifier in a unified framework.

Researchers have proposed training deep neural networks to detect diseases using chest X-ray images. Kesim et al. [24] used convolutional neural networks (CNNs) to classify chest X-ray images into one of 12 classes. In a previous study [25], tuberculosis images were recognized using deep and transfer learning. Chouhan et al. [26] proposed a combination of five deep-learning models using transfer learning for the detection of pneumonia from X-ray images. Rajpurkar et al. [27] developed a CNN architecture called CheXNeXt for classifying X-ray images into 14 different pathologies. A modified AlexNet was proposed by Bhandary et al. [28] to recognize lung abnormalities in X-ray images. Wozniak et al. [29] illustrated the incorporation of local variance analysis and probabilistic neural networks for the classification of lung carcinomas. Papers [30,31] proposed fine-tuning Inception v3 [32], Xception [33], and VGG16 [34] to identify pneumonia images.

Recently, deep learning approaches have been applied to recognize Covid-19 X-ray images. Capsule networks, namely COVID-CAPS, were proposed by Afshar et al. [35] for detecting Covid-19 images. COVIDX-Net [36] used deep network models, such as VGG19 [34], DenseNet121 [37], Inception v3 [32], ResNet v2 [38], Inception-ResNet v2, Xception [33], and MobileNet v2 [39], to classify Covid-19 X-ray images. In [40], AlexNet [41] and a modified AlexNet were applied for the detection of Covid-19 from X-ray and CT images. COVID-Net was designed by Wang et al. [42] for identifying Covid-19 from X-ray images. COVIDiagnosis-Net [43] combines SqueezeNet [44] and Bayesian optimization to classify Covid-19 images. Other studies [45,46] proposed the use of ResNet [38], Inception v3 [32], Inception-ResNet v2, MobileNet v2 [39], and SqueezeNet [44] to recognize Covid-19 X-ray images. Apostolopoulos and Mpesiana [47] evaluated the effectiveness of deep learning architectures for the automatic diagnosis of Covid-19. Enireddy et al. [48] trained a linear SVM classifier on deep features extracted by ResNet50; in other words, their linear SVM substitutes for the softmax layer of a single deep network. In contrast, this study proposes training an SVM model on top of whole deep networks.

We are interested in training an SVM [18] model on top of deep networks to detect Covid-19 from chest X-ray images. For this purpose, we gathered a real chest X-ray image dataset from public data sources [49-53], tagged with one of three classes (positive Covid-19, normal cases, and lung diseases not caused by Covid-19). Subsequently, we propose fine-tuning different pre-trained deep network models, such as DenseNet121 [37], MobileNet v2 [39], Inception v3 [32], Xception [33], ResNet50 [38], VGG16, and VGG19 [34], to classify chest X-ray images. Then, we propose training an SVM [18] on top of these deep network models to improve chest X-ray image classification. The numerical test results show that the deep network models achieve an accuracy of at least 92% on the test set of real chest X-ray images (except ResNet50, with 82.44%). The proposed SVM on top of the deep networks yielded the highest accuracy of 96.16%.

The remainder of this paper is organized as follows. Section II illustrates the proposed SVM on top of the deep network models for Covid-19 detection from chest X-ray images. Section III shows the experimental results. The conclusions and future work are presented in Section IV.

A. Chest X-ray Image Dataset

We started by gathering a real chest X-ray image dataset from public data sources [49-53]. The X-ray images were tagged with one of three classes (positive Covid-19 infected patients, normal cases, and lung diseases not caused by Covid-19, such as lung opacity and viral pneumonia). Examples of chest X-ray images from the three classes are shown in Fig. 1.

Fig. 1. Sample of chest x-ray images.

We obtained a dataset with 19,282 chest X-ray images in the PNG format, as shown in Table 1. The full dataset was randomly split into a training set (15,427 images) and a test set (3,855 images).

Table 1. Description of the chest X-ray image dataset

Dataset        Covid-19   Normal   Other lung   Total
Full dataset   6,722      6,719    5,841        19,282
Trainset       5,366      5,301    4,760        15,427
Testset        1,356      1,418    1,081        3,855
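The random split described above can be sketched with scikit-learn's `train_test_split`; this is our illustrative code, not the authors' (function and variable names are assumptions), and the stratification shown here is only one plausible way to keep the class proportions of Table 1 roughly balanced:

```python
# Illustrative sketch of the random train/test split (roughly 80/20,
# matching 15,427 vs. 3,855 of 19,282 images). Names are hypothetical.
from sklearn.model_selection import train_test_split

def split_dataset(paths, labels, test_ratio=0.2, seed=42):
    """Randomly split image paths into train/test sets, stratified by
    class so Covid-19 / Normal / Other-lung proportions are preserved."""
    return train_test_split(paths, labels,
                            test_size=test_ratio,
                            stratify=labels,
                            random_state=seed)

# Toy example with dummy labels: 0 = Covid-19, 1 = Normal, 2 = Other lung
paths = [f"img_{i}.png" for i in range(100)]
labels = [i % 3 for i in range(100)]
X_tr, X_te, y_tr, y_te = split_dataset(paths, labels)
```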


B. Fine-tuning Deep Networks

In recent years, deep learning networks have been widely adopted for image classification owing to their high accuracy. Recent deep networks, such as DenseNet121 [37], MobileNet v2 [39], Inception v3 [32], Xception [33], ResNet50 [38], and VGG [34], have achieved state-of-the-art classification accuracy on the ImageNet dataset [54]. Therefore, we studied these deep networks for the classification of chest X-ray images.

Instead of training deep networks from scratch, we propose fine-tuning pre-trained deep network models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, to recognize Covid-19 from chest X-ray images. This approach, referred to as transfer learning [55,56], reuses knowledge from deep network models pre-trained on the ImageNet dataset to perform a similar task, that is, chest X-ray image classification on a new dataset. We used the training set to fine-tune the weights of the last layers while freezing the weights of the first layers in the deep networks. We identified the best fine-tuned configurations, listed in Table 2, for classifying chest X-ray images.

Table 2. Best fine-tuned configurations for the pre-trained deep networks

No   Deep network    Number of fine-tuned last layers
1    DenseNet121     20
2    MobileNet v2    8
3    Inception v3    143
4    Xception        39
5    ResNet50        14
6    VGG16           15
7    VGG19           17
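The freezing scheme above (freeze the early layers, retrain the last N layers of Table 2 plus a new 3-way softmax head) can be sketched in Keras. This is a minimal illustration under our own naming, not the authors' code; DenseNet121 with N = 20 is used as the example:

```python
# Hypothetical transfer-learning sketch: ImageNet-pretrained backbone,
# only the last n_trainable_last layers unfrozen, 3-way softmax head.
from tensorflow import keras

def build_finetune_model(n_trainable_last=20, weights="imagenet",
                         input_shape=(224, 224, 3)):
    # Pretrained backbone without its ImageNet classification head.
    base = keras.applications.DenseNet121(
        include_top=False, weights=weights,
        input_shape=input_shape, pooling="avg")
    # Freeze everything, then unfreeze only the last n_trainable_last layers.
    for layer in base.layers:
        layer.trainable = False
    for layer in base.layers[-n_trainable_last:]:
        layer.trainable = True
    # New 3-way softmax head: Covid-19 / Normal / Other lung.
    outputs = keras.layers.Dense(3, activation="softmax")(base.output)
    model = keras.Model(base.input, outputs)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```

The same pattern applies to the other backbones in Table 2 by swapping the constructor and N.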


C. Training Support Vector Machines on Top of Deep Networks

Any single deep network model makes errors, in terms of both bias and variance, in image classification. Therefore, our aim was to combine the strengths of several deep networks to improve the classification of chest X-ray images. We propose training a nonlinear support vector machine (SVM [18]) model on top of seven deep network models (DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19), thereby complementing these single models. The proposed SVM on top of deep networks (illustrated in Fig. 2) performs a nonlinear combination of the deep network outputs ŷ1, ŷ2, ..., ŷ7 with a radial basis function (RBF) kernel to produce the final prediction ŷ. This improves the classification accuracy compared with any single deep network.

Fig. 2. Training SVM on top of deep networks for classifying chest x-ray images.
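The combination step can be sketched as follows, assuming each base network's softmax output is concatenated into one feature vector per image before fitting the RBF-kernel SVM (the stacking layout and all names here are our illustration, not the paper's code):

```python
# Minimal SVM-on-Top sketch: stack per-network class probabilities and
# fit a nonlinear RBF-kernel SVM on the stacked features.
import numpy as np
from sklearn.svm import SVC

def train_svm_on_top(base_outputs, labels, gamma=0.2, C=1e5):
    """base_outputs: list of (n_samples, 3) softmax arrays, one per network."""
    X = np.hstack(base_outputs)          # shape: (n_samples, 3 * n_networks)
    clf = SVC(kernel="rbf", gamma=gamma, C=C)
    return clf.fit(X, labels)

# Toy example: two "networks", 6 samples, 3 classes.
rng = np.random.default_rng(0)
probs1 = rng.dirichlet(np.ones(3), size=6)   # stand-in softmax outputs
probs2 = rng.dirichlet(np.ones(3), size=6)
y = np.array([0, 1, 2, 0, 1, 2])
model = train_svm_on_top([probs1, probs2], y)
preds = model.predict(np.hstack([probs1, probs2]))
```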

We performed a numerical test to assess the classification performance of the proposed SVM on top of deep networks (denoted SVM-on-Top) on chest X-ray images.

A. Experimental Setup

First, we implemented the training programs in Python using libraries such as Keras [57] with the TensorFlow backend [58], Scikit-learn [59], and OpenCV [60].

All experiments were run on a Linux Fedora 34 machine with an Intel(R) Core i7-4790 CPU (3.6 GHz, 4 cores), 16 GB of main memory, and a Gigabyte GeForce RTX 2080 Ti GPU (11 GB GDDR6, 4,352 CUDA cores).

As described in Section II.A, the full X-ray image dataset was randomly divided into training (15,427 images) and test sets (3,855 images).

We used the training set to fine-tune the pre-trained DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19 with the best configurations listed in Table 2, a learning rate of 0.001, and 200 epochs. The training set was also used to determine the best hyper-parameters for the SVM model on top of the deep networks (γ = 0.2 for the nonlinear RBF kernel and the positive cost constant C = 10^5 for the trade-off between margin size and errors).
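One plausible way to arrive at such SVM hyper-parameters is a cross-validated grid search over (γ, C) on the training set; the grid values and names below are our illustration, not the authors' procedure:

```python
# Hypothetical hyper-parameter selection sketch for the RBF SVM,
# using scikit-learn's cross-validated grid search.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def select_svm_hyperparams(X_train, y_train):
    grid = {"gamma": [0.05, 0.1, 0.2, 0.5],   # RBF kernel width
            "C": [1e3, 1e4, 1e5]}             # margin/error trade-off
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=3)
    search.fit(X_train, y_train)
    return search.best_params_, search.best_estimator_

# Toy usage on two well-separated synthetic clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (15, 4)), rng.normal(4, 1, (15, 4))])
y = np.array([0] * 15 + [1] * 15)
best, estimator = select_svm_hyperparams(X, y)
```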

On the training set, DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, VGG19, and SVM-on-Top achieved overall accuracies of 96.88, 95.31, 96.88, 96.88, 87.5, 96.88, 96.88, and 100%, respectively. The resulting models were then used to report the classification results on the test set.

B. Classification Results

The classification results of the deep network models and SVM on top of the deep networks are presented in Table 3 and Fig. 3. In Table 3, the highest accuracy is bold-faced, and the second highest is in italics.

Table 3. Classification accuracy on the test set (%)

No   Model          Covid-19   Normal   Other lung   Overall
1    DenseNet121    96.97      91.05    94.10        94.03
2    MobileNet v2   95.70      88.71    92.64        92.32
3    Inception v3   98.97      91.61    96.48        95.56
4    Xception       99.04      92.45    93.23        94.99
5    ResNet50       91.67      82.34    74.46        82.44
6    VGG16          98.53      92.17    93.39        94.92
7    VGG19          97.39      90.24    96.07        94.37
8    SVM-on-Top     99.93      91.10    96.57        96.16


Fig. 3. Comparison between classification results.

The last column of Table 3 and Fig. 3(d) present the overall accuracy of the classification models. The comparison shows that our proposed SVM-on-Top achieves the highest classification accuracy of 96.16%. SVM-on-Top yielded improvements of 13.72, 3.84, 2.13, 1.79, 1.24, 1.17, and 0.60 percentage points over fine-tuned ResNet50, MobileNet v2, DenseNet121, VGG19, VGG16, Xception, and Inception v3, respectively.
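The per-model gains quoted above follow directly from the overall accuracies in Table 3; this snippet simply reproduces the subtraction:

```python
# Reproduce the improvement of SVM-on-Top over each fine-tuned network
# (overall accuracies taken from Table 3).
overall = {"ResNet50": 82.44, "MobileNet v2": 92.32, "DenseNet121": 94.03,
           "VGG19": 94.37, "VGG16": 94.92, "Xception": 94.99,
           "Inception v3": 95.56}
svm_on_top = 96.16
gains = {model: round(svm_on_top - acc, 2) for model, acc in overall.items()}
```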

The Covid-19 classification is shown in more detail in the third column of Table 3 and Fig. 3(a). The SVM-on-Top yielded the highest accuracy of 99.93%. Xception ranks second with 99.04%, followed by Inception v3, VGG16, VGG19, DenseNet121, MobileNet v2, and ResNet50.

The classification results for healthy lungs (Normal) in the fourth column of Table 3 and Fig. 3(b) show that the highest accuracy, 92.45%, was achieved by Xception, and the second highest, 92.17%, by VGG16. ResNet50 exhibited the lowest accuracy (82.34%).

The classification results for other lung diseases (lung opacity and viral pneumonia), presented in the fifth column of Table 3 and Fig. 3(c), show that SVM-on-Top achieved the highest classification accuracy of 96.57%. Inception v3 was the second most accurate model (96.48%), followed by VGG19, DenseNet121, VGG16, Xception, MobileNet v2, and ResNet50.

The computational complexity of training the SVM in our proposed SVM-on-Top is quadratic in the number of training data points. On our training set, training the SVM takes 0.25 s.

We have proposed a nonlinear combination of deep network outputs, obtained by training an SVM model on top of deep networks, for detecting Covid-19 from chest X-ray images. We gathered a chest X-ray image dataset from public data sources comprising positive Covid-19, normal cases, and other lung diseases not caused by Covid-19. Recent pre-trained deep network models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, were fine-tuned to classify chest X-ray images. We proposed training an SVM model on top of these fine-tuned deep networks to improve chest X-ray image classification. The proposed SVM-on-Top improved classification accuracy by 13.72, 3.84, 2.13, 1.79, 1.24, 1.17, and 0.60 percentage points over fine-tuned ResNet50, MobileNet v2, DenseNet121, VGG19, VGG16, Xception, and Inception v3, respectively.

In the near future, we intend to enlarge the chest X-ray image dataset to improve the training of deep networks. A promising research aim is to select good network models for use in combination with SVM.

  1. C. Huang, and Y. Wang, and X. Li, and L. Ren, and J. Zhao, and Y. Hu, and L. Zhang, and G. Fan, and J. Xu, and X. Gu, and Z. Cheng, and T. Yu, and J. Xia, and Y. Wei, and W. Wu, and X. Xie, and W. Yin, and H. Li, and M. Liu, and Y. Xiao, and H. Gao, and L. Guo, and J. Xie, and G. Wang, and R. Jiang, and Z. Gao, and Q. Jin, and J. Wang, and B. Cao, Clinical features of patients infected with 2019 novel coronavirus in wuhan, china, The Lancet, vol. 395, no. 10223, pp. 497-506, Feb., 2020. DOI: 10.1016/S0140-6736(20)30183-5.
  2. Q. Li, and X. Guan, and P. Wu, and X. Wang, and L. Zhou, and Y. Tong, and R. Ren, and K. S. M. Leung, and E. H. Y. Lau, and J. Y. Wong, and X. Xing, and N. Xiang, and Y. Wu, and C. Li, and Q. Chen, and D. Li, and T. Liu, and J. Zhao, and M. Liu, and W. Tu, and C. Chen, and L. Jin, and R. Yang, and Q. Wang, and S. Zhou, and R. Wang, and H. Liu, and Y. Luo, and Y. Liu, and G. Shao, and H. Li, and Z. Tao, and Y. Yang, and Z. Deng, and B. Liu, and Z. Ma, and Y. Zhang, and G. Shi, and T. T. Y. Lam, and J. T. Wu, and G. F. Gao, and B. J. Cowling, and B. Yang, and G. M. Leung, and Z. Feng, Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia, New England Journal of Medicine, vol. 382, pp. 1199-1207, Mar., 2020. DOI: 10.1056/NEJMoa2001316.
  3. F. Wu, and S. Zhao, and B. Yu, and Y. Chen, and W. Wang, and Z. Song, and Y. Hu, and Z. Tao, and J. Tian, and Y. Pei, and M. Yuan, and Y. Zhang, and F. Dai, and Y. Liu, and Q. Wang, and J. Zheng, and L. Xu, and E. C. Holmes, and Y. Zhang, A new coronavirus associated with human respiratory disease in China, Nature, vol. 579, pp. 1-8, Feb., 2020. DOI: 10.1038/s41586-020-2008-3.
  4. N. Zhu, and D. Zhang, and W. Wang, and X. Li, and B. Yang, and J. Song, and X. Zhao, and B. Huang, and W. Shi, and R. Lu, and P. Niu, and F. Zhan, and D. Wang, and W. Xu, and G. Wu, and G. F. Gao, and D. Phil, A novel coronavirus from patients with pneumonia in China, 2019, New England Journal of Medicine, vol. 382, pp. 727-733, Feb., 2020. DOI: 10.1056/NEJMoa2001017.
  5. I. Arevalo-Rodriguez, and D. Buitrago-Garcia, and D. Simancas-Racines, and P. Zambrano-Achig, and R. D. Campo, and A. Ciapponi, and O. Sued, and L. Martines-Garcia, and A. W. Rutjes, and N. Low, and P. M. Bossuyt, and J. A. Perez-Molina, and J. Zamora, False-negative results of initial rt-pcr assays for covid-19: A systematic review, PLoS One, vol. 15, no. 12, e0242958, Dec., 2020. DOI: 10.1371/journal.pone.0242958.
  6. J. F. Chan, and S. Yuan, and K. Kok, and K. K. To, and H. Chu, and J. Yang, and F. Xing, and J. Liu, and C. C. Yip, and R. W. Poon, and H. Tsoi, and S. K. Lo, and K. Chan, and V. K. Poon, and W. Chan, and J. D. Ip, and J. Cai, and Y. C. Cheng, and H. Chen, and C. K. Hui, and K. Yuen, A familial cluster of pneumonia associated with the 2019 novel coronavirus indicating person-to-person transmission: A study of a family cluster, The Lancet, vol. 395, 2020. DOI: 10.1016/S0140-6736(20)30154-9.
  7. W. Hao, and M. Li, Clinical diagnostic value of CT imaging in COVID-19 with multiple negative RT-PCR testing, Travel Medicine and Infectious Disease, vol. 34, 101627, Mar.-Apr., 2020. DOI: 10.1016/j.tmaid.2020.101627.
  8. P. Huang, and T. Liu, and L. Huang, and H. Liu, and M. Lei, and W. Xu, and X. Hu, and J. Chen, and B. Liu, Use of chest ct in combination with negative RT-PCR assay for the 2019 novel coronavirus but high clinical suspicion, Radiology, vol. 295, no. 1, pp. 22-23, Apr., 2020. DOI: 10.1148/radiol.2020200330.
  9. X. Xie, and Z. Zhong, and W. Zhong, and W. Zhao, and C. Zheng, and F. Wang, and J. Liu, Chest CT for typical coronavirus disease 2019 (COVID-19) pneumonia: Relationship to negative RT-PCR testing, Radiology, vol. 296, no. 2, pp. E41-E45, Feb., 2020. DOI: 10.1148/radiol.2020200343.
  10. Z. Akkus, and A. Galimzianova, and A. Hoogi, and D. L. Rubin, and B. J. Erickson, Deep learning for brain MRI segmentation: State of the art and future directions, Journal of Digital Imaging, vol. 30, no. 4, pp. 449-459, Jun., 2017. DOI: 10.1007/s10278-017-9983-4.
  11. J. Ker, and L. Wang, and J. Rao, and T. Lim, Deep learning applications in medical image analysis, IEEE Access, vol. 6, pp. 9375-9389, Dec., 2017. DOI: 10.1109/ACCESS.2017.2788044.
  12. C. Liang, and Y. Liu, and M. Wu, and F. Garcia-Castro, and A. Alberich-Bayarri, and F. Wu, Identifying pulmonary nodules or masses on chest radiography using deep learning: External validation and strategies to improve clinical practice, Clinical Radiology, vol. 75, no. 1, pp. 38-45, Jan., 2020. DOI: 10.1016/j.crad.2019.08.005.
  13. G. Litjens, and T. Kooi, and B. E. Bejnordi, and A. A. A. Setio, and F. Ciompi, and M. Ghafoorian, and J. A. W. M. van der Laak, and B. van Ginneken, and C. I. Sanchez, A survey on deep learning in medical image analysis, Medical Image Analysis, vol. 42, pp. 60-88, Dec., 2017. DOI: 10.1016/j.media.2017.07.005.
  14. D. Shen and G. Wu and H. Suk, Deep learning in medical image analysis, Annual Review of Biomedical Engineering, vol. 19, pp. 221-248, Jun., 2017. DOI: 10.1146/annurev-bioeng-071516-044442.
  15. D. G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, vol. 60, pp. 91-110, Nov., 2004. DOI: 10.1023/B:VISI.0000029664.99615.94.
  16. N. Dalal, and B. Trigs, Histograms of oriented gradients for human detection, in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego: CA, USA, pp. 886-893, 2005. DOI: 10.1109/CVPR.2005.177.
  17. A. Oliva, and A. Torralba, Modeling the shape of the scene: A holistic representation of the spatial envelope, International Journal of Computer Vision, vol. 42, pp. 145-175, May., 2001. DOI: 10.1023/A:1011139631724.
  18. V. N. Vapnik, The Nature of Statistical Learning Theory, 2nd ed., Springer-Verlag, 2000.
  19. A. Bosch and A. Zisserman and X. Munoz, Scene classification via plsa, in Proceedings of the European Conference on Computer Vision, Graz, Austria, pp. 517-530, 2006. DOI: 10.1007/11744085_40.
  20. T. Do and P. Lenca and S. Lallich, Classifying many-class high-dimensional fingerprint datasets using random forest of oblique decision trees, Vietnam Journal of Computer Science, vol. 2, pp. 3-12, Jun., 2014. DOI: 10.1007/s40595-014-0024-7.
  21. L. Fei-Fei, and P. Perona, A bayesian hierarchical model for learning natural scene categories, in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego: CA, USA, pp. 524-531, 2005. DOI: 10.1109/CVPR.2005.16.
  22. J. Sivic, and A. Zisserman, Video Google: A text retrieval approach to object matching in videos, in 9th IEEE Intl Conference on Computer Vision, vol. 2, Nice, France, pp. 1470-1477, 2003. DOI: 10.1109/ICCV.2003.1238663.
  23. Y. Lecun, and L. Bottou, and Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov., 1998. DOI: 10.1109/5.726791.
  24. E. Kesim and Z. Dokur and T. Olmez, X-ray chest image classification by a small-sized convolutional neural network, in 2019 Scientific Meeting on Electrical-Electronics Biomedical Engineering and Computer Science (EBBT), Istanbul, Turkey, pp. 1-5, 2019. DOI: 10.1109/EBBT.2019.8742050.
  25. C. Liu, and Y. Cao, and M. Alcantara, and B. Liu, and M. Brunette, and J. Peinado, and W. Curioso, TX-CNN: Detecting tuberculosis in chest x-ray images using convolutional neural network, in 2017 IEEE Intl Conference on Image Processing (ICIP), Beijing, China, pp. 2314-2318, 2017. DOI: 10.1109/ICIP.2017.8296695.
  26. V. Chouhan, and S. K. Singh, and A. Khamparia, and D. Gupta, and P. Tiwari, and C. Moreira, and R. Damasevicius, and V. H. C. de Albuquerque, A novel transfer learning based approach for pneumonia detection in chest Xray images, Applied Sciences, vol. 10, no. 2, p. 559, Jan., 2020. DOI: 10.3390/app10020559.
  27. P. Rajpurkar, and J. Irvin, and R. L. Ball, and K. Zhu, and B. Yang, and H. Mehta, and T. Duan, and D. Ding, and A. Bagul, and C. P. Langlotz, and B. N. Patel, and K. W. Yeom, and K. Shpanskaya, and F. G. Blankenberg, and J. Seekins, and T. J. Amrhein, and D. A. Mong, and S. S. Halabi, and E. J. Zucker, and A. Y. Ng, and M. P. Lungren, Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists, PLOS Medicine, vol. 15, no. 11, e1002686, Nov., 2018. DOI: 10.1371/journal.pmed.1002686.
  28. A. Bhandary, and G. A. Prabhu, and V. Rajinikanth, and K. P. Thanaraj, and S. C. Satapathy, and D. E. Robbins, and C. Shasky, and Y. Zhang, and J. M. R. S. Tavares, and N. S. M. Raja, Deep-learning framework to detect lung abnormality - A study with chest X-ray and lung CT scan images, Pattern Recognition Letters, vol. 129, pp. 271-278, Jan., 2020. DOI: 10.1016/j.patrec.2019.11.013.
  29. M. Wozniak, and D. Polap, and G. Capizzi, and G. L. Sciuto, and L. Kosmider, and K. Frankiewicz, Small lung nodules detection based on local variance analysis and probabilistic neural network, Computer Methods and Programs in Biomedicine, vol. 161, pp. 173-180, 2018. DOI: 10.1016/j.cmpb.2018.04.025.
  30. E. Ayan, and H. M. Unver, Diagnosis of pneumonia from chest Xray images using deep learning, in 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT), Istanbul, Turkey, pp. 1-5, 2019. DOI: 10.1109/EBBT.2019.8741582.
  31. S. S. Yadav, and S. M. Jadhav, Deep convolutional neural network based medical image classification for disease diagnosis, Journal of Big Data, vol. 6, p. 113, Dec., 2019. DOI: 10.1186/s40537-019-0276-2.
  32. C. Szegedy, and V. Vanhoucke, and S. Ioffe, and J. Shlens, and Z. Wojna, Rethinking the inception architecture for computer vision, arXiv:1512.00567, 2015. DOI: 10.48550/arXiv.1512.00567.
  33. F. Chollet, Xception: Deep learning with depthwise separable convolutions, arXiv:1610.02357, 2016. DOI: 10.48550/arXiv.1610.02357.
  34. K. Simonyan, and A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv:1409.1556, 2014. DOI: CoRR abs/1409.1556.
  35. P. Afshar, and S. Heidarian, and F. Naderkhani, and A. Oikonomou, and K. N. Plataniotis, and A. Mohammadi, COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images, Pattern Recognition Letters, vol. 138, pp. 638-643, Oct., 2020. DOI: 10.1016/j.patrec.2020.09.010.
  36. E. E. Hemdan and M. A. Shouman and M. E. Karar, COVIDX-Net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images, arXiv:2003.11055, 2020. DOI: 10.48550/arXiv.2003.11055.
  37. G. Huang, and Z. Liu, and L. V. D. Maaten, and K. Q. Weinberger, Densely connected convolutional networks, in Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu: HI, USA, pp. 2261-2269, 2017.
  38. K. He, and X. Zhang, and S. Ren, and J. Sun, Deep residual learning for image recognition, CoRR abs/1512.03385, 2015. DOI: 10.48550/arXiv.1512.03385.
  39. M. Sandler, and A. Howard, and M. Zhu, and A. Zhmoginov, and L. Chen, MobileNetV2: Inverted residuals and linear bottlenecks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City: UT, USA, pp. 4510-4520, 2018.
  40. H. S. Maghdid, and A. T. Asaad, and K. Z. Ghafoor, and A. S. Sadiq, and M. K. Khan, Diagnosing COVID-19 pneumonia from X-ray and CT images using deep learning and transfer learning algorithms, arXiv:2004.00038, 2021. DOI: 10.48550/arXiv.2004.00038.
  41. A. Krizhevsky and I. Sutskever and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Communication of the ACM, vol. 60, no. 6, pp. 84-90, Jun., 2017. DOI: 10.1145/3065386.
  42. L. Wang and Z. Q. Lin and A. Wong, COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images, Scientific Reports, vol. 10, 19549, Nov., 2020. DOI: 10.1038/s41598-020-76550-z.
  43. F. Ucar, and D. Korkmaz, COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnosis of the coronavirus disease 2019 (COVID-19) from X-ray images, Medical Hypotheses, vol. 140, 109761, Jul., 2020. DOI: 10.1016/j.mehy.2020.109761.
  44. F. N. Iandola, and S. Han, and M. W. Moskewicz, and K. Ashraf, and W. J. Dally, and K. Keutzer, SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size, arXiv:1602.07360, 2016. DOI: 10.48550/arXiv.1602.07360.
  45. A. Narin and C. Kaya and Z. Pamuk, Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks, Pattern Analysis and Applications, vol. 24, no. 3, pp. 1207-1220, May., 2021. DOI: 10.1007/s10044-021-00984-y.
  46. M. Togacar and B. Ergen and Z. Comert, COVID-19 detection using deep learning models to exploit Social Mimic Optimization and structured chest X-ray images using fuzzy color and stacking approaches, Computers in Biology and Medicine, vol. 121, 103805, Jun., 2020. DOI: 10.1016/j.compbiomed.2020.103805.
  47. I. D. Apostolopoulos, and T. A. Mpesiana, Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks, Physical and Engineering Sciences in Medicine, vol. 43, no. 2, pp. 635-640, Apr., 2020. DOI: 10.1007/s13246-020-00865-4.
  48. V. Enireddy, and M. J. K. Kumar, and B. Donepudi, and C. Karthikeyan, Detection of COVID-19 using hybrid ResNet and SVM, in Proceedings of IOP Conference Series: Materials Science and Engineering, vol. 993, no. 1, Kancheepuram, India, 2020.
  49. J. P. Cohen, and P. Morrison, and L. Dao, and K. Roth, and T. Q. Duong, and M. Ghassemi, COVID-19 image data collection: Prospective predictions are the future, arXiv: 2006.11988, 2020. DOI: 10.48550/arXiv.2006.11988.
  50. A. Haghanifar and M. M. Majdabadi and S. Ko, COVID-19 chest X-ray image repository, figshare, May, 2021. DOI: 10.6084/m9.figshare.12580328.v3.
  51. H. B. Winther, and H. Laser, and S. Gerbel, and S. K. Maschke, and J. B. Hinrichs, and J. Vogel-Claussen, and F. K. Wacker, and M. M. Hoper, and B. C. Meyer, Dataset: Covid-19 image repository, 2020.
  52. M. de la Iglesia Baya, and J. M. Saborit, and J. A. Montell, and A. Pertusa, and A. Bustos, and M. Cazorla, and J. Galant, and X. Barber, and D. Orozco-Beltran, and F. Garcia-Garcia, and M. Caparros, and G. Gonzalez, and J. M. Salinas, BIMCV COVID-19+: A large annotated dataset of RX and CT images from COVID-19 patients, arXiv:2006.01174v3, 2021. DOI: 10.48550/arXiv.2006.01174.
  53. D. S. Kermany, and M. Goldbaum, and W. Cai, and C. C. S. Valentim, and H. Liang, and S. L Baxter, and A. McKeown, and G. Yang, and X. Wu, and F. Yan, and J. Dong, and M. K. Prasadha, and J. Pei, and M. Y. L. Ting, and J. Zhu, and C. Li, and S. Hewett, and J. Dong, and I. Ziyar, and A. Shi, and R. Zhang, and L. Zheng, and R. Hou, and W. Shi, and X. Fu, and Y. Duan, and V. A. N. Huu, and C. Wen, and E. D. Zhang, and C. L. Zhang, and O. Li, and X. Wang, and M. A. Singer, and X. Sun, and J. Xu, and A. Tafreshi, and M. A. Lewis, and H. Xia, and K. Zhang, Identifying medical diagnoses and treatable diseases by image-based deep learning, Cell, vol. 172, no. 5, pp. 1122-1131.e9, Feb., 2018. DOI: 10.1016/j.cell.2018.02.010.
  54. J. Deng, and A. C. Berg, and K. Li, and L. Fei-Fei, What does classifying more than 10,000 image categories tell us?, in Computer Vision -ECCV 2010 - 11th European Conference on Computer Vision, Heraklion, Crete, Greece, pp. 71-84, 2010. DOI: 10.1007/978-3-642-15555.
  55. A. S. Razavian, and H. Azizpour, and J. Sullivan, and S. Carlsson, CNN features off-the-shelf: An astounding baseline for recognition, in IEEE Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2014, Columbus: OH, USA, pp. 512-519, 2014.
  56. J. Yosinski, and J. Clune, and Y. Bengio, and H. Lipson, How transferable are features in deep neural networks?, in Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal: QC, Canada, pp. 3320-3328, 2014.
  57. Keras [Online]. Available: https://keras.io.
  58. M. Abadi, and A. Agarwal, and P. Barham, and E. Brevdo, and Z. Chen, and C. Citro, and G. Corrado, and A. Davis, and J. Dean, and M. Devin, and S. Ghemawat, and I. Godfellow, and A. Harp, and G. Irving, and M. Isard, and Y. Jia, and R. Jozefowicz, and L. Kaiser, and M. Kudlur, and J. Levenberg, and D. Mane, and R. Monga, and S. Moore, and D. Murray, and C. Olah, and M. Schuster, and J. Shlens, and B. Steiner, and I. Sutskever, and K. Talwar, and P. Tucker, and V. Vanhoucke, and V. Vasudevan, and F. Viegas, and O. Vinyals, and P. Warden, and M. Wattenberg, and M. Wicke, and Y. Yu, and X. Zheng, TensorFlow: Large-scale machine learning on heterogeneous distributed systems, arXiv:1603.04467v2, 2015. DOI: 10.48550/arXiv.1603.04467.
  59. F. Bedregosa, and G. Varoquaux, and A. Gramfort, and V. Michel, and B. Thirion, and O. Grisel, and M. Blondel, and P. Prettenhofer, and R. Weiss, and V. Dubourg, and J. Vanderplas, and A. Passos, and D. Cournapeau, and M. Brucher, and E. Duchesnay, Scikit-learn: Machine learning in python, Journal of Machine Learning Research, vol. 12, no. 85, pp. 2825-2830, 2011.
  60. Itseez: Open-source computer vision library [Online]. Available: https://github.com/opencv/opencv.

Thanh-Nghi Do

was born in Can Tho in 1974. He received his Ph.D. degree in Informatics from the University of Nantes in 2004. He is currently an associate professor at the College of Information Technology, Can Tho University, Vietnam. He is also an associate researcher at UMI UMMISCO 209 (IRD/UPMC), Sorbonne University (Pierre and Marie Curie University), France. His research interests include data mining with support vector machines, kernel-based methods, decision tree algorithms, ensemble-based learning, and information visualization. He has served on the program committees of international conferences and is a reviewer for journals in his fields.


Van-Thanh Le

was born in Haiduong in 1980. He received his M.D. degree from Thai Nguyen University of Medicine and Pharmacy in 2005. He is currently a specialist doctor in diagnostic imaging at Tam Anh Hospital, Hanoi, Vietnam. His research interests focus on techniques for medical image analysis.


Thi-Huong Doan

was born in Laichau in 1981. She received her M.D. degree from Thai Nguyen University of Medicine and Pharmacy in 2005. She is currently a specialist doctor in preventive medicine at the Healthcare Center, National Assembly, Hanoi, Vietnam. Her research interests focus on techniques for medical image analysis.



Keywords: Covid-19, X-ray image, Deep learning, Support vector machines

I. INTRODUCTION

The first case of coronavirus disease (Covid-19) was identified in Wuhan, China, at the end of December 2019 [1-4]. Covid-19 has rapidly spread to 225 countries and territories and has become an epidemic worldwide, with more than six million deaths (6,184,360) and 494,287,348 cases of infection as of April 6, 2022 (World Health Organization, https://www.who.int). Currently, reverse transcription polymerase chain reaction (RT-PCR) is an effective method for coronavirus diagnosis. However, the major disadvantages of RT-PCR [5-9] are its time-consuming nature and its false-negative results when confirming Covid-19 patients. In contrast, diagnostic imaging techniques such as chest radiography (CXR) or computed tomography (CT) can play a crucial role in rapidly identifying positive Covid-19 patients.

Deep learning techniques have been widely used in automatic medical image analysis and have shown promising results [10-14]. Instead of using handcrafted features (SIFT [15], HoG [16], GIST [17]) and training a support vector machine (SVM) [18], as in the classical framework [19-22] for image classification, the deep learning approach [23] simultaneously trains a visual feature extractor and a softmax classifier in a unified framework.

Researchers have proposed training deep neural networks to detect diseases using chest X-ray images. Kesim et al. [24] used convolutional neural networks (CNNs) to classify chest X-ray images into one of 12 classes. In a previous study [25], tuberculosis images were recognized using deep and transfer learning. Chouhan et al. [26] proposed a combination of five deep learning models using transfer learning for the detection of pneumonia from X-ray images. Rajpurkar et al. [27] developed a CNN architecture called CheXNeXt for classifying X-ray images into 14 different pathologies. A modified AlexNet was proposed by Bhandary et al. [28] to recognize lung abnormalities in X-ray images. Wozniak et al. [29] combined local variance analysis and probabilistic neural networks for the classification of lung carcinomas. The studies in [30,31] proposed fine-tuning Inception v3 [32], Xception [33], and VGG16 [34] to identify pneumonia images.

Recently, deep learning approaches have been applied to recognize Covid-19 X-ray images. Capsule networks, namely COVID-CAPS, were proposed by Afshar et al. [35] for detecting Covid-19 images. COVIDX-Net [36] used deep network models, such as VGG19 [34], DenseNet121 [37], Inception v3 [32], ResNet v2 [38], Inception-ResNet v2, Xception [33], and MobileNet v2 [39], to classify Covid-19 X-ray images. In [40], AlexNet [41] and a modified AlexNet were applied to detect Covid-19 from X-ray and CT images. COVID-Net was designed by Wang et al. [42] for identifying Covid-19 from X-ray images. COVIDiagnosis-Net [43] combines SqueezeNet [44] and Bayesian optimization to classify Covid-19 images. Other studies [45,46] proposed the use of ResNet [38], Inception v3 [32], Inception-ResNet v2, MobileNet v2 [39], and SqueezeNet [44] to recognize Covid-19 X-ray images. Apostolopoulos and Mpesiana [47] evaluated the effectiveness of deep learning architectures for the automatic diagnosis of Covid-19. Enireddy et al. [48] trained a linear SVM classifier on deep features extracted by ResNet50; in other words, the linear SVM substitutes for the softmax layer of the deep network. In contrast, this study trains an SVM model on top of the whole deep networks.

We are interested in training an SVM [18] model on top of deep networks to detect Covid-19 from chest X-ray images. For this purpose, we gathered a real chest X-ray image dataset from public data sources [49-53], tagged in one of three classes (positive Covid-19, normal cases, and lung diseases not caused by Covid-19). Subsequently, we fine-tuned different pre-trained deep network models, such as DenseNet121 [37], MobileNet v2 [39], Inception v3 [32], Xception [33], ResNet50 [38], VGG16, and VGG19 [34], to classify chest X-ray images. Then, we trained an SVM [18] on top of these deep network models to improve chest X-ray image classification. The numerical test results show that the deep network models achieve an accuracy of at least 92% on the test set of real chest X-ray images (except ResNet50, with 82.44%). The proposed SVM on top of the deep networks yielded the highest accuracy of 96.16%.

The remainder of this paper is organized as follows. Section II illustrates the proposed SVM on top of the deep network models for Covid-19 detection from chest X-ray images. Section III shows the experimental results. The conclusions and future work are presented in Section IV.

II. METHODS

A. Chest x-ray Image Dataset

We started by gathering a real chest X-ray image dataset from public data sources [49-53]. The X-ray images were tagged in one of three classes (positive Covid-19 infected patients, normal cases, and lung diseases not caused by Covid-19, such as lung opacity and viral pneumonia). Examples of chest X-ray images from the three classes are shown in Fig. 1.

Figure 1. Sample of chest x-ray images.

We obtained a dataset with 19,282 chest X-ray images in the PNG format, as shown in Table 1. The full dataset was randomly split into a training set (15,427 images) and a test set (3,855 images).

Table 1 . Description of chest X-ray image dataset.

Dataset | Covid-19 | Normal | Other lung | Total
Full dataset | 6,722 | 6,719 | 5,841 | 19,282
Trainset | 5,366 | 5,301 | 4,760 | 15,427
Testset | 1,356 | 1,418 | 1,081 | 3,855
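The split above can be reproduced with scikit-learn. The paper does not state whether the split was stratified by class, so the `stratify` option below is an assumption that simply keeps the class ratios similar in both subsets; only the subset sizes (15,427 and 3,855) are taken from Table 1.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Class labels matching the full dataset of Table 1:
# 6,722 Covid-19 (0), 6,719 normal (1), 5,841 other lung (2).
labels = np.array([0] * 6722 + [1] * 6719 + [2] * 5841)
indices = np.arange(len(labels))

# Hold out 3,855 images for testing, as in the paper; stratification
# (an assumption, not stated by the authors) preserves class ratios.
train_idx, test_idx = train_test_split(
    indices, test_size=3855, stratify=labels, random_state=42)

print(len(train_idx), len(test_idx))  # 15427 3855
```

In practice `indices` would index a list of image file paths rather than a synthetic label array.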


B. Fine-tuning Deep Networks

In recent years, deep networks have become the dominant approach to image classification owing to their high accuracy. Recent deep networks, such as DenseNet121 [37], MobileNet v2 [39], Inception v3 [32], Xception [33], ResNet50 [38], and VGG [34], have achieved high classification accuracy on the ImageNet dataset [54]. Therefore, we studied these deep networks for the classification of chest X-ray images.

Instead of training deep networks from scratch, we propose fine-tuning pre-trained deep network models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, to recognize Covid-19 from chest X-ray images. This approach, referred to as transfer learning [55,56], involves re-using knowledge from pre-trained deep network models on the ImageNet dataset and performing a similar task, that is, chest X-ray image classification with a new dataset. We used the training set to fine-tune the weights of the last layers while freezing the weights of the first layers in the deep networks. We identified the best fine-tuned configurations, as listed in Table 2, for classifying chest X-ray images.
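The freezing scheme behind Table 2 amounts to marking all but the last N layers of a pre-trained network as non-trainable. A minimal sketch of that logic, using plain Python objects as stand-ins for Keras layers (in Keras itself one would toggle `layer.trainable` on `model.layers` the same way; the layer count of 121 here is illustrative):

```python
# Schematic stand-in for a pre-trained network's layer list;
# real code would iterate over keras.applications.DenseNet121(...).layers.
class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True  # layers default to trainable

def freeze_all_but_last(layers, n_fine_tuned):
    """Freeze every layer except the last n_fine_tuned ones."""
    for layer in layers[:-n_fine_tuned]:
        layer.trainable = False
    return layers

# DenseNet121 row of Table 2: fine-tune the last 20 layers.
layers = freeze_all_but_last([Layer(f"layer_{i}") for i in range(121)], 20)
trainable = [l.name for l in layers if l.trainable]
print(len(trainable))  # 20
```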

Table 2 . Best fine-tuned configurations for pre-trained deep network.

No | Deep network | Number of fine-tuned last layers
1 | DenseNet121 | 20
2 | MobileNet v2 | 8
3 | Inception v3 | 143
4 | Xception | 39
5 | ResNet50 | 14
6 | VGG16 | 15
7 | VGG19 | 17


C. Training Support Vector Machines on Top of Deep Networks

Any single deep network model makes errors in terms of bias and variance in image classification. Therefore, our aim was to combine the strengths of the deep networks to improve the classification of chest X-ray images. We propose training a nonlinear support vector machine (SVM [18]) model on top of the seven deep network models (DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19), thereby complementing these single models. The proposed SVM on top of deep networks (illustrated in Fig. 2) performs a nonlinear combination of the deep network outputs ŷ_1, ŷ_2, ..., ŷ_7 using a radial basis function (RBF) kernel, producing the final prediction ŷ. This improves the classification accuracy compared to any single deep network.
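The combination can be sketched with scikit-learn: concatenate the seven networks' three-class softmax outputs into one 21-dimensional feature vector per image and fit an RBF-kernel SVC on it. The softmax outputs below are simulated (the trained networks are not available here); the kernel, γ, and C follow Section III.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_images, n_networks, n_classes = 300, 7, 3
y = rng.integers(0, n_classes, size=n_images)  # true labels

# Simulated softmax outputs of the 7 fine-tuned networks; each
# network is made mildly informative about the true label.
outputs = []
for _ in range(n_networks):
    logits = rng.normal(size=(n_images, n_classes))
    logits[np.arange(n_images), y] += 2.0
    exp = np.exp(logits)
    outputs.append(exp / exp.sum(axis=1, keepdims=True))

# SVM-on-Top input: one row per image, 7 x 3 = 21 features.
X = np.hstack(outputs)

svm_on_top = SVC(kernel="rbf", gamma=0.2, C=1e5).fit(X, y)
print(X.shape, round(svm_on_top.score(X, y), 2))
```

With the real system, `outputs` would come from each fine-tuned network's prediction on the training images.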

Figure 2. Training SVM on top of deep networks for classifying chest x-ray images.

III. RESULTS

We conducted numerical tests to assess the classification performance of the proposed SVM on top of deep networks (denoted by SVM-on-Top) for chest X-ray images.

A. Experimental Setup

First, we implemented the training programs in Python using libraries such as Keras [57] with the TensorFlow backend [58], Scikit-learn [59], and OpenCV [60].

All experiments were run on a Linux Fedora 34 machine with an Intel(R) Core i7-4790 CPU (3.6 GHz, 4 cores), 16 GB of main memory, and a Gigabyte GeForce RTX 2080 Ti GPU (11 GB GDDR6, 4,352 CUDA cores).

As described in Section II.A, the full X-ray image dataset was randomly divided into training (15,427 images) and test sets (3,855 images).

We used the training set to fine-tune the pre-trained DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19 with the best configurations listed in Table 2, a learning rate of 0.001, and 200 epochs. The training set was also used to determine the best hyper-parameters for the SVM model on top of the deep networks: γ = 0.2 for the nonlinear RBF kernel and a positive constant C = 10^5 for the trade-off between the margin size and errors.
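Hyper-parameter selection of this kind is commonly done with a grid search over γ and C on the training set. The grid below is illustrative (the paper reports only the chosen values γ = 0.2 and C = 10^5), and the data are synthetic stand-ins for the concatenated deep-network outputs.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic 21-feature, 3-class data standing in for the
# concatenated softmax outputs of the seven networks.
X, y = make_classification(n_samples=300, n_features=21,
                           n_informative=10, n_classes=3,
                           random_state=0)

# Illustrative grid around the values reported in the paper.
param_grid = {"gamma": [0.05, 0.1, 0.2, 0.5],
              "C": [1e3, 1e4, 1e5]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3).fit(X, y)
print(search.best_params_)
```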

For the training set, DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, VGG19, and SVM-on-Top provided overall accuracies of 96.88, 95.31, 96.88, 96.88, 87.5, 96.88, 96.88, and 100%, respectively. Then, the resulting models were used to report the classification results on the test set.

B. Classification Results

The classification results of the deep network models and SVM on top of the deep networks are presented in Table 3 and Fig. 3. In Table 3, the highest accuracy is bold-faced, and the second highest is in italics.

Table 3 . Classification accuracy on the test set (%).

No | Model | Covid-19 | Normal | Other lung | Overall
1 | DenseNet121 | 96.97 | 91.05 | 94.10 | 94.03
2 | MobileNet v2 | 95.70 | 88.71 | 92.64 | 92.32
3 | Inception v3 | 98.97 | 91.61 | 96.48 | 95.56
4 | Xception | 99.04 | 92.45 | 93.23 | 94.99
5 | ResNet50 | 91.67 | 82.34 | 74.46 | 82.44
6 | VGG16 | 98.53 | 92.17 | 93.39 | 94.92
7 | VGG19 | 97.39 | 90.24 | 96.07 | 94.37
8 | SVM-on-Top | 99.93 | 91.10 | 96.57 | 96.16


Figure 3. Comparison between classification results.

The last column of Table 3 and Fig. 3(d) present the overall accuracy of the classification models. The comparison shows that our proposed SVM-on-Top achieves the highest classification accuracy of 96.16%. SVM-on-Top improves on the fine-tuned ResNet50, MobileNet v2, DenseNet121, VGG19, VGG16, Xception, and Inception v3 by 13.72, 3.84, 2.13, 1.79, 1.24, 1.17, and 0.60 percentage points, respectively.
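The reported gains are simply the differences between SVM-on-Top's overall accuracy and each single model's, taken from the last column of Table 3:

```python
# Overall test accuracies from Table 3 (%).
overall = {
    "DenseNet121": 94.03, "MobileNet v2": 92.32, "Inception v3": 95.56,
    "Xception": 94.99, "ResNet50": 82.44, "VGG16": 94.92, "VGG19": 94.37,
}
svm_on_top = 96.16

# Improvement of SVM-on-Top over each fine-tuned network,
# in percentage points.
gains = {name: round(svm_on_top - acc, 2) for name, acc in overall.items()}
for name, gain in sorted(gains.items(), key=lambda kv: -kv[1]):
    print(f"{name}: +{gain:.2f}")
```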

The Covid-19 class is examined in more detail in the third column of Table 3 and Fig. 3(a). SVM-on-Top yielded the highest accuracy of 99.93%. Xception ranked second with 99.04%, followed by Inception v3, VGG16, VGG19, DenseNet121, MobileNet v2, and ResNet50.

The classification results for healthy lungs (Normal) in the fourth column of Table 3 and Fig. 3(b) show that the highest accuracy, 92.45%, was achieved by Xception, and the second highest, 92.17%, by VGG16. ResNet50 exhibited the lowest accuracy (82.34%).

The classification results for other lung diseases (lung opacity and viral pneumonia), presented in the fifth column of Table 3 and Fig. 3(c), show that SVM-on-Top achieved the highest classification accuracy of 96.57%. Inception v3 was the second most accurate model (96.48%), followed by VGG19, DenseNet121, VGG16, Xception, MobileNet v2, and ResNet50.

The computational complexity of training the SVM in our proposed SVM-on-Top is quadratic in the number of training data points. On this training set, the SVM learning task took 0.25 s.

IV. CONCLUSIONS AND FUTURE WORK

We have proposed a nonlinear combination of deep network outputs, obtained by training an SVM model on top of deep networks, for detecting Covid-19 from chest X-ray images. We gathered a chest X-ray image dataset from public data sources, including positive Covid-19, normal cases, and other lung diseases not caused by Covid-19. Recent pre-trained deep network models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, were fine-tuned to classify chest X-ray images. We then trained an SVM model on top of these fine-tuned deep networks to improve chest X-ray image classification. The proposed SVM-on-Top improved classification accuracy by 13.72, 3.84, 2.13, 1.79, 1.24, 1.17, and 0.60 percentage points over the fine-tuned ResNet50, MobileNet v2, DenseNet121, VGG19, VGG16, Xception, and Inception v3, respectively.

In the near future, we intend to enlarge the chest X-ray image dataset to improve the training of the deep networks. A promising research direction is to select good network models for use in combination with the SVM.



References

  1. C. Huang, Y. Wang, X. Li, L. Ren, J. Zhao, Y. Hu, L. Zhang, G. Fan, J. Xu, X. Gu, Z. Cheng, T. Yu, J. Xia, Y. Wei, W. Wu, X. Xie, W. Yin, H. Li, M. Liu, Y. Xiao, H. Gao, L. Guo, J. Xie, G. Wang, R. Jiang, Z. Gao, Q. Jin, J. Wang, and B. Cao, Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China, The Lancet, vol. 395, no. 10223, pp. 497-506, Feb., 2020. DOI: 10.1016/S0140-6736(20)30183-5.
  2. Q. Li, X. Guan, P. Wu, X. Wang, L. Zhou, Y. Tong, R. Ren, K. S. M. Leung, E. H. Y. Lau, J. Y. Wong, X. Xing, N. Xiang, Y. Wu, C. Li, Q. Chen, D. Li, T. Liu, J. Zhao, M. Liu, W. Tu, C. Chen, L. Jin, R. Yang, Q. Wang, S. Zhou, R. Wang, H. Liu, Y. Luo, Y. Liu, G. Shao, H. Li, Z. Tao, Y. Yang, Z. Deng, B. Liu, Z. Ma, Y. Zhang, G. Shi, T. T. Y. Lam, J. T. Wu, G. F. Gao, B. J. Cowling, B. Yang, G. M. Leung, and Z. Feng, Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia, New England Journal of Medicine, vol. 382, pp. 1199-1207, Mar., 2020. DOI: 10.1056/NEJMoa2001316.
  3. F. Wu, S. Zhao, B. Yu, Y. Chen, W. Wang, Z. Song, Y. Hu, Z. Tao, J. Tian, Y. Pei, M. Yuan, Y. Zhang, F. Dai, Y. Liu, Q. Wang, J. Zheng, L. Xu, E. C. Holmes, and Y. Zhang, A new coronavirus associated with human respiratory disease in China, Nature, vol. 579, pp. 1-8, Feb., 2020. DOI: 10.1038/s41586-020-2008-3.
  4. N. Zhu, D. Zhang, W. Wang, X. Li, B. Yang, J. Song, X. Zhao, B. Huang, W. Shi, R. Lu, P. Niu, F. Zhan, D. Wang, W. Xu, G. Wu, and G. F. Gao, A novel coronavirus from patients with pneumonia in China, 2019, New England Journal of Medicine, vol. 382, pp. 727-733, Feb., 2020. DOI: 10.1056/NEJMoa2001017.
  5. I. Arevalo-Rodriguez, D. Buitrago-Garcia, D. Simancas-Racines, P. Zambrano-Achig, R. D. Campo, A. Ciapponi, O. Sued, L. Martinez-Garcia, A. W. Rutjes, N. Low, P. M. Bossuyt, J. A. Perez-Molina, and J. Zamora, False-negative results of initial RT-PCR assays for Covid-19: A systematic review, PLoS One, vol. 15, no. 12, e0242958, Dec., 2020. DOI: 10.1371/journal.pone.0242958.
  6. J. F. Chan, S. Yuan, K. Kok, K. K. To, H. Chu, J. Yang, F. Xing, J. Liu, C. C. Yip, R. W. Poon, H. Tsoi, S. K. Lo, K. Chan, V. K. Poon, W. Chan, J. D. Ip, J. Cai, Y. C. Cheng, H. Chen, C. K. Hui, and K. Yuen, A familial cluster of pneumonia associated with the 2019 novel coronavirus indicating person-to-person transmission: A study of a family cluster, The Lancet, vol. 395, no. 10223, pp. 514-523, Feb., 2020. DOI: 10.1016/S0140-6736(20)30154-9.
  7. W. Hao and M. Li, Clinical diagnostic value of CT imaging in COVID-19 with multiple negative RT-PCR testing, Travel Medicine and Infectious Disease, vol. 34, 101627, Mar.-Apr., 2020. DOI: 10.1016/j.tmaid.2020.101627.
  8. P. Huang, T. Liu, L. Huang, H. Liu, M. Lei, W. Xu, X. Hu, J. Chen, and B. Liu, Use of chest CT in combination with negative RT-PCR assay for the 2019 novel coronavirus but high clinical suspicion, Radiology, vol. 295, no. 1, pp. 22-23, Apr., 2020. DOI: 10.1148/radiol.2020200330.
  9. X. Xie, Z. Zhong, W. Zhong, W. Zhao, C. Zheng, F. Wang, and J. Liu, Chest CT for typical coronavirus disease 2019 (COVID-19) pneumonia: Relationship to negative RT-PCR testing, Radiology, vol. 296, no. 2, pp. E41-E45, Feb., 2020. DOI: 10.1148/radiol.2020200343.
  10. Z. Akkus, A. Galimzianova, A. Hoogi, D. L. Rubin, and B. J. Erickson, Deep learning for brain MRI segmentation: State of the art and future directions, Journal of Digital Imaging, vol. 30, no. 4, pp. 449-459, Jun., 2017. DOI: 10.1007/s10278-017-9983-4.
  11. J. Ker, L. Wang, J. Rao, and T. Lim, Deep learning applications in medical image analysis, IEEE Access, vol. 6, pp. 9375-9389, Dec., 2017. DOI: 10.1109/ACCESS.2017.2788044.
  12. C. Liang, Y. Liu, M. Wu, F. Garcia-Castro, A. Alberich-Bayarri, and F. Wu, Identifying pulmonary nodules or masses on chest radiography using deep learning: External validation and strategies to improve clinical practice, Clinical Radiology, vol. 75, no. 1, pp. 38-45, Jan., 2020. DOI: 10.1016/j.crad.2019.08.005.
  13. G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. W. M. van der Laak, B. van Ginneken, and C. I. Sanchez, A survey on deep learning in medical image analysis, Medical Image Analysis, vol. 42, pp. 60-88, Dec., 2017. DOI: 10.1016/j.media.2017.07.005.
  14. D. Shen, G. Wu, and H. Suk, Deep learning in medical image analysis, Annual Review of Biomedical Engineering, vol. 19, pp. 221-248, Jun., 2017. DOI: 10.1146/annurev-bioeng-071516-044442.
  15. D. G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, vol. 60, pp. 91-110, Nov., 2004. DOI: 10.1023/B:VISI.0000029664.99615.94.
  16. N. Dalal and B. Triggs, Histograms of oriented gradients for human detection, in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego: CA, USA, pp. 886-893, 2005. DOI: 10.1109/CVPR.2005.177.
  17. A. Oliva and A. Torralba, Modeling the shape of the scene: A holistic representation of the spatial envelope, International Journal of Computer Vision, vol. 42, pp. 145-175, May, 2001. DOI: 10.1023/A:1011139631724.
  18. V. N. Vapnik, The Nature of Statistical Learning Theory, 2nd ed., Springer-Verlag, 2000.
  19. A. Bosch, A. Zisserman, and X. Munoz, Scene classification via pLSA, in Proceedings of the European Conference on Computer Vision, Graz, Austria, pp. 517-530, 2006. DOI: 10.1007/11744085_40.
  20. T. Do, P. Lenca, and S. Lallich, Classifying many-class high-dimensional fingerprint datasets using random forest of oblique decision trees, Vietnam Journal of Computer Science, vol. 2, pp. 3-12, Jun., 2014. DOI: 10.1007/s40595-014-0024-7.
  21. L. Fei-Fei and P. Perona, A Bayesian hierarchical model for learning natural scene categories, in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego: CA, USA, pp. 524-531, 2005. DOI: 10.1109/CVPR.2005.16.
  22. J. Sivic and A. Zisserman, Video Google: A text retrieval approach to object matching in videos, in 9th IEEE International Conference on Computer Vision, vol. 2, Nice, France, pp. 1470-1477, 2003. DOI: 10.1109/ICCV.2003.1238663.
  23. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov., 1998. DOI: 10.1109/5.726791.
  24. E. Kesim, Z. Dokur, and T. Olmez, X-ray chest image classification by a small-sized convolutional neural network, in 2019 Scientific Meeting on Electrical-Electronics Biomedical Engineering and Computer Science (EBBT), Istanbul, Turkey, pp. 1-5, 2019. DOI: 10.1109/EBBT.2019.8742050.
  25. C. Liu, Y. Cao, M. Alcantara, B. Liu, M. Brunette, J. Peinado, and W. Curioso, TX-CNN: Detecting tuberculosis in chest X-ray images using convolutional neural network, in 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, pp. 2314-2318, 2017. DOI: 10.1109/ICIP.2017.8296695.
  26. V. Chouhan, S. K. Singh, A. Khamparia, D. Gupta, P. Tiwari, C. Moreira, R. Damasevicius, and V. H. C. de Albuquerque, A novel transfer learning based approach for pneumonia detection in chest X-ray images, Applied Sciences, vol. 10, no. 2, p. 559, Jan., 2020. DOI: 10.3390/app10020559.
  27. P. Rajpurkar, J. Irvin, R. L. Ball, K. Zhu, B. Yang, H. Mehta, T. Duan, D. Ding, A. Bagul, C. P. Langlotz, B. N. Patel, K. W. Yeom, K. Shpanskaya, F. G. Blankenberg, J. Seekins, T. J. Amrhein, D. A. Mong, S. S. Halabi, E. J. Zucker, A. Y. Ng, and M. P. Lungren, Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists, PLOS Medicine, vol. 15, no. 11, e1002686, Nov., 2018. DOI: 10.1371/journal.pmed.1002686.
  28. A. Bhandary, G. A. Prabhu, V. Rajinikanth, K. P. Thanaraj, S. C. Satapathy, D. E. Robbins, C. Shasky, Y. Zhang, J. M. R. S. Tavares, and N. S. M. Raja, Deep-learning framework to detect lung abnormality - A study with chest X-ray and lung CT scan images, Pattern Recognition Letters, vol. 129, pp. 271-278, Jan., 2020. DOI: 10.1016/j.patrec.2019.11.013.
  29. M. Wozniak, D. Polap, G. Capizzi, G. L. Sciuto, L. Kosmider, and K. Frankiewicz, Small lung nodules detection based on local variance analysis and probabilistic neural network, Computer Methods and Programs in Biomedicine, vol. 161, pp. 173-180, 2018. DOI: 10.1016/j.cmpb.2018.04.025.
  30. E. Ayan and H. M. Unver, Diagnosis of pneumonia from chest X-ray images using deep learning, in 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT), Istanbul, Turkey, pp. 1-5, 2019. DOI: 10.1109/EBBT.2019.8741582.
  31. S. S. Yadav and S. M. Jadhav, Deep convolutional neural network based medical image classification for disease diagnosis, Journal of Big Data, vol. 6, p. 113, Dec., 2019. DOI: 10.1186/s40537-019-0276-2.
  32. C.L Szegedy, and V. Vanhoucke, and S. Ioffe, and J. Shlens, and Z. Wojna, Rethinking the inception architecture for computer vision, arXiv:1512.00567, 2015. DOI: CoRR abs/1512.00567.
    CrossRef
  33. F. Chollet, Xception: Deep learning with depthwise separable convolutions, arXiv:1610.02357, 2016. DOI: CoRR abs/1610.02357.
    CrossRef
  34. K. Simonyan, and A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv:1409.1556, 2014. DOI: CoRR abs/1409.1556.
  35. P. Afshar, and S. Heidarian, and F. Naderkhani, and A. Oikonomou, and K. N. Plataniotis, and A. Mohammadi, COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images, Pattern Recognition Letters, vol. 138, pp. 638-643, Oct., 2020. DOI: 10.1016/j.patrec.2020.09.010.
    Pubmed KoreaMed CrossRef
  36. E. E. Hemdan and M. A. Shouman and M. E. Karar, COVIDX-Net: A framework of deep learning classifiers to diagnose COIVD-19 in Xray images, arXiv:2003.11055, 2020. DOI: 10.48550/arXiv.2003.11055.
  37. G. Huang, and Z. Liu, and L. V. D. Maaten, and K. Q. Weinberger, Densely connected convolutional networks, in Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu: HI, USA, pp. 2261-2269, 2017.
    KoreaMed CrossRef
  38. K. He, and X. Zhang, and S. Ren, and J. Sun, Deep residual learning for image recognition, CoRR abs/1512.03385, 2015. DOI: 10.48550/arXiv.1512.03385.
    Pubmed CrossRef
  39. M. Sandler, and A. Howard, and M. Zhu, and A. Zhmoginov, and L. Chen, MobileNetV2: Inverted residuals and linear bottlenecks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City: UT, USA, pp. 4510-4520, 2018.
    CrossRef
  40. H. S. Maghdid, and A. T. Asaad, and K. Z. Ghafoor, and A. S. Sadiq, and M. K. Khan, Diagnosing COIVD-19 pneumonia from X-ray and CT images using deep learning and transfer learning algorithms, arXiv:2004.00038, p. 26, 2021. DOI: 10.48550/arXiv.2004.00038.
    CrossRef
  41. A. Krizhevsky and I. Sutskever and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Communication of the ACM, vol. 60, no. 6, pp. 84-90, Jun., 2017. DOI: 10.1145/3065386.
    CrossRef
  42. L. Wang and Z. Q. Lin and A. Wong, COVID-Net: A tailored deep convolutional neural network design for detection of COIVD-19 cases from chest X-ray images, Scientific Reports, vol. 10, 19549, Nov., 2020. DOI: 10.1038/s41598-020-76550-z.
    Pubmed KoreaMed CrossRef
  43. F. Ucar, and D. Korkmaz, COVIDidiagnosis-Net: Deep bayessqueezenet based diagnosis of the coronavirus disease 2019 (COIVD-19) from X-ray images, Medical Hypotheses, vol. 140, Jul., 2020. DOI: 10.1016/j.mehy.2020.109761.
    Pubmed KoreaMed CrossRef
  44. F. N. Iandola, and S. Han, and M. W. Moskewicz, and K. Asraf, and W. J. Dally, and K. Keutzer, SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size, arXiv:1602.07360, 2016. DOI: 10.48550/arXiv.1602.07360.
  45. A. Narin and C. Kaya and Z. Pamuk, Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks, Pattern Analysis and Applications, vol. 24, no. 3, pp. 1207-1220, May., 2021. DOI: 10.1007/s10044-021-00984-y.
    Pubmed KoreaMed CrossRef
  46. M. Togacar and B. Ergen and Z. Comert, COVID-19 detection using deep learning models to exploit Social Mimic Optimization and structured chest X-ray images using fuzzy color and stacking approaches, Computers in Biology and Medicine, vol. 121, 103805, Jun., 2020. DOI: 10.1016/j.compbiomed.2020.103805.
    Pubmed KoreaMed CrossRef
  47. I. D. Apostolopoulos, and T. A. Mpesiana, Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks, Physical and Engineering Sciences in Medicine, vol. 43, no. 2, pp. 635-640, Apr., 2020. DOI: 10.1007/s13246-020-00865-4.
    Pubmed KoreaMed CrossRef
  48. V. Enireddy, and M. J. K. Kumar, and B. Donepudi, and C. Karthikeyan, Detection of COVID-19 using hybrid ResNet and SVM, in Proceedings of IOP Conference Series: Materials Science and Engineering, vol. 993, no. 1, Kancheepuram, India, 2020.
  49. J. P. Cohen, P. Morrison, L. Dao, K. Roth, T. Q. Duong, and M. Ghassemi, COVID-19 image data collection: Prospective predictions are the future, arXiv:2006.11988, 2020. DOI: 10.48550/arXiv.2006.11988.
  50. A. Haghanifar, M. M. Majdabadi, and S. Ko, COVID-19 chest X-ray image repository, May, 2021. DOI: 10.6084/m9.figshare.12580328.v3.
  51. H. B. Winther, H. Laser, S. Gerbel, S. K. Maschke, J. B. Hinrichs, J. Vogel-Claussen, F. K. Wacker, M. M. Hoeper, and B. C. Meyer, Dataset: Covid-19 image repository, 2020.
  52. M. de la Iglesia Vayá, J. M. Saborit, J. A. Montell, A. Pertusa, A. Bustos, M. Cazorla, J. Galant, X. Barber, D. Orozco-Beltran, F. Garcia-Garcia, M. Caparros, G. Gonzalez, and J. M. Salinas, BIMCV COVID-19+: A large annotated dataset of RX and CT images from COVID-19 patients, arXiv:2006.01174v3, 2021. DOI: 10.48550/arXiv.2006.01174.
  53. D. S. Kermany, M. Goldbaum, W. Cai, C. C. S. Valentim, H. Liang, S. L. Baxter, A. McKeown, G. Yang, X. Wu, F. Yan, J. Dong, M. K. Prasadha, J. Pei, M. Y. L. Ting, J. Zhu, C. Li, S. Hewett, J. Dong, I. Ziyar, A. Shi, R. Zhang, L. Zheng, R. Hou, W. Shi, X. Fu, Y. Duan, V. A. N. Huu, C. Wen, E. D. Zhang, C. L. Zhang, O. Li, X. Wang, M. A. Singer, X. Sun, J. Xu, A. Tafreshi, M. A. Lewis, H. Xia, and K. Zhang, Identifying medical diagnoses and treatable diseases by image-based deep learning, Cell, vol. 172, no. 5, pp. 1122-1131.e9, Feb., 2018. DOI: 10.1016/j.cell.2018.02.010.
  54. J. Deng, A. C. Berg, K. Li, and L. Fei-Fei, What does classifying more than 10,000 image categories tell us?, in Computer Vision - ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, pp. 71-84, 2010. DOI: 10.1007/978-3-642-15555.
  55. A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, CNN features off-the-shelf: An astounding baseline for recognition, in IEEE Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2014, Columbus, OH, USA, pp. 512-519, 2014.
  56. J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, How transferable are features in deep neural networks?, in Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, QC, Canada, pp. 3320-3328, 2014.
  57. Keras [Online]. Available: https://keras.io.
  58. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, TensorFlow: Large-scale machine learning on heterogeneous distributed systems, arXiv:1603.04467v2, 2015. DOI: 10.48550/arXiv.1603.04467.
  59. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, and M. Brucher, and E. Duchesnay, Scikit-learn: Machine learning in Python, Journal of Machine Learning Research, vol. 12, no. 85, pp. 2825-2830, 2011.
  60. Itseez: Open-source computer vision library [Online]. Available: https://github.com/opencv/opencv.