Regular paper

Journal of information and communication convergence engineering 2024; 22(2): 165-171

Published online June 30, 2024

https://doi.org/10.56977/jicce.2024.22.2.165

© Korea Institute of Information and Communication Engineering

Improving Chest X-ray Image Classification via Integration of Self-Supervised Learning and Machine Learning Algorithms

Tri-Thuc Vo 1 and Thanh-Nghi Do2,3*

1Department of Computer Science, Can Tho University, Can Tho 94000, Viet Nam
2Department of Computer Networks, Can Tho University, Can Tho 94000, Viet Nam
3UMI UMMISCO 209, IRD/UPMC, Paris 75000, France

Correspondence to : Thanh-Nghi Do (E-mail: dtnghi@ctu.edu.vn)
Department of Computer Networks, Can Tho University, Can Tho 94165, Viet Nam

Received: July 4, 2023; Revised: February 1, 2024; Accepted: March 4, 2024

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this study, we present a novel approach for enhancing chest X-ray image classification (normal, Covid-19, edema, mass nodules, and pneumothorax) by combining contrastive learning and machine learning algorithms. A vast amount of unlabeled data was leveraged to learn representations, improving data efficiency and addressing the limited availability of labeled X-ray images. Our approach trains classification algorithms on features extracted from a linear fine-tuned Momentum Contrast (MoCo) model. The MoCo architecture with a Resnet34, Resnet50, or Resnet101 backbone is trained to learn features from unlabeled data. Instead of only fine-tuning the linear classifier layer on the MoCo-pretrained model, we propose training nonlinear classifiers as substitutes for softmax in deep networks. The empirical results show that the linear fine-tuned ImageNet-pretrained models achieved a highest accuracy of only 82.9% and the linear fine-tuned MoCo-pretrained models an improved highest accuracy of 84.8%, whereas our proposed method offered a significant improvement and achieved the highest accuracy of 87.9%.

Keywords: Chest X-ray image, Contrastive learning, Image classification, Self-supervised learning

I. INTRODUCTION

The lung is a vital human organ. Lung diseases can affect health and even lead to death. According to the World Health Organization, deaths attributed to lung-related diseases exceeded 3.3 million in 2017. The Covid-19 pandemic has caused more than 6.9 million deaths and approximately 768 million infections as of June 28, 2023 (WHO, https://www.who.int). Chest X-ray is the most cost-effective diagnostic tool and a common method for diagnosing and screening lung diseases. However, disease diagnosis based on chest X-ray images requires highly skilled radiologists. The potential subjectivity of lung disease detection from X-ray images can lead to inaccurate diagnoses and less effective treatments. Therefore, a system that can assist in disease diagnosis from chest radiographs would provide significant treatment cost benefits to patients and improve their physical and mental health.

Deep learning has been widely applied in recent years to problems involving medical image data such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI) images, and has achieved promising results [1-3]. However, deep learning requires large amounts of annotated data. Unfortunately, the availability of labeled medical data is limited owing to a number of factors, such as a lack of domain experts, privacy regulations, and expensive and time-consuming data labeling processes. Self-supervised learning is an alternative approach in which the wealth of available unlabeled data is leveraged to learn useful representations or features without explicit human annotation. Impressive results have been obtained with this approach by using unlabeled images to generate pretrained models, which are subsequently fine-tuned using a limited amount of labeled data [4-7]. The pretrained model is constructed by maximizing the agreement between different views of the same image and minimizing the agreement between different images based on a loss function. Self-supervised learning offers improved accuracy compared to supervised learning using labeled data alone [4-8].

In this study, a novel approach is proposed for improving the X-ray image classification of normal lungs and lungs with four diseases: Covid-19, edema, mass nodule, and pneumothorax. In our approach, a self-supervised learning technique is combined with supervised learning algorithms to efficiently classify X-ray images of lung diseases. The linear fine-tuned model obtained from Momentum Contrast (MoCo) contrastive learning is used as a feature extractor for labeled X-ray images. The extracted features are then used to train classification algorithms, comprising support vector machines (SVM) [9], LightGBM [10], XGBoost [11], and CatBoost [12], which substitute for softmax in deep networks. The experimental results show that our proposed approach achieves better accuracy than the linear fine-tuned MoCo-pretrained models.

The remainder of this paper is organized as follows: The related work on X-ray image lung disease classification is briefly discussed in Section II. Our proposed method is presented in Section III and the experimental results in Section IV. The conclusions and future work are presented in Section V.

II. RELATED WORK

Deep learning techniques have been applied to lung disease detection from chest X-ray images. A convolutional neural network for a 12-class image classification problem was proposed and an accuracy of 86% was achieved [13]. Chouhan et al. [14] investigated the combination of five deep learning models with transfer learning for pneumonia detection in chest X-ray images and achieved an accuracy of 96.4% using their ensemble model. A CNN architecture called CheXNeXt for classifying 14 different pathologies from chest X-ray images was developed in [15]. Bhandary et al. suggested a deep learning framework based on a modified version of AlexNet and an SVM to detect lung abnormalities in chest X-ray and CT images [16]. A novel method that incorporates local variance analysis and a probabilistic neural network for classifying lung carcinomas with 92% accuracy was introduced in [17]. A transfer learning method involving two convolutional neural networks (VGG16 and InceptionV3) was applied in [18] for pneumonia classification in chest X-ray images.

The outbreak of the Covid-19 pandemic has resulted in many studies on diagnosing Covid-19 through the application of deep learning to chest X-ray images. In [19], an SVM model was employed on top of deep networks for detecting Covid-19 in chest X-ray images, and the highest accuracy of 96.16% was achieved. In [20], Afshar et al. proposed a framework based on capsule networks, named COVID-CAPS, for diagnosing Covid-19 from X-ray images; it achieved an accuracy of 95.7%, sensitivity of 90%, and specificity of 95.8%. A deep learning framework called COVIDX-Net was presented in [21], in which seven deep convolutional neural network architectures were used to classify X-ray images. The highest F1-scores of 89% and 91% were achieved for normal patients and patients infected by Covid-19, respectively. Ozturk et al. developed a model for automatic Covid-19 diagnosis using X-ray images as input [22]. Their system achieved an accuracy of 98.08% for the two-class scenario (Covid-19 and No-Findings) and 87.02% for the multiclass scenario (Covid-19, No-Findings, and pneumonia).

Self-supervised learning has recently received considerable attention from the machine learning community because of its ability to construct trained models from unlabeled datasets and improve performance in downstream tasks such as fine-tuning on labeled data. This approach is especially valuable for addressing the scarcity of annotated data and improving the accuracy of classification models in medical imaging. Chen et al. proposed a self-supervised learning framework called SimCLR, which outperformed previous self-supervised methods on ImageNet [6,7]. In [4,5], the use of MoCo for contrastive learning was proposed, and competitive results were achieved by fine-tuning a linear classifier on the ImageNet dataset. In [8], Sowrirajan et al. proposed a method called MoCo-CXR for X-ray image classification based on MoCo contrastive learning. This method was designed for detecting several lung conditions such as pleural effusion, tuberculosis, and atelectasis. MoCo-CXR performed better than an ImageNet-pretrained model with linear fine-tuning only.

III. METHODOLOGY

A. MoCo Self-Supervised Learning

Promising results have been achieved in self-supervised learning methods such as MoCo [4,5] by leveraging unlabeled data to generate a pretrained model in which a visual representation encoder is trained based on a loss function. In the MoCo architecture, features are learnt from unlabeled data by training two encoders comprising the encoder and the momentum encoder. Both encoders have the same architecture, which is typically a deep neural network such as a convolutional neural network. MoCo is a method for building large and consistent dictionaries for learning from unlabeled data through a contrastive loss function called InfoNCE, which is applied to measure the agreement between positive and negative image pairs. The positive image pairs are created by applying two data-augmentation operators on the same image. The dictionary is used as a queue for samples and updated by enqueuing the current mini-batch and dequeuing the oldest mini-batch. The training process involves updating the parameters of the encoder to minimize the contrastive loss using backpropagation. At the same time, the momentum encoder parameters are updated by applying an exponential moving average to the parameters of the encoder. Labeled data is subsequently used to fine-tune the MoCo-pretrained model.
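
As a concrete illustration, the two update rules described above can be sketched in PyTorch as follows. This is a minimal sketch, not the authors' implementation; the momentum coefficient, temperature, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    # The momentum (key) encoder tracks the query encoder as an
    # exponential moving average instead of receiving gradients.
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data = m * p_k.data + (1.0 - m) * p_q.data

def info_nce_loss(q, k_pos, queue, temperature=0.07):
    # q:     (N, D) query features from the encoder
    # k_pos: (N, D) key features of the positive pairs (momentum encoder)
    # queue: (K, D) dictionary of negative keys from previous mini-batches
    q, k_pos = F.normalize(q, dim=1), F.normalize(k_pos, dim=1)
    queue = F.normalize(queue, dim=1)
    l_pos = torch.einsum("nd,nd->n", q, k_pos).unsqueeze(1)  # (N, 1) positive logits
    l_neg = torch.einsum("nd,kd->nk", q, queue)              # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    # The positive key sits at index 0 of each row of logits.
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```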

The overall flowchart of the X-ray image-based lung disease classification process is shown in Fig. 1. The CheXpert dataset [24] is first used to train the MoCo architecture with a backbone based on the Resnet34, Resnet50, or Resnet101 deep learning network architectures [23]. The parameters from ImageNet are loaded onto the backbone of the deep learning architecture and then trained on MoCo with contrastive learning by applying the image rotation and image flip transformations. Several data augmentation methods for unlabeled data such as random crop, grayscale, jitter, horizontal flip (MoCo v1 [4]), and Gaussian blurring (MoCo v2 [5]) are provided in MoCo. However, because X-ray images are grayscale images and disease diagnosis from X-ray images depends on specific parts of the image, the grayscale, jitter, and blurring transformations are unsuitable for X-ray images as the image labels may be modified. We therefore applied two types of data augmentation comprising random rotation (10°) and horizontal flipping on the images to create pairs of positive images for contrastive learning through a loss function in a similar manner to [8]. The model obtained from MoCo after the first step is denoted as Model 1.
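
A minimal sketch of this two-view augmentation with torchvision is shown below. The wrapper class mirrors the common MoCo practice of applying the same stochastic transform twice to one image; normalization and resizing steps are omitted as they would require assumptions.

```python
from torchvision import transforms

# Only rotation (10 degrees) and horizontal flip are used, since grayscale,
# jitter, and blurring can distort diagnostically relevant regions of an X-ray.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])

class TwoCropsTransform:
    """Apply the same stochastic augmentation twice to one image,
    yielding a positive pair for the contrastive loss."""
    def __init__(self, base_transform):
        self.base_transform = base_transform

    def __call__(self, x):
        return self.base_transform(x), self.base_transform(x)
```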

Fig. 1. Diagram of chest X-ray image classification process.

Supervised learning was performed by fine-tuning Model 1. All the layers of the backbone model were frozen and a linear classifier layer was trained. The labeled dataset was divided into three subsets: the training, validation, and test sets. The training and validation sets were passed through Model 1 to fine-tune the linear layer and obtain Model 2. The test set was used for evaluation. We experimented with fine-tuning linear classifiers on both an ImageNet-pretrained model and a MoCo model pretrained with X-ray images.
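
The freezing step can be sketched as follows; this is an assumed setup (the checkpoint path, state-dict keys, and optimizer settings are illustrative), not the authors' exact code.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load the MoCo-pretrained backbone (Model 1) and train only a new
# linear classifier head on the five labeled classes (Model 2).
backbone = models.resnet50()
state = torch.load("moco_pretrained_model1.pth", map_location="cpu")  # hypothetical path
backbone.load_state_dict(state, strict=False)

for p in backbone.parameters():
    p.requires_grad = False                           # freeze all backbone layers

backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # new trainable linear layer
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```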

B. Feature Extraction and Classifier Training

Our goal is to use the linear fine-tuned MoCo Model 2 to improve X-ray image classification by integrating it with nonlinear classifiers. In other words, the nonlinear classifiers substitute for softmax in the deep networks. In our approach, Model 2 is used as an extractor to extract features (representations) from the labeled dataset. The SVM, LightGBM, XGBoost, and CatBoost nonlinear classifiers are subsequently trained using the extracted features.
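
A sketch of this extraction step is given below, assuming Model 2 is a torchvision-style ResNet whose final layer is named fc; the data loader and device are assumptions.

```python
import copy
import torch

@torch.no_grad()
def extract_features(model2, loader, device="cpu"):
    # Replace the linear classifier with an identity so the forward
    # pass returns the penultimate-layer representation of each image.
    extractor = copy.deepcopy(model2)
    extractor.fc = torch.nn.Identity()
    extractor.eval().to(device)
    feats, labels = [], []
    for images, targets in loader:
        feats.append(extractor(images.to(device)).cpu())
        labels.append(targets)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()
```

The resulting feature matrix and label vector can then be fed directly to scikit-learn-style classifiers.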

The SVM algorithm [9] with a radial basis function (RBF) kernel is a powerful algorithm for multiclass classification. The use of RBF kernels to handle complex decision boundaries in feature space makes it effective for nonlinear data; the hyperparameters C and γ need to be tuned for optimal performance. LightGBM [10] is a gradient boosting framework developed by Microsoft and known for its high speed and efficiency. It uses techniques such as gradient-based one-side sampling (GOSS) and exclusive feature bundling (EFB) to achieve fast training and low memory usage while maintaining high accuracy. Extreme gradient boosting (XGBoost) [11] is a scalable and efficient gradient boosting system widely used owing to its performance and flexibility. Its tree construction process is optimized using a regularized objective function and tree pruning techniques to achieve high prediction accuracy and robustness against overfitting. CatBoost [12] is a gradient boosting algorithm developed by Yandex with native support for categorical features. It employs advanced strategies such as ordered boosting and various regularization techniques to achieve high accuracy with minimal hyperparameter tuning.
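
For reference, the RBF kernel that defines the SVM's similarity measure can be written as

$$K(x, x') = \exp\left(-\gamma \lVert x - x' \rVert^2\right),$$

where γ controls the width of the kernel (larger values make the decision boundary more local) and C sets the trade-off between margin size and training errors.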

We demonstrate that better results can be achieved by using Model 2 for feature extraction and training a nonlinear classifier than by fine-tuning only a linear layer on the MoCo model. The detailed experimental results are presented in Section IV.

C. Chest X-Ray Image Dataset

An actual chest X-ray dataset was obtained from public sources. The CheXpert X-ray dataset [24], published by a Stanford University research team in 2019, was used as the unlabeled dataset for contrastive learning with MoCo. We used 120,000 images from CheXpert for training with MoCo.

The labeled dataset covers five classes: healthy patients, patients infected with Covid-19, and patients with one of three non-Covid lung diseases, namely edema, mass nodule, and pneumothorax. This dataset was assembled from published datasets covering the five classes (normal, Covid-19, edema, mass nodule, and pneumothorax) [24-34]. The total number of images is 98,996, and each X-ray image is tagged with one of the five classes. Examples of chest X-ray images in the five classes are shown in Fig. 2. All the images were resized to 224 × 224 pixels. The labeled dataset was divided into three subsets, with 70% of the images in the training set, 15% in the validation set, and 15% in the test set (a minimal split sketch is given after Table 1). The details of the dataset are listed in Table 1.

Fig. 2. Sample chest x-ray images.

Table 1. Labeled chest X-ray dataset

Label          Train set   Valid set   Test set
Normal         18,425      3,949       3,948
Covid-19       14,252      3,054       3,054
Edema          23,240      4,980       4,980
Mass-nodule    4,077       873         874
Pneumothorax   9,303       1,993       1,994
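
The per-class proportions in Table 1 are consistent with a stratified 70/15/15 split, which could be reproduced along these lines; X and y (image paths and class labels) are assumed variable names.

```python
from sklearn.model_selection import train_test_split

# First carve out the 15% test set, then split the remaining 85%
# into 70%/15% of the original data (0.15/0.85 of the remainder).
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.15, stratify=y, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
    X_trainval, y_trainval, test_size=0.15 / 0.85,
    stratify=y_trainval, random_state=42)
```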

IV. EXPERIMENTS AND RESULTS

A. Experimental Setup

The X-ray image lung disease classification program was written in Python. The Resnet34, Resnet50, and Resnet101 [23] deep learning network architectures were implemented using the Keras, TensorFlow [35], Scikit-learn [36], and PyTorch libraries. All the experimental results were obtained on a computer running Ubuntu 20.04.5 with an Intel(R) Core(TM) i5-10400 CPU @ 2.90 GHz × 12, 16 GB of RAM, and a 12 GB GDDR6 NVIDIA GeForce RTX 3060 with 3584 CUDA cores.

The training process for lung disease classification in chest radiographs comprises three main stages. In the first stage, we trained MoCo with 120,000 images extracted from the CheXpert dataset [24]. The parameters for contrastive learning with MoCo comprise a batch size of 32, a learning rate of 10^-3, a momentum of 0.9, a weight decay of 10^-3, the Adam optimizer, and 20 epochs. The checkpoint obtained in the first stage was used to fine-tune the linear layer on the labeled dataset in the second stage. Training was performed using three backbones: Resnet34, Resnet50, and Resnet101.

In the third stage, the features extracted by the linear fine-tuned MoCo model from the training set were used to determine the best hyperparameters for each classifier. For the SVM model, these were a nonlinear RBF kernel with γ = 0.0001 and a cost C = 10^5 governing the trade-off between margin size and errors. LightGBM and CatBoost were trained using a max_depth of 10, a learning_rate of 0.1, and the objective set to multiclass. The XGBoost model was trained with the objective set to multi:softprob, a learning_rate of 0.1, and a max_depth of 8.
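
Under these settings, the third stage can be sketched as follows; the feature matrices X_train/X_test and labels y_train/y_test from the extraction step are assumed names, and the CatBoost multiclass loss is an assumption, as only its tree parameters are reported above.

```python
from sklearn.svm import SVC
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

# Classifiers configured with the hyperparameters reported above.
classifiers = {
    "SVM": SVC(kernel="rbf", gamma=1e-4, C=1e5),
    "LightGBM": LGBMClassifier(objective="multiclass",
                               max_depth=10, learning_rate=0.1),
    "XGBoost": XGBClassifier(objective="multi:softprob",
                             max_depth=8, learning_rate=0.1),
    "CatBoost": CatBoostClassifier(loss_function="MultiClass",
                                   max_depth=10, learning_rate=0.1,
                                   verbose=False),
}

# Train each classifier on the extracted features and report test accuracy.
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))
```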

B. Classification Results

The classification results of the various methods are presented in Table 2 and Fig. 3. Model 0 (M0) was trained by fine-tuning a linear layer in an ImageNet-pretrained model using the labeled dataset. Model 2 (M2) was obtained by fine-tuning the linear layer in the MoCo model pretrained on chest X-ray images. We experimented with feature extraction using the M0 and M2 models and trained SVM, LightGBM, XGBoost, and CatBoost classifiers on the extracted features.

Fig. 3. Classification accuracy on the test set.

Table 2. Classification results on the test set (%)

(a) Resnet34
Methods                  Accuracy  Precision  Recall  F1-score
Model 0 (M0)             77.5      72.4       67.8    69.6
Features(M0)+SVM         83.0      78.6       78.0    78.0
Features(M0)+LightGBM    82.1      78.4       72.8    74.6
Features(M0)+CatBoost    81.2      77.0       71.8    73.4
Features(M0)+XGBoost     81.7      78.6       72.4    74.2
Model 2 (M2)             82.1      78.8       74.8    75.6
Features(M2)+SVM         86.7      84.0       82.4    83.0
Features(M2)+LightGBM    86.5      84.4       80.8    82.4
Features(M2)+CatBoost    85.4      82.4       79.0    80.2
Features(M2)+XGBoost     86.3      84.0       80.4    81.8

(b) Resnet50
Methods                  Accuracy  Precision  Recall  F1-score
Model 0 (M0)             82.9      79.2       80.0    79.4
Features(M0)+SVM         83.7      80.0       80.4    80.0
Features(M0)+LightGBM    84.4      82.2       77.6    79.2
Features(M0)+CatBoost    83.2      80.2       75.8    77.2
Features(M0)+XGBoost     84.4      82.6       77.4    79.2
Model 2 (M2)             84.8      81.8       80.0    80.8
Features(M2)+SVM         87.7      85.4       84.0    84.6
Features(M2)+LightGBM    87.4      85.4       82.2    83.4
Features(M2)+CatBoost    86.3      83.6       80.4    81.6
Features(M2)+XGBoost     87.3      85.4       81.8    83.0

(c) Resnet101
Methods                  Accuracy  Precision  Recall  F1-score
Model 0 (M0)             82.5      78.8       77.8    78.2
Features(M0)+SVM         83.5      80.0       79.4    79.8
Features(M0)+LightGBM    84.1      81.4       76.6    78.2
Features(M0)+CatBoost    82.6      78.8       74.6    75.8
Features(M0)+XGBoost     83.7      81.0       76.0    77.6
Model 2 (M2)             84.6      81.6       80.2    81.0
Features(M2)+SVM         87.4      85.2       83.2    84.2
Features(M2)+LightGBM    87.9      86.2       83.4    84.6
Features(M2)+CatBoost    87.2      85.2       82.4    83.6
Features(M2)+XGBoost     87.8      86.0       83.4    84.4


The experimental results on the three backbone architectures show that our proposed method, in which features extracted from a linear fine-tuned MoCo model are used to train a nonlinear classifier, achieved better results than solely fine-tuning a linear classifier on the ImageNet- and MoCo-pretrained models. In the experiment, the SVM, LightGBM, CatBoost, and XGBoost classifiers were trained on features extracted from the linear fine-tuned ImageNet and MoCo models pretrained on X-ray images. The proposed method improved the accuracy on all three network architectures and all four classification algorithms for both M0 and M2. For the linear fine-tuned MoCo model with a Resnet34 backbone, the SVM classifier achieved the highest accuracy of 86.7%, followed by LightGBM (86.5%), XGBoost (86.3%), and CatBoost (85.4%). The same accuracy ranking holds for Resnet50 combined with the same classifiers. Accuracies exceeding 87% were obtained by using features extracted by M2 (Resnet101) to train the four classifiers, with the highest accuracy obtained from LightGBM (87.9%) followed by XGBoost (87.8%), whereas the remaining classifiers provided accuracies of 87.2% and 87.4%.

We found that compared to the MoCo model with only linear fine-tuning, the accuracy of our proposed method was improved by at least 2.5% on the test set except for M2_Resnet50 + CatBoost (1.5% improvement), as detailed in Table 3 and Fig. 4. The SVM classifier improved the classification accuracy by 4.6, 2.9, and 2.8% compared with the linear fine-tuned MoCo model with a Resnet34, Resnet50, and Resnet101 backbone, respectively. All four classifiers improved the accuracy of the Resnet34 backbone by more than 3.0% with improvements of 4.6, 4.4, 4.2, and 3.3% for SVM, LightGBM, XGBoost, and CatBoost, respectively.

Fig. 4. Accuracy improvement of our proposed method on the test set compared to the linear fine-tuned MoCo model (%).

Table 3. Accuracy improvement on the test set of our proposed method compared to the linear fine-tuned MoCo model (%)

Method                 Resnet34  Resnet50  Resnet101
Features(M2)+SVM       4.6       2.9       2.8
Features(M2)+LightGBM  4.4       2.6       3.3
Features(M2)+CatBoost  3.3       1.5       2.6
Features(M2)+XGBoost   4.2       2.5       3.2

V. CONCLUSION AND FUTURE WORKS

We presented a novel approach for improving the performance of X-ray image classification by combining self-supervised learning and classification algorithms. Contrastive learning was employed to learn features (representations) from the abundant pool of unlabeled data to enhance data efficiency and address the limited availability of labeled X-ray images. We gathered two datasets: an unlabeled dataset (120,000 images) for self-supervised learning and a labeled dataset (98,996 images) with five classes. A linear fine-tuned MoCo model was used to extract features for training nonlinear classifiers (SVM, LightGBM, CatBoost, and XGBoost) to improve classification accuracy. The results of experiments with three ResNet architectures show that the linear fine-tuned ImageNet-pretrained models, with the exception of Resnet34 (77.5%), achieved accuracies of at least 82.5% on the test set. Although the linear fine-tuned MoCo models with Resnet34, Resnet50, and Resnet101 backbones achieved noteworthy accuracies of 82.1, 84.8, and 84.6%, respectively, our proposed method further increased the accuracy by 1.5% to 4.6% compared to MoCo with only fine-tuned linear layers. In combination with the SVM, LightGBM, XGBoost, and CatBoost classification algorithms, the accuracy of our proposed approach improved by 4.6, 4.4, 4.2, and 3.3%, respectively, for a Resnet34 backbone. Our method achieved superior performance compared to solely fine-tuning a linear classifier layer on the MoCo-pretrained model, with the highest accuracy of 87.9%.

In the near future, we intend to collect more chest X-ray images of patients with other lung diseases and conduct experiments on the combination of other self-supervised learning methods with other deep networks.

References

1. Z. Akkus, et al., "Deep learning for brain MRI segmentation: state of the art and future directions," Journal of Digital Imaging, vol. 30, no. 4, pp. 449-459, Jun. 2017. DOI: 10.1007/s10278-017-9983-4.
2. C. H. Liang, et al., "Identifying pulmonary nodules or masses on chest radiography using deep learning: external validation and strategies to improve clinical practice," Clinical Radiology, vol. 75, no. 1, pp. 38-45, Jan. 2020. DOI: 10.1016/j.crad.2019.08.005.
3. J. Ker, et al., "Deep learning applications in medical image analysis," IEEE Access, vol. 6, pp. 9375-9389, 2018. DOI: 10.1109/ACCESS.2017.2788044.
4. K. He, et al., "Momentum contrast for unsupervised visual representation learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, pp. 9729-9738, 2020.
5. X. Chen, et al., "Improved baselines with momentum contrastive learning," arXiv preprint arXiv:2003.04297, Mar. 2020. DOI: 10.48550/arXiv.2003.04297.
6. T. Chen, et al., "A simple framework for contrastive learning of visual representations," arXiv preprint arXiv:2002.05709, 2020. DOI: 10.48550/arXiv.2002.05709.
7. T. Chen, et al., "Big self-supervised models are strong semi-supervised learners," arXiv preprint arXiv:2006.10029, 2020. DOI: 10.48550/arXiv.2006.10029.
8. H. Sowrirajan, et al., "MoCo pretraining improves representation and transferability of chest x-ray models," Medical Imaging with Deep Learning, vol. 143, pp. 728-744, PMLR, 2021. DOI: 10.48550/arXiv.2010.05352.
9. V. N. Vapnik, The Nature of Statistical Learning Theory, 2nd ed. Springer-Verlag New York, 2000.
10. G. Ke, et al., "LightGBM: A highly efficient gradient boosting decision tree," Advances in Neural Information Processing Systems, vol. 30, 2017.
11. T. Chen and C. Guestrin, "XGBoost: A scalable tree boosting system," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, USA, pp. 785-794, Aug. 2016. DOI: 10.1145/2939672.2939785.
12. A. V. Dorogush, et al., "CatBoost: gradient boosting with categorical features support," arXiv preprint arXiv:1810.11363, Oct. 2018. DOI: 10.48550/arXiv.1810.11363.
13. E. Kesim, et al., "X-ray chest image classification by a small-sized convolutional neural network," in 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT), Istanbul, Turkey, pp. 1-5, 2019. DOI: 10.1109/EBBT.2019.8742050.
14. V. Chouhan, et al., "A novel transfer learning based approach for pneumonia detection in chest x-ray images," Applied Sciences, vol. 10, no. 2, p. 559, Jan. 2020. DOI: 10.3390/app10020559.
15. P. Rajpurkar, et al., "Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists," PLoS Medicine, vol. 15, no. 11, p. e1002686, Nov. 2018. DOI: 10.1371/journal.pmed.1002686.
16. A. Bhandary, et al., "Deep-learning framework to detect lung abnormality - a study with chest x-ray and lung CT scan images," Pattern Recognition Letters, vol. 129, pp. 271-278, Jan. 2020. DOI: 10.1016/j.patrec.2019.11.013.
17. M. Wozniak, et al., "Small lung nodules detection based on local variance analysis and probabilistic neural network," Computer Methods and Programs in Biomedicine, vol. 161, pp. 173-180, Jul. 2018. DOI: 10.1016/j.cmpb.2018.04.025.
18. S. S. Yadav and S. M. Jadhav, "Deep convolutional neural network based medical image classification for disease diagnosis," Journal of Big Data, vol. 6, no. 1, pp. 1-18, Dec. 2019. DOI: 10.1186/s40537-019-0276-2.
19. T.-N. Do, et al., "SVM on top of deep networks for Covid-19 detection from chest X-ray images," Journal of Information and Communication Convergence Engineering, vol. 20, no. 3, pp. 219-225, Sep. 2022. DOI: 10.56977/jicce.2022.20.3.219.
20. P. Afshar, et al., "COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images," Pattern Recognition Letters, vol. 138, pp. 638-643, Oct. 2020. DOI: 10.1016/j.patrec.2020.09.010.
21. E. E.-D. Hemdan, et al., "COVIDX-Net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images," arXiv preprint arXiv:2003.11055, Mar. 2020. DOI: 10.48550/arXiv.2003.11055.
22. T. Ozturk, et al., "Automated detection of COVID-19 cases using deep neural networks with X-ray images," Computers in Biology and Medicine, vol. 121, p. 103792, Jun. 2020. DOI: 10.1016/j.compbiomed.2020.103792.
23. K. He, et al., "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, pp. 770-778, 2016.
24. J. Irvin, et al., "CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison," in Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, USA, vol. 33, no. 1, pp. 590-597, Jul. 2019. DOI: 10.1609/aaai.v33i01.3301590.
25. H. Q. Nguyen, et al., "VinDr-CXR: An open dataset of chest X-rays with radiologist's annotations," Scientific Data, vol. 9, no. 1, p. 429, Jul. 2022. DOI: 10.1038/s41597-022-01498-w.
26. D. S. Kermany, et al., "Identifying medical diagnoses and treatable diseases by image-based deep learning," Cell, vol. 172, no. 5, pp. 1122-1131, Feb. 2018. DOI: 10.1016/j.cell.2018.02.010.
27. J. Shiraishi, et al., "Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules," American Journal of Roentgenology, vol. 174, no. 1, pp. 71-74, Jan. 2000. DOI: 10.2214/ajr.174.1.1740071.
28. J. Saltz, et al., Stony Brook University Covid-19 Positive Cases, 2020. [Online]. Available: https://doi.org/10.7937/TCIA.BBAG-2923.
29. A. Haghanifar, et al., Covid-19 chest x-ray image repository, 2021. [Online]. Available: https://figshare.com/articles/dataset/COVID-19.
30. J. P. Cohen, et al., "Covid-19 image data collection: Prospective predictions are the future," arXiv preprint arXiv:2006.11988, Jun. 2020. DOI: 10.48550/arXiv.2006.11988.
31. M. D. L. I. Vaya, et al., "BIMCV COVID-19+: a large annotated dataset of RX and CT images from COVID-19 patients," arXiv preprint arXiv:2006.01174, Jun. 2020. DOI: 10.48550/arXiv.2006.01174.
32. H. B. Winther, et al., Covid-19 image repository, 2020. [Online]. Available: https://doi.org/10.6084/m9.figshare.12275009.v1.
33. SIIM-ACR Pneumothorax Segmentation. [Online]. Available: https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation.
34. X. Wang, et al., "ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, pp. 2097-2106, 2017. DOI: 10.1109/cvpr.2017.369.
35. M. Abadi, et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems," arXiv preprint arXiv:1603.04467, Mar. 2016. DOI: 10.48550/arXiv.1603.04467.
36. F. Pedregosa, et al., "Scikit-learn: Machine learning in Python," The Journal of Machine Learning Research, vol. 12, pp. 2825-2830, Oct. 2011.

Tri-Thuc Vo

was born in Vinhlong in 1989. He received his M.Sc. degree in informatics from the University of Brest in 2018. He is currently a lecturer at the College of Information Technology, Can Tho University, Vietnam. His research interests include medical data analysis and machine learning.


Thanh-Nghi Do

was born in Can Tho in 1974. He received his Ph.D. degree in informatics from the University of Nantes in 2004. He is currently an associate professor at the College of Information Technology, Can Tho University, Vietnam. He is also an associate researcher at UMI UMMISCO 209 (IRD/UPMC), Sorbonne University, and the Pierre and Marie Curie University, France. His research interests include data mining with support vector machines, kernel-based methods, decision tree algorithms, ensemble-based learning, and information visualization. He has served on the program committees of international conferences and is a reviewer for journals in his fields of expertise.


Article

Regular paper

Journal of information and communication convergence engineering 2024; 22(2): 165-171

Published online June 30, 2024 https://doi.org/10.56977/jicce.2024.22.2.165

Copyright © Korea Institute of Information and Communication Engineering.

Improving Chest X-ray Image Classification via Integration of Self-Supervised Learning and Machine Learning Algorithms

Tri-Thuc Vo 1 and Thanh-Nghi Do2,3*

1Department of Computer Science, Can Tho University, Can Tho 94000, Viet Nam
2Department of Computer Networks, Can Tho University, Can Tho 94000, Viet Nam
3UMI UMMISCO 209, IRD/UPMC, Paris 75000, France

Correspondence to:Thanh-Nghi Do (E-mail: dtnghi@ctu.edu.vn)
Department of Computer Networks, Can Tho University, Can Tho 94165, Viet Nam

Received: July 4, 2023; Revised: February 1, 2024; Accepted: March 4, 2024

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this study, we present a novel approach for enhancing chest X-ray image classification (normal, Covid-19, edema, mass nodules, and pneumothorax) by combining contrastive learning and machine learning algorithms. A vast amount of unlabeled data was leveraged to learn representations so that data efficiency is improved as a means of addressing the limited availability of labeled data in X-ray images. Our approach involves training classification algorithms using the extracted features from a linear fine-tuned Momentum Contrast (MoCo) model. The MoCo architecture with a Resnet34, Resnet50, or Resnet101 backbone is trained to learn features from unlabeled data. Instead of only fine-tuning the linear classifier layer on the MoCo-pretrained model, we propose training nonlinear classifiers as substitutes for softmax in deep networks. The empirical results show that while the linear fine-tuned ImageNet-pretrained models achieved the highest accuracy of only 82.9% and the linear fine-tuned MoCo-pretrained models an increased highest accuracy of 84.8%, our proposed method offered a significant improvement and achieved the highest accuracy of 87.9%.

Keywords: Chest X-ray image, Contrastive learning, Image classification, Self-supervised learning

I. INTRODUCTION

The lung is an important human organ. Lung diseases can affect health and even lead to death. According to the World Health Organization, the number of deaths attributed to lung-related diseases exceeded 3.3 million individuals in 2017. The Covid-19 pandemic has caused more than 6.9 million deaths and approximately 768 million infections as of June 28, 2023 (WHO, https://www.who.int). Chest X-ray is the most cost-effective diagnostic tool and a common method for diagnosing and screening lung diseases. However, disease diagnosis based on chest X-ray images requires highly skilled radiologists. The potential subjectivity of lung diseases detection from X-ray images can lead to inaccurate diagnostic results and less effective treatments. Therefore, a system that can assist in disease diagnosis from chest radiographs will provide significant treatment cost benefits to patients and improve their physical and mental health.

Deep learning has been widely applied to solve problems related to medical image data such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI) images in recent years and achieved promising results [1-3]. However, deep learning requires large amounts of annotated data. Unfortunately, the availability of labeled medical data is limited owing to a number of factors such as a lack of domain experts, privacy regulations, and expensive and timeconsuming data labeling processes. Self-supervised learning is an alternative approach in which the wealth of available unlabeled data is leveraged for the learning of useful representations or features without explicit human annotation. Impressive results have been obtained in this approach by utilizing unlabeled data images to generate pretrained models, which were subsequently fine-tuned using a limited amount of labeled data [4-7]. The pretrained model was constructed by maximizing the agreement between different views of the same image and minimizing the agreement between different images based on a loss function. Self-supervised learning offers improved accuracy compared to supervised learning using labeled data [4-8].

In this study, a novel approach is proposed for improving the X-ray image classification of normal lungs and lungs with four lung diseases comprising Covid-19, edema, massnodule, and pneumothorax. In our approach, a self-supervised learning technique is combined with supervised learning algorithms to efficiently classify X-ray images of lung diseases. The linear fine-tuned model obtained from Momentum Contrast (MoCo) contrastive learning is used as a feature extractor for labeled X-ray images. The extracted features are then trained on classification algorithms comprising support vector machine (SVM) [9], LightGBM [10], XGBoost [11], and CatBoost [12], which are used to substitute for softmax in deep networks. The experimental results show that our proposed approach achieves better accuracy than those of linear fine-tuned MoCo-pretrained models.

The remainder of this paper is organized as follows: The related work on X-ray image lung disease classification is briefly discussed in Section II. Our proposed method is presented in Section III and the experimental results in Section IV. The conclusions and future work are presented in Section V.

II. RELATED WORK

Deep learning techniques have been applied to lung disease detection from chest X-ray images. A convolutional neural network for image classification problems with 12 classes was proposed and an accuracy of 86% achieved [13]. Chouhan et al. [14] investigated the combination of five deep learning models with transfer learning for pneumonia detection in chest X-ray images and achieved an accuracy of 96.4% using their ensemble model. A CNN architecture called CheXNeXt for classifying 14 different pathologies using chest X-ray images as the input was developed [15]. Bhandary et al. suggested a deep learning framework based on a modified version of AlexNet and a SVM to detect lung abnormalities in chest X-ray and CT images [16]. A novel method that incorporates local variance analysis and a probabilistic neural network for classifying lung carcinomas with 92% accuracy was introduced in [17]. A transfer learning method involving two convolutional neural networks (VGG16 and InceptionV3) was applied in [18] for pneumonia classification in chest X-ray images.

The outbreak of the Covid-19 pandemic has resulted in many studies on diagnosing Covid-19 through the application of deep learning to chest X-ray images. In [19], a SVM model was employed on top of deep networks for detecting Covid-19 in chest X-ray images and the highest accuracy of 96.16% was achieved. In [20], Afshar et al. proposed a model framework based on capsule networks named COVID-CAPS for diagnosing Covid-19 from X-ray images. An accuracy of 95.7%, sensitivity of 90%, and specificity of 95.8% were achieved. A new deep learning framework called COVIDX-Net was presented in [21], in which seven deep convolutional neural network architectures were used to construct a framework for classifying X-ray images. The highest F1-scores of 89% and 91% were achieved for normal patients and patients infected by Covid-19, respectively. Ozturk et al. developed a model for automatic Covid-19 diagnosis using X-ray images as input [22]. Their system achieved an accuracy of 98.08% for two-class scenarios (Covid-19 and No-Findings) and 87.02% for multiclass scenarios (Covid-19, No-Findings, and pneumonia).

Self-supervised learning has recently received considerable attention from the machine learning community because of its ability to construct trained models from unlabeled datasets and improve performance in downstream tasks such as fine-tuning labeled data. This method is especially crucial for addressing the scarcity of annotated data and improving the accuracy of classification models in medical imaging. Chen et al. proposed a self-learning framework called Sim-CLR, which is an improvement over ImageNet [6,7]. In [4,5], the use of MoCo for contrastive learning was proposed and competitive results achieved by fine-tuning a linear classifier using the ImageNet dataset. In [8], Sowrirajan proposed a method called MoCo-CXR for X-ray image classification based on MoCo contrastive learning. This method was designed for detecting several lung diseases such as pleural effusion, tuberculosis, and atelectasis. MoCo-CXR performed better than an ImageNet-pretrained model with only fine-tuning.

III. METHODOLOGY

A. MoCo Self-Supervised Learning

Promising results have been achieved in self-supervised learning methods such as MoCo [4,5] by leveraging unlabeled data to generate a pretrained model in which a visual representation encoder is trained based on a loss function. In the MoCo architecture, features are learnt from unlabeled data by training two encoders comprising the encoder and the momentum encoder. Both encoders have the same architecture, which is typically a deep neural network such as a convolutional neural network. MoCo is a method for building large and consistent dictionaries for learning from unlabeled data through a contrastive loss function called InfoNCE, which is applied to measure the agreement between positive and negative image pairs. The positive image pairs are created by applying two data-augmentation operators on the same image. The dictionary is used as a queue for samples and updated by enqueuing the current mini-batch and dequeuing the oldest mini-batch. The training process involves updating the parameters of the encoder to minimize the contrastive loss using backpropagation. At the same time, the momentum encoder parameters are updated by applying an exponential moving average to the parameters of the encoder. Labeled data is subsequently used to fine-tune the MoCo-pretrained model.

The overall flowchart of the X-ray image-based lung disease classification process is shown in Fig. 1. The CheXpert dataset [24] is first used to train the MoCo architecture with a backbone based on the Resnet34, Resnet50, or Resnet101 deep learning network architectures [23]. The parameters from ImageNet are loaded onto the backbone of the deep learning architecture and then trained on MoCo with contrastive learning by applying the image rotation and image flip transformations. Several data augmentation methods for unlabeled data such as random crop, grayscale, jitter, horizontal flip (MoCo v1 [4]), and Gaussian blurring (MoCo v2 [5]) are provided in MoCo. However, because X-ray images are grayscale images and disease diagnosis from X-ray images depends on specific parts of the image, the grayscale, jitter, and blurring transformations are unsuitable for X-ray images as the image labels may be modified. We therefore applied two types of data augmentation comprising random rotation (10°) and horizontal flipping on the images to create pairs of positive images for contrastive learning through a loss function in a similar manner to [8]. The model obtained from MoCo after the first step is denoted as Model 1.

Figure 1. Diagram of chest X-ray image classification process.

Supervised learning was performed by fine-tuning Model 1. All the layers of the backbone model were frozen and a linear classifier layer trained. The labeled dataset was divided into three subsets comprising the training, valid, and test sets. The training and valid sets were passed through Model 1 to fine-tune the linear layer to obtain Model 2. A test set was used for evaluation. We experimented with finetuning using linear classifiers from an ImageNet model and a MoCo model pretrained with X-ray images.

B. Feature Extraction and Classifier Training

Our goal is to use the linear fine-tuned MoCo Model 2 to improve X-ray image classification by integrating it with nonlinear classifiers. In other words, the nonlinear classifiers substitute for softmax in the deep networks. In our approach, Model 2 is used as an extractor to extract features (representations) from the labeled dataset. The SVM, LightGBM, XGBoost, and CatBoost nonlinear classifiers are subsequently trained using the extracted features.

The SVM algorithm [9] with a radial basis function (RBF) kernel is a powerful algorithm for multiclass classification. The use of RBFs to handle complex decision boundaries in feature space makes this an effective algorithm for nonlinear data. The hyperparameters C and gamma γ need to be tuned for optimal performance. LightGBM [10] is a gradient boosting framework known for its high speed and efficiency developed by Microsoft. It uses techniques such as gradient-based one-sided sampling (GOSS) and exclusive feature bundling (EFB) to achieve fast training and low memory usage while maintaining high accuracy. Extreme gradient boosting (XGBoost) [11] is a scalable and efficient gradient boosting system used widely owing to its performance and flexibility. The tree construction process is optimized using a regularized objective function and tree pruning techniques to achieve high prediction accuracy and robustness against overfitting. CatBoost [12] is a gradient boosting algorithm developed by Yandex for categorical features. It employs advanced strategies such as ordered boosting and various regularization techniques to achieve high accuracy with minimal hyperparameter tuning.

We demonstrate that better results can be achieved by performing feature extraction with Model 2 to train a nonlinear classifier compared to fine-tuning a MoCo model without trained nonlinear classifiers. The detailed experimental results are presented in Section IV.

C. Chest X-Ray Image Dataset

An actual chest X-ray dataset was obtained from public sources. The CheXpert X-ray dataset [24] was used as an unlabeled dataset to train the contrastive learning with MoCo. This dataset was published by a Stanford University research team in 2019. We used 120,000 images from CheXpert for training with MoCo.

The dataset has five labels for healthy patients, patients infected by Covid-19, and patients with lung diseases excluding Covid-19 comprising edema, mass nodules and pneumothorax. This dataset was obtained from published datasets with five classes (normal, Covid-19, edema, mass nodule, and pneumothorax) [24-34]. The total number of images is 98,996. The X-ray images were tagged with one of the five classes. Examples of chest X-ray images in the five classes are shown in Fig. 2. All the images were resized to 224 × 224 pixels. The labeled dataset was divided into three subsets with 70% of the images in the training set, 15% in the valid set, and 15% in the test set. The details of the dataset are listed in Table 1.

Figure 2. Sample chest x-ray images.

Table 1 . Labeled chest X-ray dataset.

LabelTrain setValid setTest set
Normal18,4253,9493,948
Covid-1914,2523,0543,054
Edema23,2404,9804,980
Mass-nodule4,077873874
Pneumothorax9,3031,9931,994

IV. EXPERIMENTS AND RESULTS

A. Experimental Setup

The X-ray image lung disease classification program was written in Python. The Resnet34, Resnet50, and Resnet101 [23] deep learning network architectures were implemented using the Keras, TensorFlow [35], Scikit-learn [36], and Pytorch libraries. All the experimental results were obtained on a computer running Ubuntu 20.04.5 with an Intel(R) CoreTM i5-10400 CPU @ 2.90GHz × 12, 16 GB of RAM, and a 12 GB GDDR6 NVIDIA GeForce RTX 3060 with 3584 CUDA cores.

The training process for lung disease classification in chest radiographs comprises three main stages. In the first stage, we trained MoCo with 120,000 images extracted from the CheXpert dataset [24]. The model parameters for contrast learning training on MoCo comprise a batch size of 32, learning rate of 10−3, momentum of 0.9, weight decay of 10−3, the Adam optimizer, and 20 epochs. The checkpoint obtained in the first stage was used to fine-tune the linear layer on the labeled dataset in the second stage. Training was performed using three backbones comprising Resnet34, Resnet50, and Resnet101.

In the third stage, the features extracted from the linear fine-tuned MoCo model trained on the training set were used to determine the best hyper-parameters (nonlinear RBF kernel function with γ = 0.0001 and positive constant cost of 105 considering the trade-off between margin size and errors) for the SVM model. LightGBM and CatBoost were trained using a max_depth of 10 and learning_rate of 0.1 and the objective set to multiclass. Model XGBoost was trained with the objective set to multiple:softprob, a learning_rate of 0.1, and max_depth of 8.

B. Classification Results

The classification results from the various methods are presented in Table 2 and Fig. 3. Model 0 (M0) was trained by fine-tuning a linear layer in a ImageNet-pretrained model using the labeled dataset. Model 2 (M2) was obtained by fine-tuning the linear layer in the MoCo model using chest X-ray images. We experimented with feature extraction using the M0 and M2 models and trained SVM, LightGBM, XGBoost, and CatBoost classifiers.

Figure 3. Classification accuracy on the test set.

Table 2 . Classification results on the test set (%).

(a) Resnet34
MethodsAccuracyPrecisionRecallF1-score
Model0 (M0)77.572.467.869.6
Features(M0)+SVM83.078.678.078.0
Features(M0)+LightGBM82.178.472.874.6
Features(M0)+CatBoost81.277.071.873.4
Features(M0)+XGBoost81.778.672.474.2
Model 2 (M2)82.178.874.875.6
Features(M2)+SVM86.784.082.483.0
Features(M2)+LightGBM86.584.480.882.4
Features(M2)+CatBoost85.482.479.080.2
Features(M2)+XGBoost86.384.080.481.8
(b) Resnet50
MethodsAccuracyPrecisionRecallF1-score
Model0 (M0)82.979.280.079.4
Features(M0)+SVM83.780.080.480.0
Features(M0)+LightGBM84.482.277.679.2
Features(M0)+CatBoost83.280.275.877.2
Features(M0)+XGBoost84.482.677.479.2
Model 2 (M2)84.881.880.080.8
Features(M2)+SVM87.785.484.084.6
Features(M2)+LightGBM87.485.482.283.4
Features(M2)+CatBoost86.383.680.481.6
Features(M2)+XGBoost87.385.481.883.0
(c) Resnet101
MethodsAccuracyPrecisionRecallF1-score
Model0 (M0)82.578.877.878.2
Features(M0)+SVM83.580.079.479.8
Features(M0)+LightGBM84.181.476.678.2
Features(M0)+CatBoost82.678.874.675.8
Features(M0)+XGBoost83.781.076.077.6
Model 2 (M2)84.681.680.281.0
Features(M2)+SVM87.485.283.284.2
Features(M2)+LightGBM87.986.283.484.6
Features(M2)+CatBoost87.285.282.483.6
Features(M2)+XGBoost87.886.083.484.4


The comprehensive X-ray image lung disease classification results are presented in Table 2 and Fig. 3. The experimental results on the three backbone architectures show that our proposed method, in which features are extracted from a linear fine-tuned MoCo model combined with a trained nonlinear classifier, achieved better results compared to solely fine-tuning a linear classifier on the ImageNet and MoCo pretrained models. In the experiment, the SVM, LightGBM, CatBoost, and XGBoost classifiers were trained on features extracted from the linear fine-tuned ImageNet and MoCo models pretrained on X-ray images. The proposed method achieved improved accuracy on all three network architectures and four classification algorithms for both M0 and M2. For the linear fine-tuned MoCo model with a Resnet34 backbone, the SVM classifier achieved the highest accuracy of 86.7%, followed by LightGBM (86.5%), XGBoost (86.3%), and CatBoost (85.4%). The same ranking order for accuracy holds for Resnet50 combined with the same classifiers. Accuracies exceeding 87% were obtained by using features extracted by M2 (Resnet101) to train the four classifiers with the highest accuracy obtained from LightGBM (87.9%) followed by XGBoost (87.8%), whereas the remaining classifiers provided accuracies of 87.2% to 87.4%.

We found that compared to the MoCo model with only linear fine-tuning, the accuracy of our proposed method was improved by at least 2.5% on the test set except for M2_Resnet50 + CatBoost (1.5% improvement), as detailed in Table 3 and Fig. 4. The SVM classifier improved the classification accuracy by 4.6, 2.9, and 2.8% compared with the linear fine-tuned MoCo model with a Resnet34, Resnet50, and Resnet101 backbone, respectively. All four classifiers improved the accuracy of the Resnet34 backbone by more than 3.0% with improvements of 4.6, 4.4, 4.2, and 3.3% for SVM, LightGBM, XGBoost, and CatBoost, respectively.

Figure 4. Accuracy improvement of our proposed method on test set compared to the linear fine-tuned MoCo model (%)

Table 3 . Accuracy improvement on the test set of our proposed method compared to the linear fine-tuned MoCo model (%).

MethodResnet34Resnet 50Resnet101
Features(M2)+SVM4.62.92.8
Features(M2)+LightGBM4.42.63.3
Features(M2)+CatBoost3.31.52.6
Features(M2)+XGBoost4.22.53.2

V. CONCLUSION AND FUTURE WORKS

We presented a novel approach for improving the performance of X-ray image classification by incorporating selfsupervised learning and classification algorithms. Contrastive learning was employed to learn features (representations) from the abundant pool of unlabeled data to enhance data efficiency and address the limited availability of labeled data in X-ray images. We gathered two datasets comprising an unlabeled dataset (120,000 images) for self-supervised learning and a labeled dataset (98,996 images) with five classes. A linear fine-tuned MoCo model was integrated to extract features for training nonlinear classifiers (SVM, LightGBM, CatBoost, and XGBoost) to improve classification accuracy. The results of experiments with three ResNet architectures show that the linear fine-tuned ImageNet pretrained models, with the exception of ResNet34 (77.5%), achieved accuracies of at least 82.5% on the test set. Although the linear fine-tuned MoCo models with Resnet34, Resnet50, and Resnet101 backbones achieved noteworthy accuracies 82.1, 84.8, and 84.6%, respectively, our proposed method further increased the accuracy by 1.5% to 4.8% compared to MoCo with only fine-tuned linear layers. Through combination with the SVM, LightGBM, XGBoost, and Cat-Boost classification algorithms, the accuracy of our proposed approach was improved by 4.6, 4.4, 4.2, and 3.3% for a Resnet34 backbone, respectively. Our method achieved superior performance compared to solely fine-tuning a linear classifier layer on the MoCo-pretrained model with the highest accuracy of 87.9%.

In the near future, we intend to collect more chest X-ray images of patients with other lung diseases and conduct experiments on the combination of other self-supervised learning methods with other deep networks.

Fig 1.

Figure 1.Diagram of chest X-ray image classification process.
Journal of Information and Communication Convergence Engineering 2024; 22: 165-171https://doi.org/10.56977/jicce.2024.22.2.165

Fig 2.

Figure 2.Sample chest x-ray images.
Journal of Information and Communication Convergence Engineering 2024; 22: 165-171https://doi.org/10.56977/jicce.2024.22.2.165

Fig 3.

Figure 3.Classification accuracy on the test set.
Journal of Information and Communication Convergence Engineering 2024; 22: 165-171https://doi.org/10.56977/jicce.2024.22.2.165

Fig 4.

Figure 4.Accuracy improvement of our proposed method on test set compared to the linear fine-tuned MoCo model (%)
Journal of Information and Communication Convergence Engineering 2024; 22: 165-171https://doi.org/10.56977/jicce.2024.22.2.165

Table 1 . Labeled chest X-ray dataset.

LabelTrain setValid setTest set
Normal18,4253,9493,948
Covid-1914,2523,0543,054
Edema23,2404,9804,980
Mass-nodule4,077873874
Pneumothorax9,3031,9931,994

Table 2 . Classification results on the test set (%).

(a) Resnet34
MethodsAccuracyPrecisionRecallF1-score
Model0 (M0)77.572.467.869.6
Features(M0)+SVM83.078.678.078.0
Features(M0)+LightGBM82.178.472.874.6
Features(M0)+CatBoost81.277.071.873.4
Features(M0)+XGBoost81.778.672.474.2
Model 2 (M2)82.178.874.875.6
Features(M2)+SVM86.784.082.483.0
Features(M2)+LightGBM86.584.480.882.4
Features(M2)+CatBoost85.482.479.080.2
Features(M2)+XGBoost86.384.080.481.8
(b) Resnet50
MethodsAccuracyPrecisionRecallF1-score
Model0 (M0)82.979.280.079.4
Features(M0)+SVM83.780.080.480.0
Features(M0)+LightGBM84.482.277.679.2
Features(M0)+CatBoost83.280.275.877.2
Features(M0)+XGBoost84.482.677.479.2
Model 2 (M2)84.881.880.080.8
Features(M2)+SVM87.785.484.084.6
Features(M2)+LightGBM87.485.482.283.4
Features(M2)+CatBoost86.383.680.481.6
Features(M2)+XGBoost87.385.481.883.0
(c) Resnet101
MethodsAccuracyPrecisionRecallF1-score
Model0 (M0)82.578.877.878.2
Features(M0)+SVM83.580.079.479.8
Features(M0)+LightGBM84.181.476.678.2
Features(M0)+CatBoost82.678.874.675.8
Features(M0)+XGBoost83.781.076.077.6
Model 2 (M2)84.681.680.281.0
Features(M2)+SVM87.485.283.284.2
Features(M2)+LightGBM87.986.283.484.6
Features(M2)+CatBoost87.285.282.483.6
Features(M2)+XGBoost87.886.083.484.4

Table 3 . Accuracy improvement on the test set of our proposed method compared to the linear fine-tuned MoCo model (%).

MethodResnet34Resnet 50Resnet101
Features(M2)+SVM4.62.92.8
Features(M2)+LightGBM4.42.63.3
Features(M2)+CatBoost3.31.52.6
Features(M2)+XGBoost4.22.53.2

References

  1. Z. Akkus, et al., "Deep learning for brain MRI segmentation: state of the art and future directions," Journal of Digital Imaging, vol. 30, no. 4, pp. 449-459, Jun. 2017. DOI: 10.1007/s10278-017-9983-4.
  2. C. H. Liang, et al., "Identifying pulmonary nodules or masses on chest radiography using deep learning: external validation and strategies to improve clinical practice," Clinical Radiology, vol. 75, no. 1, pp. 38-45, Jan. 2020. DOI: 10.1016/j.crad.2019.08.005.
  3. J. Ker, et al., "Deep learning applications in medical image analysis," IEEE Access, vol. 6, pp. 9375-9389, 2018. DOI: 10.1109/ACCESS.2017.2788044.
  4. K. He, et al., "Momentum contrast for unsupervised visual representation learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, pp. 9729-9738, 2020.
  5. X. Chen, et al., "Improved baselines with momentum contrastive learning," arXiv preprint arXiv:2003.04297, Mar. 2020. DOI: 10.48550/arXiv.2003.04297.
  6. T. Chen, et al., "A simple framework for contrastive learning of visual representations," arXiv preprint arXiv:2002.05709, 2020. DOI: 10.48550/arXiv.2002.05709.
  7. T. Chen, et al., "Big self-supervised models are strong semi-supervised learners," arXiv preprint arXiv:2006.10029, 2020. DOI: 10.48550/arXiv.2006.10029.
  8. H. Sowrirajan, et al., "MoCo pretraining improves representation and transferability of chest X-ray models," Medical Imaging with Deep Learning, vol. 143, pp. 728-744, PMLR, 2021. DOI: 10.48550/arXiv.2010.05352v1.
  9. V. N. Vapnik, The Nature of Statistical Learning Theory, 2nd ed. Springer-Verlag New York, 2000.
  10. G. Ke, et al., "LightGBM: A highly efficient gradient boosting decision tree," Advances in Neural Information Processing Systems, vol. 30, 2017.
  11. T. Chen and C. Guestrin, "XGBoost: A scalable tree boosting system," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, USA, pp. 785-794, Aug. 2016. DOI: 10.1145/2939672.2939785.
  12. A. V. Dorogush, et al., "CatBoost: gradient boosting with categorical features support," arXiv preprint arXiv:1810.11363, Oct. 2018. DOI: 10.48550/arXiv.1810.11363.
  13. E. Kesim, et al., "X-ray chest image classification by a small-sized convolutional neural network," in 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT), Istanbul, Turkey, pp. 1-5, 2019. DOI: 10.1109/EBBT.2019.8742050.
  14. V. Chouhan, et al., "A novel transfer learning based approach for pneumonia detection in chest X-ray images," Applied Sciences, vol. 10, no. 2, p. 559, Jan. 2020. DOI: 10.3390/app10020559.
  15. P. Rajpurkar, et al., "Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists," PLoS Medicine, vol. 15, no. 11, p. e1002686, Nov. 2018. DOI: 10.1371/journal.pmed.1002686.
  16. A. Bhandary, et al., "Deep-learning framework to detect lung abnormality - a study with chest X-ray and lung CT scan images," Pattern Recognition Letters, vol. 129, pp. 271-278, Jan. 2020. DOI: 10.1016/j.patrec.2019.11.013.
  17. M. Wozniak, et al., "Small lung nodules detection based on local variance analysis and probabilistic neural network," Computer Methods and Programs in Biomedicine, vol. 161, pp. 173-180, Jul. 2018. DOI: 10.1016/j.cmpb.2018.04.025.
  18. S. S. Yadav and S. M. Jadhav, "Deep convolutional neural network based medical image classification for disease diagnosis," Journal of Big Data, vol. 6, no. 1, pp. 1-18, Dec. 2019. DOI: 10.1186/s40537-019-0276-2.
  19. T.-N. Do, et al., "SVM on top of deep networks for Covid-19 detection from chest X-ray images," Journal of Information and Communication Convergence Engineering, vol. 20, no. 3, pp. 219-225, Sep. 2022. DOI: 10.56977/jicce.2022.20.3.219.
  20. P. Afshar, et al., "COVID-CAPS: A capsule network-based framework for identification of Covid-19 cases from X-ray images," Pattern Recognition Letters, vol. 138, pp. 638-643, Oct. 2020. DOI: 10.1016/j.patrec.2020.09.010.
  21. E. E.-D. Hemdan, et al., "COVIDX-Net: A framework of deep learning classifiers to diagnose Covid-19 in X-ray images," arXiv preprint arXiv:2003.11055, Mar. 2020. DOI: 10.48550/arXiv.2003.11055.
  22. T. Ozturk, et al., "Automated detection of Covid-19 cases using deep neural networks with X-ray images," Computers in Biology and Medicine, vol. 121, p. 103792, Jun. 2020. DOI: 10.1016/j.compbiomed.2020.103792.
  23. K. He, et al., "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, pp. 770-778, 2016.
  24. J. Irvin, et al., "CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison," in Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, USA, vol. 33, no. 1, pp. 590-597, Jul. 2019. DOI: 10.1609/aaai.v33i01.3301590.
  25. H. Q. Nguyen, et al., "VinDr-CXR: An open dataset of chest X-rays with radiologist's annotations," Scientific Data, vol. 9, no. 1, p. 429, Jul. 2022. DOI: 10.1038/s41597-022-01498-w.
  26. D. S. Kermany, et al., "Identifying medical diagnoses and treatable diseases by image-based deep learning," Cell, vol. 172, no. 5, pp. 1122-1131, Feb. 2018. DOI: 10.1016/j.cell.2018.02.010.
  27. J. Shiraishi, et al., "Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules," American Journal of Roentgenology, vol. 174, no. 1, pp. 71-74, Jan. 2000. DOI: 10.2214/ajr.174.1.1740071.
  28. J. Saltz, et al., Stony Brook University Covid-19 Positive Cases, 2020. [Online]. Available: https://doi.org/10.7937/TCIA.BBAG-2923.
  29. A. Haghanifar, et al., Covid-19 chest X-ray image repository, 2021. [Online]. Available: https://figshare.com/articles/dataset/COVID-19.
  30. J. P. Cohen, et al., "Covid-19 image data collection: Prospective predictions are the future," arXiv preprint arXiv:2006.11988, Jun. 2020. DOI: 10.48550/arXiv.2006.11988.
  31. M. D. L. I. Vaya, et al., "BIMCV COVID-19+: a large annotated dataset of RX and CT images from Covid-19 patients," arXiv preprint arXiv:2006.01174, Jun. 2020. DOI: 10.48550/arXiv.2006.01174.
  32. H. B. Winther, et al., Covid-19 image repository, 2020. [Online]. Available: https://doi.org/10.6084/m9.figshare.12275009.v1.
  33. SIIM-ACR Pneumothorax Segmentation. [Online]. Available: https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation.
  34. X. Wang, et al., "ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, pp. 2097-2106, 2017. DOI: 10.1109/cvpr.2017.369.
  35. M. Abadi, et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems," arXiv preprint arXiv:1603.04467, Mar. 2016. DOI: 10.48550/arXiv.1603.04467.
  36. F. Pedregosa, et al., "Scikit-learn: Machine learning in Python," The Journal of Machine Learning Research, vol. 12, pp. 2825-2830, Oct. 2011.