Year : 2022 | Volume : 5 | Issue : 1 | Page : 116-118
Use of artificial intelligence on chest skiagrams in patients with COVID-19: Time to widen the horizon
Department of Radiology, Advanced Diagnostics and Institute of Imaging, Amritsar, India
Date of Submission: 19-Jan-2022
Date of Decision: 03-Feb-2022
Date of Acceptance: 04-Feb-2022
Date of Web Publication: 31-Mar-2022
Source of Support: None, Conflict of Interest: None
How to cite this article: Kapoor A. Use of artificial intelligence on chest skiagrams in patients with COVID-19: Time to widen the horizon. Cancer Res Stat Treat 2022;5:116-8.
The coronavirus disease 2019 (COVID-19) pandemic has affected people worldwide and posed a huge health-care challenge. Real-time reverse transcriptase–polymerase chain reaction (RT-PCR) is considered the gold standard for the diagnosis of COVID-19 despite its limited sensitivity. In 2020, the American College of Radiology pushed the use of X-rays and computed tomography to a secondary role in its guidelines for use during the pandemic. Before the pandemic, artificial intelligence (AI) technology was being studied to detect different types of lesions, such as pneumonia, nodules, pleural effusions, and atelectasis, on chest skiagrams. The COVID-19 pandemic shifted the application of AI to the detection of COVID-19, and within a year, there were more than 2000 publications reporting the high sensitivity and specificity of AI-based algorithms for this purpose. Most of these articles reported binary results, i.e., differentiation of COVID-19 from non-COVID-19 pathologies. The explosion of publications was primarily due to the easy availability of AI tools online, including pre-processed datasets, most of which were from China, Italy, and the United States of America. The utility of these public datasets was questionable, as most of them lacked metadata and clinical details and had limited annotations, with labels restricted merely to COVID-19 and non-COVID-19.

By the end of the second wave of the pandemic, radiologists realized that there should be three foremost goals when evaluating any patient with suspected COVID-19: (a) binary detection of COVID-19 versus non-COVID-19 pathologies, (b) segmentation and elucidation of the severity of the disease, and (c) identification of imaging biomarkers for prognosis. Amidst the excitement of using easily available AI tools and the incessant desire to publish during the pandemic, these goals have been relegated to the background.
The various AI tools available for chest imaging involve a series of steps, starting from pre-processing of datasets and segmentation of data with feature extraction, to the classification of labels. Deep learning algorithms such as convolutional neural networks (CNNs) are used to build architectures such as ResNet50, InceptionV3, and InceptionResNetV2. Even though these models have been validated only on small datasets, they have demonstrated good sensitivity (up to 93%) and specificity (up to 89%) for the detection of COVID-19; however, they carry serious issues and biases, including limited dataset availability, imbalance among dataset classes, and lack of metadata.
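To make the sequence of steps described above concrete (pre-processing, segmentation, feature extraction, and classification), a miniature sketch follows. It is purely illustrative: a simple intensity threshold stands in for a trained segmentation network, and a hypothetical linear rule stands in for a DCNN classifier, so every function name and weight here is an assumption rather than part of any published pipeline.

```python
import numpy as np

def preprocess(img):
    """Scale pixel intensities to [0, 1] (stand-in for real pre-processing)."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def segment(img, threshold=0.5):
    """Toy segmenter: binary mask of 'abnormal' pixels (real systems use U-Net)."""
    return img > threshold

def extract_features(img, mask):
    """Two simple features: mean intensity inside the mask, and affected fraction."""
    affected_fraction = mask.mean()
    mean_intensity = img[mask].mean() if mask.any() else 0.0
    return np.array([mean_intensity, affected_fraction])

def classify(features, weights=np.array([1.0, 4.0]), bias=-1.0):
    """Toy linear classifier in place of a trained DCNN classification head."""
    score = features @ weights + bias
    return "COVID-19" if score > 0 else "non-COVID-19"

# Synthetic 8x8 "skiagram" with a bright central patch
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
x = preprocess(img)
mask = segment(x)
label = classify(extract_features(x, mask))
```

Real pipelines replace each toy stage with a learned model, but the data flow (image in, mask and features in the middle, label out) is the same.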
In the current issue of the journal, Pawar et al. have described in detail their AI algorithm using a combination of public and private datasets, which again highlights the paucity of private datasets, as a result of which the authors had to rely on preexisting ones. The authors used a validation dataset of 16,447 samples from the Kaggle RSNA Pneumonia Detection Challenge, 691 samples from an open-source dataset, and 3000 samples from a private-source dataset. This is bound to create biases owing to the heterogeneity of the datasets. Subsequently, the authors diligently used a three-stage AI pipeline, involving boxing of lung abnormalities with deep CNNs (DCNNs), followed by segmentation with U-Net and classification with a DCNN. In this study, the test data comprised 611 scans from 222 RT-PCR-positive patients. A good sensitivity and specificity of 77.7% and 75.4%, respectively, were observed; the sensitivity was marginally higher than that obtained from manual reading by radiologists (75.9%). The authors compared their results with those of COVID-Net, which, although it had a higher sensitivity of 91.8%, showed a poor specificity of 46.5%. Various other studies have reported higher sensitivities and specificities, ranging from 89% to 94%, compared to the study by Pawar et al. As mentioned by the authors, this variation in results can be attributed to both the quality of the datasets and the algorithm used. Currently, the use of transfer learning with pretrained networks, such as VGG19 and MobileNetV2, is advocated to circumvent this problem of limited datasets; Pawar et al. could have considered this approach in their study. In these algorithms, networks trained on prior datasets are reused, effectively enlarging the available training data. The authors did successfully manage to address the problem of lung segmentation using the three-stage AI pipeline with U-Net, which has been difficult to achieve in chest skiagrams and which many studies in the literature could not address. Pawar et al.
used ResNet and U-Net in their study to box ground-glass opacities and consolidations. However, they could have adopted an alternative approach for the segmentation of ground-glass opacities and consolidations, using gradient-weighted class activation mapping (Grad-CAM) or heat maps, which provide a convincing depiction of the pathology. The authors reported high false-positive and false-negative rates of 22.3% and 24.5%, respectively, but neither described the types of cases that mimicked COVID-19 nor gave details of the false-negative cases in the test data of 611 cases. This information assumes clinical relevance when such an algorithm is put into real-time practice, where a large number of cases would likely be missed and an equal number falsely suspected of having COVID-19. Hence, it would be prudent to increase the number of classifiers in the third stage of the AI pipeline, changing the classification from a binary black-box system to a multiclass one. In a review of 2900 published articles by Abbasi, only 10% of the studies reported the use of AI on radiographs, and of these, 72% focused only on diagnosing COVID-19; none focused on the severity of the disease or prognosis. Abbasi also calculated a maturity metric for the articles using AI, based on the quality of data, peer review, experimental rigor, and clinical deployment, and showed that only 2.7% met the bar of maturity. Similar results were reported by Born et al., who, in a meta-analysis of 240 articles, highlighted that most articles on AI lacked clinical relevance. The study by Pawar et al. also does not address the other two goals, i.e., depiction of the severity of COVID-19 and identification of possible imaging biomarkers, which are important in the management and prognosis of these patients.
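For readers unfamiliar with Grad-CAM, its core computation is small: the gradients of the class score with respect to a convolutional layer's feature maps are globally average-pooled to give one weight per map, the weighted maps are summed, and a ReLU keeps only the evidence that supports the class. The sketch below uses synthetic activation and gradient arrays in place of a real network's tensors; it is a minimal illustration of the technique, not the implementation of any cited study.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from one convolutional layer.

    activations, gradients: arrays of shape (channels, H, W), where
    gradients holds d(class score)/d(activations)."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k: global average pool
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A^k -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for display
    return cam

# Synthetic example: two 4x4 feature maps with hand-picked gradients
acts = np.stack([np.ones((4, 4)), np.eye(4)])
grads = np.stack([np.full((4, 4), 0.5), np.full((4, 4), -0.25)])
heatmap = grad_cam(acts, grads)
```

Overlaying such a map on the skiagram is what yields the heat-map depiction of pathology referred to above.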
In summary, there is an urgent need to review the experimental designs of AI in imaging. Algorithms must be designed to include clinical data along with patient metadata so that multilevel classifiers can be incorporated, together with a robust segmentation of the lung lesions that can be elucidated in the form of heat maps with quantified severity or area-affected scores; this could help in the monitoring of disease processes, which is the primary role of imaging in COVID-19. Such an approach could lead to the incorporation of AI in chest skiagram evaluation and have a lasting impact on not only the diagnosis but also the management of patients with COVID-19. Even though the literature has shown that the use of AI with chest skiagrams increases the sensitivity of detection over manual interpretation, this alone is not enough for it to be adopted in clinical practice.
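As a concrete example of the quantified "area affected" score suggested above: once a segmentation step has produced a lung mask and a lesion mask, severity can be summarized as the percentage of lung area covered by lesions. The sketch below assumes binary masks and is illustrative only; published severity scores often also weight by lobe or zone.

```python
import numpy as np

def severity_score(lesion_mask, lung_mask):
    """Percentage of the segmented lung field covered by lesions."""
    lung_pixels = lung_mask.sum()
    if lung_pixels == 0:
        return 0.0  # no lung segmented; score undefined, report 0
    overlap = np.logical_and(lesion_mask, lung_mask).sum()
    return 100.0 * overlap / lung_pixels

# Synthetic masks: a 64-pixel lung field, lesions covering its upper half
lung = np.zeros((10, 10), dtype=bool)
lung[1:9, 1:9] = True
lesion = np.zeros((10, 10), dtype=bool)
lesion[1:5, 1:9] = True
score = severity_score(lesion, lung)  # 32 of 64 lung pixels affected -> 50.0
```

Tracking this single number across serial skiagrams is one simple way such an algorithm could support disease monitoring rather than binary diagnosis alone.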
References
Winichakoon P, Chaiwarith R, Liwsrisakun C, Salee P, Goonna A, Limsukon A, et al. Negative nasopharyngeal and oropharyngeal swabs do not rule out COVID-19. J Clin Microbiol 2020;58:e00297-20.
Patil N, Lad A, Rajadhyaksha A, Chadha K, Chheda P, Wadhwa V, et al. COVID-19: Experience of a tertiary reference laboratory on the cusp of accurately testing 5500 samples and planning scalability. Cancer Res Stat Treat 2020;3 Suppl S1:138-40.
Ahuja A, Mahajan A. Imaging and COVID-19: Preparing the radiologist for the pandemic. Cancer Res Stat Treat 2020;3 Suppl S1:80-5.
Sharma PJ, Mahajan A, Rane S, Bhattacharjee A. Assessment of COVID-19 severity using computed tomography imaging: A systematic review and meta-analysis. Cancer Res Stat Treat 2021;4:78-87.
Mahajan A, Vaidya T, Gupta A, Rane S, Gupta S. Artificial intelligence in healthcare in developing nations: The beginning of a transformative journey. Cancer Res Stat Treat 2019;2:182-9.
Li L, Qin L, Xu Z, Yin Y, Wang X, Kong B, et al. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: Evaluation of the diagnostic accuracy. Radiology 2020;296:E65-71.
Bharadwaj KS, Pawar V, Punia V, Apparao ML, Mahajan A. Novel artificial intelligence algorithm for automatic detection of COVID-19 abnormalities in computed tomography images. Cancer Res Stat Treat 2021;4:256-61.
Murphy K, Smits H, Knoops AJ, Korst MB, Samson T, Scholten ET, et al. COVID-19 on chest radiographs: A multireader evaluation of an artificial intelligence system. Radiology 2020;296:E166-72.
Bai HX, Wang R, Xiong Z, Hsieh B, Chang K, Halsey K, et al. Artificial intelligence augmentation of radiologist performance in distinguishing COVID-19 from pneumonia of other origin at chest CT. Radiology 2020;296:E156-65.
Mahajan A, Pawar V, Punia V, Vaswani A, Gupta P, Bharadwaj KS, et al. Deep learning-based COVID-19 triage tool: An observational study on an X-ray dataset. Cancer Res Stat Treat 2022;5:19-25.
Zhang R, Tie X, Qi Z, Bevins NB, Zhang C, Griner D, et al. Diagnosis of COVID-19 pneumonia using chest radiography: Value of artificial intelligence. Radiology 2021;298:E88-97. [doi: 10.1148/radiol.2020202944].
Xu X, Jiang X, Ma C, Du P, Li X, Lv S, et al. Deep learning system to screen coronavirus disease 2019 pneumonia. ArXiv 2020;6:1122-9. Available from: http://arxiv.org/abs/2002.09334. [Last accessed on 2020 May 19].
Shi F, et al. Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for COVID-19. IEEE Rev Biomed Eng 2020;14:4-15. [doi: 10.1109/RBME].
Abbasi J. Artificial intelligence in COVID-19 imaging mismatched to the clinic. JAMA 2021;326:124.
Born J, Beymer D, Rajan D, Coy A, Mukherjee VV, Manica M, et al. On the role of artificial intelligence in medical imaging of COVID-19. Patterns (N Y) 2021;2:100330.
Wang S, Zha Y, Li W, Wu Q, Li X, Niu M, Wang M, et al. A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis. Eur Respir J 2020;56:2000775.