LETTER TO EDITOR
Year: 2021 | Volume: 4 | Issue: 3 | Page: 596-597
Analysis of RetinaNet experiments for COVID-19 detection using computed tomography scan images
Digital Garage Inc., Tokyo, Japan
How to cite this article:
Kulkarni K. Analysis of RetinaNet experiments for COVID-19 detection using computed tomography scan images. Cancer Res Stat Treat 2021;4:596-597.
Available from: https://www.crstonline.com/text.asp?2021/4/3/596/325905
In this letter, we analyze the paper titled “Novel artificial intelligence algorithm for automatic detection of COVID-19 abnormalities in computed tomography images” by Bharadwaj et al., which focused on using computed tomography (CT) scan images, rather than traditional clinical tests such as real-time polymerase chain reaction, to diagnose COVID-19. The authors used a state-of-the-art detection framework (RetinaNet) for the task of detecting COVID-19 infection, aiming to overcome the sensitivity limitations of traditional clinical tests with a modern deep learning-based image classification approach. They provided a sufficient explanation of the methodology and dataset they used. However, we have a few concerns about the overall modeling techniques, which we list below.
Was a Pre-Trained Model Used?
It would be helpful if the authors could provide details of the pre-trained models used while training the classification model, as was done by Halder and Dutta. We would also like to know the performance of the base model itself on the given task without fine-tuning. This would provide more evidence regarding whether fine-tuning on the dataset curated by the authors really helped with the task.
We would like the authors to state whether they experimented with any baseline model, such as a simple convolutional neural network. Reporting baseline performance for comparison with RetinaNet would help establish how much the choice of architecture itself contributed to the performance on the given task.
We would like to see the confusion matrix of the classification task, as was described by Shah et al. It would be an interesting addition to the paper if the authors could provide more detail on the types of cases in which the model is prone to making mistakes. This would help medical practitioners make informed decisions.
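The per-class error counts we are asking about can be tabulated in a few lines. The sketch below computes a binary confusion matrix and the derived sensitivity and specificity; the labels and predictions are purely illustrative and not taken from the paper:

```python
import numpy as np

def confusion_matrix_binary(y_true, y_pred):
    """Return (tn, fp, fn, tp) counts for binary labels (0 = negative, 1 = positive)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return tn, fp, fn, tp

# Hypothetical predictions on six scans (illustrative only)
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix_binary(y_true, y_pred)
sensitivity = tp / (tp + fn)  # recall on COVID-19-positive scans
specificity = tn / (tn + fp)  # recall on COVID-19-negative scans
```

Breaking the errors into false negatives (missed infections) and false positives (false alarms) is precisely what would let clinicians judge the risk profile of the model.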
We would like to see a comparison with similar methods in which deep learning models have been used to detect COVID-19 cases from CT scan images. The authors could apply these methods to the same dataset and report the performance for comparison purposes, as was done by Shah et al. and Qiblawey et al. This analysis is needed to support the authors' decision to use RetinaNet over other models.
Bias in the Dataset
The authors mentioned that they collected the images of normal individuals (with no abnormalities) from a tertiary care cancer center. Even though the COVID-19-positive samples were taken without demographic, geographic, or ethnic bias, the negative samples might be biased toward a certain subgroup of the population because of issues inherent to this method of data collection. We would like the authors to elaborate on how the images of COVID-19-negative persons were collected.
Bounding Box Performance Measures
The authors reported that the model draws a bounding box over the infected region in the images as an auxiliary output. We think it is important to also report performance metrics for this localization task, such as Intersection over Union (IoU) or mean average precision, to help in understanding the performance of the model. There could be cases where the classification is positive and correct, but the bounding box is drawn in the wrong place. These metrics would show whether or not the model is able to correctly locate the abnormalities associated with COVID-19.
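As a reference for the metric we are requesting, IoU for axis-aligned boxes reduces to a short computation. The box coordinates below are illustrative, not from the paper:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle: clamp width/height at zero for disjoint boxes
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Hypothetical ground-truth vs. predicted lesion boxes (illustrative only)
gt = (10, 10, 50, 50)
pred = (30, 30, 70, 70)
overlap = iou(gt, pred)
```

A prediction is conventionally counted as a correct localization only when its IoU with the ground-truth box exceeds a threshold (often 0.5), which is exactly the distinction between "correct class, wrong place" and a genuinely correct detection.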
Experiments With Cross-Validation
Cross-validation yields performance estimates that are robust to dataset biases and other anomalies, as was described by Silva et al. We request the authors to provide details about any experiments involving cross-validation, which would support the validity of the results.
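The fold construction underlying such an experiment can be sketched briefly. In a minimal k-fold scheme (assumed here; the authors may prefer a patient-level or stratified split), every scan appears in exactly one validation fold, and fold-wise metrics are averaged:

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)  # shuffle once so folds are not ordered by source
    folds = np.array_split(idx, k)
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train_idx, val_idx

# Fold-wise metrics would be computed here and reported as mean ± spread;
# we only record the validation fold sizes for this illustration.
fold_sizes = [len(val) for _, val in kfold_indices(100, k=5)]
```

Reporting the mean and spread of the metric across folds, rather than a single train/test split, is what guards the headline numbers against an unlucky or biased partition.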
Financial support and sponsorship
Conflicts of interest
There are no conflicts of interest.
1. Bharadwaj KS, Pawar V, Punia V, Apparao ML, Mahajan A. Novel artificial intelligence algorithm for automatic detection of COVID-19 abnormalities in computed tomography images. Cancer Res Stat Treat 2021;4:256-61.
2. Pande P, Sharma P, Goyal D, Kulkarni T, Rane S, Mahajan A. COVID-19: A review of the ongoing pandemic. Cancer Res Stat Treat 2020;3:221-32.
3. Halder A, Dutta B. COVID-19 detection from lung CT-scan images using transfer learning approach. Mach Learn Sci Technol 2021;2:045013.
4. Shah V, Keniya R, Shridharani A, Punjabi M, Shah J, Mehendale N. Diagnosis of COVID-19 using CT scan images and deep learning techniques. Emerg Radiol 2021;28:497-505.
5. Qiblawey Y, Tahir A, Chowdhury ME, Khandakar A, Kiranyaz S, Rahman T, et al. Detection and severity classification of COVID-19 in CT images using deep learning. Diagnostics (Basel) 2021;11:893.
6. Silva P, Luz E, Silva G, Moreira G, Silva R, Lucio D, et al. COVID-19 detection in CT images with deep learning: A voting-based scheme and cross-datasets analysis. Inform Med Unlocked 2020;20:100427.