Dataset fields: title (string), abstract (string), journal (string), date, authors (sequence), doi (string)
The ensemble deep learning model for novel COVID-19 on CT images.
The rapid detection of the novel coronavirus disease, COVID-19, helps prevent its propagation and improves therapeutic outcomes. This article focuses on the rapid detection of COVID-19. We propose an ensemble deep learning model for novel COVID-19 detection from CT images. 2933 lung CT images from COVID-19 patients were obtained from previous publications, authoritative media reports, and public databases, and were preprocessed to obtain 2500 high-quality images. 2500 CT images of lung tumors and 2500 of normal lungs were obtained from a hospital. Transfer learning was used to initialize model parameters and pretrain three deep convolutional neural network models: AlexNet, GoogLeNet, and ResNet. These models were used for feature extraction on all images, with softmax as the classification algorithm of the fully connected layer. The ensemble classifier EDL-COVID was then obtained via relative majority voting. Finally, the ensemble classifier was compared with the three component classifiers on accuracy, sensitivity, specificity, F-value, and Matthews correlation coefficient. The results showed that the overall classification performance of the ensemble model was better than that of the component classifiers, with higher values on all evaluation indexes. This algorithm can therefore better meet the rapid detection requirements of the novel coronavirus disease COVID-19.
Applied soft computing
"2020-11-17T00:00:00"
[ "TaoZhou", "HuilingLu", "ZaoliYang", "ShiQiu", "BingqiangHuo", "YaliDong" ]
10.1016/j.asoc.2020.106885 10.1001/jama.2020.1585 10.1016/j.ejrad.2020.108961 10.1148/radiol.2020200343 10.1007/s00330-020-06801-0 10.2214/AJR.20.22954 10.1148/radiol.2020200370 10.1007/s11604-020-00956-y 10.1016/j.jinf.2020.03.007 10.1007/s11547-020-01179-x 10.1148/radiol.2020200642 10.1007/s11604-020-00958-w 10.1148/radiol.2020200905 10.1101/2020.03.24.20042317 10.1155/2018/5264526 10.1016/j.asoc.2018.11.001 10.1155/2020/7602384 10.1155/2020/7653946
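The relative majority (plurality) voting used above to combine the AlexNet, GoogLeNet, and ResNet outputs into the EDL-COVID ensemble can be sketched in a few lines; the per-image class predictions below are illustrative placeholders, not the paper's actual outputs.

```python
from collections import Counter

def plurality_vote(predictions):
    """Return the class label predicted by the most component classifiers.

    predictions: list of class labels, one per component model.
    Ties go to the label whose model appears first in the list.
    """
    return Counter(predictions).most_common(1)[0][0]

# Illustrative per-image predictions (argmax of each model's softmax output)
alexnet_pred, googlenet_pred, resnet_pred = "covid", "covid", "normal"
ensemble_pred = plurality_vote([alexnet_pred, googlenet_pred, resnet_pred])
# Two of the three models agree, so the ensemble predicts "covid"
```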
Automated detection of COVID-19 using ensemble of transfer learning with deep convolutional neural network based on CT scans.
COVID-19 has infected millions of people worldwide. One of the most important hurdles in controlling the spread of this disease is the inefficiency and scarcity of medical tests. Computed tomography (CT) scans are promising for accurate and fast detection of COVID-19; however, reading them requires highly trained radiologists and suffers from inter-observer variability. To remedy these limitations, this paper introduces an automatic methodology based on an ensemble of deep transfer learning models for the detection of COVID-19. A total of 15 pre-trained convolutional neural network (CNN) architectures (EfficientNets B0-B5, NasNetLarge, NasNetMobile, InceptionV3, ResNet-50, SeResNet-50, Xception, DenseNet121, ResNeXt-50, and Inception-ResNet-v2) are used and fine-tuned on the target task. After that, we built an ensemble method based on majority voting over the best combination of the fine-tuned models' outputs to further improve recognition performance. We used a publicly available dataset of CT scans consisting of 349 scans labeled positive for COVID-19 and 397 negative scans that are normal or contain other types of lung disease. The experimental results indicate that majority voting over five architectures (EfficientNetB0, EfficientNetB3, EfficientNetB5, Inception-ResNet-v2, and Xception) achieves higher precision (0.857), recall (0.854), and accuracy (0.85) than any individual transfer learning structure or the other models in diagnosing COVID-19 from CT scans. An ensemble deep transfer learning system with different pre-trained CNN architectures can thus work well on a publicly available dataset of CT images for the diagnosis of COVID-19.
International journal of computer assisted radiology and surgery
"2020-11-17T00:00:00"
[ "ParisaGifani", "AhmadShalbaf", "MajidVafaeezadeh" ]
10.1007/s11548-020-02286-w 10.1148/radiol.2020200463 10.1016/j.media.2017.07.005 10.4103/2153-3539.186902 10.1109/TMI.2016.2553401 10.1038/s41551-018-0195-0 10.1038/nature21056 10.3390/app10020559 10.1117/1.JMI.3.3.034501 10.1016/j.jcmg.2019.06.009 10.1038/s41591-018-0107-6 10.1038/srep26286 10.1016/j.eswa.2019.01.060 10.1016/j.eswa.2020.113514 10.1016/j.measurement.2019.02.042 10.1038/s41598-019-56989-5 10.1145/3065386 10.1109/TMI.2016.2535302
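The "best combination" of fine-tuned models for majority voting can be found by exhaustively scoring every subset on a validation set (2^15 subsets for 15 models is still tractable); this is a plausible sketch of that selection step, not the authors' code, and the toy predictions are placeholders.

```python
from collections import Counter
from itertools import combinations

def vote(row):
    """Plurality vote over one image's predictions from several models."""
    return Counter(row).most_common(1)[0][0]

def best_voting_subset(model_preds, labels, min_size=1):
    """Exhaustively search model subsets; return the subset whose
    majority vote scores the highest accuracy against labels.

    model_preds: dict mapping model name -> list of predicted labels.
    """
    best, best_acc = None, -1.0
    names = sorted(model_preds)
    for k in range(min_size, len(names) + 1):
        for subset in combinations(names, k):
            votes = [vote([model_preds[m][i] for m in subset])
                     for i in range(len(labels))]
            acc = sum(v == y for v, y in zip(votes, labels)) / len(labels)
            if acc > best_acc:
                best, best_acc = subset, acc
    return best, best_acc

# Toy example: three models' predictions on four validation images
preds = {"netA": [1, 1, 0, 0], "netB": [0, 1, 0, 0], "netC": [1, 0, 1, 0]}
subset, acc = best_voting_subset(preds, [1, 1, 0, 0])
```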
Deep learning analysis provides accurate COVID-19 diagnosis on chest computed tomography.
Computed tomography is an essential diagnostic tool in the management of COVID-19. Considering the large number of examinations in high-caseload scenarios, an automated tool could facilitate and save critical time in the diagnosis and risk stratification of the disease. A novel deep-learning-derived machine learning (ML) classifier was developed using a simplified programming approach and an open-source dataset consisting of 6868 chest CT images from 418 patients, which was split into training and validation subsets. Diagnostic performance was then evaluated on an independent testing dataset and compared to experienced radiologists, with performance metrics calculated using receiver operating characteristic (ROC) analysis. Operating points with high positive (>10) and low negative (<0.01) likelihood ratios for stratifying the risk of COVID-19 being present were identified and validated. The model achieved an overall area under the curve (AUC) of 0.956 on an independent testing dataset of 90 patients. Both rule-in and rule-out thresholds were identified and tested. At the rule-in operating point, sensitivity and specificity were 84.4 % and 93.3 % and did not differ from either radiologist (p > 0.05). At the rule-out threshold, sensitivity (100 %) and specificity (60 %) differed significantly from the radiologists (p < 0.05). Likelihood ratios and a Fagan nomogram provide prevalence-independent estimates of test performance. Accurate diagnosis of COVID-19 using a basic deep learning approach is feasible with open-source CT image data. In addition, the machine learning classifier provides validated rule-in and rule-out criteria that could be used to stratify the risk of COVID-19 being present.
European journal of radiology
"2020-11-16T00:00:00"
[ "DJavor", "HKaplan", "AKaplan", "S BPuchner", "CKrestan", "PBaltzer" ]
10.1016/j.ejrad.2020.109402 10.1186/s40537-019-0192-5 10.3390/info11020108 10.1016/j.ejrad.2019.108774
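The likelihood ratios and Fagan-style post-test probability mentioned above follow directly from sensitivity and specificity; this sketch reuses the operating points reported in the abstract, and the pre-test probability in the helper is whatever prevalence the reader supplies.

```python
def likelihood_ratios(sens, spec):
    """LR+ measures rule-in strength, LR- measures rule-out strength."""
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    return lr_pos, lr_neg

def post_test_probability(pre_test_prob, lr):
    """Fagan nomogram as arithmetic: probability -> odds, multiply by
    the likelihood ratio, convert back to probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Rule-in operating point from the abstract: sensitivity 84.4 %, specificity 93.3 %
lr_pos, _ = likelihood_ratios(0.844, 0.933)   # comes out above the >10 rule-in bar
```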
Analyzing inter-reader variability affecting deep ensemble learning for COVID-19 detection in chest radiographs.
Data-driven deep learning (DL) methods using convolutional neural networks (CNNs) demonstrate promising performance in natural image computer vision tasks. However, their use in medical computer vision tasks faces several limitations, viz., (i) adapting to visual characteristics that are unlike natural images; (ii) modeling random noise during training due to stochastic optimization and backpropagation-based learning strategy; (iii) challenges in explaining DL black-box behavior to support clinical decision-making; and (iv) inter-reader variability in the ground truth (GT) annotations affecting learning and evaluation. This study proposes a systematic approach to address these limitations through application to the pandemic-caused need for Coronavirus disease 2019 (COVID-19) detection using chest X-rays (CXRs). Specifically, our contribution highlights significant benefits obtained through (i) pretraining specific to CXRs in transferring and fine-tuning the learned knowledge toward improving COVID-19 detection performance; (ii) using ensembles of the fine-tuned models to further improve performance over individual constituent models; (iii) performing statistical analyses at various learning stages for validating results; (iv) interpreting learned individual and ensemble model behavior through class-selective relevance mapping (CRM)-based region of interest (ROI) localization; and, (v) analyzing inter-reader variability and ensemble localization performance using Simultaneous Truth and Performance Level Estimation (STAPLE) methods. We find that ensemble approaches markedly improved classification and localization performance, and that inter-reader variability and performance level assessment helps guide algorithm design and parameter optimization. To the best of our knowledge, this is the first study to construct ensembles, perform ensemble-based disease ROI localization, and analyze inter-reader variability and algorithm performance for COVID-19 detection in CXRs.
PloS one
"2020-11-13T00:00:00"
[ "SivaramakrishnanRajaraman", "SudhirSornapudi", "Philip OAlderson", "Les RFolio", "Sameer KAntani" ]
10.1371/journal.pone.0242301 10.1016/j.chest.2020.04.003 10.1148/radiol.2020200823 10.1109/access.2020.3003810 10.3390/diagnostics10060358 10.1148/radiol.2020200905 10.1146/annurev-bioeng-071516-044442 10.1249/MSS.0000000000001291 10.1016/j.ejrad.2013.02.018 10.1109/TMI.2004.828354 10.1371/journal.pone.0202121 10.3390/diagnostics9020038 10.1109/access.2020.2971257 10.1148/radiol.2017162326 10.1109/EMBC.2019.8856715 10.1136/bmj.331.7513.379 10.4103/0256-4947.60518 10.1016/j.jrid.2020.05.001 10.1016/j.cell.2018.02.010 10.1007/s10278-019-00227-x 10.1016/j.artint.2014.02.004 10.1007/s11548-019-01917-1 10.1017/thg.2017.28 10.1016/j.jss.2007.02.053 10.1515/jib-2017-0063 10.4049/jimmunol.1602077
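STAPLE estimates a consensus annotation and per-reader performance levels with an EM algorithm; as a much simpler stand-in, per-pixel majority voting over reader masks illustrates the consensus idea that STAPLE refines. The masks below are hypothetical, not the study's ground truth.

```python
def majority_mask(masks):
    """Combine binary masks (lists of 0/1 rows) by per-pixel majority vote.

    STAPLE goes further by weighting each reader with EM-estimated
    sensitivity/specificity; this unweighted vote is the baseline idea.
    """
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[1 if sum(m[r][c] for m in masks) * 2 > n else 0
             for c in range(cols)] for r in range(rows)]

# Hypothetical 2x3 annotations from three readers
reader_masks = [
    [[1, 1, 0], [0, 1, 0]],
    [[1, 0, 0], [0, 1, 1]],
    [[1, 1, 0], [0, 0, 1]],
]
consensus = majority_mask(reader_masks)
# Each consensus pixel is 1 where at least two of the three readers marked it
```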
COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images.
The Coronavirus Disease 2019 (COVID-19) pandemic continues to have a devastating effect on the health and well-being of the global population. A critical step in the fight against COVID-19 is effective screening of infected patients, with one of the key screening approaches being radiology examination using chest radiography. Early studies found that patients infected with COVID-19 present characteristic abnormalities in chest radiography images. Motivated by this and inspired by the open source efforts of the research community, in this study we introduce COVID-Net, a deep convolutional neural network design tailored for the detection of COVID-19 cases from chest X-ray (CXR) images that is open source and available to the general public. To the best of the authors' knowledge, COVID-Net is one of the first open source network designs for COVID-19 detection from CXR images at the time of initial release. We also introduce COVIDx, an open access benchmark dataset that we generated comprising 13,975 CXR images across 13,870 patient cases, with the largest number of publicly available COVID-19-positive cases to the best of the authors' knowledge. Furthermore, we investigate how COVID-Net makes predictions using an explainability method, both to gain deeper insights into critical factors associated with COVID cases, which can aid clinicians in improved screening, and to audit COVID-Net in a responsible and transparent manner, validating that it makes decisions based on relevant information from the CXR images.
COVID-Net is by no means a production-ready solution, but the hope is that the open-access network, along with the description of how the open-source COVIDx dataset was constructed, will be leveraged and built upon by researchers and citizen data scientists alike to accelerate the development of highly accurate yet practical deep learning solutions for detecting COVID-19 cases and to speed the treatment of those who need it most.
Scientific reports
"2020-11-13T00:00:00"
[ "LindaWang", "Zhong QiuLin", "AlexanderWong" ]
10.1038/s41598-020-76550-z 10.1148/ryct.2020200034 10.1016/S0140-6736(20)30183-5 10.1148/radiol.2020200642 10.1016/j.chest.2020.04.003 10.1016/j.crad.2020.03.008 10.1177/0846537120924606 10.1016/j.clinimag.2020.04.001
AI-driven quantification, staging and outcome prediction of COVID-19 pneumonia.
Coronavirus disease 2019 (COVID-19) emerged in 2019 and disseminated around the world rapidly. Computed tomography (CT) imaging has proven to be an important tool for screening, disease quantification, and staging. The latter is of extreme importance for organizational anticipation (availability of intensive care unit beds, patient management planning) as well as for accelerating drug development through rapid, reproducible, and quantified assessment of treatment response. Even though there are currently no specific guidelines for staging patients, CT together with some clinical and biological biomarkers is used. In this study, we collected a multi-center cohort and investigated the use of medical imaging and artificial intelligence for disease quantification, staging, and outcome prediction. Our approach relies on automatic deep-learning-based disease quantification using an ensemble of architectures, and on a data-driven consensus for staging and outcome prediction that fuses imaging biomarkers with clinical and biological attributes. Highly promising results on multiple external and independent evaluation cohorts, as well as comparisons with expert human readers, demonstrate the potential of our approach.
Medical image analysis
"2020-11-11T00:00:00"
[ "GuillaumeChassagnon", "MariaVakalopoulou", "EnzoBattistella", "StergiosChristodoulidis", "Trieu-NghiHoang-Thi", "SeverineDangeard", "EricDeutsch", "FabriceAndre", "EnoraGuillo", "NaraHalm", "StefanyEl Hajj", "FlorianBompard", "SophieNeveu", "ChahinezHani", "InesSaab", "AliénorCampredon", "HasmikKoulakian", "SouhailBennani", "GaelFreche", "MaximeBarat", "AurelienLombard", "LaureFournier", "HippolyteMonnier", "TéodorGrand", "JulesGregory", "YannNguyen", "AntoineKhalil", "ElyasMahdjoub", "Pierre-YvesBrillet", "StéphaneTran Ba", "ValérieBousson", "AhmedMekki", "Robert-YvesCarlier", "Marie-PierreRevel", "NikosParagios" ]
10.1016/j.media.2020.101860 10.1007/s00330-019-06564-3 10.1007/s00330-020-06817-6 10.1148/radiol.2020200905 10.1038/s41591-020-0931-3 10.1038/s41574-020-0364-6
COVIDGR Dataset and COVID-SDNet Methodology for Predicting COVID-19 Based on Chest X-Ray Images.
Currently, Coronavirus disease (COVID-19), one of the most infectious diseases of the 21st century, is diagnosed using RT-PCR testing, CT scans and/or chest X-ray (CXR) images. Computed tomography (CT) scanners and RT-PCR testing are not available in most medical centers, and hence in many cases CXR images become the most time- and cost-effective tool for assisting clinicians in making decisions. Deep learning neural networks have great potential for building COVID-19 triage systems and detecting COVID-19 patients, especially patients with low severity. Unfortunately, current databases do not allow building such systems, as they are highly heterogeneous and biased towards severe cases. The contribution of this article is three-fold: (i) we demystify the high sensitivities achieved by most recent COVID-19 classification models; (ii) in close collaboration with Hospital Universitario Clínico San Cecilio, Granada, Spain, we built COVIDGR-1.0, a homogeneous and balanced database that includes all levels of severity, from Normal with positive RT-PCR through Mild and Moderate to Severe, containing 426 positive and 426 negative posteroanterior (PA) CXR views; and (iii) we propose the COVID Smart Data based Network (COVID-SDNet) methodology for improving the generalization capacity of COVID-19 classification models. Our approach reaches good and stable results with an accuracy of [Formula: see text], [Formula: see text], [Formula: see text] at the severe, moderate, and mild COVID-19 severity levels, and could help in the early detection of COVID-19. COVIDGR-1.0, along with the severity-level labels, is available to the scientific community through this link: https://dasci.es/es/transferencia/open-data/covidgr/.
IEEE journal of biomedical and health informatics
"2020-11-11T00:00:00"
[ "STabik", "AGomez-Rios", "J LMartin-Rodriguez", "ISevillano-Garcia", "MRey-Area", "DCharte", "EGuirado", "J LSuarez", "JLuengo", "M AValero-Gonzalez", "PGarcia-Villanova", "EOlmedo-Sanchez", "FHerrera" ]
10.1109/JBHI.2020.3037127
Deep learning and its role in COVID-19 medical imaging.
COVID-19 is one of the greatest global public health challenges in history. COVID-19 is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and is estimated to have a cumulative global case-fatality rate as high as 7.2% (Onder et al., 2020) [1]. As SARS-CoV-2 spread across the globe, it catalyzed new urgency in building systems that allow rapid sharing and dissemination of data between international healthcare infrastructures and governments in a worldwide effort focused on case tracking/tracing, identifying effective therapeutic protocols, securing healthcare resources, and drug and vaccine research. In addition to the worldwide efforts to share clinical and routine population health data, there are many large-scale efforts to collect and disseminate medical imaging data, owing to the critical role that imaging has played in diagnosis and management around the world. Given reported false negative rates of the reverse transcriptase polymerase chain reaction (RT-PCR) of up to 61% (Centers for Disease Control and Prevention, Division of Viral Diseases, 2020; Kucirka et al., 2020) [2,3], imaging can be used as an important adjunct or alternative. Furthermore, there has been a worldwide shortage of test kits, and laboratories at many testing sites have struggled to process the available tests within a reasonable time frame. Given these issues surrounding COVID-19, many groups began to explore the benefits of 'big data' processing and algorithms to assist with the diagnosis and therapeutic development of COVID-19.
Intelligence-based medicine
"2020-11-11T00:00:00"
[ "Sudhen BDesai", "AnujPareek", "Matthew PLungren" ]
10.1016/j.ibmed.2020.100013 10.1038/nature14539 10.1016/j.neunet.2014.09.003 10.1016/j.acra.2018.02.018 10.1371/journal.pmed.1002686 10.21105/joss.00747 10.3389/fmed.2020.00550 10.1148/radiol.2020201491 10.1007/s13246-020-00865-4 10.1109/ACCESS.2020.3016780 10.1038/s41591-020-0931-3 10.1148/radiol.2020200463 10.1016/j.cell.2020.04.045 10.1109/TMI.2020.3000314 10.1109/TMI.2020.2996645 10.1111/anae.15082 10.1002/jum.15284 10.1109/TMI.2020.2994459 10.1109/ACCESS.2020.3010287
Classification of Severe and Critical Covid-19 Using Deep Learning and Radiomics.
The coronavirus disease 2019 (COVID-19) is rapidly spreading inside China and internationally. We aimed to construct a model integrating information from radiomics and deep learning (DL) features to discriminate critical cases from severe cases of COVID-19 using computed tomography (CT) images. We retrospectively enrolled 217 patients from three centers in China, including 82 patients with severe disease and 135 with critical disease. Patients were randomly divided into a training cohort (n = 174) and a test cohort (n = 43). We extracted 102 3-dimensional radiomic features from the automatically segmented lung volume and selected the significant features. We also developed a 3-dimensional DL network based on center-cropped slices. Using multivariable logistic regression, we then created a merged model based on the significant radiomic features and the DL scores. We employed the area under the receiver operating characteristic curve (AUC) to evaluate the model's performance, and conducted cross-validation, stratified analysis, survival analysis, and decision curve analysis to evaluate the robustness of our method. The merged model distinguished critical patients with AUCs of 0.909 (95% confidence interval [CI]: 0.859-0.952) and 0.861 (95% CI: 0.753-0.968) in the training and test cohorts, respectively. Stratified analysis indicated that our model was not affected by sex, age, or chronic disease. Moreover, the results of the merged model showed a strong correlation with patient outcomes. A model combining radiomic and DL features of the lung could thus help distinguish critical cases from severe cases of COVID-19.
IEEE journal of biomedical and health informatics
"2020-11-10T00:00:00"
[ "CongLi", "DiDong", "LiangLi", "WeiGong", "XiaohuLi", "YanBai", "MeiyunWang", "ZhenhuaHu", "YunfeiZha", "JieTian" ]
10.1109/JBHI.2020.3036722
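The merged model combines the selected radiomic features with the DL score through multivariable logistic regression. The sketch below shows only the scoring side of such a model, with hypothetical hand-set coefficients; in the paper these coefficients are fitted on the training cohort.

```python
import math

def merged_model_score(radiomic_feats, dl_score, weights, bias):
    """Multivariable logistic model: sigmoid of a weighted sum of the
    significant radiomic features plus the DL network's score."""
    z = bias + sum(w * x for w, x in zip(weights, radiomic_feats + [dl_score]))
    return 1 / (1 + math.exp(-z))

# Hypothetical coefficients for two radiomic features and one DL score
prob_critical = merged_model_score(
    radiomic_feats=[0.8, -1.2],
    dl_score=0.7,
    weights=[1.5, 0.9, 2.0],
    bias=-0.4,
)
```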
Breakthrough healthcare technologies in the COVID-19 era: a unique opportunity for cardiovascular practitioners and patients.
The Coronavirus disease 2019 (COVID-19) pandemic, caused by symptomatic severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, has wreaked havoc globally, challenging the healthcare, economic, technological, and social status quo of developing as well as developed countries. For instance, the COVID-19 scare has reduced timely hospital admissions for ST-elevation myocardial infarction in Europe and the USA, causing unnecessary deaths and disabilities. While the emergency is still ongoing, enough effort has been put into studying and tackling this condition that a comprehensive perspective and synthesis on the potential role of breakthrough healthcare technologies is possible. Indeed, current state-of-the-art information technologies provide a unique opportunity to adapt and adjust to the healthcare needs associated with COVID-19, either directly or indirectly, and in particular those of cardiovascular patients and practitioners. We searched several biomedical databases, websites, and social media, including PubMed, Medscape, and Twitter, for smartcare approaches suitable for application in the COVID-19 pandemic. We retrieved details on several promising avenues for present and future healthcare technologies capable of substantially reducing the mortality, morbidity, and resource-use burden of COVID-19 as well as of cardiovascular disease. In particular, we found data supporting the importance of data sharing, model sharing, preprint archiving, social media, medical case sharing, distance learning and continuing medical education, smartphone apps, telemedicine, robotics, big data analysis, machine learning, and deep learning, with the ultimate goal of optimizing individual prevention, diagnosis, tracing, risk stratification, treatment, and rehabilitation. 
We are confident that refinement and command of smartcare technologies will prove extremely beneficial in the short term and will dramatically reshape cardiovascular practice and healthcare delivery in the long term, for COVID-19 as well as other diseases.
Panminerva medica
"2020-11-10T00:00:00"
[ "RaffaeleNudi", "MarcoCampagna", "AlessioParma", "AndreaNudi", "GiuseppeBiondi Zoccai" ]
10.23736/S0031-0808.20.04188-9
Using artificial intelligence to assist radiologists in distinguishing COVID-19 from other pulmonary infections.
Accurate and rapid diagnosis of coronavirus disease (COVID-19) is crucial for timely quarantine and treatment. In this study, a deep learning algorithm-based AI model using the ResUNet network was developed to evaluate the performance of radiologists, with and without AI assistance, in distinguishing COVID-19-infected pneumonia patients from other pulmonary infections on CT scans. For model development and validation, a total of 694 cases with 111,066 CT slides were retrospectively collected as training data and independent test data. Among them, 118 are confirmed COVID-19-infected pneumonia cases and 576 are other pulmonary infection cases (e.g., tuberculosis, common pneumonia, and non-COVID-19 viral pneumonia). The cases were divided into training and testing datasets. The independent test evaluated and compared the performance of three radiologists with different years of practice experience in distinguishing COVID-19-infected pneumonia cases with and without AI assistance. Our final model achieved an overall test accuracy of 0.914 with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.903, with sensitivity and specificity of 0.918 and 0.909, respectively. With AI assistance, the radiologists' average accuracy and sensitivity in distinguishing COVID-19 from other pulmonary infections improved from 0.941 to 0.951 and from 0.895 to 0.942, respectively, compared to reading without AI assistance. The deep learning algorithm-based AI model developed in this study thus successfully improved radiologists' performance in distinguishing COVID-19 from other pulmonary infections using chest CT images.
Journal of X-ray science and technology
"2020-11-10T00:00:00"
[ "YanhongYang", "Fleming Y MLure", "HengyuanMiao", "ZiqiZhang", "StefanJaeger", "JinxinLiu", "LinGuo" ]
10.3233/XST-200735 10.1016/j.genhosppsych.2020.03.011
Deep Learning Applications to Combat Novel Coronavirus (COVID-19) Pandemic.
During this global pandemic, researchers around the world are trying to find innovative technology for a smart healthcare system to combat the coronavirus. Evidence from deep learning applications in past epidemics inspires experts by giving a new direction for controlling this outbreak. The aim of this paper is to discuss the contributions of deep learning at several scales, including medical imaging, disease tracing, analysis of protein structure, drug discovery, and virus severity and infectivity, to controlling the ongoing outbreak. A progressive search of the literature on applications of deep learning to COVID-19 was executed, and a comprehensive review was then conducted by assessing deep learning from different perspectives. This paper explores and discusses the applications of deep learning along multiple dimensions for controlling the novel coronavirus (COVID-19). Though various studies have been conducted using deep learning algorithms, some constraints and challenges remain when applying them to real-world problems. The ongoing progress in deep learning contributes to handling coronavirus infection and plays an effective role in developing appropriate solutions. It is expected that this paper will be a great help to researchers who would like to contribute to the development of remedies for the current pandemic in this area.
SN computer science
"2020-11-10T00:00:00"
[ "AmanullahAsraf", "Md ZabirulIslam", "Md RezwanulHaque", "Md MilonIslam" ]
10.1007/s42979-020-00383-w 10.1016/j.cmrp.2020.03.011 10.1080/03772063.2020.1713916 10.5815/ijieeb.2019.02.03 10.1007/s42979-020-00305-w 10.1007/s42979-020-00216-w 10.1101/2020.08.24.20181339v1 10.1007/s42979-020-00195-y 10.18280/ria.330605 10.1016/j.dsx.2020.05.008 10.3201/eid1002.030759 10.1016/j.cmpb.2020.105581 10.1016/j.ejrad.2020.109041 10.1101/2020.02.14.20023028 10.1007/s10489-020-01714-3 10.1101/2020.03.20.20039834 10.1101/2020.02.23.20026930 10.1148/ryct.2020200242 10.1016/j.chaos.2020.109864 10.1101/2020.03.25.20043505 10.1016/S1473-3099(20)30237-1 10.26434/chemrxiv.11829102.v2 10.1101/2020.03.03.972133 10.1016/j.csbj.2020.03.025 10.1007/s12539-020-00376-6 10.1101/2020.01.29.925354
The importance of standardisation - COVID-19 CT & Radiograph Image Data Stock for deep learning purpose.
With the number of affected individuals still growing worldwide, research on COVID-19 is continuously expanding. The deep learning community is concentrating its efforts on exploring whether neural networks can support diagnosis using CT and radiograph images of patients' lungs. The two most popular publicly available datasets for COVID-19 classification are COVID-CT and COVID-19 Image Data Collection. In this work, we propose a new dataset, which we call COVID-19 CT & Radiograph Image Data Stock. It contains both CT and radiograph samples of COVID-19 lung findings and combines them with additional data to ensure a sufficient number of diverse COVID-19-negative samples. Moreover, it is supplemented with a carefully defined split. The aim of COVID-19 CT & Radiograph Image Data Stock is to create a public pool of CT and radiograph images of lungs to increase the efficiency of distinguishing COVID-19 disease from other types of pneumonia and from healthy chests. We hope that the creation of this dataset will allow standardisation of the approach taken for training deep neural networks for COVID-19 classification and eventually lead to more reliable models.
Computers in biology and medicine
"2020-11-09T00:00:00"
[ "KrzysztofMisztal", "AgnieszkaPocha", "MartynaDurak-Kozica", "MichałWątor", "AleksandraKubica-Misztal", "MarcinHartel" ]
10.1016/j.compbiomed.2020.104092 10.5281/zenodo.3723295 10.5281/zenodo.3723299
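A "carefully defined split", as published with the dataset above, typically means a deterministic, class-stratified partition that anyone can reproduce. This is a generic sketch of such a split, not the authors' actual procedure; the seed and fraction are illustrative.

```python
import random

def stratified_split(samples, labels, test_fraction=0.2, seed=42):
    """Deterministic per-class split so each class keeps its ratio in
    train and test; a fixed seed makes the split reproducible, which is
    the point of publishing the split alongside the dataset."""
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    rng = random.Random(seed)
    train, test = [], []
    for y, items in sorted(by_class.items()):
        items = sorted(items)
        rng.shuffle(items)           # reproducible shuffle within the class
        cut = int(len(items) * test_fraction)
        test += [(s, y) for s in items[:cut]]
        train += [(s, y) for s in items[cut:]]
    return train, test
```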
A Weakly-Supervised Framework for COVID-19 Classification and Lesion Localization From Chest CT.
Accurate and rapid diagnosis of suspected COVID-19 cases plays a crucial role in timely quarantine and medical treatment. Developing a deep learning-based model for automatic COVID-19 diagnosis on chest CT is helpful for countering the outbreak of SARS-CoV-2. A weakly-supervised deep learning framework was developed using 3D CT volumes for COVID-19 classification and lesion localization. For each patient, the lung region was segmented using a pre-trained UNet; the segmented 3D lung region was then fed into a 3D deep neural network to predict the probability of COVID-19 infection; finally, the COVID-19 lesions were localized by combining the activation regions of the classification network with unsupervised connected components. 499 CT volumes were used for training and 131 CT volumes for testing. Our algorithm obtained 0.959 ROC AUC and 0.976 PR AUC. Using a probability threshold of 0.5 to classify COVID-positive and COVID-negative cases, the algorithm obtained an accuracy of 0.901, a positive predictive value of 0.840, and a very high negative predictive value of 0.982. The algorithm took only 1.93 seconds to process a single patient's CT volume using a dedicated GPU. Our weakly-supervised deep learning model can accurately predict the COVID-19 infection probability and discover lesion regions in chest CT without requiring lesion annotations for training. This easily trained, high-performance deep learning algorithm provides a fast way to identify COVID-19 patients, which is beneficial for controlling the outbreak of SARS-CoV-2. The developed deep learning software is available at https://github.com/sydney0zq/covid-19-detection.
IEEE transactions on medical imaging
"2020-11-07T00:00:00"
[ "XinggangWang", "XianboDeng", "QingFu", "QiangZhou", "JiapeiFeng", "HuiMa", "WenyuLiu", "ChuanshengZheng" ]
10.1109/TMI.2020.2995965
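The accuracy, positive predictive value, and negative predictive value quoted above are simple functions of the confusion counts at the 0.5 probability threshold; the labels and probabilities below are illustrative, not the study's test set.

```python
def confusion_metrics(y_true, y_prob, threshold=0.5):
    """Accuracy, PPV, and NPV from binary labels and predicted probabilities."""
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    ppv = tp / (tp + fp) if tp + fp else 0.0   # positive predictive value
    npv = tn / (tn + fn) if tn + fn else 0.0   # negative predictive value
    return accuracy, ppv, npv
```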
Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography.
Computed tomography (CT) is the preferred imaging method for diagnosing 2019 novel coronavirus (COVID-19) pneumonia. We aimed to construct a deep-learning-based system for detecting COVID-19 pneumonia on high-resolution CT. For model development and validation, 46,096 anonymized images from 106 admitted patients, including 51 patients with laboratory-confirmed COVID-19 pneumonia and 55 control patients with other diseases, were retrospectively collected at Renmin Hospital of Wuhan University. Twenty-seven prospective consecutive patients at Renmin Hospital of Wuhan University were enrolled to compare the efficiency of radiologists in diagnosing COVID-19 pneumonia with that of the model. An external test was conducted at Qianjiang Central Hospital to estimate the system's robustness. The model achieved a per-patient accuracy of 95.24% and a per-image accuracy of 98.85% on the internal retrospective dataset. For the 27 internal prospective patients, the system achieved performance comparable to that of an expert radiologist. On the external dataset, it achieved an accuracy of 96%. With the assistance of the model, the reading time of radiologists was decreased by 65%. The deep learning model showed performance comparable to an expert radiologist and greatly improved the efficiency of radiologists in clinical practice.
Scientific reports
"2020-11-07T00:00:00"
[ "JunChen", "LianlianWu", "JunZhang", "LiangZhang", "DexinGong", "YilinZhao", "QiuxiangChen", "ShulanHuang", "MingYang", "XiaoYang", "ShanHu", "YongguiWang", "XiaoHu", "BiqingZheng", "KuoZhang", "HuilingWu", "ZehuaDong", "YoumingXu", "YijieZhu", "XiChen", "MengjiaoZhang", "LileiYu", "FanCheng", "HonggangYu" ]
10.1038/s41598-020-76282-0 10.1016/j.ijid.2020.01.009 10.1056/NEJMoa2001191 10.1056/NEJMc2001468 10.1016/S0140-6736(20)30183-5 10.1016/S0140-6736(20)30211-7 10.1016/S0140-6736(19)32501-2 10.1016/S2468-1253(19)30413-3 10.1136/gutjnl-2018-317366 10.1016/j.gie.2019.09.016 10.1016/j.gie.2019.11.026 10.1055/a-0855-3532 10.1016/j.jhin.2020.01.010 10.3390/jcm9020462 10.2807/1560-7917.ES.2020.25.4.2000058 10.1038/s41591-019-0447-x 10.1016/j.acra.2019.05.018 10.1007/s12519-020-00345-5
CT and clinical assessment in asymptomatic and pre-symptomatic patients with early SARS-CoV-2 in outbreak settings.
The early infection dynamics of patients with SARS-CoV-2 are not well understood. We aimed to investigate and characterize associations between clinical, laboratory, and imaging features of asymptomatic and pre-symptomatic patients with SARS-CoV-2. Seventy-four patients with RT-PCR-proven SARS-CoV-2 infection were asymptomatic at presentation. All were retrospectively identified from 825 patients with chest CT scans and positive RT-PCR following exposure or travel risks in outbreak settings in Japan and China. CTs were obtained for every patient within a day of admission and were reviewed for infiltrate subtypes and percent with assistance from a deep learning tool. Correlations of clinical, laboratory, and imaging features were analyzed and comparisons were performed using univariate and multivariate logistic regression. Forty-eight of 74 (65%) initially asymptomatic patients had CT infiltrates that pre-dated symptom onset by 3.8 days. The most common CT infiltrates were ground glass opacities (45/48; 94%) and consolidation (22/48; 46%). Patient body temperature (p < 0.01), CRP (p < 0.01), and KL-6 (p = 0.02) were associated with the presence of CT infiltrates. Infiltrate volume (p = 0.01), percent lung involvement (p = 0.01), and consolidation (p = 0.043) were associated with subsequent development of symptoms. COVID-19 CT infiltrates pre-dated symptoms in two-thirds of patients. Body temperature elevation and laboratory evaluations may identify asymptomatic patients with SARS-CoV-2 CT infiltrates at presentation, and the characteristics of CT infiltrates could help identify asymptomatic SARS-CoV-2 patients who subsequently develop symptoms. The role of chest CT in COVID-19 may be illuminated by a better understanding of CT infiltrates in patients with early disease or SARS-CoV-2 exposure. • Forty-eight of 74 (65%) pre-selected asymptomatic patients with SARS-CoV-2 had abnormal chest CT findings. • CT infiltrates pre-dated symptom onset by 3.8 days (range 1-5). 
• KL-6, CRP, and elevated body temperature identified patients with CT infiltrates. Higher infiltrate volume, percent lung involvement, and pulmonary consolidation identified patients who developed symptoms.
European radiology
"2020-11-05T00:00:00"
[ "NicoleVarble", "MaximeBlain", "MichaelKassin", "ShengXu", "Evrim BTurkbey", "AmelAmalou", "DilaraLong", "StephanieHarmon", "ThomasSanford", "DongYang", "ZiyueXu", "DaguangXu", "MonaFlores", "PengAn", "GianpaoloCarrafiello", "HirofumiObinata", "HitoshiMori", "KakuTamura", "Ashkan AMalayeri", "Steven MHolland", "TaraPalmore", "KaiyuanSun", "BarisTurkbey", "Bradford JWood" ]
10.1007/s00330-020-07401-8 10.1056/NEJMoa2001316 10.1016/S1473-3099(20)30114-6 10.1056/NEJMc2001737 10.1016/S1473-3099(20)30086-4 10.2807/1560-7917.ES.2020.25.10.2000180 10.1148/radiol.2020200642 10.1086/652241 10.1016/S0140-6736(20)30566-3 10.1016/j.crad.2020.03.008 10.1148/ryct.2020200092 10.1148/ryct.2020200196
A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images.
Coronavirus disease 2019 (COVID-19) is a rapidly transmissible disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Detecting COVID-19 using artificial intelligence techniques, and especially deep learning, can help identify the disease at early stages, increasing the chances of fast recovery for patients worldwide and relieving pressure on healthcare systems. In this research, classical data augmentation techniques along with Conditional Generative Adversarial Nets (CGAN), based on deep transfer learning models for COVID-19 detection in chest CT scan images, are presented. The limited benchmark datasets for COVID-19, especially for chest CT images, are the main motivation of this research. The main idea is to collect all possible COVID-19 images available at the time of writing and to use classical data augmentations along with CGAN to generate more images to aid COVID-19 detection. In this study, five different deep convolutional neural network-based models (AlexNet, VGGNet16, VGGNet19, GoogleNet, and ResNet50) were selected to detect Coronavirus-infected patients using digital chest CT images. The classical data augmentations along with CGAN improve classification performance in all selected deep transfer models. The outcomes show that ResNet50 is the most appropriate deep learning model to detect COVID-19 from a limited chest CT dataset using classical data augmentation, with a testing accuracy of 82.91%, sensitivity of 77.66%, and specificity of 87.62%.
Neural computing & applications
"2020-11-03T00:00:00"
[ "MohamedLoey", "GunasekaranManogaran", "Nour Eldeen MKhalifa" ]
10.1007/s00521-020-05437-x 10.1016/j.tmrv.2020.02.003 10.1007/s12098-020-03263-6 10.1016/j.ijantimicag.2020.105924 10.1016/S1473-3099(20)30063-3 10.1001/jama.2020.3864 10.1016/j.jare.2020.03.005 10.3390/pathogens9030231 10.1038/s41579-020-0336-9 10.1038/s41586-020-2169-0 10.1002/jmv.25699 10.1056/NEJMoa2001191 10.1056/NEJMc2001468 10.1056/NEJMc2001272 10.1016/S1473-3099(20)30067-0 10.1016/j.compag.2019.05.019 10.1016/j.zemedi.2018.11.002 10.1016/j.zemedi.2018.12.003 10.1109/5.726791 10.3390/sym12040651 10.1097/RLI.0000000000000574 10.1038/s41598-019-56989-5
FLANNEL (Focal Loss bAsed Neural Network EnsembLe) for COVID-19 detection.
The study sought to test the possibility of differentiating chest x-ray images of coronavirus disease 2019 (COVID-19) against other pneumonia and healthy patients using deep neural networks. We construct the radiography (x-ray) imaging data from 2 publicly available sources, which include 5508 chest x-ray images across 2874 patients with 4 classes: normal, bacterial pneumonia, non-COVID-19 viral pneumonia, and COVID-19. To identify COVID-19, we propose a FLANNEL (Focal Loss bAsed Neural Network EnsembLe) model, a flexible module to ensemble several convolutional neural network models and fuse with a focal loss for accurate COVID-19 detection on class-imbalanced data. FLANNEL consistently outperforms baseline models on the COVID-19 identification task in all metrics. Compared with the best baseline, FLANNEL shows a higher macro-F1 score, a 6% relative increase on the COVID-19 identification task, on which it achieves precision of 0.7833 ± 0.07, recall of 0.8609 ± 0.03, and F1 score of 0.8168 ± 0.03. Ensemble learning that combines multiple independent base classifiers can increase robustness and accuracy. We propose a neural weighting module to learn an importance weight for each base model and combine them via weighted ensemble to get the final classification results. To handle the class imbalance challenge, we adapt focal loss to our multi-class classification task as the loss function. FLANNEL effectively combines state-of-the-art convolutional neural network classification models and tackles class imbalance with focal loss to achieve better performance on COVID-19 detection from x-rays.
Journal of the American Medical Informatics Association : JAMIA
"2020-10-31T00:00:00"
[ "ZhiQiao", "AustinBae", "Lucas MGlass", "CaoXiao", "JimengSun" ]
10.1093/jamia/ocaa280
Multi-band MR fingerprinting (MRF) ASL imaging using artificial-neural-network trained with high-fidelity experimental data.
We aim to leverage the power of deep-learning with high-fidelity training data to improve the reliability and processing speed of hemodynamic mapping with MR fingerprinting (MRF) arterial spin labeling (ASL). A total of 15 healthy subjects were studied on a 3T MRI. Each subject underwent 10 runs of a multi-band multi-slice MRF-ASL sequence for a total scan time of approximately 40 min. MRF-ASL images were averaged across runs to yield a set of high-fidelity data. Training of a fully connected artificial neural network (ANN) was then performed using these data. The results from ANN were compared to those of dictionary matching (DM), ANN trained with single-run experimental data and with simulation data. Initial clinical performance of the technique was also demonstrated in a Moyamoya patient. The use of ANN reduced the processing time of MRF-ASL data to 3.6 s, compared to DM of 3 h 12 min. Parametric values obtained with ANN and DM were strongly correlated (R Deep-learning-based parametric reconstruction improves the reliability of MRF-ASL hemodynamic maps and reduces processing time.
Magnetic resonance in medicine
"2020-10-28T00:00:00"
[ "HongliFan", "PanSu", "JudyHuang", "PeiyingLiu", "HanzhangLu" ]
10.1002/mrm.28560
Implementation of convolutional neural network approach for COVID-19 disease detection.
In this paper, two novel, powerful, and robust convolutional neural network (CNN) architectures are designed and proposed for two different classification tasks using publicly available data sets. The first architecture is able to decide whether a given chest X-ray image of a patient contains COVID-19 or not with 98.92% average accuracy. The second CNN architecture is able to divide a given chest X-ray image of a patient into three classes (COVID-19 versus normal versus pneumonia) with 98.27% average accuracy. The hyperparameters of both CNN models are automatically determined using Grid Search. Experimental results on large clinical data sets show the effectiveness of the proposed architectures and demonstrate that the proposed algorithms can overcome the disadvantages mentioned above. Moreover, the proposed CNN models are fully automatic in terms of not requiring the extraction of diseased tissue, which is a great improvement over available automatic methods in the literature. To the best of the author's knowledge, this study is the first study to detect COVID-19 disease from given chest X-ray images, using CNN, whose hyperparameters are automatically determined by the Grid Search. Another important contribution of this study is that it is the first CNN-based COVID-19 chest X-ray image classification study that uses the largest possible clinical data set. A total of 1,524 COVID-19, 1,527 pneumonia, and 1,524 normal X-ray images are collected. It is aimed to collect the largest number of COVID-19 X-ray images that exist in the literature until the writing of this research paper.
Physiological genomics
"2020-10-24T00:00:00"
[ "EmrahIrmak" ]
10.1152/physiolgenomics.00084.2020 10.1007/978-981-10-9035-6_33 10.1152/physiolgenomics.00029.2020 10.1007/s13246-020-00865-4 10.1088/1742-6596/1193/1/012033 10.3390/ijerph17082690 10.1515/comp-2019-0011 10.1109/ACCESS.2020.2981141 10.1016/j.bbe.2018.10.004 10.1016/j.cell.2018.02.010 10.1148/radiol.2020200905 10.3390/sym12040651 10.1016/j.compbiomed.2020.103792 10.1016/j.jocs.2018.12.003 10.1007/s10096-020-03901-z 10.1109/ACCESS.2019.2919122 10.1016/j.compbiomed.2020.103805 10.1016/j.bbe.2019.11.004 10.1016/j.physa.2019.123592 10.1097/EDE.0000000000001027 10.1155/2019/7289273
Decoding COVID-19 pneumonia: comparison of deep learning and radiomics CT image signatures.
High-dimensional image features that underlie COVID-19 pneumonia remain opaque. We aim to compare feature engineering and deep learning methods to gain insights into the image features that drive CT-based prediction of COVID-19 pneumonia, and to uncover CT image features significant for COVID-19 pneumonia from deep learning and radiomics frameworks. A total of 266 patients with COVID-19 and other viral pneumonia with clinical symptoms and CT signs similar to those of COVID-19 during the outbreak were retrospectively collected from three hospitals in China and the USA. All pneumonia lesions on CT images were manually delineated by four radiologists. One hundred eighty-four patients (n = 93 COVID-19 positive; n = 91 COVID-19 negative; 24,216 pneumonia lesions from 12,001 CT image slices) from two hospitals in China served as the discovery cohort for model development. Thirty-two patients (17 COVID-19 positive, 15 COVID-19 negative; 7883 pneumonia lesions from 3799 CT image slices) from a US hospital served as the external validation cohort. A bi-directional adversarial network-based framework and the PyRadiomics package were used to extract deep learning and radiomics features, respectively. Linear and Lasso classifiers were used to develop models predictive of COVID-19 versus non-COVID-19 viral pneumonia. 120-dimensional deep learning image features and 120-dimensional radiomics features were extracted. Linear and Lasso classifiers identified 32 high-dimensional deep learning image features and 4 radiomics features associated with COVID-19 pneumonia diagnosis (P < 0.0001). Both models achieved sensitivity > 73% and specificity > 75% on the external validation cohort, with slightly superior performance for the radiomics Lasso classifier. Human expert diagnostic performance improved (increases of 16.5% and 11.6% in sensitivity and specificity, respectively) when using a combined deep learning-radiomics model.
We uncover specific deep learning and radiomics features to add insight into interpretability of machine learning algorithms and compare deep learning and radiomics models for COVID-19 pneumonia that might serve to augment human diagnostic performance.
European journal of nuclear medicine and molecular imaging
"2020-10-24T00:00:00"
[ "HongmeiWang", "LuWang", "Edward HLee", "JimmyZheng", "WeiZhang", "SafwanHalabi", "ChunleiLiu", "KexueDeng", "JiangdianSong", "Kristen WYeom" ]
10.1007/s00259-020-05075-4 10.1002/ctm2.17 10.1158/0008-5472.CAN-17-0339 10.1038/nrclinonc.2017.141 10.1016/j.ebiom.2018.09.007 10.3389/fonc.2019.00340 10.3389/fonc.2019.00255 10.1007/s11547-020-01195-x 10.1021/acs.molpharmaceut.7b00578 10.1038/s41592-019-0403-1 10.1158/1078-0432.CCR-17-2507 10.1007/s10637-017-0524-2 10.1007/s13139-018-0514-0
Integrative analysis for COVID-19 patient outcome prediction.
While image analysis of chest computed tomography (CT) for COVID-19 diagnosis has been intensively studied, little work has been performed for image-based patient outcome prediction. Management of high-risk patients with early intervention is key to lowering the fatality rate of COVID-19 pneumonia, as a majority of patients recover naturally. Therefore, an accurate prediction of disease progression with baseline imaging at the time of the initial presentation can help in patient management. In lieu of only size and volume information of pulmonary abnormalities and features through deep learning based image segmentation, here we combine radiomics of lung opacities and non-imaging features from demographic data, vital signs, and laboratory findings to predict need for intensive care unit (ICU) admission. To our knowledge, this is the first study that uses holistic information of a patient, including both imaging and non-imaging data, for outcome prediction. The proposed methods were thoroughly evaluated on datasets separately collected from three hospitals, one in the United States, one in Iran, and another in Italy, with a total of 295 patients with reverse transcription polymerase chain reaction (RT-PCR) assay positive COVID-19 pneumonia. Our experimental results demonstrate that adding non-imaging features can significantly improve the performance of prediction to achieve AUC up to 0.884 and sensitivity as high as 96.1%, which can be valuable to provide clinical decision support in managing COVID-19 patients. Our methods may also be applied to other lung diseases including but not limited to community acquired pneumonia. The source code of our work is available at https://github.com/DIAL-RPI/COVID19-ICUPrediction.
Medical image analysis
"2020-10-23T00:00:00"
[ "HanqingChao", "XiFang", "JiajinZhang", "FatemehHomayounieh", "Chiara DArru", "Subba RDigumarthy", "RosaBabaei", "Hadi KMobin", "ImanMohseni", "LucaSaba", "AlessandroCarriero", "ZenoFalaschi", "AlessioPasche", "GeWang", "Mannudeep KKalra", "PingkunYan" ]
10.1016/j.media.2020.101844 10.1148/radiol.2020200642 10.1148/radiol.2020201343 10.1148/radiol.2020200905 10.1038/s42256-020-0180-7 10.2214/AJR.20.22976
A model based on CT radiomic features for predicting RT-PCR becoming negative in coronavirus disease 2019 (COVID-19) patients.
Coronavirus disease 2019 (COVID-19) has emerged as a global pandemic. According to the diagnosis and treatment guidelines of China, negative reverse transcription-polymerase chain reaction (RT-PCR) is the key criterion for discharging COVID-19 patients. However, repeated RT-PCR tests lead to medical waste and prolonged hospital stays for COVID-19 patients during the recovery period. Our purpose is to assess a model based on chest computed tomography (CT) radiomic features and clinical characteristics to predict RT-PCR negativity during clinical treatment. From February 10 to March 10, 2020, 203 mild COVID-19 patients in Fangcang Shelter Hospital were retrospectively included (training: n = 141; testing: n = 62), and clinical characteristics were collected. Lung abnormalities on chest CT images were segmented with a deep learning algorithm. CT quantitative features and radiomic features were automatically extracted. Clinical characteristics and CT quantitative features were compared between RT-PCR-negative and RT-PCR-positive groups. Univariate logistic regression and Spearman correlation analyses identified the strongest features associated with RT-PCR negativity, and a multivariate logistic regression model was established. The diagnostic performance was evaluated for both cohorts. The RT-PCR-negative group had a longer time interval from symptom onset to CT exams than the RT-PCR-positive group (median 23 vs. 16 days, p < 0.001). There was no significant difference in the other clinical characteristics or CT quantitative features. In addition to the time interval from symptom onset to CT exams, nine CT radiomic features were selected for the model. ROC curve analysis revealed AUCs of 0.811 and 0.812 for differentiating the RT-PCR-negative group, with sensitivity/specificity of 0.765/0.625 and 0.784/0.600 in the training and testing datasets, respectively. 
The model combining CT radiomic features and clinical data helped predict RT-PCR negativity during clinical treatment, indicating the proper time for RT-PCR retesting.
BMC medical imaging
"2020-10-22T00:00:00"
[ "QuanCai", "Si-YaoDu", "SiGao", "Guo-LiangHuang", "ZhengZhang", "ShuLi", "XinWang", "Pei-LingLi", "PengLv", "GangHou", "Li-NaZhang" ]
10.1186/s12880-020-00521-z 10.1007/s00330-020-06801-0 10.1007/s11604-020-01010-7 10.1186/s12880-020-00464-5 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1148/radiol.2020200343 10.1148/radiol.2020200370 10.1148/radiol.2020200463 10.1097/RLI.0000000000000672 10.1148/ryct.2020200047 10.1007/s00330-020-06817-6 10.1148/radiol.2020200905 10.1007/s10096-020-03901-z 10.1148/radiol.2020200527 10.1101/2020.02.11.20021493 10.1148/radiol.2020201433 10.1148/radiol.2020200230 10.1148/radiol.2020200274 10.1001/jama.2020.1585 10.1148/radiol.2020200269 10.1148/radiol.2020200323 10.1148/ryct.2020200031 10.1016/S2213-2600(20)30079-5 10.1016/S0140-6736(20)30211-7 10.1186/s12931-020-01338-8 10.1016/j.ebiom.2020.102763
Automatic classification between COVID-19 pneumonia, non-COVID-19 pneumonia, and the healthy on chest X-ray image: combination of data augmentation methods.
This study aimed to develop and validate a computer-aided diagnosis (CADx) system for classification between COVID-19 pneumonia, non-COVID-19 pneumonia, and the healthy on chest X-ray (CXR) images. From two public datasets, 1248 CXR images were obtained, which included 215, 533, and 500 CXR images of COVID-19 pneumonia patients, non-COVID-19 pneumonia patients, and healthy samples, respectively. The proposed CADx system utilized VGG16 as a pre-trained model and a combination of conventional methods and mixup as data augmentation methods. Other types of pre-trained models were compared with the VGG16-based model. Single-type or no data augmentation methods were also evaluated. Splitting into training/validation/test sets was used when building and evaluating the CADx system. Three-category accuracy was evaluated on a test set of 125 CXR images. The three-category accuracy of the CADx system was 83.6% between COVID-19 pneumonia, non-COVID-19 pneumonia, and the healthy. Sensitivity for COVID-19 pneumonia was more than 90%. The combination of conventional methods and mixup was more useful than single-type or no data augmentation. In conclusion, this study was able to create an accurate CADx system for the three-category classification. Source code of our CADx system is available as open source for COVID-19 research.
Scientific reports
"2020-10-18T00:00:00"
[ "MizuhoNishio", "ShunjiroNoguchi", "HidetoshiMatsuo", "TakamichiMurakami" ]
10.1038/s41598-020-74539-2 10.1148/radiol.2020200432 10.1148/radiol.2020200823 10.1148/radiol.2511081296 10.1148/radiol.2020201160 10.1007/s13244-018-0639-9 10.1016/S0140-6736(18)31645-3 10.1109/tcsvt.2019.2935128
Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: Classification and segmentation.
This paper presents an automatic classification and segmentation tool to help screen for COVID-19 pneumonia using chest CT imaging. The segmented lesions can help assess the severity of pneumonia and follow up patients. In this work, we propose a new multitask deep learning model to jointly identify COVID-19 patients and segment COVID-19 lesions from chest CT images. Three learning tasks (segmentation, classification, and reconstruction) are jointly performed with different datasets. Our motivation is, on the one hand, to leverage useful information contained in multiple related tasks to improve both segmentation and classification performance, and, on the other hand, to deal with the problem of small data, because each task can have a relatively small dataset. Our architecture is composed of a common encoder for disentangled feature representation across the three tasks, and two decoders and a multi-layer perceptron for reconstruction, segmentation, and classification, respectively. The proposed model is evaluated and compared with other image segmentation techniques using a dataset of 1369 patients, including 449 patients with COVID-19, 425 normal ones, 98 with lung cancer, and 397 with other kinds of pathology. The obtained results show very encouraging performance of our method, with a Dice coefficient higher than 0.88 for the segmentation and an area under the ROC curve higher than 97% for the classification.
Computers in biology and medicine
"2020-10-17T00:00:00"
[ "AmineAmyar", "RomainModzelewski", "HuaLi", "SuRuan" ]
10.1016/j.compbiomed.2020.104037
M
To counter the outbreak of COVID-19, the accurate diagnosis of suspected cases plays a crucial role in timely quarantine, medical treatment, and preventing the spread of the pandemic. Considering the limited training cases and resources (e.g., time and budget), we propose a Multi-task Multi-slice Deep Learning System (M
IEEE journal of biomedical and health informatics
"2020-10-14T00:00:00"
[ "XuelinQian", "HuazhuFu", "WeiyaShi", "TaoChen", "YanweiFu", "FeiShan", "XiangyangXue" ]
10.1109/JBHI.2020.3030853
Development of a quantitative segmentation model to assess the effect of comorbidity on patients with COVID-19.
The coronavirus disease 2019 (COVID-19) has brought a global disaster. Quantitative lesion analysis may provide radiological evidence of pneumonia severity and further allow assessment of the effect of comorbidity on patients with COVID-19. 294 patients with COVID-19 were enrolled from February 24, 2020 to June 1, 2020 from six centers. A multi-task UNet network was used to segment the whole lung and lesions from chest CT images. This deep learning method was pre-trained on 650 CT images (550 in the primary dataset and 100 in the test dataset) with COVID-19 or community-acquired pneumonia, and Dice coefficients on the test dataset were calculated. 50 CT scans of 50 patients (15 with comorbidity and 35 without comorbidity) were randomly selected for manual lesion annotation. These results were compared with the automatic segmentation model. Eight quantitative parameters were calculated based on the segmentation results to evaluate the effect of comorbidity on patients with COVID-19. The quantitative segmentation model proved effective and accurate, with all Dice coefficients above 0.85 and all accuracies above 0.95. Of the 294 patients, 52 (17.7%) patients were reported having at least one comorbidity; 14 (4.8%) having more than one comorbidity. Patients with any comorbidity were older (P < 0.001), had a longer incubation period (P < 0.001), were more likely to have abnormal laboratory findings (P < 0.05), and were more likely to be in severe condition (P < 0.001). More lesions (including larger volumes of lesion, consolidation, and ground-glass opacity) were shown in patients with any comorbidity than in patients without comorbidity (all P < 0.001). More lesions were found on CT images in patients with more comorbidities. The median volumes of lesion, consolidation, and ground-glass opacity in the diabetes mellitus group were the largest among the three most prevalent single-comorbidity groups.
The multi-task UNet network enables quantitative CT analysis of lesions to assess the effect of comorbidity on patients with COVID-19 and further provides radiological evidence of pneumonia severity. More lesions (including GGO and consolidation) were found in CT images of cases with comorbidity. The more comorbidities patients have, the more lesions their CT images show.
European journal of medical research
"2020-10-14T00:00:00"
[ "CuiZhang", "GuangzhaoYang", "ChunxianCai", "ZhihuaXu", "HaiWu", "YouminGuo", "ZongyuXie", "HengfengShi", "GuohuaCheng", "JianWang" ]
10.1186/s40001-020-00450-1 10.1111/tmi.13383 10.1056/NEJMoa2001191 10.1001/jama.2020.5394 10.1001/jama.2020.4031 10.18632/aging.103000 10.1111/joim.13063 10.1183/13993003.00547-2020 10.1148/radiol.2020200241 10.1016/j.jpha.2020.03.004 10.1186/s40779-020-0233-6 10.2214/AJR.20.22954 10.1016/S0140-6736(20)30211-7 10.1093/cid/ciaa414 10.1001/jama.2020.1585 10.1111/all.14238 10.1016/j.ijid.2016.06.015 10.1038/s41569-020-0360-5 10.1101/cshperspect.a007724 10.1016/S1473-3099(09)70282-8 10.1111/eci.13259 10.1016/j.jcv 10.1097/MEG.0000000000001742
Severity and Consolidation Quantification of COVID-19 From CT Images Using Deep Learning Based on Hybrid Weak Labels.
Early and accurate diagnosis of Coronavirus disease (COVID-19) is essential for patient isolation and contact tracing so that the spread of infection can be limited. Computed tomography (CT) can provide important information in COVID-19, especially for patients with moderate to severe disease as well as those with worsening cardiopulmonary status. As an automatic tool, deep learning methods can be utilized to perform semantic segmentation of affected lung regions, which is important to establish disease severity and prognosis prediction. Both the extent and type of pulmonary opacities help assess disease severity. However, manual pixel-level multi-class labelling is time-consuming, subjective, and non-quantitative. In this article, we proposed a hybrid weak label-based deep learning method that utilizes both the manually annotated pulmonary opacities from COVID-19 pneumonia and the patient-level disease-type information available from the clinical report. A UNet was first trained with semantic labels to segment the total infected region. It was used to initialize another UNet, which was trained to segment the consolidations with patient-level information using the Expectation-Maximization (EM) algorithm. To demonstrate the performance of the proposed method, multi-institutional CT datasets from Iran, Italy, South Korea, and the United States were utilized. Results show that our proposed method can predict the infected regions as well as the consolidation regions with good correlation to human annotation.
IEEE journal of biomedical and health informatics
"2020-10-13T00:00:00"
[ "DufanWu", "KuangGong", "Chiara DanielaArru", "FatemehHomayounieh", "BernardoBizzo", "VarunBuch", "HuiRen", "KyungsangKim", "NirNeumark", "PengchengXu", "ZhiyuanLiu", "WeiFang", "NuobeiXie", "Won YoungTak", "Soo YoungPark", "Yu RimLee", "Min KyuKang", "Jung GilPark", "AlessandroCarriero", "LucaSaba", "MahsaMasjedi", "HamidrezaTalari", "RosaBabaei", "Hadi KarimiMobin", "ShadiEbrahimian", "IttaiDayan", "Mannudeep KKalra", "QuanzhengLi" ]
10.1109/JBHI.2020.3030224
Zero-shot learning and its applications from autonomous vehicles to COVID-19 diagnosis: A review.
The challenge of learning a new concept, object, or a new medical disease recognition without receiving any examples beforehand is called Zero-Shot Learning (ZSL). One of the major issues in deep learning-based methodologies, such as in medical imaging and other real-world applications, is the requirement for large annotated datasets prepared by clinicians or experts to train the model. ZSL is known for requiring minimal human intervention by relying only on previously known or trained concepts plus currently existing auxiliary information. This is an ever-growing research area for cases where very limited or no annotated datasets are available and the detection
Intelligence-based medicine
"2020-10-13T00:00:00"
[ "MahdiRezaei", "MahsaShahidi" ]
10.1016/j.ibmed.2020.100005 10.1101/2020.03.30.20047456 10.1109/ACCESS.2020.2989273 10.1109/CVPR.2016.14 10.1109/CVPR.2013.111 10.1109/TPAMI.2015.2487986 10.1109/CVPR.2015.7298911 10.1109/CVPR.2016.643 10.5555/3305381.3305404 10.1162/tacl_a_00288 10.1109/RIOS.2017.7956436 10.1007/978-3-030-01246-5_24 10.1109/CVPR.2005.117 10.1016/j.cviu.2007.09.014 10.1145/1282280.1282340 10.1007/978-3-319-46454-1_44 10.1109/ICCVW.2017.308 10.1109/CVPR.2016.575 10.1109/ICCV.2017.376 10.1109/CVPR.2018.00115 10.1109/WACV45572.2020.9093610 10.1007/3-540-33486-6_8 10.1007/978-3-319-10590-1_4 10.1109/CVPR.2009.5206848 10.18653/v1/N19-1423 10.1109/CVPR.2019.00228 10.1109/CVPR.2019.00523 10.1109/TPAMI.2016.2643667 10.1109/ICCV.2013.321 10.1109/CVPR.2017.666 10.1148/radiol.2020200432 10.1109/CVPR.2009.5206772 10.1109/TPAMI.2006.79 10.1007/978-3-030-01231-1_2 10.1109/TPAMI.2015.2408354 10.1609/aaai.v33i01.33018303 10.1307/mmj/1029003026 10.1007/s11263-013-0658-4 10.5555/2969033.2969125 10.5555/2976456.2976521 10.18653/v1/P19-1121 10.5555/3295222.3295327 10.5555/3298023.3298158 10.1162/neco.1997.9.8.1735 10.1080/00437956.1954.11659520 10.1109/CVPR.2016.90 10.1016/j.cmpb.2020.105581 10.1109/CVPR.2019.00089 10.5555/2969033.2969213 10.5555/3327345.3327499 10.1007/978-3-030-01249-6_8 10.1162/tacl_a_00065 10.1109/CVPR.2019.01175 10.1109/ICCV.2019.00851 10.1109/CVPR.2012.6248112 10.1109/CVPR.2017.679 10.1109/CVPR.2015.7298932 10.1145/3132635.3132650 10.1109/ICCV.2015.282 10.1109/CVPR.2017.473 10.1145/3065386 10.1109/CVPR.2018.00450 10.1126/science.aab3050 10.1109/CVPR.2009.5206594 10.1109/TPAMI.2013.140 10.1126/scirobotics.aav3150 10.1109/CVPR.2018.00170 10.1109/ICCV.2015.483 10.1109/TGRS.2017.2689071 10.1109/ACCESS.2019.2925093 10.1109/CVPR.2019.00758 10.1148/radiol.2020200905 10.5555/3045118.3045301 10.1109/CVPR.2017.553 10.2214/AJR.20.22954 10.1109/CVPR.2018.00779 10.1109/LSP.2020.2977498 10.18653/v1/P19-1335 10.1109/WACV.2017.110 10.1109/CVPR.2017.653 10.1023/B:VISI.0000029664.99615.94 
10.1109/CVPR.2017.10 10.5555/2999792.2999959 10.1109/CVPR.2000.855856 10.1145/219717.219748 10.1109/CVPRW.2018.00294 10.1109/WACV.2018.00047 10.18653/v1/D16-1089 10.1080/09332480.2014.914768 10.1109/CVPR.2018.00749 10.5555/2984093.2984252 10.1109/ICCV.2011.6126281 10.1109/CVPRW.2018.00278 10.1109/CVPR.2012.6247998 10.3115/v1/D14-1162 10.18653/v1/N18-1202 10.1109/CVPR.2016.247 10.1109/CVPR.2017.117 10.1007/978-3-030-20887-5_34 10.1109/CVPR.2016.13 10.1109/aiar.2018.8769804 10.1109/CVPR.2014.24 10.1109/TITS.2015.2421482 10.5555/2999611.2999617 10.1109/CVPR.2011.5995627 10.1109/CVPR.2010.5540121 10.1007/978-3-319-50077-5_2 10.23937/2378-3656/1410264 10.1109/ISMA.2008.4648837 10.1109/TPAMI.2012.269 10.1016/0306-4573(88)90021-0 10.1109/CVPR.2019.00227 10.1007/3-540-44581-1_27 10.1109/CVPR.2019.00844 10.1007/978-3-642-33715-4_18 10.1109/CVPR.2007.383198 10.1109/WACV.2018.00181 10.1109/CVPR.2018.00379 10.1109/CVPR.2018.00860 10.1109/CVPR.2018.00329 10.5555/3294996.3295163 10.5555/2999611.2999716 10.5555/2969442.2969628 10.1109/CVPR.2018.00113 10.1109/CVPR.2015.7298594 10.1007/978-3-030-35699-6_25 10.1007/11957959_18 10.1109/ICCV.2017.386 10.1145/1553374.1553509 10.1109/CVPR.2015.7298658 10.1155/2019/4180949 10.5555/3295222.3295349 10.1007/978-3-319-71246-8_48 10.7551/mitpress/7503.001.0001 10.5555/3016100.3016198 10.1007/s11263-017-1027-5 10.1109/ICCV.2019.00933 10.1145/3293318 10.1109/ICCV.2013.264 10.1109/CVPR.2017.369 10.1109/CVPR.2018.00717 10.1007/978-3-319-46478-7_31 10.1007/s10994-010-5198-3 10.1002/ppul.24718 10.1109/CVPR.2016.15 10.1109/TPAMI.2018.2857768 10.1109/CVPR.2018.00581 10.1109/CVPR.2017.328 10.1109/CVPR.2019.01052 10.1109/CVPR.2019.00961 10.1109/ICIP.2019.8803426 10.1145/3078971.3078977 10.1109/CVPR.2017.217 10.1109/ICME.2017.8019425 10.1109/TPAMI.2014.2388235 10.1109/CVPR.2017.542 10.1148/radiol.2020200343 10.1007/978-3-642-15555-0_10 10.1109/ICCV.2019.00124 10.1109/CVPR.2017.321 10.1016/j.media.2020.101664 10.1109/ICCV.2015.474 10.1109/CVPR.2016.649 
10.1007/978-3-319-46478-7_33 10.1109/ICCVW.2017.310 10.1148/radiol.2020200370 10.1109/CVPR.2019.00311 10.1109/CVPR.2018.00111 10.1109/ICCV.2015.11
A light CNN for detecting COVID-19 from CT scans of the chest.
Computed tomography (CT) imaging of the chest is a valid diagnostic tool to detect COVID-19 promptly and to control the spread of the disease. In this work we propose a light Convolutional Neural Network (CNN) design, based on the SqueezeNet model, for the efficient discrimination of COVID-19 CT images from other community-acquired pneumonia and/or healthy CT images. The architecture achieves an accuracy of 85.03%, with an improvement of about 3.2% in the first dataset arrangement and of about 2.1% in the second dataset arrangement. The obtained gain, though modest, can be really important in medical diagnosis and, in particular, for the COVID-19 scenario. Also, the average classification time on a high-end workstation, 1.25 s, is very competitive with respect to that of more complex CNN designs, 13.41 s, which require pre-processing. The proposed CNN can be executed on a medium-end laptop without GPU acceleration in 7.81 s: this is impossible for methods requiring GPU acceleration. The performance of the method can be further improved with efficient pre-processing strategies for which GPU acceleration is not necessary.
Pattern recognition letters
"2020-10-13T00:00:00"
[ "MatteoPolsinelli", "LuigiCinque", "GiuseppePlacidi" ]
10.1016/j.patrec.2020.10.001
A comprehensive study on classification of COVID-19 on computed tomography with pretrained convolutional neural networks.
The use of imaging data has been reported to be useful for rapid diagnosis of COVID-19. Although computed tomography (CT) scans show a variety of signs caused by the viral infection, given a large number of images, these visual features are difficult and time-consuming for radiologists to recognize. Artificial intelligence methods for automated classification of COVID-19 on CT scans have been found to be very promising. However, current investigation of pretrained convolutional neural networks (CNNs) for COVID-19 diagnosis using CT data is limited. This study presents an investigation of 16 pretrained CNNs for classification of COVID-19 using a large public database of CT scans collected from COVID-19 patients and non-COVID-19 subjects. The results show that, using only 6 epochs for training, the CNNs achieved very high performance on the classification task. Among the 16 CNNs, DenseNet-201, which is the deepest net, is the best in terms of accuracy, balance between sensitivity and specificity, F1 score, and area under the curve. Furthermore, the implementation of transfer learning with the direct input of whole image slices and without the use of data augmentation provided better classification rates than the use of data augmentation. Such a finding alleviates the task of data augmentation and manual extraction of regions of interest on CT images, which are adopted by current implementations of deep-learning models for COVID-19 classification.
Scientific reports
"2020-10-11T00:00:00"
[ "Tuan DPham" ]
10.1038/s41598-020-74164-z 10.1007/s00330-020-06827-4 10.1148/ryai.2020200053 10.1007/s00330-020-06817-6 10.1186/s12967-020-02324-w 10.1007/s00330-020-06975-7 10.1007/s00330-020-06863-0 10.1038/s41591-020-0931-3 10.1016/j.compbiomed.2020.103795 10.1148/radiol.2020200905 10.1101/2020.02.14.20023028 10.1186/s40537-019-0197-0 10.1016/j.cmpb.2020.105475 10.1093/jbcr/irz103 10.1109/ACCESS.2019.2919678 10.1016/j.neucom.2018.05.083 10.1109/TKDE.2008.239 10.7763/IJMLC.2013.V3.307 10.1007/s13748-016-0094-0 10.1016/j.cmpb.2019.06.023
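The evaluation metrics reported in the abstract above (accuracy, sensitivity, specificity, F1) are all derived from a binary confusion matrix. A minimal Python sketch of these definitions; the function name and the example counts are illustrative, not taken from the paper:

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute common classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)              # recall / true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy, "f1": f1}

# Toy counts for illustration only
m = binary_metrics(tp=80, fp=10, tn=90, fn=20)
```

Comparing CNNs on the "balance between sensitivity and specificity" mentioned above then amounts to inspecting these two fields jointly rather than accuracy alone.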
Development and evaluation of an artificial intelligence system for COVID-19 diagnosis.
Early detection of COVID-19 based on chest CT enables timely treatment of patients and helps control the spread of the disease. We proposed an artificial intelligence (AI) system for rapid COVID-19 detection and performed extensive statistical analysis of CTs of COVID-19 based on the AI system. We developed and evaluated our system on a large dataset with more than 10,000 CT volumes from COVID-19, influenza-A/B, non-viral community-acquired pneumonia (CAP) and non-pneumonia subjects. In this difficult multi-class diagnosis task, our deep convolutional neural network-based system is able to achieve an area under the receiver operating characteristic curve (AUC) of 97.81% for multi-way classification on a test cohort of 3,199 scans, and AUCs of 92.99% and 93.25% on two publicly available datasets, CC-CCII and MosMedData, respectively. In a reader study involving five radiologists, the AI system outperforms all of the radiologists on the more challenging tasks, at a speed two orders of magnitude faster than theirs. The diagnostic performance of chest X-ray (CXR) is compared with that of CT. A detailed interpretation of the deep network is also performed to relate system outputs to CT presentations. The code is available at https://github.com/ChenWWWeixiang/diagnosis_covid19 .
Nature communications
"2020-10-11T00:00:00"
[ "ChengJin", "WeixiangChen", "YukunCao", "ZhanweiXu", "ZimengTan", "XinZhang", "LeiDeng", "ChuanshengZheng", "JieZhou", "HeshuiShi", "JianjiangFeng" ]
10.1038/s41467-020-18685-1 10.1148/radiol.2020200823 10.1148/radiol.2020200642 10.1016/j.chest.2020.04.003 10.1148/radiol.2020201160 10.1016/S1473-3099(20)30086-4 10.1038/nature14539 10.1038/nature21056 10.1016/j.media.2017.07.005 10.1038/s41591-018-0316-z 10.1038/s41591-018-0300-7 10.1038/s41591-019-0447-x 10.1038/s41598-018-37186-2 10.1038/s41591-020-0931-3 10.1016/j.patcog.2018.07.031 10.1109/TNNLS.2019.2892409 10.1038/s41598-019-56589-3 10.1016/j.cell.2020.04.045 10.1148/radiol.2020200905 10.1148/radiol.2020201491 10.1109/TMI.2020.2995965 10.1109/TMI.2020.2996256 10.1109/TMI.2020.2995508 10.1148/radiol.2020201874 10.1118/1.3528204 10.1158/0008-5472.CAN-17-0339 10.1148/radiol.2020200463 10.1183/16000617.0053-2016
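The AUC figures quoted in the record above summarize how well classifier scores rank positives above negatives. For a binary task, the AUC equals the Mann-Whitney rank statistic, which a short pure-Python sketch can make concrete (illustrative code, not the authors' implementation):

```python
def roc_auc(scores, labels):
    """AUC = probability that a randomly chosen positive is scored above a
    randomly chosen negative, counting ties as 0.5 (Mann-Whitney statistic)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated toy scores give AUC = 1.0
auc = roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])
```

For the multi-way setting described above, a common convention is to compute such a one-vs-rest AUC per class and average the results.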
Issues associated with deploying CNN transfer learning to detect COVID-19 from chest X-rays.
COVID-19 first occurred in Wuhan, China in December 2019. Subsequently, the virus spread throughout the world, and as of June 2020 the total number of confirmed cases is above 4.7 million, with over 315,000 deaths. Machine learning algorithms built on radiography images can be used as a decision support mechanism to help radiologists speed up the diagnostic process. The aim of this work is to conduct a critical analysis investigating the applicability of convolutional neural networks (CNNs) for COVID-19 detection in chest X-ray images and to highlight the issues of using CNNs directly on the whole image. To accomplish this task, we use 12 off-the-shelf CNN architectures in transfer-learning mode on 3 publicly available chest X-ray databases, and also propose a shallow CNN architecture that we train from scratch. Chest X-ray images are fed into the CNN models without any preprocessing, to replicate studies that used chest X-rays in this manner. Then a qualitative investigation is performed to inspect the decisions made by the CNNs using a technique known as class activation maps (CAM). Using CAMs, one can map the activations that contributed to the decision of a CNN back to the original image to visualize the most discriminating region(s) of the input image. We conclude that CNN decisions should not be taken into consideration, despite their high classification accuracy, until clinicians can visually inspect and approve the region(s) of the input image that a CNN uses to reach its prediction.
Physical and engineering sciences in medicine
"2020-10-08T00:00:00"
[ "TabanMajeed", "RasberRashid", "DashtiAli", "ArasAsaad" ]
10.1007/s13246-020-00934-8 10.1148/radiol.2020200642 10.1007/s13246-020-00865-4 10.1101/2020.02.14.20023028v5 10.1101/2020.03.20.20039834 10.1016/j.compbiomed.2020.103792 10.1038/s42256-020-0185-2 10.1038/nature14539 10.1113/jphysiol.1968.sp008455 10.1007/s11263-015-0816-y 10.1109/ACCESS.2017.2784352 10.1016/j.patcog.2017.10.013 10.1007/s13244-018-0639-9 10.1016/j.cell.2018.02.010 10.1007/s11263-019-01228-7
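The class activation mapping technique mentioned in the record above projects a class's final-layer weights back onto the last convolutional feature maps to produce a saliency map. A minimal NumPy sketch of the idea; array shapes and names are illustrative and assume a global-average-pooling architecture:

```python
import numpy as np

def class_activation_map(features, class_weights):
    """features: (C, H, W) activations of the last convolutional layer.
    class_weights: (C,) weights of the target class in the final fully
    connected layer (applied after global average pooling).
    Returns an (H, W) saliency map: a weighted sum of the feature maps,
    normalized to [0, 1] for visualization."""
    cam = np.tensordot(class_weights, features, axes=1)  # contract channels
    cam -= cam.min()                                     # shift non-negative
    if cam.max() > 0:
        cam /= cam.max()                                 # scale to [0, 1]
    return cam

features = np.random.rand(8, 7, 7)   # toy feature maps
weights = np.random.rand(8)          # toy class weights
cam = class_activation_map(features, weights)
```

In practice the (H, W) map is upsampled to the input resolution and overlaid on the X-ray, which is how the discriminating regions discussed above are inspected.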
The Performance of Deep Neural Networks in Differentiating Chest X-Rays of COVID-19 Patients From Other Bacterial and Viral Pneumonias.
Chest radiography is a critical tool in the early detection, management planning, and follow-up evaluation of COVID-19 pneumonia; however, in smaller clinics around the world there is a shortage of radiologists to analyze the large number of examinations performed, especially during a pandemic. The limited availability of high-resolution computed tomography and real-time polymerase chain reaction in developing countries and in regions of high patient turnover also emphasizes the importance of chest radiography as both a screening and a diagnostic tool. In this paper, we compare the performance of 17 available deep learning algorithms to help identify imaging features of COVID-19 pneumonia. We utilize an existing diagnostic technology (chest radiography) and preexisting neural networks (DarkNet-19) to detect imaging features of COVID-19 pneumonia. Our approach eliminates the extra time and resources needed to develop new technology and associated algorithms, thus aiding front-line healthcare workers in the race against the COVID-19 pandemic. Our results show that DarkNet-19 is the optimal pre-trained neural network for the detection of radiographic features of COVID-19 pneumonia, scoring an overall accuracy of 94.28% over 5,854 X-ray images. We also present a custom visualization of the results that can be used to highlight important visual biomarkers of the disease and of disease progression.
Frontiers in medicine
"2020-10-06T00:00:00"
[ "MohamedElgendi", "Muhammad UmerNasir", "QunfengTang", "Richard RibonFletcher", "NewtonHoward", "CarloMenon", "RababWard", "WilliamParker", "SavvasNicolaou" ]
10.3389/fmed.2020.00550 10.23750/abm.v91i1.9397 10.1016/S0140-6736(20)30183-5 10.1128/JCM.00556-09 10.1001/jama.2020.2648 10.1097/RLI.0000000000000670 10.1016/j.acra.2018.02.018 10.20944/preprints202003.0300.v1 10.1007/s13246-020-00865-4 10.1109/CVPR.2015.7298594 10.1109/CVPR.2016.90 10.1109/CVPR.2018.00716 10.1109/CVPR.2018.00907 10.1109/CVPR.2017.195 10.1109/CVPR.2018.00474 10.1109/CVPR.2017.243 10.1007/s11263-015-0816-y 10.1016/j.ergon.2011.05.001
Review on Diagnosis of COVID-19 from Chest CT Images Using Artificial Intelligence.
The COVID-19 diagnostic approach is mainly divided into two broad categories: a laboratory-based approach and a chest radiography approach. The last few months have witnessed a rapid increase in the number of studies that use artificial intelligence (AI) techniques to diagnose COVID-19 with chest computed tomography (CT). In this study, we review the AI-based diagnosis of COVID-19 from chest CT. We searched ArXiv, MedRxiv, and Google Scholar using the terms "deep learning", "neural networks", "COVID-19", and "chest CT". At the time of writing (August 24, 2020), there had been nearly 100 such studies, and 30 of them were selected for this review. We categorized the studies based on the classification tasks: COVID-19/normal, COVID-19/non-COVID-19, COVID-19/non-COVID-19 pneumonia, and severity. The sensitivity, specificity, precision, accuracy, area under the curve, and F1 score results were reported as high as 100%, 100%, 99.62%, 99.87%, 100%, and 99.5%, respectively. However, the presented results should be compared carefully due to the different degrees of difficulty of the different classification tasks.
Computational and mathematical methods in medicine
"2020-10-06T00:00:00"
[ "IlkerOzsahin", "BoranSekeroglu", "Musa SaniMusa", "Mubarak TaiwoMustapha", "DilberUzun Ozsahin" ]
10.1155/2020/9756518 10.1109/rbme.2020.2987975 10.1148/radiol.2020200823 10.1148/radiol.2020200432 10.1016/j.crad.2020.06.005 10.1186/s41747-018-0061-6 10.1148/radiol.2020200905 10.1101/2020.04.24.20078998 10.1007/978-3-030-01264-9_8 10.1109/cvpr.2018.00474 10.1109/CVPR.2017.195 10.1109/cvpr.2016.308 10.1109/CVPR.2016.90 10.1080/07391102.2020.1788642 10.1118/1.3528204 10.1101/2020.03.20.20039834 10.1016/j.compmedimag.2011.07.003 10.1007/s10096-020-03901-z 10.1101/2020.04.16.20064709 10.36227/techrxiv.12334265.v2 10.1109/tmi.2020.2995965 10.1101/2020.02.25.20021568 10.1101/2020.03.19.20039354 10.1109/cvpr.2017.683 10.1016/j.irbm.2020.05.003 10.1109/TMI.2020.2996256 10.1038/s41467-020-17971-2 10.1148/ryct.2020200026 10.1101/2020.04.13.20063941 10.1016/j.eng.2020.04.010 10.1183/13993003.00775-2020 10.1148/radiol.2020201491 10.1109/TMI.2020.2992546 10.1101/2020.02.23.20026930 10.2196/19569 10.1007/s00330-020-07044-9 10.3389/fbioe.2020.00898 10.1371/journal.pone.0236621 10.1007/s00330-020-07156-2
Artificial intelligence in pulmonary medicine: computer vision, predictive model and COVID-19.
Artificial intelligence (AI) is transforming healthcare delivery. The digital revolution in medicine and healthcare information is prompting a staggering growth of data intertwined with elements from many digital sources such as genomics, medical imaging and electronic health records. Such massive growth has sparked the development of an increasing number of AI-based applications that can be deployed in clinical practice. Pulmonary specialists who are familiar with the principles of AI and its applications will be empowered and prepared to seize future practice and research opportunities. The goal of this review is to provide pulmonary specialists and other readers with information pertinent to the use of AI in pulmonary medicine. First, we describe the concept of AI and some of the requisites of machine learning and deep learning. Next, we review some of the literature relevant to the use of computer vision in medical imaging, predictive modelling with machine learning, and the use of AI for battling the novel severe acute respiratory syndrome-coronavirus-2 pandemic. We close our review with a discussion of limitations and challenges pertaining to the further incorporation of AI into clinical pulmonary practice.
European respiratory review : an official journal of the European Respiratory Society
"2020-10-03T00:00:00"
[ "DanaiKhemasuwan", "Jeffrey SSorensen", "Henri GColt" ]
10.1183/16000617.0181-2020 10.1080/17476348.2020.1743181 10.1136/thoraxjnl-2020-214556 10.1183/13993003.01216-2019 10.1007/s41030-020-00110-z 10.1097/MCP.0000000000000459 10.1111/resp.13676 10.1109/TBME.1985.325532 10.1016/0010-4809(83)90021-6 10.1097/00004669-198805000-00010 10.1016/j.jelectrocard.2016.04.010 10.1016/0004-3702(78)90014-0 10.1056/NEJM199406233302506 10.7326/0003-4819-108-1-80 10.1161/CIRCULATIONAHA.115.001593 10.1186/s12874-019-0681-4 10.1056/NEJMp1702071 10.1016/j.jacr.2019.07.019 10.1109/72.935086 10.1186/s13054-017-1836-5 10.1038/s41591-018-0310-5 10.1038/s41591-018-0213-5 10.1007/s11263-015-0816-y 10.1126/science.aaa8685 10.1038/s41746-019-0122-0 10.1038/nature24270 10.1038/s41568-018-0016-5 10.1038/nature14539 10.1038/s41591-019-0536-x 10.1136/thoraxjnl-2019-214104 10.1164/rccm.201903-0505OC 10.1038/srep46479 10.1148/radiol.2018180237 10.1186/s13550-017-0260-9 10.21037/qims.2018.06.03 10.1158/0008-5472.CAN-18-0696 10.1097/RLI.0000000000000574 10.1016/S2213-2600(19)30059-1 10.1161/CIRCRESAHA.118.313911 10.1371/journal.pone.0224453 10.1164/rccm.201808-1543OC 10.1016/j.chest.2018.01.037 10.1183/13993003.01660-2018 10.1148/radiol.2020200905 10.1001/jama.2016.17216 10.1038/nature21056 10.1001/jama.2017.14585 10.1148/radiol.10091808 10.1177/0969141317727771 10.1001/jamaoncol.2016.6416 10.1016/S2589-7500(19)30123-2 10.1136/thoraxjnl-2015-207252 10.1016/j.ejrad.2015.08.016 10.2214/AJR.15.15674 10.1016/S2213-2600(15)00140-X 10.1513/AnnalsATS.201612-947OC 10.1164/rccm.200711-1754OC 10.1164/rccm.200906-0896OC 10.1056/NEJMoa1012740 10.1159/000454956 10.1016/S2213-2600(13)70184-X 10.1183/09031936.05.00035205 10.1016/S0140-6736(16)00080-5 10.1016/j.compbiomed.2020.103792 10.1016/j.ajem.2020.04.016 10.4049/jimmunol.1900033 10.1016/j.csbj.2020.03.025 10.1038/s41586-019-1923-7 10.1152/physiolgenomics.00029.2020 10.21037/jtd.2020.02.64 10.1016/S1473-3099(20)30243-7 10.1016/S1473-3099(20)30120-1 10.1007/s00146-020-00978-0 10.1038/s41591-018-0300-7 
10.1093/annonc/mdx781 10.1016/S0140-6736(16)32380-7 10.1001/jama.2019.16489 10.1186/s40537-014-0007-7
Detection Methods of COVID-19.
Since being first detected in China, coronavirus disease 2019 (COVID-19) has spread rapidly across the world, triggering a global pandemic with no viable cure in sight. As a result, national responses have focused on effectively minimizing the spread. Border control measures and travel restrictions have been implemented in a number of countries to limit the import and export of the virus. The detection of COVID-19 is a key task for physicians. The erroneous results of early laboratory tests and their delays led researchers to focus on different options. Information obtained from computed tomography (CT) and radiological images is important for clinical diagnosis. Therefore, it is worth developing a rapid method for the detection of viral diseases through the analysis of radiographic images. We propose a novel method for the detection of COVID-19. The purpose is to provide clinical decision support to healthcare workers and researchers. This article aims to support researchers working on the early detection of COVID-19 as well as similar viral diseases.
SLAS technology
"2020-10-01T00:00:00"
[ "AmiraEchtioui", "WassimZouch", "MohamedGhorbel", "ChokriMhiri", "HabibHamam" ]
10.1177/2472630320962002 10.1148/radiol 10.1101/2020.02.14.20023028
AI for radiographic COVID-19 detection selects shortcuts over signal.
Artificial intelligence (AI) researchers and radiologists have recently reported AI systems that accurately detect COVID-19 in chest radiographs. However, the robustness of these systems remains unclear. Using state-of-the-art techniques in explainable AI, we demonstrate that recent deep learning systems to detect COVID-19 from chest radiographs rely on confounding factors rather than medical pathology, creating an alarming situation in which the systems appear accurate, but fail when tested in new hospitals. We observe that the approach to obtain training data for these AI systems introduces a nearly ideal scenario for AI to learn these spurious "shortcuts." Because this approach to data collection has also been used to obtain training data for detection of COVID-19 in computed tomography scans and for medical imaging tasks related to other diseases, our study reveals a far-reaching problem in medical imaging AI. In addition, we show that evaluation of a model on external data is insufficient to ensure AI systems rely on medically relevant pathology, since the undesired "shortcuts" learned by AI systems may not impair performance in new hospitals. These findings demonstrate that explainable AI should be seen as a prerequisite to clinical deployment of ML healthcare models.
medRxiv : the preprint server for health sciences
"2020-10-01T00:00:00"
[ "Alex JDeGrave", "Joseph DJanizek", "Su-InLee" ]
10.1101/2020.09.13.20193565
Viral epitope profiling of COVID-19 patients reveals cross-reactivity and correlates of severity.
Understanding humoral responses to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is critical for improving diagnostics, therapeutics, and vaccines. Deep serological profiling of 232 coronavirus disease 2019 (COVID-19) patients and 190 pre-COVID-19 era controls using VirScan revealed more than 800 epitopes in the SARS-CoV-2 proteome, including 10 epitopes likely recognized by neutralizing antibodies. Preexisting antibodies in controls recognized SARS-CoV-2 ORF1, whereas only COVID-19 patient antibodies primarily recognized spike protein and nucleoprotein. A machine learning model trained on VirScan data predicted SARS-CoV-2 exposure history with 99% sensitivity and 98% specificity; a rapid Luminex-based diagnostic was developed from the most discriminatory SARS-CoV-2 peptides. Individuals with more severe COVID-19 exhibited stronger and broader SARS-CoV-2 responses, weaker antibody responses to prior infections, and higher incidence of cytomegalovirus and herpes simplex virus 1, possibly influenced by demographic covariates. Among hospitalized patients, males produce stronger SARS-CoV-2 antibody responses than females.
Science (New York, N.Y.)
"2020-10-01T00:00:00"
[ "EllenShrock", "EricFujimura", "TomaszKula", "Richard TTimms", "I-HsiuLee", "YumeiLeng", "Matthew LRobinson", "Brandon MSie", "Mamie ZLi", "YuezhouChen", "JenniferLogue", "AdamZuiani", "DeniseMcCulloch", "Felipe J NLelis", "StephanieHenson", "Daniel RMonaco", "MeghanTravers", "ShaghayeghHabibi", "William AClarke", "PatrizioCaturegli", "OliverLaeyendecker", "AlicjaPiechocka-Trocha", "Jonathan ZLi", "AshokKhatri", "Helen YChu", "NoneNone", "Alexandra-ChloéVillani", "KyleKays", "Marcia BGoldberg", "NirHacohen", "Michael RFilbin", "Xu GYu", "Bruce DWalker", "Duane RWesemann", "H BenjaminLarman", "James ALederer", "Stephen JElledge" ]
10.1126/science.abd4250 10.1038/s41579-018-0118-9 10.4014/jmb.2003.03011 10.1016/j.clim.2020.108427 10.1038/nbt.1856 10.1038/s41596-018-0025-6 10.1126/science.aaa0698 10.1126/science.aay6485 10.1038/s41586-020-2012-7 10.1128/CVI.00278-10 10.1038/s42256-019-0138-9 10.1016/j.ymeth.2019.01.014 10.1016/j.cell.2020.05.015 10.1038/s41586-020-2550-z 10.1128/JVI.02015-19 10.1016/j.bbrc.2014.07.090 10.1001/jama.2020.8598 10.15585/mmwr.mm6915e3 10.1172/JCI64096 10.1038/s41467-020-16638-2 10.1016/j.cell.2020.02.058 10.1038/s41586-020-2180-5 10.1126/science.abb2507 10.1038/s41586-020-2381-y 10.1371/journal.pone.0200267 10.1086/652438 10.1016/j.immuni.2008.09.008 10.1007/s00262-005-0109-3 10.1046/j.1365-2567.1997.00310.x 10.1111/acel.12059 10.1128/CMR.00102-14 10.1093/molbev/msy096 10.1073/pnas.0404206101 10.1038/s41586-020-2286-9 10.1093/nar/gkx346 10.1002/(SICI)1096-987X(199802)19:3<319::AID-JCC6>3.0.CO;2-W
Improving the performance of CNN to predict the likelihood of COVID-19 using chest X-ray images with preprocessing algorithms.
This study aims to develop and test a new computer-aided diagnosis (CAD) scheme for chest X-ray images to detect coronavirus (COVID-19) infected pneumonia. The CAD scheme first applies two image preprocessing steps to remove the majority of diaphragm regions, then processes the original image using a histogram equalization algorithm and a bilateral low-pass filter. The original image and the two filtered images are then used to form a pseudo-color image. This image is fed into the three input channels of a transfer-learning-based convolutional neural network (CNN) model to classify chest X-ray images into 3 classes: COVID-19 infected pneumonia, other community-acquired non-COVID-19 infected pneumonia, and normal (non-pneumonia) cases. To build and test the CNN model, a publicly available dataset of 8,474 chest X-ray images is used, which includes 415, 5,179 and 2,880 cases in the three classes, respectively. The dataset is randomly divided into 3 subsets, namely training, validation, and testing, preserving the same frequency of cases in each class, to train and test the CNN model. The CNN-based CAD scheme yields an overall accuracy of 94.5% (2404/2544) with a 95% confidence interval of [0.93, 0.96] in classifying the 3 classes. The CAD scheme also yields 98.4% sensitivity (124/126) and 98.0% specificity (2371/2418) in classifying cases with and without COVID-19 infection. However, without the two preprocessing steps, the CAD scheme yields a lower classification accuracy of 88.0% (2239/2544). This study demonstrates that adding the two image preprocessing steps and generating a pseudo-color image plays an important role in developing a deep learning CAD scheme for chest X-ray images to improve accuracy in detecting COVID-19 infected pneumonia.
International journal of medical informatics
"2020-09-30T00:00:00"
[ "MortezaHeidari", "SeyedehnafisehMirniaharikandehei", "Abolfazl ZargariKhuzani", "GopichandhDanala", "YuchenQiu", "BinZheng" ]
10.1016/j.ijmedinf.2020.104284 10.1101/2020.02.14.20023028 10.1007/s13246-020-00865-4 10.17632/rscbjbr9sj.3
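Two of the preprocessing ideas described in the record above can be sketched in a few lines of NumPy: histogram equalization via the image's cumulative distribution, and stacking processed copies of the image into a 3-channel pseudo-color input. The bilateral filter is omitted here, and all names are illustrative, not the paper's code:

```python
import numpy as np

def equalize_histogram(img):
    """Histogram-equalize an 8-bit grayscale image using its CDF."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = (cdf * 255).astype(np.uint8)                 # lookup table
    return lut[img]

def pseudo_color(img):
    """Stack the original image and a processed copy into 3 channels, as a
    stand-in for the paper's original/equalized/filtered channel scheme."""
    eq = equalize_histogram(img)
    return np.stack([img, eq, eq], axis=-1)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # toy grayscale image
rgb = pseudo_color(img)
```

The resulting 3-channel array matches the RGB input shape expected by standard pretrained CNNs, which is what makes this pseudo-color construction convenient for transfer learning.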
Learning distinctive filters for COVID-19 detection from chest X-ray using shuffled residual CNN.
COVID-19 is a deadly viral infection that has brought a significant threat to human lives. Automatic diagnosis of COVID-19 from medical imaging enables precise medication, helps to control community outbreaks, and reinforces the coronavirus testing methods in place. While there exist several challenges in manually inferring traces of this viral infection from X-rays, a Convolutional Neural Network (CNN) can mine data patterns that capture subtle distinctions between infected and normal X-rays. To enable automated learning of such latent features, a custom CNN architecture is proposed in this research. It learns unique convolutional filter patterns for each kind of pneumonia. This is achieved by restricting certain filters in a convolutional layer to maximally respond only to a particular class of pneumonia/COVID-19. The CNN architecture integrates different convolution types to provide better context for learning robust features and to strengthen gradient flow between layers. The proposed work also visualizes the regions of saliency on the X-ray that had the most influence on the CNN's prediction outcome. To the best of our knowledge, this is the first attempt in deep learning to learn custom filters within a single convolutional layer for identifying specific pneumonia classes. Experimental results demonstrate that the proposed work has significant potential in augmenting current testing methods for COVID-19. It achieves an F1-score of 97.20% and an accuracy of 99.80% on the COVID-19 X-ray set.
Applied soft computing
"2020-09-30T00:00:00"
[ "RKarthik", "RMenaka", "HariharanM" ]
10.1016/j.asoc.2020.106744
Deep learning-based triage and analysis of lesion burden for COVID-19: a retrospective study with external validation.
Prompt identification of patients suspected to have COVID-19 is crucial for disease control. We aimed to develop a deep learning algorithm on the basis of chest CT for rapid triaging in fever clinics. We trained a U-Net-based model on unenhanced chest CT scans obtained from 2447 patients admitted to Tongji Hospital (Wuhan, China) between Feb 1, 2020, and March 3, 2020 (1647 patients with RT-PCR-confirmed COVID-19 and 800 patients without COVID-19) to segment lung opacities and alert cases with COVID-19 imaging manifestations. The ability of artificial intelligence (AI) to triage patients suspected to have COVID-19 was assessed in a large external validation set, which included 2120 retrospectively collected consecutive cases from three fever clinics inside and outside the epidemic centre of Wuhan (Tianyou Hospital [Wuhan, China; area of high COVID-19 prevalence], Xianning Central Hospital [Xianning, China; area of medium COVID-19 prevalence], and The Second Xiangya Hospital [Changsha, China; area of low COVID-19 prevalence]) between Jan 22, 2020, and Feb 14, 2020. To validate the sensitivity of the algorithm in a larger sample of patients with COVID-19, we also included 761 chest CT scans from 722 patients with RT-PCR-confirmed COVID-19 treated in a makeshift hospital (Guanggu Fangcang Hospital, Wuhan, China) between Feb 21, 2020, and March 6, 2020. Additionally, the accuracy of AI was compared with a radiologist panel for the identification of lesion burden increase on pairs of CT scans obtained from 100 patients with COVID-19. In the external validation set, using radiological reports as the reference standard, AI-aided triage achieved an area under the curve of 0·953 (95% CI 0·949-0·959), with a sensitivity of 0·923 (95% CI 0·914-0·932), specificity of 0·851 (0·842-0·860), a positive predictive value of 0·790 (0·777-0·803), and a negative predictive value of 0·948 (0·941-0·954). 
AI took a median of 0·55 min (IQR 0·43-0·63) to flag a positive case, whereas radiologists took a median of 16·21 min (11·67-25·71) to draft a report and 23·06 min (15·67-39·20) to release a report. With regard to the identification of increases in lesion burden, AI achieved a sensitivity of 0·962 (95% CI 0·947-1·000) and a specificity of 0·875 (95% CI 0·833-0·923). The agreement between AI and the radiologist panel was high (Cohen's kappa coefficient 0·839, 95% CI 0·718-0·940). A deep learning algorithm for triaging patients with suspected COVID-19 at fever clinics was developed and externally validated. Given its high accuracy across populations with varied COVID-19 prevalence, integration of this system into the standard clinical workflow could expedite identification of chest CT scans with imaging indications of COVID-19. Funding: Special Project for Emergency of the Science and Technology Department of Hubei Province, China.
The Lancet. Digital health
"2020-09-29T00:00:00"
[ "MinghuanWang", "ChenXia", "LuHuang", "ShabeiXu", "ChuanQin", "JunLiu", "YingCao", "PengxinYu", "TingtingZhu", "HuiZhu", "ChaonanWu", "RongguoZhang", "XiangyuChen", "JianmingWang", "GuangDu", "ChenZhang", "ShaokangWang", "KuanChen", "ZhengLiu", "LimingXia", "WeiWang" ]
10.1016/S2589-7500(20)30199-0 10.7326/M20-1495 10.1148/radiol.2020200905
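The agreement statistic quoted in the record above, Cohen's kappa, compares observed agreement between two raters with the agreement expected by chance from their marginal rates. A minimal sketch for two raters over binary items (illustrative code and toy ratings, not the study's analysis):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same binary items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a1 = sum(rater_a) / n    # P(rater A labels positive)
    p_b1 = sum(rater_b) / n    # P(rater B labels positive)
    # Chance agreement: both positive or both negative under independence
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Toy ratings: 4 of 6 items agree
kappa = cohens_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 1])
```

A kappa of 1 indicates perfect agreement and 0 indicates chance-level agreement, so the 0·839 reported above reflects strong AI-radiologist concordance on lesion-burden change.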
CT scan AI-aided triage for patients with COVID-19 in China.
null
The Lancet. Digital health
"2020-09-29T00:00:00"
[ "VarutVardhanabhuti" ]
10.1016/S2589-7500(20)30222-3 10.1148/radiol.2020202439
Unveiling COVID-19 from CHEST X-Ray with Deep Learning: A Hurdles Race with Small Data.
The possibility of using widespread and simple chest X-ray (CXR) imaging for the early screening of COVID-19 patients is attracting much interest from both the clinical and the AI communities. In this study we provide insights, and also raise warnings, on what it is reasonable to expect when applying deep learning to COVID-19 classification of CXR images. We provide a methodological guide and a critical reading of an extensive set of statistical results that can be obtained using currently available datasets. In particular, we take up the challenge posed by the current small size of COVID-19 data and show how significant the bias introduced by transfer learning from larger public non-COVID CXR datasets can be. We also contribute by providing results on a medium-size COVID-19 CXR dataset, just collected by one of the major emergency hospitals in Northern Italy during the peak of the COVID-19 pandemic. These novel data allow us to help validate the generalization capacity of preliminary results circulating in the scientific community. Our conclusions shed some light on the possibility of effectively discriminating COVID-19 using CXR.
International journal of environmental research and public health
"2020-09-26T00:00:00"
[ "EnzoTartaglione", "Carlo AlbertoBarbano", "ClaudioBerzovini", "MarcoCalandri", "MarcoGrangetto" ]
10.3390/ijerph17186933 10.1148/radiol.2020200490 10.1101/2020.02.11.20021493 10.1148/radiol.2020201365 10.1016/S1473-3099(20)30086-4 10.1148/radiol.2020201160 10.1016/S0140-6736(20)30728-5 10.1080/00313020310001619118 10.1109/TMI.2016.2528162 10.1109/TMI.2016.2535865 10.20944/preprints202003.0300.v1 10.1007/s13246-020-00865-4 10.1016/j.compbiomed.2017.08.001 10.17632/rscbjbr9sj2 10.1109/TMI.2013.2290491 10.1109/42.929615 10.1109/TMI.2014.2337057 10.1109/TMI.2016.2535302 10.1007/s00365-006-0663-2
Detection of COVID-19 Using Deep Learning Algorithms on Chest Radiographs.
To evaluate the performance of a deep learning (DL) algorithm for the detection of COVID-19 on chest radiographs (CXR). In this retrospective study, a DL model was trained on 112,120 CXR images with 14 labeled classifiers (ChestX-ray14) and fine-tuned using initial CXR on hospital admission of 509 patients, who had undergone COVID-19 reverse transcriptase-polymerase chain reaction (RT-PCR). The test set consisted of a CXR on presentation of 248 individuals suspected of COVID-19 pneumonia between February 16 and March 3, 2020 from 4 centers (72 RT-PCR positives and 176 RT-PCR negatives). The CXR were independently reviewed by 3 radiologists and by the DL algorithm. Diagnostic performance was compared with radiologists' performance and was assessed by area under the receiver operating characteristics (AUC). The median age of the subjects in the test set was 61 (interquartile range: 39 to 79) years (51% male). The DL algorithm achieved an AUC of 0.81, sensitivity of 0.85, and specificity of 0.72 in detecting COVID-19 using RT-PCR as the reference standard. On subgroup analyses, the model achieved an AUC of 0.79, sensitivity of 0.80, and specificity of 0.74 in detecting COVID-19 in patients presenting with fever or respiratory symptoms, and an AUC of 0.87, sensitivity of 0.85, and specificity of 0.81 in distinguishing COVID-19 from other forms of pneumonia. The algorithm significantly outperformed human readers (P<0.001 using the DeLong test) with higher sensitivity (P=0.01 using the McNemar test). A DL algorithm (COV19NET) for the detection of COVID-19 on chest radiographs can potentially be an effective tool in triaging patients, particularly in resource-stretched health-care systems.
Journal of thoracic imaging
"2020-09-25T00:00:00"
[ "Wan Hang KeithChiu", "VarutVardhanabhuti", "DmytroPoplavskiy", "Philip Leung HoYu", "RichardDu", "Alistair Yun HeeYap", "SailongZhang", "Ambrose Ho-TungFong", "Thomas Wing-YanChin", "Jonan Chun YinLee", "Siu TingLeung", "Christine Shing YenLo", "Macy Mei-SzeLui", "Benjamin Xin HaoFang", "Ming-YenNg", "Michael DKuo" ]
10.1097/RTI.0000000000000559
Advancing COVID-19 differentiation with a robust preprocessing and integration of multi-institutional open-repository computer tomography datasets for deep learning analysis.
The coronavirus pandemic and its unprecedented global consequences have spurred the interest of the artificial intelligence research community. A plethora of published studies have investigated the role of imaging such as chest X-rays and computed tomography in automated coronavirus disease 2019 (COVID-19) diagnosis. Open repositories of medical imaging data can play a significant role by promoting cooperation among institutes on a worldwide scale. However, they may induce limitations related to variable data quality and intrinsic differences due to the wide variety of scanner vendors and imaging parameters. In this study, a state-of-the-art custom U-Net model is presented with a dice similarity coefficient performance of 99.6%, along with a transfer learning VGG-19 based model for COVID-19 versus pneumonia differentiation exhibiting an area under the curve of 96.1%. The above significantly improved on the baseline model trained with no segmentation on selected tomographic slices of the same dataset. The presented study highlights the importance of a robust preprocessing protocol for image analysis within a heterogeneous imaging dataset and assesses the potential diagnostic value of the presented COVID-19 model by comparing its performance to the state of the art.
Experimental and therapeutic medicine
"2020-09-25T00:00:00"
[ "EleftheriosTrivizakis", "NikosTsiknakis", "Evangelia EVassalou", "Georgios ZPapadakis", "Demetrios ASpandidos", "DimosthenisSarigiannis", "AristidisTsatsakis", "NikolaosPapanikolaou", "Apostolos HKarantanas", "KostasMarias" ]
10.3892/etm.2020.9210 10.3892/mmr.2020.11127 10.3892/ijmm.2020.4555 10.1016/j.toxrep.2020.04.012 10.1016/j.fct.2020.111418 10.1148/radiol.2020200343 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1016/S0140-6736(20)30211-7 10.1001/jama.2020.1585 10.1056/NEJMoa2001316 10.1148/radiol.2020200230 10.1016/S0140-6736(20)30183-5 10.3892/etm.2020.8797 10.1007/s13246-020-00865-4 10.1148/radiol.2020200905 10.1183/13993003.00775-2020 10.1016/j.cell.2020.04.045 10.1016/j.compbiomed.2020.103795 10.1101/2020.02.23.20026930 10.1101/2020.03.12.20027185 10.1101/2020.04.24.20078584 10.5281/zenodo.3757476 10.1118/1.3528204 10.1101/2020.04.13.20063941 10.1038/s41467-020-17971-2
COVID19XrayNet: A Two-Step Transfer Learning Model for the COVID-19 Detecting Problem Based on a Limited Number of Chest X-Ray Images.
The novel coronavirus severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has recently caused a major pandemic outbreak. Various diagnostic technologies have been under active development. The novel coronavirus disease (COVID-19) may induce pulmonary failure, and chest X-ray imaging has become one of the major confirmed diagnostic technologies. The very limited number of publicly available samples has rendered the training of deep neural networks unstable and inaccurate. This study proposed a two-step transfer learning pipeline and a deep residual network framework, COVID19XrayNet, for the COVID-19 detection problem based on chest X-ray images. COVID19XrayNet first tunes the transferred model on a large dataset of chest X-ray images, which is further tuned using a small dataset of annotated chest X-ray images. The final model achieved an accuracy of 0.9108. The experimental data also suggested that the model may improve as more training samples are released. COVID19XrayNet is a two-step transfer learning framework designed for biomedical images.
Interdisciplinary sciences, computational life sciences
"2020-09-23T00:00:00"
[ "RuochiZhang", "ZhehaoGuo", "YueSun", "QiLu", "ZijianXu", "ZhaominYao", "MeiyuDuan", "ShuaiLiu", "YanjiaoRen", "LanHuang", "FengfengZhou" ]
10.1007/s12539-020-00393-5 10.1016/j.ijantimicag.2020.105955 10.1038/s41586-020-2012-7 10.1001/jama.2020.4683 10.1016/S0140-6736(20)30633-4 10.1111/tmi.13383 10.1093/clinchem/hvaa029 10.1002/jmv.25727 10.1148/radiol.2020200343 10.1109/TIP.2013.2264677 10.1016/j.compbiomed.2018.06.006 10.1186/s12859-018-2477-7 10.1016/j.cell.2018.02.010 10.1038/s41597-019-0322-0 10.1109/72.554195 10.1109/ACCESS.2018.2817593 10.1109/TMI.2019.2948026 10.2217/epi-2019-0230 10.1016/j.compbiomed.2019.103394 10.11613/BM.2012.031
COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images.
Novel Coronavirus disease (COVID-19) has abruptly and undoubtedly changed the world as we know it at the end of the 2nd decade of the 21st century. COVID-19 is extremely contagious and quickly spreading globally, making its early diagnosis of paramount importance. Early diagnosis of COVID-19 enables health care professionals and government authorities to break the chain of transmission and flatten the epidemic curve. The common type of COVID-19 diagnosis test, however, requires specific equipment and has relatively low sensitivity. Computed tomography (CT) scans and X-ray images, on the other hand, reveal specific manifestations associated with this disease. Overlap with other lung infections makes human-centered diagnosis of COVID-19 challenging. Consequently, there has been an urgent surge of interest in developing Deep Neural Network (DNN)-based diagnosis solutions, mainly based on Convolutional Neural Networks (CNNs), to facilitate identification of positive COVID-19 cases. CNNs, however, are prone to losing spatial information between image instances and require large datasets. The paper presents an alternative modeling framework based on Capsule Networks, referred to as COVID-CAPS, capable of handling small datasets, which is of significant importance given the sudden and rapid emergence of COVID-19. Our results based on a dataset of X-ray images show that COVID-CAPS has an advantage over previous CNN-based models. COVID-CAPS achieved an Accuracy of 95.7%, Sensitivity of 90%, Specificity of 95.8%, and Area Under the Curve (AUC) of 0.97, while having far fewer trainable parameters than its counterparts. To further improve the diagnosis capabilities of COVID-CAPS, pre-training and transfer learning are utilized based on a new dataset constructed from an external dataset of X-ray images. This is in contrast to existing works on COVID-19 detection, where pre-training is performed on natural images. Pre-training with a dataset of similar nature further improved accuracy to 98.3% and specificity to 98.6%.
Pattern recognition letters
"2020-09-23T00:00:00"
[ "ParnianAfshar", "ShahinHeidarian", "FarnooshNaderkhani", "AnastasiaOikonomou", "Konstantinos NPlataniotis", "ArashMohammadi" ]
10.1016/j.patrec.2020.09.010
COVID-19 image classification using deep features and fractional-order marine predators algorithm.
Currently, we are witnessing the severe spread of the new coronavirus pandemic, COVID-19, which causes dangerous symptoms in humans and animals and whose complications may lead to death. Although convolutional neural networks (CNNs) are considered the current state-of-the-art image classification technique, they incur a massive computational cost for deployment and training. In this paper, we propose an improved hybrid classification approach for COVID-19 images by combining the strengths of CNNs (using a powerful architecture called Inception) to extract features with a swarm-based feature selection algorithm (Marine Predators Algorithm) to select the most relevant features. The combined method (FO-MPA) integrates the Marine Predators Algorithm with fractional-order calculus (FO), a robust mathematical tool. The proposed approach was evaluated on two public COVID-19 X-ray datasets, achieving both high performance and reduced computational complexity. The two datasets consist of COVID-19 X-ray images published on Kaggle by an international cardiothoracic radiologist, researchers, and others. The proposed approach successfully selected 130 and 86 of the 51K features extracted by Inception from dataset 1 and dataset 2, respectively, while improving classification accuracy at the same time. The results are the best achieved on these datasets when compared to a set of recent feature selection algorithms. By achieving classification accuracy and F-score of 98.7% and 98.2% for dataset 1 and 99.6% and 99% for dataset 2, respectively, the proposed approach outperforms several CNNs and all recent works on COVID-19 images.
Scientific reports
"2020-09-23T00:00:00"
[ "Ahmed TSahlol", "DaliaYousri", "Ahmed AEwees", "Mohammed A AAl-Qaness", "RobertasDamasevicius", "Mohamed AbdElaziz" ]
10.1038/s41598-020-71294-2 10.1038/nature12711 10.3390/jcm9030674 10.1148/radiol.2020200330 10.1155/2018/3052852 10.1016/j.media.2016.05.004 10.1016/j.ejca.2011.11.036 10.1109/TMI.2015.2459064 10.1016/j.bbe.2019.11.004 10.1007/s10916-019-1428-9 10.1007/s10115-006-0043-5 10.1016/j.irbm.2019.10.006 10.1016/j.cviu.2010.09.007 10.1109/LSP.2014.2364612 10.1002/ima.22118 10.1109/TMI.2009.2028078 10.1016/j.media.2017.07.005 10.1016/j.dss.2011.01.015 10.1007/s11042-019-7354-5 10.1007/s11042-020-08699-8 10.1016/j.eswa.2020.113377 10.1038/s41598-020-59215-9 10.1016/j.comnet.2018.01.007 10.1016/j.engappai.2020.103662 10.1016/j.advengsoft.2016.01.008 10.1016/j.future.2019.07.015 10.1016/j.future.2020.03.055 10.1016/j.advengsoft.2013.12.007 10.1016/j.future.2019.02.028 10.1007/s13246-020-00865-4 10.1016/j.compbiomed.2020.103792
Cascaded deep learning classifiers for computer-aided diagnosis of COVID-19 and pneumonia diseases in X-ray scans.
Computer-aided diagnosis (CAD) systems are considered a powerful tool for physicians to support identification of the novel Coronavirus Disease 2019 (COVID-19) using medical imaging modalities. Therefore, this article proposes a new framework of cascaded deep learning classifiers to enhance the performance of these CAD systems for highly suspected COVID-19 and pneumonia diseases in X-ray images. Our proposed deep learning framework constitutes two major advancements. First, the complicated multi-label classification of X-ray images has been simplified using a series of binary classifiers for each tested case of the health status. This mimics the clinical workflow of diagnosing potential diseases for a patient. Second, the cascaded architecture of COVID-19 and pneumonia classifiers is flexible enough to use different fine-tuned deep learning models simultaneously, achieving the best performance in confirming infected cases. This study includes eleven pre-trained convolutional neural network models, such as the Visual Geometry Group Network (VGG) and Residual Neural Network (ResNet). They have been successfully tested and evaluated on a public X-ray image dataset for normal and three diseased cases. The results of the proposed cascaded classifiers showed that the VGG16, ResNet50V2, and Dense Neural Network (DenseNet169) models achieved the best detection accuracy for COVID-19, viral (non-COVID-19) pneumonia, and bacterial pneumonia images, respectively. Furthermore, the performance of our cascaded deep learning classifiers is superior to other multi-label classification methods for COVID-19 and pneumonia diseases in previous studies. Therefore, the proposed deep learning framework presents a good option to be applied in clinical routine to assist the diagnostic procedures of COVID-19 infection.
Complex & intelligent systems
"2020-09-22T00:00:00"
[ "Mohamed EsmailKarar", "Ezz El-DinHemdan", "Marwa AShouman" ]
10.1007/s40747-020-00199-4 10.1016/j.tmaid.2020.101623 10.1001/jama.2020.0757 10.1016/j.ijsu.2020.02.034 10.1016/j.diagmicrobio.2018.11.014 10.1172/JCI33947 10.1148/radiol.2020200330 10.1016/j.jinf.2020.03.007 10.1148/ryct.2020200034 10.1016/j.jrid.2020.03.006 10.1016/j.clinimag.2020.04.001 10.1148/radiol.2020201160 10.1097/rti.0000000000000404 10.1016/S0140-6736(20)30211-7 10.1016/j.compmedimag.2014.09.005 10.1097/IMI.0b013e31822c6a77 10.1016/j.ejrad.2016.10.006 10.1016/j.media.2020.101666 10.1016/j.medengphy.2020.02.003 10.1016/j.bspc.2019.101678 10.1007/s11548-020-02186-z 10.1016/j.crad.2019.08.005 10.1109/TMI.2019.2894349 10.4018/IJACI.2019070106 10.1097/SLA.0000000000002693 10.1016/j.artmed.2018.08.008 10.1109/ACCESS.2019.2920980 10.1007/s42979-020-00209-9 10.1007/s12559-020-09751-3 10.1016/j.jctube.2019.01.003 10.1016/j.cmpb.2019.105162 10.1016/j.measurement.2019.05.076 10.1007/s13246-020-00865-4 10.1016/j.compbiomed.2020.103792 10.1016/j.cmpb.2020.105608 10.1016/j.cmpb.2020.105581 10.1016/j.knosys.2020.106270 10.1016/j.asoc.2020.106580 10.1109/LSP.2017.2679608 10.1016/j.ipm.2009.03.002 10.1109/TMI.2020.2993291 10.3233/jifs-201146
COVID-19 detection in CT images with deep learning: A voting-based scheme and cross-datasets analysis.
Early detection and diagnosis are critical factors in controlling the spread of COVID-19. A number of deep learning-based methodologies have recently been proposed for COVID-19 screening in CT scans as a tool to automate and assist with diagnosis. These approaches, however, suffer from at least one of the following problems: (i) they treat each CT scan slice independently and (ii) the methods are trained and tested with sets of images from the same dataset. Treating the slices independently means that the same patient may appear in the training and test sets at the same time, which may produce misleading results. It also raises the question of whether scans from the same patient should be evaluated as a group or not. Moreover, using a single dataset raises concerns about the generalization of the methods. Different datasets tend to present images of varying quality, which may come from different types of CT machines, reflecting the conditions of the countries and cities they come from. In order to address these two problems, in this work, we propose an efficient deep learning technique for the screening of COVID-19 with a voting-based approach. In this approach, the images from a given patient are classified as a group in a voting system. The approach is tested on the two biggest datasets for COVID-19 CT analysis with a patient-based split. A cross-dataset study is also presented to assess the robustness of the models in a more realistic scenario in which data come from different distributions. The cross-dataset analysis has shown that the generalization power of deep learning models is far from acceptable for the task, since accuracy drops from 87.68% to 56.16% in the best evaluation scenario. These results highlight that methods aiming at COVID-19 detection in CT images must improve significantly to be considered a clinical option, and that larger and more diverse datasets are needed to evaluate the methods in a realistic scenario.
Informatics in medicine unlocked
"2020-09-22T00:00:00"
[ "PedroSilva", "EduardoLuz", "GuilhermeSilva", "GladstonMoreira", "RodrigoSilva", "DiegoLucio", "DavidMenotti" ]
10.1016/j.imu.2020.100427 10.1101/2020.03.30.20047456 10.1101/2020.02.14.20023028
Detection of COVID-19 from Chest X-Ray Images Using Convolutional Neural Networks.
The detection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which is responsible for coronavirus disease 2019 (COVID-19), using chest X-ray images has life-saving importance for both patients and doctors. In addition, in countries that are unable to purchase laboratory kits for testing, this becomes even more vital. In this study, we aimed to present the use of deep learning for the high-accuracy detection of COVID-19 using chest X-ray images. Publicly available X-ray images (1583 healthy, 4292 pneumonia, and 225 confirmed COVID-19) were used in the experiments, which involved the training of deep learning and machine learning classifiers. Thirty-eight experiments were performed using convolutional neural networks, 10 experiments were performed using five machine learning models, and 14 experiments were performed using state-of-the-art pre-trained networks for transfer learning. Images and statistical data were considered separately in the experiments to evaluate the performances of the models, and eightfold cross-validation was used. A mean sensitivity of 93.84%, mean specificity of 99.18%, mean accuracy of 98.50%, and mean receiver operating characteristic area under the curve score of 96.51% were achieved. A convolutional neural network without pre-processing and with minimized layers is capable of detecting COVID-19 in a limited number of, and imbalanced, chest X-ray images.
SLAS technology
"2020-09-20T00:00:00"
[ "BoranSekeroglu", "IlkerOzsahin" ]
10.1177/2472630320958376
Initial chest radiographs and artificial intelligence (AI) predict clinical outcomes in COVID-19 patients: analysis of 697 Italian patients.
To evaluate whether the initial chest X-ray (CXR) severity assessed by an AI system may have prognostic utility in patients with COVID-19. This retrospective single-center study included adult patients presenting to the emergency department (ED) between February 25 and April 9, 2020, with SARS-CoV-2 infection confirmed on real-time reverse transcriptase polymerase chain reaction (RT-PCR). Initial CXRs obtained on ED presentation were evaluated by a deep learning artificial intelligence (AI) system and compared with the Radiographic Assessment of Lung Edema (RALE) score, calculated by two experienced radiologists. Death and critical COVID-19 (admission to the intensive care unit (ICU) or death occurring before ICU admission) were identified as clinical outcomes. Independent predictors of adverse outcomes were evaluated by multivariate analyses. Six hundred ninety-seven (697) patients were included in the study: 465 males (66.7%), median age of 62 years (IQR 52-75). Multivariate analyses adjusting for demographics and comorbidities showed that an AI system-based score ≥ 30 on the initial CXR was an independent predictor both for mortality (HR 2.60 (95% CI 1.69-3.99; p < 0.001)) and critical COVID-19 (HR 3.40 (95% CI 2.35-4.94; p < 0.001)). Other independent predictors were RALE score, older age, male sex, coronary artery disease, COPD, and neurodegenerative disease. AI- and radiologist-assessed disease severity scores on CXRs obtained on ED presentation were independent and comparable predictors of adverse outcomes in patients with COVID-19. ClinicalTrials.gov NCT04318366 ( https://clinicaltrials.gov/ct2/show/NCT04318366 ). • An AI system-based score ≥ 30 and a RALE score ≥ 12 on CXRs performed at ED presentation are independent and comparable predictors of death and/or ICU admission in COVID-19 patients. • Other independent predictors are older age, male sex, coronary artery disease, COPD, and neurodegenerative disease. • The comparable performance of the AI system relative to a radiologist-assessed score in predicting adverse outcomes may represent a game-changer in resource-constrained settings.
European radiology
"2020-09-19T00:00:00"
[ "JunaidMushtaq", "RenatoPennella", "SalvatoreLavalle", "AnnaColarieti", "StephanieSteidler", "Carlo M AMartinenghi", "DiegoPalumbo", "AntonioEsposito", "PatriziaRovere-Querini", "MorenoTresoldi", "GiovanniLandoni", "FabioCiceri", "AlbertoZangrillo", "FrancescoDe Cobelli" ]
10.1007/s00330-020-07269-8 10.1016/S0140-6736(20)30633-4 10.1016/j.chest.2020.04.003 10.3348/kjr.2020.0132 10.1016/j.amjmed.2004.03.020 10.1148/radiol.2332031649 10.2214/ajr.184.3.01840734 10.1148/radiol.2462070712 10.1136/thoraxjnl-2017-211280 10.1007/s11604-020-00975-9
Development of a volumetric pancreas segmentation CT dataset for AI applications through trained technologists: a study during the COVID 19 containment phase.
To evaluate the performance of trained technologists vis-à-vis radiologists for volumetric pancreas segmentation and to assess the impact of supplementary training on their performance. In this IRB-approved study, 22 technologists were trained in pancreas segmentation on portal venous phase CT through radiologist-led interactive videoconferencing sessions based on an image-rich curriculum. Technologists segmented pancreas in 188 CTs using freehand tools on custom image-viewing software. Subsequent supplementary training included multimedia videos focused on common errors, which were followed by second batch of 159 segmentations. Two radiologists reviewed all cases and corrected inaccurate segmentations. Technologists' segmentations were compared against radiologists' segmentations using Dice-Sorenson coefficient (DSC), Jaccard coefficient (JC), and Bland-Altman analysis. Corrections were made in 71 (38%) cases from first batch [26 (37%) oversegmentations and 45 (63%) undersegmentations] and in 77 (48%) cases from second batch [12 (16%) oversegmentations and 65 (84%) undersegmentations]. DSC, JC, false positive (FP), and false negative (FN) [mean (SD)] in first versus second batches were 0.63 (0.15) versus 0.63 (0.16), 0.48 (0.15) versus 0.48 (0.15), 0.29 (0.21) versus 0.21 (0.10), and 0.36 (0.20) versus 0.43 (0.19), respectively. Differences were not significant (p > 0.05). However, range of mean pancreatic volume difference reduced in the second batch [- 2.74 cc (min - 92.96 cc, max 87.47 cc) versus - 23.57 cc (min - 77.32, max 30.19)]. Trained technologists could perform volumetric pancreas segmentation with reasonable accuracy despite its complexity. Supplementary training further reduced range of volume difference in segmentations. Investment into training technologists could augment and accelerate development of body imaging datasets for AI applications.
Abdominal radiology (New York)
"2020-09-18T00:00:00"
[ "GarimaSuman", "AnanyaPanda", "PanagiotisKorfiatis", "Marie EEdwards", "SushilGarg", "Daniel JBlezek", "Suresh TChari", "Ajit HGoenka" ]
10.1007/s00261-020-02741-x 10.1148/radiol.2020192224 10.2214/AJR.18.19914 10.2214/AJR.18.19970 10.1016/j.acra.2019.08.014 10.1007/s00261-018-1793-8 10.1080/17474124.2018.1496015 10.1007/s00330-018-5865-5 10.1038/ajg.2014.1 10.1148/radiol.2016152547 10.1016/j.jacr.2020.05.004 10.1016/j.mri.2012.05.001 10.11613/BM.2015.015 10.1148/radiol.11110938 10.2214/AJR.17.18665 10.2214/AJR.19.22087 10.1093/nsr/nwx106
Development and clinical implementation of tailored image analysis tools for COVID-19 in the midst of the pandemic: The synergetic effect of an open, clinically embedded software development platform and machine learning.
During the emerging COVID-19 pandemic, radiology departments faced a substantial increase in chest CT admissions coupled with the novel demand for quantification of pulmonary opacities. This article describes how our clinic implemented an automated software solution for this purpose into an established software platform in 10 days. The underlying hypothesis was that modern academic centers in radiology are capable of developing and implementing such tools by their own efforts and fast enough to meet the rapidly increasing clinical needs in the wake of a pandemic. Deep convolutional neural network algorithms for lung segmentation and opacity quantification on chest CTs were trained using semi-automatically and manually created ground-truth (N The final algorithm was available at day 10 and achieved human-like performance (Dice coefficient = 0.97). For opacity quantification, a slight underestimation was seen both for the in-house (1.8 %) and for the external algorithm (0.9 %). In contrast to the external reference, the underestimation for the in-house algorithm showed no dependency on total opacity load, making it more suitable for follow-up. The combination of machine learning and a clinically embedded software development platform enabled time-efficient development, instant deployment, and rapid adoption in clinical routine. The algorithm for fully automated lung segmentation and opacity quantification that we developed in the midst of the COVID-19 pandemic was ready for clinical use within just 10 days and achieved human-level performance even in complex cases.
European journal of radiology
"2020-09-15T00:00:00"
[ "ConstantinAnastasopoulos", "ThomasWeikert", "ShanYang", "AhmedAbdulkadir", "LenaSchmülling", "ClaudiaBühler", "FabianoPaciolla", "RaphaelSexauer", "JoshyCyriac", "IvanNesic", "RaphaelTwerenbold", "JensBremerich", "BramStieltjes", "Alexander WSauter", "GregorSommer" ]
10.1016/j.ejrad.2020.109233 10.1148/radiol.2020200432 10.1148/radiol.2020200642 10.1007/s00330-020-06865-y 10.1148/radiol.2020201365 10.1007/s00330-020-06915-5 10.1148/radiol.2020200230 10.1148/radiol.2020200823 10.1148/radiol.2020200463 10.1148/radiol.2020200843 10.1148/ryct.2020200044 10.1148/ryct.2020200082 10.1148/radiol.2020201473 10.1148/ryct.2020200047 10.1148/radiol.2020201433 10.1148/radiol.2020200905 10.1164/ajrccm.164.9.2103121 10.1145/3299887.3299891 10.1148/ryai.2020200029 10.1148/ryct.2020200075 10.1007/s00330-020-06672-5
A deep learning approach to detect Covid-19 coronavirus with X-Ray images.
Rapid and accurate detection of the COVID-19 coronavirus is urgently needed to prevent and control this pandemic through timely quarantine and medical treatment in the absence of any vaccine. The daily increase in COVID-19 cases worldwide and the limited number of available detection kits make it difficult to identify the presence of the disease. Therefore, at this point in time, it is necessary to look for alternatives. Among existing, widely available, and low-cost resources, X-ray is a frequently used imaging modality, and deep learning techniques have achieved state-of-the-art performance in computer-aided medical diagnosis. Therefore, an alternative diagnostic tool to detect COVID-19 cases utilizing available resources and advanced deep learning techniques is proposed in this work. The proposed method is implemented in four phases, viz., data augmentation, preprocessing, and stage-I and stage-II deep network model design. This study is performed with online available resources of 1215 images and is further strengthened by data augmentation techniques, which provide better generalization of the model and prevent overfitting by increasing the overall size of the dataset to 1832 images. A deep network implemented in two stages is designed to differentiate COVID-19-induced pneumonia from healthy cases and from bacterial and other virus-induced pneumonia on chest X-ray images. Comprehensive evaluations have been performed to demonstrate the effectiveness of the proposed method with both (i) training-validation-testing and (ii) 5-fold cross-validation procedures. High classification accuracy (97.77%), recall (97.14%), and precision (97.14%) for COVID-19 detection show the efficacy of the proposed method in the present time of need. Further, the deep network architecture, showing averaged accuracy/sensitivity/specificity/precision/F1-score of 98.93/98.93/98.66/96.39/98.15 with 5-fold cross-validation, makes for a promising outcome in COVID-19 detection using X-ray images.
Biocybernetics and biomedical engineering
"2020-09-15T00:00:00"
[ "GovardhanJain", "DeeptiMittal", "DakshThakur", "Madhup KMittal" ]
10.1016/j.bbe.2020.08.008 10.1007/s00134-020-05990-y 10.1038/s41591-020-0817-4 10.2807/1560-7917 10.3201/eid2606.200301 10.1148/radiol.2018180921 10.1002/jmri.26534 10.1007/s13246-020-00865-4 10.1016/j.compbiomed.2020.103792 10.33889/IJMEMS.2020.5.4.052 10.3389/fmed.2020.00427 10.1016/j.chaos.2020.109944 10.2174/1573405616666200604163954 10.3390/app10134640 10.1109/TMI.2020.2995965 10.1007/s10096-020-03901-z 10.1007/s10489-020-01826-w 10.1016/j.imu.2020.100391 10.1007/s40846-020-00529-4 10.1109/CVPR.2009.5206848 10.1109/CVPR.2017.243
COVID-19 pathways for brain and heart injury in comorbidity patients: A role of medical imaging and artificial intelligence-based COVID severity classification: A review.
Artificial intelligence (AI) has penetrated the field of medicine, particularly the field of radiology. Since its emergence, the highly virulent coronavirus disease 2019 (COVID-19) has infected over 10 million people, leading to over 500,000 deaths as of July 1st, 2020. Since the outbreak began, almost 28,000 articles about COVID-19 have been published (https://pubmed.ncbi.nlm.nih.gov); however, few have explored the role of imaging and artificial intelligence in COVID-19 patients, specifically those with comorbidities. This paper begins by presenting the four pathways that can lead to heart and brain injuries following a COVID-19 infection. Our survey also offers insights into the role that imaging can play in the treatment of comorbid patients, based on probabilities derived from COVID-19 symptom statistics. Such symptoms include myocardial injury, hypoxia, plaque rupture, arrhythmias, venous thromboembolism, coronary thrombosis, encephalitis, ischemia, inflammation, and lung injury. At its core, this study considers the role of image-based AI, which can be used to characterize the tissues of a COVID-19 patient and classify the severity of their infection. Image-based AI is more important than ever as the pandemic surges and countries worldwide grapple with limited medical resources for detection and diagnosis.
Computers in biology and medicine
"2020-09-13T00:00:00"
[ "Jasjit SSuri", "AnudeepPuvvula", "MainakBiswas", "MishaMajhail", "LucaSaba", "GavinoFaa", "Inder MSingh", "RonaldOberleitner", "MonikaTurk", "Paramjit SChadha", "Amer MJohri", "J MiguelSanches", "Narendra NKhanna", "KlaudijaViskovic", "SophieMavrogeni", "John RLaird", "GyanPareek", "MartinMiner", "David WSobel", "AntonellaBalestrieri", "Petros PSfikakis", "GeorgeTsoulfas", "AthanasiosProtogerou", "Durga PrasannaMisra", "VikasAgarwal", "George DKitas", "PuneetAhluwalia", "RaghuKolluri", "JagjitTeji", "Mustafa AlMaini", "AnnAgbakoba", "Surinder KDhanjil", "MeyypanSockalingam", "AjitSaxena", "AndrewNicolaides", "AdityaSharma", "VijayRathore", "Janet N AAjuluchukwu", "MostafaFatemi", "AzraAlizad", "VijayViswanathan", "Pudukode RKrishnan", "SubbaramNaidu" ]
10.1016/j.compbiomed.2020.103960
Contrastive Cross-Site Learning With Redesigned Net for COVID-19 CT Classification.
The pandemic of coronavirus disease 2019 (COVID-19) has led to a global public health crisis spreading to hundreds of countries. With the continuous growth of new infections, developing automated tools for COVID-19 identification with CT images is highly desired to assist the clinical diagnosis and reduce the tedious workload of image interpretation. To enlarge the datasets for developing machine learning methods, it is essentially helpful to aggregate the cases from different medical systems for learning robust and generalizable models. This paper proposes a novel joint learning framework to perform accurate COVID-19 identification by effectively learning with heterogeneous datasets with distribution discrepancy. We build a powerful backbone by redesigning the recently proposed COVID-Net in aspects of network architecture and learning strategy to improve the prediction accuracy and learning efficiency. On top of our improved backbone, we further explicitly tackle the cross-site domain shift by conducting separate feature normalization in latent space. Moreover, we propose to use a contrastive training objective to enhance the domain invariance of semantic embeddings for boosting the classification performance on each dataset. We develop and evaluate our method with two public large-scale COVID-19 diagnosis datasets made up of CT images. Extensive experiments show that our approach consistently improves the performances on both datasets, outperforming the original COVID-Net trained on each dataset by 12.16% and 14.23% in AUC, respectively, also exceeding existing state-of-the-art multi-site learning methods.
IEEE journal of biomedical and health informatics
"2020-09-12T00:00:00"
[ "ZhaoWang", "QuandeLiu", "QiDou" ]
10.1109/JBHI.2020.3023246
Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network.
Chest X-ray is the first imaging technique that plays an important role in the diagnosis of COVID-19 disease. Due to the high availability of large-scale annotated image datasets, great success has been achieved using convolutional neural networks (
Applied intelligence (Dordrecht, Netherlands)
"2020-09-05T00:00:00"
[ "AsmaaAbbas", "Mohammed MAbdelsamea", "Mohamed MedhatGaber" ]
10.1007/s10489-020-01829-7 10.1109/ACCESS.2020.2989273 10.1109/TMI.2016.2535865 10.1109/TMI.2013.2290491 10.1080/21681163.2015.1124249 10.1109/TMI.2013.2284099 10.1016/j.cmpb.2013.10.011 10.1007/s10916-016-0539-9 10.1109/TKDE.2009.191 10.3846/tede.2010.47 10.1109/TMI.2016.2528162 10.1016/j.ipm.2009.03.002 10.1016/0169-7439(87)80084-9 10.1007/s10618-009-0146-1 10.1007/s10115-007-0114-2
Ultra-low-dose chest CT imaging of COVID-19 patients using a deep residual neural network.
The current study aimed to design an ultra-low-dose CT examination protocol using a deep learning approach suitable for clinical diagnosis of COVID-19 patients. In this study, 800, 170, and 171 pairs of ultra-low-dose and full-dose CT images were used as input/output for the training, test, and external validation sets, respectively, to implement the full-dose prediction technique. A residual convolutional neural network was applied to generate full-dose from ultra-low-dose CT images. The quality of predicted CT images was assessed using root mean square error (RMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Scores ranging from 1 to 5 were assigned reflecting subjective assessment of image quality and related COVID-19 features, including ground glass opacities (GGO), crazy paving (CP), consolidation (CS), nodular infiltrates (NI), bronchovascular thickening (BVT), and pleural effusion (PE). The radiation dose was assessed in terms of the CT dose index (CTDI). The results demonstrated that the deep learning algorithm is capable of predicting standard full-dose CT images with acceptable quality for the clinical diagnosis of COVID-19 positive patients with substantial radiation dose reduction. • Ultra-low-dose CT imaging of COVID-19 patients would result in the loss of critical information about lesion types, which could potentially affect clinical diagnosis. • Deep learning-based prediction of full-dose from ultra-low-dose CT images for the diagnosis of COVID-19 could reduce the radiation dose by up to 89%. • Deep learning algorithms failed to recover the correct lesion structure/density for a number of patients considered outliers, and as such, further research and development is warranted to address these limitations.
European radiology
"2020-09-04T00:00:00"
[ "IsaacShiri", "AzadehAkhavanallaf", "AmirhosseinSanaat", "YazdanSalimi", "DariushAskari", "ZahraMansouri", "Sajad PShayesteh", "MohammadHasanian", "KiaraRezaei-Kalantari", "AliSalahshour", "SalehSandoughdaran", "HamidAbdollahi", "HosseinArabi", "HabibZaidi" ]
10.1007/s00330-020-07225-6 10.1001/jama.2020.2648 10.1016/j.ijantimicag.2020.105924 10.1128/JCM.00512-20 10.1016/j.jacr.2020.03.006 10.1148/radiol.2020190389 10.1002/mp.14000 10.1007/s40134-012-0003-7 10.3348/kjr.2012.13.1.1 10.1002/mp.13666 10.1016/j.acra.2020.04.016 10.1007/s11547-020-01179-x 10.1007/s00330-020-06809-6 10.1148/ryct.2020200196 10.1007/s00330-004-2403-4 10.1007/s00330-006-0545-2 10.1016/j.media.2017.07.005 10.1002/mp.13264 10.1109/TMI.2018.2827462 10.1002/mp.13713 10.3348/kjr.2019.0413 10.1007/s10278-019-00274-4 10.2214/AJR.14.13613 10.1109/TNS.2015.2467219 10.1002/mp.13619 10.1001/jama.2010.973 10.1016/j.ejrad.2016.10.021
Visual and software-based quantitative chest CT assessment of COVID-19: correlation with clinical findings.
The aim of this study was to evaluate visual and software-based quantitative assessment of parenchymal changes and normal lung parenchyma in patients with coronavirus disease 2019 (COVID-19) pneumonia. The secondary aim of the study was to compare the radiologic findings with clinical and laboratory data. Patients with COVID-19 who underwent chest computed tomography (CT) between March 11, 2020 and April 15, 2020 were retrospectively evaluated. Clinical and laboratory findings of patients with abnormal findings on chest CT and PCR-evidence of COVID-19 infection were recorded. Visual quantitative assessment score (VQAS) was performed according to the extent of lung opacities. Software-based quantitative assessment of the normal lung parenchyma percentage (SQNLP) was automatically quantified by a deep learning software. The presence of consolidation and crazy paving pattern (CPP) was also recorded. Statistical analyses were performed to evaluate the correlation between quantitative radiologic assessments, and clinical and laboratory findings, as well as to determine the predictive utility of radiologic findings for estimating severe pneumonia and admission to intensive care unit (ICU). A total of 90 patients were enrolled. Both VQAS and SQNLP were significantly correlated with multiple clinical parameters. While VQAS >8.5 (sensitivity, 84.2%; specificity, 80.3%) and SQNLP <82.45% (sensitivity, 83.1%; specificity, 84.2%) were related to severe pneumonia, VQAS >9.5 (sensitivity, 93.3%; specificity, 86.5%) and SQNLP <81.1% (sensitivity, 86.5%; specificity, 86.7%) were predictive of ICU admission. Both consolidation and CPP were more commonly seen in patients with severe pneumonia than patients with nonsevere pneumonia (P = 0.197 for consolidation; P < 0.001 for CPP). Moreover, the presence of CPP showed high specificity (97.2%) for severe pneumonia. 
Both SQNLP and VQAS were significantly related to the clinical findings, highlighting their clinical utility in predicting severe pneumonia, ICU admission, length of hospital stay, and management of the disease. On the other hand, presence of CPP has high specificity for severe COVID-19 pneumonia.
Diagnostic and interventional radiology (Ankara, Turkey)
"2020-09-03T00:00:00"
[ "GamzeDurhan", "SelinArdalı Düzgün", "FigenBaşaran Demirkazık", "İlimIrmak", "İlkayİdilman", "MeltemGülsün Akpınar", "ErhanAkpınar", "SerpilÖcal", "GülçinTelli", "ArzuTopeli", "Orhan MacitArıyürek" ]
10.5152/dir.2020.20407 10.1016/S0140-6736(20)30183-5 10.1016/S0140-6736(20)30251-8 10.1001/jama.2020.1585 10.1016/S2213-2600(20)30076-X 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1148/radiol.2020200230 10.1007/s00330-020-06817-6 10.1148/radiol.2020201433 10.1148/ryct.2020200075 10.1148/rg.2020190099 10.1148/radiol.2020200370 10.2214/AJR.20.23078 10.2214/ajr.175.5.1751329 10.1016/j.jaci.2020.04.006 10.1186/s13054-020-2833-7 10.1007/s11547-020-01202-1 10.1172/JCI137244 10.1016/j.medmal.2020.04.006 10.1148/ryct.2020200047 10.3348/kjr.2020.0171 10.1097/RLI.0000000000000689 10.1097/RLI.0000000000000672 10.1016/j.jtho.2020.02.010 10.1101/2020.04.19.20054262 10.1016/j.trsl.2020.04.007 10.1148/rg.273065194
Toward automated severe pharyngitis detection with smartphone camera using deep learning networks.
Severe pharyngitis is frequently associated with inflammations caused by streptococcal pharyngitis, which can cause immune-mediated and post-infectious complications. The recent global pandemic of coronavirus disease (COVID-19) encourages the use of telemedicine for patients with respiratory symptoms. This study, therefore, proposes automated detection of severe pharyngitis using a deep learning framework with self-taken throat images. A dataset composed of two classes of 131 throat images with pharyngitis and 208 normal throat images was collected. Before training the classifier, we constructed a cycle consistency generative adversarial network (CycleGAN) to augment the training dataset. The ResNet50, Inception-v3, and MobileNet-v2 architectures were trained with transfer learning and validated using a randomly selected test dataset. The performance of the models was evaluated based on the accuracy and area under the receiver operating characteristic curve (ROC-AUC). The CycleGAN-based synthetic images reflected the pragmatic characteristic features of pharyngitis. Using the synthetic throat images, the deep learning model demonstrated a significant improvement in the accuracy of the pharyngitis diagnosis. ResNet50 with GAN-based augmentation showed the best ROC-AUC of 0.988 for pharyngitis detection in the test dataset. In the 4-fold cross-validation using the ResNet50, the highest detection accuracy and ROC-AUC achieved were 95.3% and 0.992, respectively. The deep learning model for smartphone-based pharyngitis screening allows fast identification of severe pharyngitis, with the potential for timely diagnosis. In the recent pandemic of COVID-19, this framework will help improve convenience of diagnosis for patients with upper respiratory symptoms and reduce transmission.
Computers in biology and medicine
"2020-09-02T00:00:00"
[ "Tae KeunYoo", "Joon YulChoi", "YounilJang", "EinOh", "Ik HeeRyu" ]
10.1016/j.compbiomed.2020.103980 10.1089/tmj.2020.0099 10.1177/0194599820931827 10.4218/etrij.2018-0428 10.1007/s11042-019-08130-x 10.1038/s41746-020-0282-y 10.1128/JCM.00811-19 10.1542/peds.2009-2648 10.1186/s12887-019-1393-y 10.1542/peds.2017-2033 10.3390/s19153307 10.1016/S2589-7500(20)30001-7 10.1016/j.neucom.2018.09.013 10.1007/s00417-020-04709-5 10.1007/s42452-020-3000-0 10.1016/j.media.2020.101794 10.1371/journal.pone.0187336 10.1080/07391102.2020.1767212 10.1016/j.cell.2018.02.010 10.1007/s11517-018-1915-z 10.7717/peerj.8668 10.1148/ryai.2019190015 10.1109/JBHI.2019.2949601 10.1109/ACCESS.2018.2874767 10.1155/2016/6584725 10.1101/2020.03.20.000133 10.1111/mice.12387 10.1136/bmj.m1182 10.1016/j.jid.2020.01.019 10.1016/j.oooo.2015.11.005 10.4103/ijo.IJO_544_18 10.1016/j.eswa.2019.06.070 10.1007/978-981-13-8950-4_27 10.1007/s11548-019-02092-z 10.1016/j.compbiomed.2020.103698 10.1038/s42256-019-0137-x 10.1016/j.compbiomed.2020.103628 10.1093/sleep/29.7.903 10.1371/journal.pone.0191493
Development and Validation of a Deep Learning-Based Model Using Computed Tomography Imaging for Predicting Disease Severity of Coronavirus Disease 2019.
Coronavirus disease 2019 (COVID-19) is sweeping the globe and has resulted in infections in millions of people. Patients with COVID-19 face a high fatality risk once symptoms worsen; therefore, early identification of severely ill patients can enable early intervention, prevent disease progression, and help reduce mortality. This study aims to develop an artificial intelligence-assisted tool using computed tomography (CT) imaging to predict disease severity and further estimate the risk of developing severe disease in patients suffering from COVID-19. Initial CT images of 408 confirmed COVID-19 patients were retrospectively collected between January 1, 2020 and March 18, 2020 from hospitals in Honghu and Nanchang. The data of 303 patients in the People's Hospital of Honghu were assigned as the training data, and those of 105 patients in The First Affiliated Hospital of Nanchang University were assigned as the test dataset. A deep learning based-model using multiple instance learning and residual convolutional neural network (ResNet34) was developed and validated. The discrimination ability and prediction accuracy of the model were evaluated using the receiver operating characteristic curve and confusion matrix, respectively. The deep learning-based model had an area under the curve (AUC) of 0.987 (95% confidence interval [CI]: 0.968-1.00) and an accuracy of 97.4% in the training set, whereas it had an AUC of 0.892 (0.828-0.955) and an accuracy of 81.9% in the test set. In the subgroup analysis of patients who had non-severe COVID-19 on admission, the model achieved AUCs of 0.955 (0.884-1.00) and 0.923 (0.864-0.983) and accuracies of 97.0 and 81.6% in the Honghu and Nanchang subgroups, respectively. Our deep learning-based model can accurately predict disease severity as well as disease progression in COVID-19 patients using CT imaging, offering promise for guiding clinical treatment.
Frontiers in bioengineering and biotechnology
"2020-08-28T00:00:00"
[ "Lu-ShanXiao", "PuLi", "FenglongSun", "YanpeiZhang", "ChenghaiXu", "HongboZhu", "Feng-QinCai", "Yu-LinHe", "Wen-FengZhang", "Si-CongMa", "ChenyiHu", "MengchunGong", "LiLiu", "WenzhaoShi", "HongZhu" ]
10.3389/fbioe.2020.00898 10.1109/tmi.2016.2535865 10.1038/s41591-019-0508-1 10.1148/radiol.2363040958 10.1101/2020.02.25.20021568 10.1016/s0140-6736(20)30211-7 10.2214/AJR.14.13671 10.1097/rli.0000000000000127 10.1101/2020.02.19.20025296 10.1101/2020.03.28.20046045 10.1101/2020.03.17.20037515 10.1148/radiol.2020200905 10.7150/thno.46569 10.1001/jamainternmed.2020.2033 10.36227/techrxiv.12156522 10.1097/CM9.0000000000000819 10.1186/1471-2105-12-77 10.1007/s10916-020-01562-1 10.1016/j.neunet.2014.09.003 10.1016/j.ijsu.2020.02.034 10.1101/2020.02.23.20026930 10.1148/radiol.2020200843 10.7150/thno.46833 10.1101/2020.02.10.20021675 10.1148/radiol.2020200370 10.1101/2020.03.12.20027185 10.1056/NEJMoa2001017
Digital pathology and computational image analysis in nephropathology.
The emergence of digital pathology - an image-based environment for the acquisition, management and interpretation of pathology information supported by computational techniques for data extraction and analysis - is changing the pathology ecosystem. In particular, by virtue of our new-found ability to generate and curate digital libraries, the field of machine vision can now be effectively applied to histopathological subject matter by individuals who do not have deep expertise in machine vision techniques. Although these novel approaches have already advanced the detection, classification, and prognostication of diseases in the fields of radiology and oncology, renal pathology is just entering the digital era, with the establishment of consortia and digital pathology repositories for the collection, analysis and integration of pathology data with other domains. The development of machine-learning approaches for the extraction of information from image data allows for tissue interrogation in a way that was not previously possible. The application of these novel tools is placing pathology centre stage in the process of defining new, integrated, biologically and clinically homogeneous disease categories, identifying patients at risk of progression, and shifting current paradigms for the treatment and prevention of kidney diseases.
Nature reviews. Nephrology
"2020-08-28T00:00:00"
[ "LauraBarisoni", "Kyle JLafata", "Stephen MHewitt", "AnantMadabhushi", "Ulysses G JBalis" ]
10.1038/s41581-020-0321-6 10.1117/12.2008695 10.1093/jnci/djx137
Adaptive Feature Selection Guided Deep Forest for COVID-19 Classification With Chest CT.
Chest computed tomography (CT) has become an effective tool to assist the diagnosis of coronavirus disease-19 (COVID-19). Due to the outbreak of COVID-19 worldwide, using the computer-aided diagnosis technique for COVID-19 classification based on CT images could largely alleviate the burden of clinicians. In this paper, we propose an Adaptive Feature Selection guided Deep Forest (AFS-DF) for COVID-19 classification based on chest CT images. Specifically, we first extract location-specific features from CT images. Then, in order to capture the high-level representation of these features with the relatively small-scale data, we leverage a deep forest model to learn high-level representation of the features. Moreover, we propose a feature selection method based on the trained deep forest model to reduce the redundancy of features, where the feature selection could be adaptively incorporated with the COVID-19 classification model. We evaluated our proposed AFS-DF on a COVID-19 dataset with 1495 COVID-19 patients and 1027 community-acquired pneumonia (CAP) patients. The accuracy (ACC), sensitivity (SEN), specificity (SPE), AUC, precision and F1-score achieved by our method are 91.79%, 93.05%, 89.95%, 96.35%, 93.10% and 93.07%, respectively. Experimental results on the COVID-19 dataset suggest that the proposed AFS-DF achieves superior performance in COVID-19 vs. CAP classification, compared with 4 widely used machine learning methods.
IEEE journal of biomedical and health informatics
"2020-08-28T00:00:00"
[ "LiangSun", "ZhanhaoMo", "FuhuaYan", "LimingXia", "FeiShan", "ZhongxiangDing", "BinSong", "WanchunGao", "WeiShao", "FengShi", "HuanYuan", "HuitingJiang", "DijiaWu", "YingWei", "YaozongGao", "HeSui", "DaoqiangZhang", "DinggangShen" ]
10.1109/JBHI.2020.3019505
A systematic review on recent trends in transmission, diagnosis, prevention and imaging features of COVID-19.
As new cases of COVID-19 have been growing every day since January 2020, the major way to control the spread has been through early diagnosis. Prevention and early diagnosis are the key strategies followed by most countries. This study presents the perspective of different modes of transmission of coronavirus, especially during clinical practices and among pediatrics. Further, the diagnostic methods and the advancement of computerized tomography have been discussed. Droplets, aerosols, and close contact are the significant factors that transfer the infection to a suspect. This study predicts the possible transmission of the virus through medical practices such as ophthalmology, dental, and endoscopy procedures. With regard to pediatric transmission, as of now, only a few child fatalities have been reported. Children usually respond to respiratory viruses; however, the COVID-19 response is on the contrary. The possibility of getting infected is minimal for newborns. There has been no asymptomatic spread in children until now. Moreover, breastfeeding would not transmit COVID-19, which is encouraging hygiene news for pediatrics. In addition, the current diagnostic methods for COVID-19, including immunoglobulin M (IgM) and immunoglobulin G (IgG) assays, chest computed tomography (CT) scan, reverse transcription-polymerase chain reaction (RT-PCR), and immunochromatographic fluorescence assay, are also discussed in detail. The introduction of artificial intelligence and deep learning algorithms has the ability to diagnose COVID-19 precisely. However, the development of a potential technology for the identification of the infection, such as a drone with thermal screening without human intervention, needs to be encouraged.
Process biochemistry (Barking, London, England)
"2020-08-28T00:00:00"
[ "SManigandan", "Ming-TsangWu", "Vinoth KumarPonnusamy", "Vinay BRaghavendra", "ArivalaganPugazhendhi", "KathirvelBrindhadevi" ]
10.1016/j.procbio.2020.08.016 10.1016/S0140-6736(20)30360-3 10.1148/radiol.2020200230 10.1007/s12630-020-01627-2 10.1056/NEJMc2022236 10.1038/s41577-020-0394-2
[Research on coronavirus disease 2019 (COVID-19) detection method based on depthwise separable DenseNet in chest X-ray images].
Coronavirus disease 2019 (COVID-19) has spread rapidly around the world. In order to diagnose COVID-19 more quickly, this paper proposed a depthwise separable DenseNet. A deep learning model was constructed with 2905 chest X-ray images as the experimental dataset. To enhance contrast, the contrast limited adaptive histogram equalization (CLAHE) algorithm was used to preprocess the X-ray images before network training; the images were then fed into the training network and the network parameters were tuned to the optimum. Meanwhile, Leaky ReLU was selected as the activation function. The VGG16, ResNet18, ResNet34, DenseNet121 and SDenseNet models were compared with the model proposed in this paper. Compared with ResNet34, the proposed pneumonia classification model improved accuracy, sensitivity and specificity by 2.0%, 2.3% and 1.5%, respectively. Compared with the SDenseNet network without depthwise separable convolution, the number of parameters of the proposed model was reduced by 43.9%, while the classification performance did not decrease. These results show that the proposed DWSDenseNet achieves good classification performance on the COVID-19 chest X-ray image dataset, and that depthwise separable convolution can effectively reduce the number of model parameters while preserving accuracy.
Sheng wu yi xue gong cheng xue za zhi = Journal of biomedical engineering = Shengwu yixue gongchengxue zazhi
"2020-08-26T00:00:00"
[ "YiboFeng", "DaweiQiu", "HuiCao", "JunzhongZhang", "ZaihaiXin", "JingLiu" ]
10.7507/1001-5515.202005056
A Deep Learning System to Screen Novel Coronavirus Disease 2019 Pneumonia.
The real-time reverse transcription-polymerase chain reaction (RT-PCR) detection of viral RNA from sputum or nasopharyngeal swab had a relatively low positive rate in the early stage of coronavirus disease 2019 (COVID-19). Meanwhile, the manifestations of COVID-19 as seen through computed tomography (CT) imaging show individual characteristics that differ from those of other types of viral pneumonia such as influenza-A viral pneumonia (IAVP). This study aimed to establish an early screening model to distinguish COVID-19 from IAVP and healthy cases through pulmonary CT images using deep learning techniques. A total of 618 CT samples were collected: 219 samples from 110 patients with COVID-19 (mean age 50 years; 63 (57.3%) male patients); 224 samples from 224 patients with IAVP (mean age 61 years; 156 (69.6%) male patients); and 175 samples from 175 healthy cases (mean age 39 years; 97 (55.4%) male patients). All CT samples were contributed by three COVID-19-designated hospitals in Zhejiang Province, China. First, the candidate infection regions were segmented out from the pulmonary CT image set using a 3D deep learning model. These separated images were then categorized into the COVID-19, IAVP, and irrelevant to infection (ITI) groups, together with the corresponding confidence scores, using a location-attention classification model. Finally, the infection type and overall confidence score for each CT case were calculated using the Noisy-OR Bayesian function. Experimental results on the benchmark dataset showed an overall accuracy of 86.7% across all CT cases taken together. The deep learning models established in this study were effective for the early screening of COVID-19 patients and were demonstrated to be a promising supplementary diagnostic method for frontline clinical doctors.
Engineering (Beijing, China)
"2020-08-25T00:00:00"
[ "XiaoweiXu", "XiangaoJiang", "ChunlianMa", "PengDu", "XukunLi", "ShuangzhiLv", "LiangYu", "QinNi", "YanfeiChen", "JunweiSu", "GuanjingLang", "YongtaoLi", "HongZhao", "JunLiu", "KaijinXu", "LingxiangRuan", "JifangSheng", "YunqingQiu", "WeiWu", "TingboLiang", "LanjuanLi" ]
10.1016/j.eng.2020.04.010
Deep learning applications in pulmonary medical imaging: recent updates and insights on COVID-19.
Shortly after deep learning algorithms were applied to image analysis, and more importantly to medical imaging, their applications increased significantly to become a trend. Likewise, deep learning applications (DL) on pulmonary medical images emerged to achieve remarkable advances leading to promising clinical trials. Yet, coronavirus can be the real trigger to open the route for fast integration of DL in hospitals and medical centers. This paper reviews the development of deep learning applications in medical image analysis targeting pulmonary imaging and giving insights into contributions to COVID-19. It covers more than 160 contributions and surveys in this field, all issued between February 2017 and May 2020 inclusive, highlighting various deep learning tasks such as classification, segmentation, and detection, as well as different pulmonary pathologies like airway diseases, lung cancer, COVID-19 and other infections. It summarizes and discusses the current state-of-the-art approaches in this research domain, highlighting the challenges, especially with the current situation of the COVID-19 pandemic.
Machine vision and applications
"2020-08-25T00:00:00"
[ "HananFarhat", "George ESakr", "RimaKilany" ]
10.1007/s00138-020-01101-5 10.1007/s11548-019-01917-1 10.1146/annurev-bioeng-071516-044442 10.3322/caac.21552 10.3390/diagnostics9010029 10.1186/s12938-018-0544-y 10.1186/s13550-017-0260-9 10.1007/s10278-017-0028-9 10.3390/cancers11020212 10.14741/Ijcet/22774106/5.2.2015.121
Identifying COVID19 from Chest CT Images: A Deep Convolutional Neural Networks Based Approach.
Coronavirus Disease (COVID19) is a fast-spreading infectious disease that is currently causing a healthcare crisis around the world. Due to the current limitations of the reverse transcription-polymerase chain reaction (RT-PCR) based tests for detecting COVID19, radiology imaging based ideas have recently been proposed by various works. In this work, various Deep CNN based approaches are explored for detecting the presence of COVID19 from chest CT images. A decision fusion based approach is also proposed, which combines predictions from multiple individual models to produce a final prediction. Experimental results show that the proposed decision fusion based approach achieves above 86% across all performance metrics under consideration, with average AUROC and F1-Score being 0.883 and 0.867, respectively. The experimental observations suggest the potential applicability of such Deep CNN based approaches in real diagnostic scenarios, which could be of very high utility in terms of achieving fast testing for COVID19.
Journal of healthcare engineering
"2020-08-25T00:00:00"
[ "Arnab KumarMishra", "Sujit KumarDas", "PinkiRoy", "SivajiBandyopadhyay" ]
10.1155/2020/8843664 10.1101/2020.04.16.20064709v1 10.1101/2020.04.16.20064709 10.3892/etm.2020.8797 10.1109/CVPR.2017.243 10.1080/07391102.2020.1788642 10.1101/2020.03.12.20027185v2 10.1101/2020.03.12.20027185 10.1109/CVPR.2015.7298594 10.1109/CVPR.2016.308 10.1109/CVPR.2016.90
Efficient and Effective Training of COVID-19 Classification Networks With Self-Supervised Dual-Track Learning to Rank.
Coronavirus Disease 2019 (COVID-19) has rapidly spread worldwide since first reported. Timely diagnosis of COVID-19 is crucial both for disease control and patient care. Non-contrast thoracic computed tomography (CT) has been identified as an effective tool for the diagnosis, yet the disease outbreak has placed tremendous pressure on radiologists for reading the exams and may potentially lead to fatigue-related mis-diagnosis. Reliable automatic classification algorithms can be really helpful; however, they usually require a considerable number of COVID-19 cases for training, which is difficult to acquire in a timely manner. Meanwhile, how to effectively utilize the existing archive of non-COVID-19 data (the negative samples) in the presence of severe class imbalance is another challenge. In addition, the sudden disease outbreak necessitates fast algorithm development. In this work, we propose a novel approach for effective and efficient training of COVID-19 classification networks using a small number of COVID-19 CT exams and an archive of negative samples. Concretely, a novel self-supervised learning method is proposed to extract features from the COVID-19 and negative samples. Then, two kinds of soft-labels ('difficulty' and 'diversity') are generated for the negative samples by computing the earth mover's distances between the features of the negative and COVID-19 samples, from which data 'values' of the negative samples can be assessed. A pre-set number of negative samples are selected accordingly and fed to the neural network for training. Experimental results show that our approach can achieve superior performance using about half of the negative samples, substantially reducing model training time.
IEEE journal of biomedical and health informatics
"2020-08-21T00:00:00"
[ "YuexiangLi", "DongWei", "JiaweiChen", "ShileiCao", "HongyuZhou", "YanchunZhu", "JianrongWu", "LanLan", "WenboSun", "TianyiQian", "KaiMa", "HaiboXu", "YefengZheng" ]
10.1109/JBHI.2020.3018181
Determination of disease severity in COVID-19 patients using deep learning in chest X-ray images.
Chest X-ray plays a key role in diagnosis and management of COVID-19 patients and imaging features associated with clinical elements may assist with the development or validation of automated image analysis tools. We aimed to identify associations between clinical and radiographic features as well as to assess the feasibility of deep learning applied to chest X-rays in the setting of an acute COVID-19 outbreak. A retrospective study of X-rays, clinical, and laboratory data was performed from 48 SARS-CoV-2 RT-PCR positive patients (age 60±17 years, 15 women) between February 22 and March 6, 2020 from a tertiary care hospital in Milan, Italy. Sixty-five chest X-rays were reviewed by two radiologists for alveolar and interstitial opacities and classified by severity on a scale from 0 to 3. Clinical factors (age, symptoms, comorbidities) were investigated for association with opacity severity and also with placement of central line or endotracheal tube. Deep learning models were then trained for two tasks: lung segmentation and opacity detection. Imaging characteristics were compared to clinical datapoints using the unpaired Student's t-test or Mann-Whitney U test. Cohen's kappa analysis was used to evaluate the concordance of deep learning with conventional radiologist interpretation. Fifty-six percent of patients presented with alveolar opacities, 73% had interstitial opacities, and 23% had normal X-rays. The presence of alveolar or interstitial opacities was statistically correlated with age (P = 0.008) and comorbidities (P = 0.005). The extent of alveolar or interstitial opacities on baseline X-ray was significantly associated with the presence of endotracheal tube (P = 0.0008 and P = 0.049) or central line (P = 0.003 and P = 0.007). In comparison to human interpretation, the deep learning model achieved a kappa concordance of 0.51 for alveolar opacities and 0.71 for interstitial opacities. 
Chest X-ray analysis in an acute COVID-19 outbreak showed that the severity of opacities was associated with advanced age, comorbidities, as well as acuity of care. Artificial intelligence tools based upon deep learning of COVID-19 chest X-rays are feasible in the acute outbreak setting.
Diagnostic and interventional radiology (Ankara, Turkey)
"2020-08-21T00:00:00"
[ "MaximeBlain", "Michael TKassin", "NicoleVarble", "XiaosongWang", "ZiyueXu", "DaguangXu", "GianpaoloCarrafiello", "ValentinaVespro", "ElviraStellato", "Anna MariaIerardi", "Letizia DiMeglio", "RobertD Suh", "StephanieA Walker", "ShengXu", "ThomasH Sanford", "EvrimB Turkbey", "StephanieHarmon", "BarisTurkbey", "BradfordJ Wood" ]
10.5152/dir.2020.20205 10.1001/jama.2020.2648 10.1016/S0140-6736(20)30566-3 10.1016/S1473-3099(20)30195-X 10.1001/jamainternmed.2020.0994 10.1148/radiol.2020200642 10.1016/j.tmaid.2020.101627 10.2214/AJR.19.22688 10.1016/j.jacr.2020.02.008 10.1148/radiol.2020200823 10.1148/radiol.2020200432 10.1148/radiol.2020200330 10.3348/kjr.2020.0146 10.1148/radiol.2020200343 10.1016/j.clinimag.2020.02.008 10.1097/RLI.0000000000000672 10.1016/j.ejrad.2020.108941 10.2214/AJR.20.22976 10.1016/j.ejrad.2020.108972 10.1148/radiol.2020200463 10.1007/s00330-020-06731-x 10.1148/radiol.2020200370 10.1148/radiol.2020200843 10.1148/radiol.2020200230 10.1148/radiol.2020200241 10.1148/radiol.2020200274 10.1007/s00259-020-04720-2 10.1007/s00330-020-06801-0 10.2214/AJR.20.23154 10.1001/jama.2020.4326 10.1148/ryct.2020200034 10.3348/kjr.2020.0132 10.1148/radiol.2020200905 10.1002/cyto.a.23990 10.1007/s00330-020-06817-6 10.1007/978-3-319-24574-4_28 10.2214/ajr.174.1.1740071 10.1109/CVPR.2017.243 10.1109/CVPR.2017.369
Detection of coronavirus disease from X-ray images using deep learning and transfer learning algorithms.
This study aims to employ the advantages of computer vision and medical image analysis to develop an automated model that has the clinical potential for early detection of the novel coronavirus disease (COVID-19). This study applied a transfer learning method to develop deep learning models for detecting COVID-19. Three existing state-of-the-art deep learning models, namely Inception ResNetV2, InceptionNetV3 and NASNetLarge, were selected and fine-tuned to automatically detect and diagnose COVID-19 using chest X-ray images. A dataset involving 850 images with confirmed COVID-19, 500 images of community-acquired (non-COVID-19) pneumonia cases and 915 normal chest X-ray images was used in this study. Among the three models, InceptionNetV3 yielded the best performance, with accuracy levels of 98.63% and 99.02% with and without data augmentation in model training, respectively. All the examined networks tend to overfit (with high training accuracy) when data augmentation is not used; this is due to the limited amount of image data used for training and validation. This study demonstrated that deep transfer learning is feasible for detecting COVID-19 automatically from chest X-rays by training the learning model with chest X-ray images from COVID-19 patients, patients with other pneumonias, and people with healthy lungs, which may help doctors make their clinical decisions more effectively. The study also gives insight into how transfer learning was used to automatically detect COVID-19. In future studies, as the amount of available data increases, different convolutional neural network models could be designed to achieve the goal more efficiently.
Journal of X-ray science and technology
"2020-08-18T00:00:00"
[ "SalehAlbahli", "WaleedAlbattah" ]
10.3233/XST-200720 10.1101/2020.02.14.20023028
Artificial intelligence for the detection of COVID-19 pneumonia on chest CT using multinational datasets.
Chest CT is emerging as a valuable diagnostic tool for clinical management of COVID-19 associated lung disease. Artificial intelligence (AI) has the potential to aid in rapid evaluation of CT scans for differentiation of COVID-19 findings from other clinical entities. Here we show that a series of deep learning algorithms, trained in a diverse multinational cohort of 1280 patients to localize parietal pleura/lung parenchyma followed by classification of COVID-19 pneumonia, can achieve up to 90.8% accuracy, with 84% sensitivity and 93% specificity, as evaluated in an independent test set (not included in training and validation) of 1337 patients. Normal controls included chest CTs from oncology, emergency, and pneumonia-related indications. The false positive rate in 140 patients with laboratory confirmed other (non COVID-19) pneumonias was 10%. AI-based algorithms can readily identify CT scans with COVID-19 associated pneumonia, as well as distinguish non-COVID related pneumonias with high specificity in diverse patient populations.
Nature communications
"2020-08-17T00:00:00"
[ "Stephanie AHarmon", "Thomas HSanford", "ShengXu", "Evrim BTurkbey", "HolgerRoth", "ZiyueXu", "DongYang", "AndriyMyronenko", "VictoriaAnderson", "AmelAmalou", "MaximeBlain", "MichaelKassin", "DilaraLong", "NicoleVarble", "Stephanie MWalker", "UlasBagci", "Anna MariaIerardi", "ElviraStellato", "Guido GiovanniPlensich", "GiuseppeFranceschelli", "CristianoGirlando", "GiovanniIrmici", "DominicLabella", "DimaHammoud", "AshkanMalayeri", "ElizabethJones", "Ronald MSummers", "Peter LChoyke", "DaguangXu", "MonaFlores", "KakuTamura", "HirofumiObinata", "HitoshiMori", "FrancescaPatella", "MaurizioCariati", "GianpaoloCarrafiello", "PengAn", "Bradford JWood", "BarisTurkbey" ]
10.1038/s41467-020-17971-2 10.1016/j.ijsu.2020.02.034 10.1101/2020.02.11.20021493v2 10.1101/2020.1103.1119.20039354v20039351 10.1101/2020.02.14.20023028v5 10.1101/2020.03.19.20039354v1 10.1148/radiol.2020200702 10.1118/1.3528204 10.1007/s11263-019-01228-7
COVID-19 Detection Through Transfer Learning Using Multimodal Imaging Data.
Detecting COVID-19 early may help in devising an appropriate treatment plan and disease containment decisions. In this study, we demonstrate how transfer learning from deep learning models can be used to perform COVID-19 detection using images from three most commonly used medical imaging modes X-Ray, Ultrasound, and CT scan. The aim is to provide over-stressed medical professionals a second pair of eyes through intelligent deep learning image classification models. We identify a suitable
IEEE access : practical innovations, open solutions
"2020-08-14T00:00:00"
[ "Michael JHorry", "SubrataChakraborty", "ManoranjanPaul", "AnwaarUlhaq", "BiswajeetPradhan", "ManasSaha", "NageshShukla" ]
10.1109/ACCESS.2020.3016780 10.1007/978-3-319-46976-820 10.3390/info8030091 10.1001/jama.2020.3786 10.7326/M20-0991 10.1148/radiol.2020200642 10.2214/AJR.20.22969 10.1097/RTI.0000000000000404 10.1148/radiol.2020201160 10.1016/S2213-2600(20)30120-X 10.1111/anae.15082 10.1007/s13244-018-0639-9 10.1002/jmri.26534 10.1109/CVPR.2016.90 10.1109/CVPR.2016.308 10.1109/CVPR.2017.195 10.5555/3298023.3298188 10.1109/CVPR.2017.243 10.1109/CVPR.2018.00907 10.4329/wjr.v5.i11.398 10.1016/j.ijmedinf.2018.06.003 10.1118/1.2208736 10.18280/ts.360406 10.1007/978-3-319-19992-4_46 10.1016/j.compbiomed.2017.04.006 10.1109/JTEHM.2019.2955458 10.1038/s41591-019-0447-x 10.3390/jcm8040514 10.1016/j.ajem.2012.08.041 10.1117/1.jmi.3.1.014501 10.1371/journal.pone.0063820 10.1007/s10278-009-9245-1 10.1007/978-3-642-05177-719 10.1007/s00134-011-2317-y 10.1109/icbbe.2011.5780221 10.1117/12.2254526 10.1186/s13634-015-0214-1 10.1016/j.bspc.2012.02.002 10.1007/s10278-019-00211-5 10.1007/s10096-020-03901-z 10.1109/TMI.2020.2995508 10.1148/radiol.2020200905 10.1109/CVPR.2017.369 10.18103/bme.v3i1.1550 10.1109/BMEiCON.2017.8229130 10.1007/s11263-015-0816-y 10.1007/978-3-030-01258-8_39 10.1109/CVPR.2019.00277 10.1109/CVPR.2019.00741 10.1109/ICCV.2019.00200 10.1109/WACV45572.2020.9093418 10.1109/CVPR.2019.00497 10.1109/TPAMI.2019.2938758 10.1007/978-3-030-01258-8_12 10.1109/CVPR.2019.00638 10.1007/978-3-030-01246-5_2 10.1609/aaai.v33i01.33014780 10.1109/ICCVW.2019.00246 10.1109/TPAMI.2018.2858232 10.1109/CVPR.2019.00060 10.1007/978-3-030-34879-3_12 10.1109/CVPR.2009.5206848 10.3310/hta4050 10.1109/TPAMI.2018.2798607 10.1109/LGRS.2017.2704625 10.1109/TIP.2018.2809606 10.1109/JSTARS.2016.2634863 10.1080/21681163.2015.1135299
Automated quantification of COVID-19 severity and progression using chest CT images.
To develop and test computer software to detect, quantify, and monitor progression of pneumonia associated with COVID-19 using chest CT scans. One hundred twenty chest CT scans from subjects with lung infiltrates were used for training deep learning algorithms to segment lung regions and vessels. Seventy-two serial scans from 24 COVID-19 subjects were used to develop and test algorithms to detect and quantify the presence and progression of infiltrates associated with COVID-19. The algorithm included (1) automated lung boundary and vessel segmentation, (2) registration of the lung boundary between serial scans, (3) computerized identification of the pneumonitis regions, and (4) assessment of disease progression. Agreement between radiologist manually delineated regions and computer-detected regions was assessed using the Dice coefficient. Serial scans were registered and used to generate a heatmap visualizing the change between scans. Two radiologists, using a five-point Likert scale, subjectively rated heatmap accuracy in representing progression. There was strong agreement between computer detection and the manual delineation of pneumonic regions with a Dice coefficient of 81% (CI 76-86%). In detecting large pneumonia regions (> 200 mm The preliminary results suggested the feasibility of using computer software to detect and quantify pneumonic regions associated with COVID-19 and to generate heatmaps that can be used to visualize and assess progression. • Both computer vision and deep learning technology were used to develop computer software to quantify the presence and progression of pneumonia associated with COVID-19 depicted on CT images. • The computer software was tested using both quantitative experiments and subjective assessment. • The computer software has the potential to assist in the detection of the pneumonic regions, monitor disease progression, and assess treatment efficacy related to COVID-19.
European radiology
"2020-08-14T00:00:00"
[ "JiantaoPu", "Joseph KLeader", "AndriyBandos", "ShiKe", "JingWang", "JunliShi", "PangDu", "YouminGuo", "Sally EWenzel", "Carl RFuhrman", "David OWilson", "Frank CSciurba", "ChenwangJin" ]
10.1007/s00330-020-07156-2 10.1016/j.clinimag.2020.02.008 10.1001/jamanetworkopen.2019.1095 10.1136/jamia.2000.0070593 10.2196/jmir.6887 10.2214/ajr.175.5.1751329 10.1152/japplphysiol.00465.2019 10.1016/j.compmedimag.2014.01.002 10.1109/TMI.2010.2076300 10.1016/S1076-6332(03)00671-8 10.1016/j.media.2019.101592 10.1118/1.2948349 10.1109/TVCG.2010.56
Comparing different deep learning architectures for classification of chest radiographs.
Chest radiographs are among the most frequently acquired images in radiology and are often the subject of computer vision research. However, most of the models used to classify chest radiographs are derived from openly available deep neural networks, trained on large image datasets. These datasets differ from chest radiographs in that they are mostly color images and have substantially more labels. Therefore, very deep convolutional neural networks (CNN) designed for ImageNet, often representing more complex relationships, might not be required for the comparably simpler task of classifying medical image data. Sixteen different CNN architectures were compared regarding classification performance on two openly available datasets, the CheXpert and COVID-19 Image Data Collection. Areas under the receiver operating characteristic curves (AUROC) between 0.83 and 0.89 could be achieved on the CheXpert dataset. On the COVID-19 Image Data Collection, all models showed an excellent ability to detect COVID-19 and non-COVID pneumonia with AUROC values between 0.983 and 0.998. It could be observed that shallower networks may achieve results comparable to their deeper and more complex counterparts in shorter training times, enabling classification performance on medical image data close to state-of-the-art methods even when using limited hardware.
Scientific reports
"2020-08-14T00:00:00"
[ "Keno KBressem", "Lisa CAdams", "ChristophErxleben", "BerndHamm", "Stefan MNiehues", "Janis LVahldiek" ]
10.1038/s41598-020-70479-z 10.1148/ryai.2019190015 10.3390/info11020108 10.1093/bioinformatics/bti623 10.1109/ACCESS.2019.2916849 10.3348/kjr.2019.0025
Rapid identification of COVID-19 severity in CT scans through classification of deep features.
Chest CT is used for the assessment of the severity of patients infected with the novel coronavirus 2019 (COVID-19). We collected chest CT scans of 202 patients diagnosed with COVID-19 and tried to develop a rapid, accurate and automatic tool for severity screening and follow-up of therapeutic treatment. A total of 729 2D axial plane slices with 246 severe cases and 483 non-severe cases were employed in this study. By taking advantage of pre-trained deep neural networks, four pre-trained off-the-shelf deep models (Inception-V3, ResNet-50, ResNet-101, DenseNet-201) were exploited to extract features from these CT scans. These features were then fed to multiple classifiers (linear discriminant, linear SVM, cubic SVM, KNN and AdaBoost decision tree) to identify severe and non-severe COVID-19 cases. Three validation strategies (holdout validation, tenfold cross-validation and leave-one-out) were employed to validate the feasibility of the proposed pipelines. The experimental results demonstrate that classification of features from pre-trained deep models shows promising application in COVID-19 severity screening, with the DenseNet-201 with cubic SVM model achieving the best performance. Specifically, it achieved the highest severity classification accuracy of 95.20% and 95.34% for tenfold cross-validation and leave-one-out, respectively. The established pipeline was able to achieve rapid and accurate identification of the severity of COVID-19. This may assist physicians in making more efficient and reliable decisions.
Biomedical engineering online
"2020-08-14T00:00:00"
[ "ZekuanYu", "XiaohuLi", "HaitaoSun", "JianWang", "TongtongZhao", "HongyiChen", "YichuanMa", "ShujinZhu", "ZongyuXie" ]
10.1186/s12938-020-00807-x 10.1016/S0140-6736(20)30154-9 10.1056/NEJMc2001468 10.1002/jmv.25699 10.1148/radiol.2020200463 10.1097/RLI.0000000000000672 10.1101/2020.03.12.20027185
Deep-COVID: Predicting COVID-19 from chest X-ray images using deep transfer learning.
The COVID-19 pandemic is causing a major outbreak in more than 150 countries around the world, having a severe impact on the health and life of many people globally. One of the crucial steps in fighting COVID-19 is the ability to detect infected patients early enough and put them under special care. Detecting this disease from radiography and radiology images is perhaps one of the fastest ways to diagnose patients. Some of the early studies showed specific abnormalities in the chest radiograms of patients infected with COVID-19. Inspired by earlier works, we study the application of deep learning models to detect COVID-19 patients from their chest radiography images. We first prepare a dataset of 5000 chest X-rays from publicly available datasets. Images exhibiting COVID-19 disease presence were identified by a board-certified radiologist. Transfer learning on a subset of 2000 radiograms was used to train four popular convolutional neural networks, including ResNet18, ResNet50, SqueezeNet, and DenseNet-121, to identify COVID-19 disease in the analyzed chest X-ray images. We evaluated these models on the remaining 3000 images, and most of these networks achieved a sensitivity rate of 98% (± 3%), while having a specificity rate of around 90%. Besides sensitivity and specificity rates, we also present the receiver operating characteristic (ROC) curve, precision-recall curve, average prediction, and confusion matrix of each model. We also used a technique to generate heatmaps of lung regions potentially infected by COVID-19 and show that the generated heatmaps contain most of the infected areas annotated by our board-certified radiologist. While the achieved performance is very encouraging, further analysis is required on a larger set of COVID-19 images to have a more reliable estimation of accuracy rates.
The dataset, model implementations (in PyTorch), and evaluations, are all made publicly available for research community at https://github.com/shervinmin/DeepCovid.git.
Medical image analysis
"2020-08-12T00:00:00"
[ "ShervinMinaee", "RaheleKafieh", "MilanSonka", "ShakibYazdani", "GhazalehJamalipour Soufi" ]
10.1016/j.media.2020.101794 10.1148/radiol.2020200642 10.1148/radiol.2020200527 10.1148/ryct.2020200028
Identification of COVID-19 samples from chest X-Ray images using deep learning: A comparison of transfer learning approaches.
The novel coronavirus disease 2019 (COVID-19) constitutes a public health emergency globally. The number of infected people and deaths are proliferating every day, which is putting tremendous pressure on our social and healthcare systems. Rapid detection of COVID-19 cases is a significant step in fighting this virus as well as relieving pressure on the healthcare system. One of the critical factors behind the rapid spread of the COVID-19 pandemic is the lengthy clinical testing time. Imaging tools, such as chest X-ray (CXR), can speed up the identification process. Therefore, our objective is to develop an automated CAD system for the detection of COVID-19 samples from healthy and pneumonia cases using CXR images. Due to the scarcity of a COVID-19 benchmark dataset, we employed deep transfer learning techniques, examining 15 different pre-trained CNN models to find the most suitable one for this task. A total of 860 images (260 COVID-19 cases, 300 healthy and 300 pneumonia cases) were employed to investigate the performance of the proposed algorithm, where 70% of the images of each class are used for training, 15% for validation, and the rest for testing. It is observed that VGG19 obtains the highest classification accuracy of 89.3%, with an average precision, recall, and F1 score of 0.90, 0.89, and 0.90, respectively. This study demonstrates the effectiveness of deep transfer learning techniques for the identification of COVID-19 cases using CXR images.
Journal of X-ray science and technology
"2020-08-11T00:00:00"
[ "Md MamunurRahaman", "ChenLi", "YudongYao", "FrankKulwa", "Mohammad AsadurRahman", "QianWang", "ShouliangQi", "FanjieKong", "XueminZhu", "XinZhao" ]
10.3233/XST-200715
Deep Learning-Based Decision-Tree Classifier for COVID-19 Diagnosis From Chest X-ray Imaging.
The global pandemic of coronavirus disease 2019 (COVID-19) has resulted in an increased demand for testing, diagnosis, and treatment. Reverse transcription polymerase chain reaction (RT-PCR) is the definitive test for the diagnosis of COVID-19; however, chest X-ray radiography (CXR) is a fast, effective, and affordable test that identifies possible COVID-19-related pneumonia. This study investigates the feasibility of using a deep learning-based decision-tree classifier for detecting COVID-19 from CXR images. The proposed classifier comprises three binary decision trees, each trained by a deep learning model with a convolutional neural network based on the PyTorch framework. The first decision tree classifies the CXR images as normal or abnormal. The second tree identifies the abnormal images that contain signs of tuberculosis, whereas the third does the same for COVID-19. The accuracies of the first and second decision trees are 98% and 80%, respectively, whereas the average accuracy of the third decision tree is 95%. The proposed deep learning-based decision-tree classifier may be used in pre-screening patients to conduct triage and fast-track decision making before RT-PCR results are available.
Frontiers in medicine
"2020-08-08T00:00:00"
[ "Seung HoonYoo", "HuiGeng", "Tin LokChiu", "Siu KiYu", "Dae ChulCho", "JinHeo", "Min SungChoi", "Il HyunChoi", "CongCung Van", "Nguen VietNhung", "Byung JunMin", "HoLee" ]
10.3389/fmed.2020.00427 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1148/radiol.2020200527 10.1148/ryct.2020200028 10.1148/radiol.2462070712 10.1016/j.crad.2020.03.003 10.1148/radiol.2020200230 10.1016/S0140-6736(20)30183-5 10.1609/aaai.v33i01.3301590 10.5588/ijtld.11.0004 10.1038/srep25265 10.1109/42.993132 10.1007/978-3-642-15711-0_81 10.1109/TBME.2010.2057509 10.1038/s41598-019-51503-3 10.1109/ISBI.2019.8759442 10.1038/s41598-019-42294-8 10.1109/CVPR.2017.369 10.21037/jmai.2019.12.01 10.1109/KSE.2018.8573404 10.1148/rg.2017160032 10.1109/CVPR.2016.90 10.14316/pmp.2019.30.2.49 10.1118/1.1312192 10.1016/j.media.2012.06.009 10.1148/radiol.11100153 10.3978/j.issn.2223-4292.2013.04.03
Introducing the GEV Activation Function for Highly Unbalanced Data to Develop COVID-19 Diagnostic Models.
Fast and accurate diagnosis is essential for the efficient and effective control of the COVID-19 pandemic that is currently disrupting the whole world. Despite the prevalence of the COVID-19 outbreak, relatively few diagnostic images are openly available to develop automatic diagnosis algorithms. Traditional deep learning methods often struggle when data is highly unbalanced, with many cases in one class and only a few cases in another; new methods must be developed to overcome this challenge. We propose a novel activation function based on the generalized extreme value (GEV) distribution from extreme value theory, which improves performance over the traditional sigmoid activation function when one class significantly outweighs the other. We demonstrate the proposed activation function on a publicly available dataset and externally validate on a dataset consisting of 1,909 healthy chest X-rays and 84 COVID-19 X-rays. The proposed method achieves an improved area under the receiver operating characteristic curve (DeLong's p-value < 0.05) compared to the sigmoid activation. Our method is also demonstrated on a dataset of healthy and pneumonia vs. COVID-19 X-rays and a set of computed tomography images, achieving improved sensitivity. The proposed GEV activation function significantly improves upon the previously used sigmoid activation for binary classification. This new paradigm is expected to play a significant role in the fight against COVID-19 and other diseases with relatively few training cases available.
IEEE journal of biomedical and health informatics
"2020-08-06T00:00:00"
[ "JoshuaBridge", "YandaMeng", "YitianZhao", "YongDu", "MingfengZhao", "RenrongSun", "YalinZheng" ]
10.1109/JBHI.2020.3012383 10.1109/RBME.2020.2987975
Deep Bidirectional Classification Model for COVID-19 Disease Infected Patients.
In December 2019, a novel coronavirus (COVID-19) appeared in Wuhan city, China, and has since been reported in many countries, with millions of people infected within only four months. Chest computed tomography (CT) has proven to be a useful supplement to reverse transcription polymerase chain reaction (RT-PCR) and has been shown to have high sensitivity for diagnosing this condition. Therefore, radiological examinations are becoming crucial in the early examination of COVID-19 infection. Currently, CT findings have already been suggested as important evidence for scientific examination of COVID-19 in Hubei, China. However, classification of patients from chest CT images is not an easy task. Therefore, in this paper, a deep bidirectional long short-term memory network with mixture density network (DBM) model is proposed. To tune the hyperparameters of the DBM model, a Memetic Adaptive Differential Evolution (MADE) algorithm is used. Extensive experiments are conducted on benchmark chest computed tomography (chest-CT) image datasets. Comparative analysis reveals that the proposed MADE-DBM model outperforms competitive COVID-19 classification approaches in terms of various performance metrics. Therefore, the proposed MADE-DBM model can be used in real-time COVID-19 classification systems.
IEEE/ACM transactions on computational biology and bioinformatics
"2020-08-06T00:00:00"
[ "YadunathPathak", "Piyush KumarShukla", "K VArya" ]
10.1109/TCBB.2020.3009859
A Noise-Robust Framework for Automatic Segmentation of COVID-19 Pneumonia Lesions From CT Images.
Segmentation of pneumonia lesions from CT scans of COVID-19 patients is important for accurate diagnosis and follow-up. Deep learning has a potential to automate this task but requires a large set of high-quality annotations that are difficult to collect. Learning from noisy training labels that are easier to obtain has a potential to alleviate this problem. To this end, we propose a novel noise-robust framework to learn from noisy labels for the segmentation task. We first introduce a noise-robust Dice loss that is a generalization of Dice loss for segmentation and Mean Absolute Error (MAE) loss for robustness against noise, then propose a novel COVID-19 Pneumonia Lesion segmentation network (COPLE-Net) to better deal with the lesions with various scales and appearances. The noise-robust Dice loss and COPLE-Net are combined with an adaptive self-ensembling framework for training, where an Exponential Moving Average (EMA) of a student model is used as a teacher model that is adaptively updated by suppressing the contribution of the student to EMA when the student has a large training loss. The student model is also adaptive by learning from the teacher only when the teacher outperforms the student. Experimental results showed that: (1) our noise-robust Dice loss outperforms existing noise-robust loss functions, (2) the proposed COPLE-Net achieves higher performance than state-of-the-art image segmentation networks, and (3) our framework with adaptive self-ensembling significantly outperforms a standard training process and surpasses other noise-robust training approaches in the scenario of learning from noisy labels for COVID-19 pneumonia lesion segmentation.
IEEE transactions on medical imaging
"2020-07-31T00:00:00"
[ "GuotaiWang", "XinglongLiu", "ChaopingLi", "ZhiyongXu", "JiugenRuan", "HaifengZhu", "TaoMeng", "KangLi", "NingHuang", "ShaotingZhang" ]
10.1109/TMI.2020.3000314
A Rapid, Accurate and Machine-Agnostic Segmentation and Quantification Method for CT-Based COVID-19 Diagnosis.
COVID-19 has caused a global pandemic and become the most urgent threat to the entire world. Tremendous efforts and resources have been invested in developing diagnosis, prognosis and treatment strategies to combat the disease. Although nucleic acid detection has been mainly used as the gold standard to confirm this RNA virus-based disease, it has been shown that such a strategy has a high false negative rate, especially for patients in the early stage, and thus CT imaging has been applied as a major diagnostic modality in confirming positive COVID-19. Despite the various, urgent advances in developing artificial intelligence (AI)-based computer-aided systems for CT-based COVID-19 diagnosis, most of the existing methods can only perform classification, whereas the state-of-the-art segmentation method requires a high level of human intervention. In this paper, we propose a fully-automatic, rapid, accurate, and machine-agnostic method that can segment and quantify the infection regions on CT scans from different sources. Our method is founded upon two innovations: 1) the first CT scan simulator for COVID-19, by fitting the dynamic change of real patients' data measured at different time points, which greatly alleviates the data scarcity issue; and 2) a novel deep learning algorithm to solve the large-scene-small-object problem, which decomposes the 3D segmentation problem into three 2D ones, and thus reduces the model complexity by an order of magnitude and, at the same time, significantly improves the segmentation accuracy. Comprehensive experimental results over multi-country, multi-hospital, and multi-machine datasets demonstrate the superior performance of our method over the existing ones and suggest its important application value in combating the disease.
IEEE transactions on medical imaging
"2020-07-31T00:00:00"
[ "LongxiZhou", "ZhongxiaoLi", "JuexiaoZhou", "HaoyangLi", "YupengChen", "YuxinHuang", "DexuanXie", "LintaoZhao", "MingFan", "ShahrukhHashmi", "FaisalAbdelkareem", "RihamEiada", "XigangXiao", "LihuaLi", "ZhaowenQiu", "XinGao" ]
10.1109/TMI.2020.3001810 10.1148/radiol.2020200642 10.1183/09031936.01.00213501 10.1371/journal.pmed.1002707 10.1016/j.cell.2018.02.010 10.1016/j.diii.2012.04.001 10.1097/MCP.0000000000000567 10.1097/rli.0000000000000574 10.1016/S2213-2600(18)30286-8 10.1109/TMI.2016.2535865 10.1016/j.patrec.2019.11.013 10.1148/rg.2018170048 10.1148/radiol.2020200274 10.1148/radiol.2020200823 10.1101/2020.02.14.20023028 10.1101/2020.02.23.20026930 10.1109/TCBB.2019.2939522 10.1038/nature24270 10.1093/bioinformatics/bty241 10.1093/bioinformatics/bty223.v 10.1093/bioinformatics/btz963 10.1016/j.cell.2020.04.045
Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images.
Coronavirus Disease 2019 (COVID-19) spread globally in early 2020, causing the world to face an existential health crisis. Automated detection of lung infections from computed tomography (CT) images offers a great potential to augment the traditional healthcare strategy for tackling COVID-19. However, segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics, and low intensity contrast between infections and normal tissues. Further, collecting a large amount of data is impractical within a short time period, inhibiting the training of a deep model. To address these challenges, a novel COVID-19 Lung Infection Segmentation Deep Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices. In our Inf-Net, a parallel partial decoder is used to aggregate the high-level features and generate a global map. Then, the implicit reverse attention and explicit edge-attention are utilized to model the boundaries and enhance the representations. Moreover, to alleviate the shortage of labeled data, we present a semi-supervised segmentation framework based on a randomly selected propagation strategy, which only requires a few labeled images and leverages primarily unlabeled data. Our semi-supervised framework can improve the learning ability and achieve a higher performance. Extensive experiments on our COVID-SemiSeg and real CT volumes demonstrate that the proposed Inf-Net outperforms most cutting-edge segmentation models and advances the state-of-the-art performance.
IEEE transactions on medical imaging
"2020-07-31T00:00:00"
[ "Deng-PingFan", "TaoZhou", "Ge-PengJi", "YiZhou", "GengChen", "HuazhuFu", "JianbingShen", "LingShao" ]
10.1109/TMI.2020.2996645
Dual-Sampling Attention Network for Diagnosis of COVID-19 From Community Acquired Pneumonia.
The coronavirus disease (COVID-19) is rapidly spreading all over the world, and has infected more than 1,436,000 people in more than 200 countries and territories as of April 9, 2020. Detecting COVID-19 at early stage is essential to deliver proper healthcare to the patients and also to protect the uninfected population. To this end, we develop a dual-sampling attention network to automatically diagnose COVID-19 from the community acquired pneumonia (CAP) in chest computed tomography (CT). In particular, we propose a novel online attention module with a 3D convolutional network (CNN) to focus on the infection regions in lungs when making decisions of diagnoses. Note that there exists imbalanced distribution of the sizes of the infection regions between COVID-19 and CAP, partially due to fast progress of COVID-19 after symptom onset. Therefore, we develop a dual-sampling strategy to mitigate the imbalanced learning. Our method is evaluated (to our best knowledge) upon the largest multi-center CT data for COVID-19 from 8 hospitals. In the training-validation stage, we collect 2186 CT scans from 1588 patients for a 5-fold cross-validation. In the testing stage, we employ another independent large-scale testing dataset including 2796 CT scans from 2057 patients. Results show that our algorithm can identify the COVID-19 images with the area under the receiver operating characteristic curve (AUC) value of 0.944, accuracy of 87.5%, sensitivity of 86.9%, specificity of 90.1%, and F1-score of 82.0%. With this performance, the proposed algorithm could potentially aid radiologists with COVID-19 diagnosis from CAP, especially in the early stage of the COVID-19 outbreak.
IEEE transactions on medical imaging
"2020-07-31T00:00:00"
[ "XiOuyang", "JiayuHuo", "LimingXia", "FeiShan", "JunLiu", "ZhanhaoMo", "FuhuaYan", "ZhongxiangDing", "QiYang", "BinSong", "FengShi", "HuanYuan", "YingWei", "XiaohuanCao", "YaozongGao", "DijiaWu", "QianWang", "DinggangShen" ]
10.1109/TMI.2020.2995508
Accurate Screening of COVID-19 Using Attention-Based Deep 3D Multiple Instance Learning.
Automated screening of COVID-19 from chest CT is of urgent importance during the worldwide outbreak of SARS-CoV-2 in 2020. However, accurate screening of COVID-19 is still a massive challenge due to the spatial complexity of 3D volumes, the difficulty of labeling infection areas, and the slight discrepancy between COVID-19 and other viral pneumonia in chest CT. While a few pioneering works have made significant progress, they either demand manual annotations of infection areas or lack interpretability. In this paper, we report our attempt towards achieving highly accurate and interpretable screening of COVID-19 from chest CT with weak labels. We propose an attention-based deep 3D multiple instance learning (AD3D-MIL) approach in which a patient-level label is assigned to a 3D chest CT that is viewed as a bag of instances. AD3D-MIL can semantically generate deep 3D instances following the possible infection areas. AD3D-MIL further applies an attention-based pooling approach to the 3D instances to provide insight into each instance's contribution to the bag label. AD3D-MIL finally learns Bernoulli distributions of the bag-level labels for more accessible learning. We collected 460 chest CT examples: 230 CT examples from 79 patients with COVID-19, 100 CT examples from 100 patients with common pneumonia, and 130 CT examples from 130 people without pneumonia. A series of empirical studies shows that our algorithm achieves an overall accuracy of 97.9%, AUC of 99.0%, and Cohen kappa score of 95.7%. These advantages make our algorithm an efficient assistive tool for the screening of COVID-19.
IEEE transactions on medical imaging
"2020-07-31T00:00:00"
[ "ZhongyiHan", "BenzhengWei", "YanfeiHong", "TianyangLi", "JinyuCong", "XueZhu", "HaifengWei", "WeiZhang" ]
10.1109/TMI.2020.2996256
Prior-Attention Residual Learning for More Discriminative COVID-19 Screening in CT Images.
We propose a conceptually simple framework for fast COVID-19 screening in 3D chest CT images. The framework can efficiently predict whether or not a CT scan contains pneumonia while simultaneously identifying pneumonia types between COVID-19 and Interstitial Lung Disease (ILD) caused by other viruses. In the proposed method, two 3D-ResNets are coupled together into a single model for the two above-mentioned tasks via a novel prior-attention strategy. We extend residual learning with the proposed prior-attention mechanism and design a new so-called prior-attention residual learning (PARL) block. The model can be easily built by stacking the PARL blocks and trained end-to-end using multi-task losses. More specifically, one 3D-ResNet branch is trained as a binary classifier using lung images with and without pneumonia so that it can highlight the lesion areas within the lungs. Simultaneously, inside the PARL blocks, prior-attention maps are generated from this branch and used to guide another branch to learn more discriminative representations for the pneumonia-type classification. Experimental results demonstrate that the proposed framework can significantly improve the performance of COVID-19 screening. Compared to other methods, it achieves a state-of-the-art result. Moreover, the proposed method can be easily extended to other similar clinical applications such as computer-aided detection and diagnosis of pulmonary nodules in CT images, glaucoma lesions in retinal fundus images, etc.
IEEE transactions on medical imaging
"2020-07-31T00:00:00"
[ "JunWang", "YimingBao", "YaofengWen", "HongbingLu", "HuLuo", "YunfeiXiang", "XiaomingLi", "ChenLiu", "DahongQian" ]
10.1109/TMI.2020.2994908
Automated Assessment of COVID-19 Reporting and Data System and Chest CT Severity Scores in Patients Suspected of Having COVID-19 Using Artificial Intelligence.
Background The coronavirus disease 2019 (COVID-19) pandemic has spread across the globe with alarming speed, morbidity, and mortality. Immediate triage of patients with chest infections suspected to be caused by COVID-19 using chest CT may be of assistance when results from definitive viral testing are delayed. Purpose To develop and validate an artificial intelligence (AI) system to score the likelihood and extent of pulmonary COVID-19 on chest CT scans using the COVID-19 Reporting and Data System (CO-RADS) and CT severity scoring systems. Materials and Methods The CO-RADS AI system consists of three deep-learning algorithms that automatically segment the five pulmonary lobes, assign a CO-RADS score for the suspicion of COVID-19, and assign a CT severity score for the degree of parenchymal involvement per lobe. This study retrospectively included patients who underwent a nonenhanced chest CT examination because of clinical suspicion of COVID-19 at two medical centers. The system was trained, validated, and tested with data from one of the centers. Data from the second center served as an external test set. Diagnostic performance and agreement with scores assigned by eight independent observers were measured using receiver operating characteristic analysis, linearly weighted κ values, and classification accuracy. Results A total of 105 patients (mean age, 62 years ± 16 [standard deviation]; 61 men) and 262 patients (mean age, 64 years ± 16; 154 men) were evaluated in the internal and external test sets, respectively. The system discriminated between patients with COVID-19 and those without COVID-19, with areas under the receiver operating characteristic curve of 0.95 (95% CI: 0.91, 0.98) and 0.88 (95% CI: 0.84, 0.93), for the internal and external test sets, respectively. Agreement with the eight human observers was moderate to substantial, with mean linearly weighted κ values of 0.60 ± 0.01 for CO-RADS scores and 0.54 ± 0.01 for CT severity scores. 
Conclusion With high diagnostic performance, the CO-RADS AI system correctly identified patients with COVID-19 using chest CT scans and assigned standardized CO-RADS and CT severity scores that demonstrated good agreement with findings from eight independent observers and generalized well to external data. © RSNA, 2020
Radiology
"2020-07-31T00:00:00"
[ "NikolasLessmann", "Clara ISánchez", "LudoBeenen", "Luuk HBoulogne", "MoniqueBrink", "ErdiCalli", "Jean-PaulCharbonnier", "TonDofferhoff", "Wouter Mvan Everdingen", "Paul KGerke", "BramGeurts", "Hester AGietema", "MiriamGroeneveld", "Louisvan Harten", "NilsHendrix", "WardHendrix", "Henkjan JHuisman", "IvanaIšgum", "ColinJacobs", "RubenKluge", "MichelKok", "JasenkoKrdzalic", "BiancaLassen-Schmidt", "Kickyvan Leeuwen", "JamesMeakin", "MikeOverkamp", "Tjalcovan Rees Vellinga", "Eva Mvan Rikxoort", "RiccardoSamperna", "CorneliaSchaefer-Prokop", "StevenSchalekamp", "Ernst ThScholten", "CherylSital", "J LauranStöger", "JonasTeuwen", "Kiran VaidhyaVenkadesh", "Coende Vente", "MariekeVermaat", "WeiyiXie", "Bramde Wilde", "MathiasProkop", "Bramvan Ginneken" ]
10.1148/radiol.2020202439
Implementation of a Deep Learning-Based Computer-Aided Detection System for the Interpretation of Chest Radiographs in Patients Suspected for COVID-19.
To describe the experience of implementing a deep learning-based computer-aided detection (CAD) system for the interpretation of chest X-ray radiographs (CXR) of suspected coronavirus disease (COVID-19) patients and investigate the diagnostic performance of CXR interpretation with CAD assistance. In this single-center retrospective study, initial CXR of patients with suspected or confirmed COVID-19 were investigated. A commercialized deep learning-based CAD system that can identify various abnormalities on CXR was implemented for the interpretation of CXR in daily practice. The diagnostic performance of radiologists with CAD assistance was evaluated based on two different reference standards: 1) real-time reverse transcriptase-polymerase chain reaction (rRT-PCR) results for COVID-19 and 2) pulmonary abnormality suggesting pneumonia on chest CT. The turnaround times (TATs) of radiology reports for CXR and rRT-PCR results were also evaluated. Among 332 patients (male:female, 173:159; mean age, 57 years) with available rRT-PCR results, 16 patients (4.8%) were diagnosed with COVID-19. Using CXR, radiologists with CAD assistance identified rRT-PCR positive COVID-19 patients with sensitivity and specificity of 68.8% and 66.7%, respectively. Among 119 patients (male:female, 75:44; mean age, 69 years) with available chest CTs, radiologists assisted by CAD reported pneumonia on CXR with a sensitivity of 81.5% and a specificity of 72.3%. The TATs of CXR reports were significantly shorter than those of rRT-PCR results (median 51 vs. 507 minutes). Radiologists with CAD assistance could identify patients with rRT-PCR-positive COVID-19 or pneumonia on CXR with a reasonably acceptable performance. In patients suspected of having COVID-19, CXR had much faster TATs than rRT-PCR.
Korean journal of radiology
"2020-07-31T00:00:00"
[ "Eui JinHwang", "HyungjinKim", "Soon HoYoon", "Jin MoGoo", "Chang MinPark" ]
10.3348/kjr.2020.0536 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1148/radiol.2020200343 10.1148/radiol.2020201160
Deep transfer learning artificial intelligence accurately stages COVID-19 lung disease severity on portable chest radiographs.
This study employed deep-learning convolutional neural networks to stage lung disease severity of Coronavirus Disease 2019 (COVID-19) infection on portable chest x-ray (CXR) with radiologist score of disease severity as ground truth. This study consisted of 131 portable CXR from 84 COVID-19 patients (51M 55.1±14.9yo; 29F 60.1±14.3yo; 4 missing information). Three expert chest radiologists scored the left and right lung separately based on the degree of opacity (0-3) and geographic extent (0-4). Deep-learning convolutional neural network (CNN) was used to predict lung disease severity scores. Data were split into 80% training and 20% testing datasets. Correlations between AI-predicted and radiologist scores were analyzed. Comparison was made with traditional and transfer learning. The average opacity score was 2.52 (range: 0-6) with a standard deviation of 0.25 (9.9%) across three readers. The average geographic extent score was 3.42 (range: 0-8) with a standard deviation of 0.57 (16.7%) across three readers. The inter-rater agreement yielded a Fleiss' Kappa of 0.45 for opacity score and 0.71 for extent score. AI-predicted scores strongly correlated with radiologist scores, with the top model yielding a correlation coefficient (R2) of 0.90 (range: 0.73-0.90 for traditional learning and 0.83-0.90 for transfer learning) and a mean absolute error of 8.5% (ranges: 17.2-21.0% and 8.5-15.5%, respectively). Transfer learning generally performed better. In conclusion, deep-learning CNN accurately stages disease severity on portable chest x-ray of COVID-19 lung infection. This approach may prove useful to stage lung disease severity, prognosticate, and predict treatment response and survival, thereby informing risk management and resource allocation.
PloS one
"2020-07-30T00:00:00"
[ "JocelynZhu", "BeiyiShen", "AlmasAbbasi", "MahsaHoshmand-Kochi", "HaifangLi", "Tim QDuong" ]
10.1371/journal.pone.0236621 10.1016/j.ijid.2020.01.009 10.1002/jmv.25678 10.1056/NEJMoa2001316 10.1186/s12880-015-0103-y 10.1148/radiol.2020201160 10.2214/AJR.20.23034 10.1097/RLI.0000000000000672 10.1148/radiol.2020200843 10.1148/radiol.2020200230 10.1038/nature14539 10.1038/s41379-018-0073-z 10.1088/1361-6560/aa93d4 10.1007/s11548-018-1843-2 10.1371/journal.pone.0221339 10.1016/j.imu.2020.100360 10.1148/radiol.2020201178 10.1148/radiol.2020200905 10.1136/thoraxjnl-2017-211280
Deploying Machine and Deep Learning Models for Efficient Data-Augmented Detection of COVID-19 Infections.
This generation faces existential threats because of the global assault of the novel Corona virus 2019 (i.e., COVID-19). With more than thirteen million infected and nearly 600,000 fatalities in 188 countries/regions, COVID-19 is the worst calamity since World War II. These misfortunes are traced to various reasons, including late detection of latent or asymptomatic carriers, migration, and inadequate isolation of infected people. This makes detection, containment, and mitigation global priorities to contain exposure via quarantine, lockdowns, work/stay at home, and social distancing that are focused on "flattening the curve". While medical and healthcare givers are at the frontline in the battle against COVID-19, it is a crusade for all of humanity. Meanwhile, machine and deep learning models have been revolutionary across numerous domains and applications whose potency has been exploited to birth numerous state-of-the-art technologies utilised in disease detection, diagnoses, and treatment. Despite these potentials, machine and, particularly, deep learning models are data sensitive, because their effectiveness depends on availability and reliability of data. The unavailability of such data hinders efforts of engineers and computer scientists to fully contribute to the ongoing assault against COVID-19. Faced with a calamity on one side and absence of reliable data on the other, this study presents two data-augmentation models to enhance learnability of the Convolutional Neural Network (CNN) and the Convolutional Long Short-Term Memory (ConvLSTM)-based deep learning models (DADLMs) and, by doing so, boost the accuracy of COVID-19 detection. Experimental results reveal improvement in terms of accuracy of detection, logarithmic loss, and testing time relative to DLMs devoid of such data augmentation. 
Furthermore, average increases of 4% to 11% in COVID-19 detection accuracy are reported in favour of the proposed data-augmented deep learning models relative to the machine learning techniques. Therefore, the proposed algorithm is effective in performing a rapid and consistent Corona virus diagnosis that is primarily aimed at assisting clinicians in making accurate identification of the virus.
Viruses
"2020-07-28T00:00:00"
[ "AhmedSedik", "Abdullah MIliyasu", "BasmaAbd El-Rahiem", "Mohammed EAbdel Samea", "AsmaaAbdel-Raheem", "MohamedHammad", "JialiangPeng", "Fathi EAbd El-Samie", "Ahmed AAbd El-Latif" ]
10.3390/v12070769 10.1016/S0140-6736(20)30360-3 10.1016/S0140-6736(20)30566-3 10.1016/j.jinf.2020.03.037 10.1016/j.jcv.2003.09.011 10.1111/tbed.12401 10.1016/S0140-6736(20)30183-5 10.1186/s40779-020-00240-0 10.1016/j.joen.2020.03.008 10.1007/s40121-020-00295-5 10.1001/jama.2020.4756 10.1016/S1473-3099(14)70846-1 10.1016/j.measurement.2018.05.033 10.1016/j.apacoust.2020.107279 10.21608/mjeer.2019.76962 10.21608/mjeer.2019.76998 10.1007/s11042-020-08769-x 10.1007/s00521-018-3616-9 10.1016/j.eswa.2019.01.080 10.1016/j.knosys.2019.105460 10.1016/j.bspc.2019.101683 10.1371/journal.pone.0214365 10.1016/j.knosys.2019.105153 10.1109/ACCESS.2020.2994762 10.3233/JIFS-191438 10.1162/neco.1997.9.8.1735 10.1364/OL.20.000767 10.1109/TIT.2013.2288257 10.1148/radiology.143.1.7063747 10.1016/j.bspc.2013.09.001 10.1080/23311916.2019.1599537 10.1016/j.cell.2018.02.010 10.1155/2019/4180949 10.1016/j.cmpb.2019.06.023 10.3390/sym12040651
From community-acquired pneumonia to COVID-19: a deep learning-based method for quantitative analysis of COVID-19 on thick-section CT scans.
To develop a fully automated AI system to quantitatively assess the disease severity and disease progression of COVID-19 using thick-section chest CT images. In this retrospective study, an AI system was developed to automatically segment and quantify the COVID-19-infected lung regions on thick-section chest CT images. Five hundred thirty-one CT scans from 204 COVID-19 patients were collected from one appointed COVID-19 hospital. The automatically segmented lung abnormalities were compared with manual segmentation of two experienced radiologists using the Dice coefficient on a randomly selected subset (30 CT scans). Two imaging biomarkers were automatically computed, i.e., the portion of infection (POI) and the average infection HU (iHU), to assess disease severity and disease progression. The assessments were compared with patient status of diagnosis reports and key phrases extracted from radiology reports using the area under the receiver operating characteristic curve (AUC) and Cohen's kappa, respectively. The Dice coefficient between the segmentation of the AI system and two experienced radiologists for the COVID-19-infected lung abnormalities was 0.74 ± 0.28 and 0.76 ± 0.29, respectively, which were close to the inter-observer agreement (0.79 ± 0.25). The computed two imaging biomarkers can distinguish between the severe and non-severe stages with an AUC of 0.97 (p value < 0.001). Very good agreement (κ = 0.8220) between the AI system and the radiologists was achieved on evaluating the changes in infection volumes. A deep learning-based AI system built on the thick-section CT imaging can accurately quantify the COVID-19-associated lung abnormalities and assess the disease severity and its progression. • A deep learning-based AI system was able to accurately segment the infected lung regions by COVID-19 using the thick-section CT scans (Dice coefficient ≥ 0.74). 
• The computed imaging biomarkers were able to distinguish between the non-severe and severe COVID-19 stages (area under the receiver operating characteristic curve 0.97). • The infection volume changes computed by the AI system were able to assess the COVID-19 progression (Cohen's kappa 0.8220).
European radiology
"2020-07-20T00:00:00"
[ "ZhangLi", "ZhengZhong", "YangLi", "TianyuZhang", "LiangxinGao", "DakaiJin", "YueSun", "XianghuaYe", "LiYu", "ZheyuHu", "JingXiao", "LingyunHuang", "YulingTang" ]
10.1007/s00330-020-07042-x 10.1148/ryct.2020200044
Towards explainable deep neural networks (xDNN).
In this paper, we propose an elegant solution that directly addresses the bottlenecks of traditional deep learning approaches and offers an explainable internal architecture that can outperform existing methods, requires very little computational resources (no need for GPUs), and has short training times (in the order of seconds). The proposed approach, xDNN, uses prototypes. Prototypes are actual training data samples (images), which are local peaks of the empirical data distribution called typicality as well as of the data density. This generative model is identified in a closed form and equates to the pdf, but is derived automatically and entirely from the training data with no user- or problem-specific thresholds, parameters, or intervention. The proposed xDNN offers a new deep learning architecture that combines reasoning and learning in a synergy. It is non-iterative and non-parametric, which explains its efficiency in terms of time and computational resources. From the user perspective, the proposed approach is clearly understandable to human users. We tested it on challenging problems such as the classification of different lighting conditions for driving scenes (iROADS), object detection (Caltech-256, and Caltech-101), and SARS-CoV-2 identification via computed tomography scan (COVID CT-scans dataset). xDNN outperforms the other methods, including deep learning, in terms of accuracy and time to train, and offers an explainable classifier.
Neural networks : the official journal of the International Neural Network Society
"2020-07-19T00:00:00"
[ "PlamenAngelov", "EduardoSoares" ]
10.1016/j.neunet.2020.07.010
Automated detection and quantification of COVID-19 pneumonia: CT imaging analysis by a deep learning-based software.
The novel coronavirus disease 2019 (COVID-19) is an emerging worldwide threat to public health. While chest computed tomography (CT) plays an indispensable role in its diagnosis, the quantification and localization of lesions cannot be accurately assessed manually. We employed deep learning-based software to aid in detection, localization and quantification of COVID-19 pneumonia. A total of 2460 RT-PCR tested SARS-CoV-2-positive patients (1250 men and 1210 women; mean age, 57.7 ± 14.0 years; age range, 11-93 years) were retrospectively identified from Huoshenshan Hospital in Wuhan from February 11 to March 16, 2020. Basic clinical characteristics were reviewed. The uAI Intelligent Assistant Analysis System was used to assess the CT scans. CT scans of 2215 patients (90%) showed multiple lesions of which 36 (1%) and 50 patients (2%) had left and right lung infections, respectively (> 50% of each affected lung's volume), while 27 (1%) had total lung infection (> 50% of the total volume of both lungs). Overall, 298 (12%), 778 (32%) and 1300 (53%) patients exhibited pure ground glass opacities (GGOs), GGOs with sub-solid lesions and GGOs with both sub-solid and solid lesions, respectively. Moreover, 2305 (94%) and 71 (3%) patients presented primarily with GGOs and sub-solid lesions, respectively. Elderly patients (≥ 60 years) were more likely to exhibit sub-solid lesions. The generalized linear mixed model showed that the dorsal segment of the right lower lobe was the favoured site of COVID-19 pneumonia. Chest CT combined with analysis by the uAI Intelligent Assistant Analysis System can accurately evaluate pneumonia in COVID-19 patients.
European journal of nuclear medicine and molecular imaging
"2020-07-16T00:00:00"
[ "Hai-TaoZhang", "Jin-SongZhang", "Hai-HuaZhang", "Yan-DongNan", "YingZhao", "En-QingFu", "Yong-HongXie", "WeiLiu", "Wang-PingLi", "Hong-JunZhang", "HuaJiang", "Chun-MeiLi", "Yan-YanLi", "Rui-NaMa", "Shao-KangDang", "Bo-BoGao", "Xi-JingZhang", "TaoZhang" ]
10.1007/s00259-020-04953-1 10.1056/NEJMoa2001017 10.1186/s40779-020-0233-6 10.3348/kjr.2020.0146
CovXNet: A multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization.
With the recent outbreak of COVID-19, fast diagnostic testing has become one of the major challenges due to the critical shortage of test kit. Pneumonia, a major effect of COVID-19, needs to be urgently diagnosed along with its underlying reasons. In this paper, deep learning aided automated COVID-19 and other pneumonia detection schemes are proposed utilizing a small amount of COVID-19 chest X-rays. A deep convolutional neural network (CNN) based architecture, named as CovXNet, is proposed that utilizes depthwise convolution with varying dilation rates for efficiently extracting diversified features from chest X-rays. Since the chest X-ray images corresponding to COVID-19 caused pneumonia and other traditional pneumonias have significant similarities, at first, a large number of chest X-rays corresponding to normal and (viral/bacterial) pneumonia patients are used to train the proposed CovXNet. Learning of this initial training phase is transferred with some additional fine-tuning layers that are further trained with a smaller number of chest X-rays corresponding to COVID-19 and other pneumonia patients. In the proposed method, different forms of CovXNets are designed and trained with X-ray images of various resolutions and for further optimization of their predictions, a stacking algorithm is employed. Finally, a gradient-based discriminative localization is integrated to distinguish the abnormal regions of X-ray images referring to different types of pneumonia. Extensive experimentations using two different datasets provide very satisfactory detection performance with accuracy of 97.4% for COVID/Normal, 96.9% for COVID/Viral pneumonia, 94.7% for COVID/Bacterial pneumonia, and 90.2% for multiclass COVID/normal/Viral/Bacterial pneumonias. Hence, the proposed schemes can serve as an efficient tool in the current state of COVID-19 pandemic. All the architectures are made publicly available at: https://github.com/Perceptron21/CovXNet.
Computers in biology and medicine
"2020-07-14T00:00:00"
[ "TanvirMahmud", "Md AwsafurRahman", "Shaikh AnowarulFattah" ]
10.1016/j.compbiomed.2020.103869
Efficient GAN-based Chest Radiographs (CXR) augmentation to diagnose coronavirus disease pneumonia.
International journal of medical sciences
"2020-07-07T00:00:00"
[ "SalehAlbahli" ]
10.7150/ijms.46684
Classification of the COVID-19 infected patients using DenseNet201 based deep transfer learning.
Deep learning models are widely used in the automatic analysis of radiological images. These techniques can train the weights of networks on large datasets as well as fine-tune the weights of pre-trained networks on small datasets. Due to the small COVID-19 dataset available, pre-trained neural networks can be used for diagnosis of coronavirus. However, the application of these techniques to chest CT images has been very limited so far. Hence, the main aim of this paper is to use pre-trained deep learning architectures as an automated tool for detection and diagnosis of COVID-19 in chest CT. A DenseNet201 based deep transfer learning (DTL) is proposed to classify patients as COVID infected or not, i.e., COVID-19 (+) or COVID (-). The proposed model is utilized to extract features by using its own learned weights on the ImageNet dataset along with a convolutional neural structure. Extensive experiments are performed to evaluate the performance of the proposed DTL model on COVID-19 chest CT scan images. Comparative analyses reveal that the proposed DTL based COVID-19 classification model outperforms the competitive approaches.
Journal of biomolecular structure & dynamics
"2020-07-04T00:00:00"
[ "AayushJaiswal", "NehaGianchandani", "DilbagSingh", "VijayKumar", "ManjitKaur" ]
10.1080/07391102.2020.1788642
A COVID-19 patient with seven consecutive false-negative rRT-PCR results from sputum specimens.
null
Internal and emergency medicine
"2020-07-04T00:00:00"
[ "Cong-YingSong", "Da-GanYang", "Yuan-QiangLu" ]
10.1007/s11739-020-02423-y 10.3348/kjr.2020.0195 10.3348/kjr.2020.0146 10.1148/radiol.2020200343 10.1093/cid/ciaa149 10.1007/s11739-020-02321-3 10.1007/s00134-020-05996-6 10.1136/bmj.m1443 10.1631/jzus.B2010011