title | abstract | journal | date | authors | doi
---|---|---|---|---|---
Artificial Intelligence Clinicians Can Use Chest Computed Tomography Technology to Automatically Diagnose Coronavirus Disease 2019 (COVID-19) Pneumonia and Enhance Low-Quality Images. | Nowadays, the number of patients with COVID-19 pneumonia worldwide is still increasing. The clinical diagnosis of COVID-19 pneumonia faces challenges, such as the difficulty of performing RT-PCR tests in real time, the lack of experienced radiologists, low-quality clinical images, and the similarity of the imaging features of community-acquired pneumonia and COVID-19. Therefore, we proposed an artificial intelligence model, GARCD, that uses chest CT images to assist in the diagnosis of COVID-19 in real time. It shows better diagnostic performance even when facing low-quality CT images.
We used 14,129 CT images from 104 patients. A total of 12,929 samples were used to build the artificial intelligence models, and 1,200 samples were used to test their performance. The image quality improvement module is based on a generative adversarial structure. It improves the quality of the input image, jointly driven by a feature loss and a content loss. The enhanced image is sent to the disease diagnosis model, which is based on a residual convolutional network. It automatically extracts the semantic features of the image and then infers the probability that the sample belongs to COVID-19. The ROC curve is used to evaluate the performance of the model.
This model can effectively enhance low-quality images and make images that are otherwise difficult to recognize recognizable. The model proposed in this paper reached 97.8% AUC, 96.97% sensitivity and 91.16% specificity on an independent test set. ResNet, GADCD, CNN, and DenseNet achieved 80.9%, 97.3%, 70.7% and 85.7% AUC on the same test set, respectively. Comparison with related works shows that the proposed model has stronger clinical usability.
The proposed method can effectively assist doctors in the real-time detection of suspected COVID-19 pneumonia cases, even when images are unclear. It allows patients to be quickly isolated in a targeted manner, which is of positive significance for preventing the further spread of COVID-19 pneumonia. | Infection and drug resistance | "2021-03-05T00:00:00" | [
"Quan Zhang",
"Zhuo Chen",
"Guohua Liu",
"Wenjia Zhang",
"Qian Du",
"Jiayuan Tan",
"Qianqian Gao"
] | 10.2147/IDR.S296346
10.1590/1806-9282.66.7.880
10.2147/IDR.S258677
10.1016/j.acra.2020.04.031
10.1148/radiol.2020201473
10.1002/jmv.25735
10.1007/s00330-020-06975-7
10.1093/cid/ciaa247
10.1148/radiol.2020200280
10.1007/s00259-020-04929-1
10.1016/S0140-6736(19)32501-2
10.1016/j.cell.2018.02.010
10.1016/S0140-6736(18)31645-3
10.2147/DMSO.S288419
10.3390/diagnostics10110901
10.1109/TMM.2020.3037535
10.1109/TMM.2019.2943750
10.21203/rs.3.rs-21834/v1
10.1148/radiol.2020200490
10.1148/radiol.2020200905
10.1148/radiol.2020201491
10.1148/radiol.2020202439
10.1148/radiol.2020203511
10.1016/j.ejrad.2020.109402
10.1016/j.ejrad.2020.108991
10.1136/fmch-2020-000406 |
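The first record above evaluates its model with an ROC curve, AUC, sensitivity, and specificity. A minimal pure-Python sketch of those metrics (illustrative only, with invented scores; this is not the paper's code):

```python
# Illustrative metric definitions, not code from the paper.
# Labels: 1 = COVID-19, 0 = non-COVID-19.

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(y_true, scores):
    """AUC via its rank interpretation: the probability that a random
    positive sample scores higher than a random negative one."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for six test samples.
y = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.35, 0.2, 0.1]
preds = [int(s >= 0.5) for s in scores]
sens, spec = sensitivity_specificity(y, preds)
print(sens, spec, roc_auc(y, scores))
```

Note that AUC is threshold-free, which is why a model can have perfect AUC while the 0.5-threshold sensitivity is imperfect, as in this toy data.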
Correction to: Decoding COVID-19 pneumonia: comparison of deep learning and radiomics CT image signatures. | null | European journal of nuclear medicine and molecular imaging | "2021-03-04T00:00:00" | [
"Hongmei Wang",
"Lu Wang",
"Edward H Lee",
"Jimmy Zheng",
"Wei Zhang",
"Safwan Halabi",
"Chunlei Liu",
"Kexue Deng",
"Jiangdian Song",
"Kristen W Yeom"
] | 10.1007/s00259-021-05268-5 |
Convolutional neural network model based on radiological images to support COVID-19 diagnosis: Evaluating database biases. | As SARS-CoV-2 has spread quickly throughout the world, the scientific community has spent major efforts on better understanding the characteristics of the virus and possible means to prevent, diagnose, and treat COVID-19. A valid approach presented in the literature is to develop an image-based method to support COVID-19 diagnosis using convolutional neural networks (CNN). Because the availability of radiological data is rather limited due to the novelty of COVID-19, several methodologies consider reduced datasets, which may be inadequate and bias the model. Here, we performed an analysis combining six different databases using chest X-ray images from open datasets to distinguish images of infected patients while differentiating COVID-19 and pneumonia from 'no-findings' images. In addition, the performance of models created from fewer databases, which may imperceptibly overestimate their results, is discussed. Two CNN-based architectures were created to process images of different sizes (512 × 512, 768 × 768, 1024 × 1024, and 1536 × 1536). Our best model achieved a balanced accuracy (BA) of 87.7% in predicting one of the three classes ('no-findings', 'COVID-19', and 'pneumonia') and a specific balanced precision of 97.0% for the 'COVID-19' class. We also provided binary classification with a precision of 91.0% for detection of sick patients (i.e., with COVID-19 or pneumonia) and 98.4% for COVID-19 detection (i.e., differentiating from 'no-findings' or 'pneumonia'). Indeed, although we achieved an unrealistically high 97.2% BA for one specific case, the proposed methodology of combining multiple databases achieved better and less inflated results than models trained on individual image datasets. Thus, this framework is promising as a low-cost, fast, and noninvasive means to support the diagnosis of COVID-19.
| PloS one | "2021-03-02T00:00:00" | [
"Caio B S Maior",
"João M M Santana",
"Isis D Lins",
"Márcio J C Moura"
] | 10.1371/journal.pone.0247839
10.1016/j.ijantimicag.2020.105924
10.1093/cid/ciaa344
10.1148/radiol.2020200432
10.1148/radiol.2020200642
10.1016/j.mayocp.2020.04.004
10.1148/radiol.2020200230
10.1016/j.jinf.2020.03.007
10.1016/S0140-6736(20)30183-5
10.1056/NEJMoa2002032
10.2214/AJR.20.22954
10.1148/radiol.2020200905
10.1148/radiol.2020201160
10.1164/rccm.201501-0017OC
10.1097/SHK.0000000000001273
10.11152/mu-1885
10.1148/radiol.2019190613
10.1016/j.jacr.2017.12.028
10.1371/journal.pone.0235187
10.1002/qre.1221
10.17531/ein.2019.4.10
10.1016/j.eswa.2020.113505
10.1016/j.measurement.2016.07.054
10.1038/nature14236
10.1155/2017/5067651
10.1371/journal.pone.0219570
10.1177/1475921718788299
10.1371/journal.pone.0233514
10.1186/s40537-019-0276-2
10.1016/j.imu.2020.100306
10.1016/j.patcog.2018.05.014
10.1016/j.eng.2020.04.010
10.1007/s10096-020-03901-z
10.1101/2020.02.14.20023028
10.1007/s10489-020-01714-3
10.3390/v12070769
10.3390/ai1030027
10.1016/j.eswa.2018.10.010
10.1371/journal.pone.0242535
10.1109/ACCESS.2020.3016780
10.1016/j.cmpb.2020.105581
10.1016/j.compbiomed.2020.103792
10.3390/sym12040651
10.1007/s13246-020-00865-4
10.1016/j.mehy.2020.109761
10.1148/ryai.2019180041
10.1186/s40537-019-0197-0
10.1109/ACCESS.2019.2919678
10.1016/j.eswa.2017.11.028
10.1016/j.neunet.2015.09.014
10.1007/s00521-018-3924-0
10.1016/j.biosystemseng.2016.08.024
10.1080/2150704X.2016.1196837 |
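The record above reports "balanced accuracy" (BA), i.e. the unweighted mean of per-class recall, which keeps a majority class (e.g. many 'no-findings' X-rays) from inflating the score. A hedged illustration with made-up predictions, not the paper's code:

```python
# Balanced accuracy: mean of per-class recall (illustration only).

def balanced_accuracy(y_true, y_pred):
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(classes)

# Hypothetical predictions over the paper's three classes.
y_true = ["no-findings"] * 8 + ["COVID-19"] * 2 + ["pneumonia"] * 4
y_pred = ["no-findings"] * 8 + ["COVID-19", "pneumonia"] + ["pneumonia"] * 4
print(balanced_accuracy(y_true, y_pred))
```

Here plain accuracy would be 13/14 ≈ 0.93, but BA is (1.0 + 0.5 + 1.0)/3 ≈ 0.83, because the single missed COVID-19 case costs half of that class's recall.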
Accurately Discriminating COVID-19 from Viral and Bacterial Pneumonia According to CT Images Via Deep Learning. | Computed tomography (CT) is one of the most efficient diagnostic methods for rapid diagnosis of the widespread COVID-19. However, reading CT scans demands considerable concentration and time from doctors, so it is necessary to develop an automatic CT image diagnosis system to assist them. Previous studies devoted to COVID-19 in the past months focused mostly on discriminating COVID-19-infected patients from healthy persons and/or bacterial pneumonia patients, and have ignored typical viral pneumonia, since it is less frequent in adults and samples are hard to collect. In addition, it is much more challenging to discriminate COVID-19 from typical viral pneumonia, as COVID-19 is itself caused by a virus. In this study, we collected CT images of 262, 100, 219, and 78 persons for COVID-19, bacterial pneumonia, typical viral pneumonia, and healthy controls, respectively. To the best of our knowledge, this is the first quaternary classification study to also include typical viral pneumonia. To effectively capture the subtle differences in CT images, we constructed a new model by combining the ResNet50 backbone with SE blocks, which were recently developed for fine-grained image analysis. Our model was shown to outperform commonly used baseline models, achieving an overall accuracy of 0.94 with AUC of 0.96, recall of 0.94, precision of 0.95, and F1-score of 0.94. The model is available at https://github.com/Zhengfudan/COVID-19-Diagnosis-and-Pneumonia-Classification. | Interdisciplinary sciences, computational life sciences | "2021-03-01T00:00:00" | [
"Fudan Zheng",
"Liang Li",
"Xiang Zhang",
"Ying Song",
"Ziwang Huang",
"Yutian Chong",
"Zhiguang Chen",
"Huiling Zhu",
"Jiahao Wu",
"Weifeng Chen",
"Yutong Lu",
"Yuedong Yang",
"Yunfei Zha",
"Huiying Zhao",
"Jun Shen"
] | 10.1007/s12539-021-00420-z
10.1016/S0140-6736(20)30251-8
10.1016/S0140-6736(20)30183-5
10.1056/NEJMoa2001316
10.1148/radiol.2020200236
10.1148/radiol.2020200269
10.1148/radiol.2020200274
10.1109/RBME.2020.2987975
10.1101/2020.02.23.20026930
10.1016/j.eng.2020.04.010
10.1148/radiol.2020200905
10.1038/s41598-020-76282-0
10.1101/2020.03.12.20027185
10.1101/2020.03.20.20039834
10.1016/j.asoc.2020.106897
10.1007/s12539-020-00403-6
10.1007/s12539-020-00393-5
10.1109/TMI.2020.2995508
10.1109/TMI.2020.2996256
10.1109/TMI.2020.2992546
10.1007/s13246-020-00865-4
10.1080/07391102.2020.1788642
10.1109/TPAMI.2019.2913372
10.1016/j.patrec.2005.10.010
10.1093/bib/bbaa205
10.1021/acs.jcim.9b00749 |
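The quaternary-classification record above reports overall precision, recall, and F1 across four classes. A sketch of how such multi-class figures are commonly computed (the exact averaging the paper used is an assumption here, and the sample data are made up):

```python
# Macro-averaged precision/recall/F1 over per-class one-vs-rest counts.

def macro_metrics(y_true, y_pred):
    classes = sorted(set(y_true))
    precs, recs = [], []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precs.append(tp / (tp + fp) if tp + fp else 0.0)
        recs.append(tp / (tp + fn) if tp + fn else 0.0)
    precision = sum(precs) / len(classes)
    recall = sum(recs) / len(classes)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# The paper's four classes, with one COVID-19 case misread as viral.
y_true = ["covid", "covid", "bacterial", "viral", "healthy", "healthy"]
y_pred = ["covid", "viral", "bacterial", "viral", "healthy", "healthy"]
print(macro_metrics(y_true, y_pred))
```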
Multicenter Assessment of CT Pneumonia Analysis Prototype for Predicting Disease Severity and Patient Outcome. | To perform a multicenter assessment of the CT Pneumonia Analysis prototype for predicting disease severity and patient outcome in COVID-19 pneumonia, both without and with integration of clinical information. Our IRB-approved observational study included 241 consecutive adult patients (> 18 years; 105 females; 136 males) with RT-PCR-positive COVID-19 pneumonia who underwent non-contrast chest CT at one of two tertiary care hospitals (site A: Massachusetts General Hospital, USA; site B: Firoozgar Hospital, Iran). We recorded patient age, gender, comorbid conditions, laboratory values, intensive care unit (ICU) admission, mechanical ventilation, and final outcome (recovery or death). Two thoracic radiologists reviewed all chest CTs to record the type and extent of pulmonary opacities, based on the percentage of each lobe involved, and the severity of respiratory motion artifacts. Thin-section CT images were processed with the prototype (Siemens Healthineers) to obtain quantitative features including lung volumes, volume and percentage of all-type and high-attenuation opacities (≥ -200 HU), and mean HU and standard deviation of opacities within a given lung region. These values were estimated for the total combined lung volume, and separately for each lung and each lung lobe. Multivariable analyses of variance (MANOVA) and multiple logistic regression were performed for data analyses. About 26% of chest CTs (62/241) had moderate to severe motion artifacts. There were no significant differences in the AUCs of quantitative features for predicting disease severity with and without motion artifacts (AUC 0.94-0.97), or for predicting patient outcome (AUC 0.7-0.77) (p > 0.5).
Combination of the volume of all-attenuation opacities and the percentage of high-attenuation opacities (AUC 0.76-0.82, 95% confidence interval (CI) 0.73-0.82) had higher AUC for predicting ICU admission than the subjective severity scores (AUC 0.69-0.77, 95% CI 0.69-0.81). Despite a high frequency of motion artifacts, quantitative features of pulmonary opacities from chest CT can help differentiate patients with favorable and adverse outcomes. | Journal of digital imaging | "2021-02-27T00:00:00" | [
"Fatemeh Homayounieh",
"Marcio Aloisio Bezerra Cavalcanti Rockenbach",
"Shadi Ebrahimian",
"Ruhani Doda Khera",
"Bernardo C Bizzo",
"Varun Buch",
"Rosa Babaei",
"Hadi Karimi Mobin",
"Iman Mohseni",
"Matthias Mitschke",
"Mathis Zimmermann",
"Felix Durlak",
"Franziska Rauch",
"Subba R Digumarthy",
"Mannudeep K Kalra"
] | 10.1007/s10278-021-00430-9
10.1503/cmaj.200715
10.1073/pnas.2004064117
10.1148/ryct.2020200047
10.2214/AJR.20.22976
10.1016/j.ejrad.2020.109041
10.1016/j.compbiomed.2020.103795 |
Monitoring social distancing under various low light conditions with deep learning and a single motionless time of flight camera. | The purpose of this work is to provide an effective social distance monitoring solution in low light environments in a pandemic situation. The raging coronavirus disease 2019 (COVID-19) caused by the SARS-CoV-2 virus has brought a global crisis with its deadly spread all over the world. In the absence of an effective treatment and vaccine, efforts to control this pandemic rely strictly on personal preventive actions, e.g., handwashing, face mask usage, environmental cleaning, and most importantly on social distancing, which is the only expedient approach to cope with this situation. Low light environments can contribute to the spread of disease through nighttime gatherings: the situation becomes especially critical in summer, when temperatures peak, and in cities where homes are congested and lack proper cross-ventilation, so people go out with their families at night for fresh air. In such a situation, it is necessary to take effective measures to monitor safety distance criteria to avoid more positive cases and to control the death toll. In this paper, a deep learning-based solution is proposed for the above-stated problem. The proposed framework utilizes the you only look once v4 (YOLO v4) model for real-time object detection, and a social distance measuring approach is introduced with a single motionless time of flight (ToF) camera. The risk factor is indicated based on the calculated distance, and safety distance violations are highlighted. Experimental results show that the proposed model exhibits good performance, with a 97.84% mean average precision (mAP) score, and the observed mean absolute error (MAE) between actual and measured social distance values is 1.01 cm. | PloS one | "2021-02-26T00:00:00" | [
"Adina Rahim",
"Ayesha Maqbool",
"Tauseef Rana"
] | 10.1371/journal.pone.0247440
10.3390/jcm9020596
10.3390/ijerph17082932
10.3390/app10144755
10.1016/j.epidem.2019.02.004
10.1109/TIP.2016.2639450
10.1016/j.patcog.2016.06.008
10.1109/TIP.2018.2810539
10.1109/TIP.2019.2910412
10.1016/j.neucom.2015.12.114
10.1016/S2468-2667(20)30073-6
10.12688/wellcomeopenres.15843.2
10.1016/j.patrec.2012.07.005
10.1016/j.puhe.2020.04.016
10.3390/s141121247
10.3390/s17051065
10.1016/j.snb.2019.01.063
10.3390/s140508895
10.1016/j.compbiomed.2020.103792
10.1109/TPAMI.2015.2437384
10.3354/cr030079
10.1016/j.cviu.2018.10.010
10.14569/IJACSA.2018.090103
10.1016/j.dss.2017.11.001
10.1080/01691864.2017.1365009
10.3390/mti2030047
10.1017/ATSIP.2014.4 |
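The social-distancing record above flags safety-distance violations from per-person positions estimated by a ToF camera. A sketch of that final distance-checking step (hypothetical geometry and threshold; not the paper's code):

```python
# Flag pairs of detected people closer than a safety threshold,
# given 3D centroids from a time-of-flight camera (illustration only).
import math

SAFE_DISTANCE_CM = 180.0  # assumed threshold, roughly six feet

def violations(positions_cm):
    """positions_cm: list of (x, y, z) centroids in centimetres."""
    flagged = []
    for i in range(len(positions_cm)):
        for j in range(i + 1, len(positions_cm)):
            d = math.dist(positions_cm[i], positions_cm[j])
            if d < SAFE_DISTANCE_CM:
                flagged.append((i, j, round(d, 1)))
    return flagged

# Three detected people; the first two stand only 1 m apart.
people = [(0, 0, 300), (100, 0, 300), (400, 0, 350)]
print(violations(people))
```

The advantage of a ToF camera in this step is that depth (the z coordinate) comes directly from the sensor, so true metric distances can be computed even in low light.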
A deep learning algorithm using CT images to screen for Corona virus disease (COVID-19). | The outbreak of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-COV-2) has caused more than 26 million cases of Corona virus disease (COVID-19) in the world so far. To control the spread of the disease, screening large numbers of suspected cases for appropriate quarantine and treatment is a priority. Pathogenic laboratory testing is typically the gold standard, but it bears the burden of significant false negativity, adding to the urgent need for alternative diagnostic methods to combat the disease. Based on COVID-19 radiographic changes in CT images, this study hypothesized that artificial intelligence methods might be able to extract specific graphical features of COVID-19 and provide a clinical diagnosis ahead of the pathogenic test, thus saving critical time for disease control.
We collected 1065 CT images of pathogen-confirmed COVID-19 cases along with those previously diagnosed with typical viral pneumonia. We modified the inception transfer-learning model to establish the algorithm, followed by internal and external validation.
The internal validation achieved a total accuracy of 89.5% with a specificity of 0.88 and sensitivity of 0.87. The external testing dataset showed a total accuracy of 79.3% with a specificity of 0.83 and sensitivity of 0.67. In addition, in 54 COVID-19 images, the first two nucleic acid test results were negative, and 46 were predicted as COVID-19 positive by the algorithm, with an accuracy of 85.2%.
These results demonstrate the proof-of-principle for using artificial intelligence to extract radiological features for timely and accurate COVID-19 diagnosis.
• The study evaluated the diagnostic performance of a deep learning algorithm using CT images to screen for COVID-19 during the influenza season. • As a screening method, our model achieved a relatively high sensitivity on internal and external CT image datasets. • The model was used to distinguish between COVID-19 and other typical viral pneumonia, both of which have quite similar radiologic characteristics. | European radiology | "2021-02-26T00:00:00" | [
"Shuai Wang",
"Bo Kang",
"Jinlu Ma",
"Xianjun Zeng",
"Mingming Xiao",
"Jia Guo",
"Mengjiao Cai",
"Jingyi Yang",
"Yaodong Li",
"Xiangfei Meng",
"Bo Xu"
] | 10.1007/s00330-021-07715-1
10.1111/tmi.13383
10.1007/s11517-019-01965-4
10.1515/ijb-2018-0060
10.14299/ijser.2020.03.02
10.1089/omi.2018.0097 |
SOM-LWL method for identification of COVID-19 on chest X-rays. | The outbreak of coronavirus disease 2019 (COVID-19) has had an immense impact on world health and daily life in many countries. Close monitoring of the initial site of infection in patients is crucial to gain control in the struggle with COVID-19. The early automated detection of the recent coronavirus disease (COVID-19) will help to limit its dissemination worldwide. Many initial studies have focused on the identification of the genetic material of coronavirus and have a poor detection rate for long-term surveillance. The first imaging procedure that played an important role in COVID-19 treatment was the chest X-ray. Radiological imaging is often used as a method that emphasizes the performance of chest X-rays. Recent findings indicate the presence of COVID-19 in patients with irregular findings on chest X-rays. There are many reports on this topic that include machine learning strategies for the identification of COVID-19 using chest X-rays. Other current studies have used non-public datasets and complex artificial intelligence (AI) systems. In our research, we suggested a new COVID-19 identification technique based on the locality-weighted learning and self-organization map (LWL-SOM) strategy for detecting and capturing COVID-19 cases. We first grouped images from chest X-ray datasets based on their similar features in different clusters using the SOM strategy in order to discriminate between the COVID-19 and non-COVID-19 cases. Then, we built our intelligent learning model based on the LWL algorithm to diagnose and detect COVID-19 cases. The proposed SOM-LWL model improved the correlation coefficient performance results between the Covid19, no-finding, and pneumonia cases; pneumonia and no-finding cases; Covid19 and pneumonia cases; and Covid19 and no-finding cases from 0.9613 to 0.9788, 0.6113 to 1, 0.8783 to 0.9999, and 0.8894 to 1, respectively.
The proposed LWL-SOM had better results for discriminating COVID-19 and non-COVID-19 patients than the current machine learning-based solutions using AI evaluation measures. | PloS one | "2021-02-25T00:00:00" | [
"Ahmed Hamza Osman",
"Hani Moetque Aljahdali",
"Sultan Menwer Altarrazi",
"Ali Ahmed"
] | 10.1371/journal.pone.0247176
10.1002/jmv.25678
10.1016/j.compbiomed.2020.103792
10.1016/S1473-3099(20)30086-4
10.1148/radiol.2020200490
10.1148/radiol.2020200343
10.1016/S1473-3099(20)30134-1
10.1148/radiol.2020200463
10.2214/AJR.20.22954
10.1016/S0140-6736(20)30154-9
10.2214/AJR.20.22976
10.1016/j.chaos.2020.110337
10.1146/annurev-bioeng-071516-044442
10.1016/j.cmpb.2018.04.005
10.1016/j.compbiomed.2020.103726
10.1007/s00521-020-05410-8
10.2174/1573405617999210112193220
10.1007/s00521-020-05636-6
10.1016/j.compbiomed.2018.09.009
10.1016/j.compbiomed.2017.08.022
10.1016/j.compmedimag.2019.101673
10.1016/j.cmpb.2019.06.005
10.3390/v12070769
10.1007/s13246-020-00865-4
10.1371/journal.pone.0239474
10.1371/journal.pone.0235187
10.1109/72.846729
10.1101/gr.634603
10.1016/j.neunet.2012.09.018 |
CoVNet-19: A Deep Learning model for the detection and analysis of COVID-19 patients. | In the ongoing fight with the novel coronavirus, quick treatment and rapid diagnosis reports have become a high priority. With millions getting infected daily and a fatality rate of 2%, our motive is to contribute to solving this real-world problem by developing a significant and substantial method for diagnosing COVID-19 patients.
The exponential growth of COVID-19 cases worldwide has severely affected the health care systems of highly populated countries due to the proportionally smaller number of medical practitioners, testing kits, and other resources, making it essential to identify infected people. Addressing the above problems, the purpose of this paper is to formulate an accurate, efficient, and time-saving method for detecting positive corona patients.
In this paper, an Ensemble Deep Convolutional Neural Network model, "CoVNet-19", is proposed that can unveil important diagnostic characteristics to find COVID-19-infected patients using chest X-ray images and help radiologists and medical experts to fight this pandemic.
The experimental results clearly show that the overall classification accuracy obtained with the proposed approach for three-class classification among COVID-19, Pneumonia, and Normal is 98.28%, with an average precision and recall of 98.33% each. Besides this, for binary classification between Non-COVID and COVID chest X-ray images, an overall accuracy of 99.71% was obtained.
Having a high diagnostic accuracy, our proposed ensemble Deep Learning classification model can be a productive and substantial contribution to detecting COVID-19 infected patients. | Applied soft computing | "2021-02-23T00:00:00" | [
"Priyansh Kedia",
"Anjum",
"Rahul Katarya"
] | 10.1016/j.asoc.2021.107184
10.1002/jmv.25681
10.1016/j.ijsu.2020.02.034
10.1016/j.ijsu.2020.04.001
10.1007/s13246-020-00865-4
10.1016/j.cmpb.2020.105581
10.20944/preprints202003.0300.v1
10.1016/j.irbm.2020.05.003
10.1109/CVPR.2017.243
10.1007/978-1-4842-4470-8_7
10.1007/s11263-019-01228-7 |
A rapid screening classifier for diagnosing COVID-19. | International journal of biological sciences | "2021-02-23T00:00:00" | [
"Yang Xia",
"Weixiang Chen",
"Hongyi Ren",
"Jianping Zhao",
"Lihua Wang",
"Rui Jin",
"Jiesen Zhou",
"Qiyuan Wang",
"Fugui Yan",
"Bin Zhang",
"Jian Lou",
"Shaobin Wang",
"Xiaomeng Li",
"Jie Zhou",
"Liming Xia",
"Cheng Jin",
"Jianjiang Feng",
"Wen Li",
"Huahao Shen"
] | 10.7150/ijbs.53982 |
|
Codeless Deep Learning of COVID-19 Chest X-Ray Image Dataset with KNIME Analytics Platform. | This paper proposes a method for computer-assisted diagnosis of coronavirus disease 2019 (COVID-19) through chest X-ray imaging, using a deep learning model built without writing a single line of code in the Konstanz Information Miner (KNIME) analytics platform.
We obtained 155 samples of posteroanterior chest X-ray images from COVID-19 open dataset repositories to develop a classification model using a simple convolutional neural network (CNN). All of the images contained diagnostic information for COVID-19 and other diseases. The model would classify whether a patient was infected with COVID-19 or not. Eighty percent of the images were used for model training, and the rest were used for testing. The graphic user interface-based programming in the KNIME enabled class label annotation, data preprocessing, CNN model training and testing, performance evaluation, and so on.
Training was performed for 1,000 epochs to test the simple CNN model. The positive predictive value (precision), sensitivity (recall), specificity, and F-measure all fell between 92.3% and 94.4%. The model's accuracy was 93.5%, and the area under the receiver operating characteristic curve was 96.6% for the test set.
In this study, a researcher without basic knowledge of Python programming independently performed a deep learning analysis of a chest X-ray image dataset using KNIME. KNIME reduces the time spent on, and lowers the threshold for, deep learning research applied to healthcare. | Healthcare informatics research | "2021-02-22T00:00:00" | [
"Jun Young An",
"Hoseok Seo",
"Young-Gon Kim",
"Kyu Eun Lee",
"Sungwan Kim",
"Hyoun-Joong Kong"
] | 10.4258/hir.2021.27.1.82
10.1101/2020.08.20.20178913 |
Hybrid ensemble model for differential diagnosis between COVID-19 and common viral pneumonia by chest X-ray radiograph. | Chest X-ray radiography (CXR) has been widely considered as an accessible, feasible, and convenient method to evaluate suspected patients' lung involvement during the COVID-19 pandemic. However, with the escalating number of suspected cases, traditional diagnosis via CXR fails to deliver results within a short period of time. Therefore, it is crucial to employ artificial intelligence (AI) to enhance CXRs for obtaining quick and accurate diagnoses. Previous studies have reported the feasibility of utilizing deep learning methods to screen for COVID-19 using CXR and CT results. However, these models only use a single deep learning network for chest radiograph detection; the accuracy of this approach required further improvement.
In this study, we propose a three-step hybrid ensemble model, including a feature extractor, a feature selector, and a classifier. First, a pre-trained AlexNet with an improved structure extracts the original image features. Then, the ReliefF algorithm is adopted to sort the extracted features, and a trial-and-error approach is used to select the n most important features to reduce the feature dimension. Finally, an SVM classifier provides classification results based on the n selected features.
Compared to five existing models (InceptionV3: 97.916 ± 0.408%; SqueezeNet: 97.189 ± 0.526%; VGG19: 96.520 ± 1.220%; ResNet50: 97.476 ± 0.513%; ResNet101: 98.241 ± 0.209%), the proposed model demonstrated the best performance in terms of overall accuracy rate (98.642 ± 0.398%). Additionally, compared to the existing models, the proposed model demonstrates a considerable improvement in classification time efficiency (SqueezeNet: 6.602 ± 0.001s; InceptionV3: 12.376 ± 0.002s; ResNet50: 10.952 ± 0.001s; ResNet101: 18.040 ± 0.002s; VGG19: 16.632 ± 0.002s; proposed model: 5.917 ± 0.001s).
The model proposed in this article is practical and effective, and can provide high-precision COVID-19 CXR detection. We demonstrated its suitability to aid medical professionals in distinguishing normal CXRs, viral pneumonia CXRs and COVID-19 CXRs efficiently on small sample sizes. | Computers in biology and medicine | "2021-02-21T00:00:00" | [
"Weiqiu Jin",
"Shuqin Dong",
"Changzi Dong",
"Xiaodan Ye"
] | 10.1016/j.compbiomed.2021.104252
10.1007/s11547-020-01232-9
10.1007/s00330-020-06967-7
10.1016/j.ins.2020.09.041
10.1016/j.compbiomed.2020.103792
10.1007/s00138-020-01128-8
10.1016/j.patrec.2020.10.001
10.1016/j.chaos.2020.110153
10.1016/j.chaos.2020.110170
10.1016/j.media.2020.101794
10.1038/s41598-020-76550-z
10.3390/jpm10040213
10.1016/j.chaos.2020.110245
10.1016/j.bspc.2020.102365
10.1016/j.ipm.2020.102411
10.1007/978-3-030-13969-8_18
10.1016/j.jocs.2018.11.008
10.1016/j.mehy.2020.109577
10.1016/j.jbi.2018.07.014
10.1007/s13246-020-00865-4
10.1007/s10489-020-01829-7 |
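The hybrid-ensemble record above ranks extracted features with ReliefF before feeding the top n to an SVM. As a hedged illustration of the idea, here is the simpler binary Relief scoring rule (a relative of ReliefF, not the paper's implementation; assumes binary labels and features scaled to [0, 1], with invented data):

```python
# Simplified Relief feature scoring (illustration only).

def relief_scores(X, y, n_features):
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    w = [0.0] * n_features
    for i, (xi, yi) in enumerate(zip(X, y)):
        hits = [x for j, (x, yj) in enumerate(zip(X, y))
                if j != i and yj == yi]
        misses = [x for x, yj in zip(X, y) if yj != yi]
        near_hit = min(hits, key=lambda h: dist(xi, h))
        near_miss = min(misses, key=lambda m: dist(xi, m))
        for f in range(n_features):
            # A feature is informative if it differs across classes
            # (nearest miss) but agrees within a class (nearest hit).
            w[f] += abs(xi[f] - near_miss[f]) - abs(xi[f] - near_hit[f])
    return [wf / len(X) for wf in w]

# Feature 0 separates the classes; feature 1 is noise.
X = [[0.0, 0.5], [0.1, 0.4], [0.9, 0.5], [1.0, 0.6]]
y = [0, 0, 1, 1]
print(relief_scores(X, y, 2))
```

The pipeline then keeps the n highest-scoring features (chosen by trial and error in the paper) and trains the SVM classifier on them; that step is omitted here.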
CovidCTNet: an open-source deep learning approach to diagnose covid-19 using small cohort of CT images. | Coronavirus disease 2019 (Covid-19) is highly contagious with limited treatment options. Early and accurate diagnosis of Covid-19 is crucial in reducing the spread of the disease and its accompanied mortality. Currently, detection by reverse transcriptase-polymerase chain reaction (RT-PCR) is the gold standard of outpatient and inpatient detection of Covid-19. RT-PCR is a rapid method; however, its accuracy in detection is only ~70-75%. Another approved strategy is computed tomography (CT) imaging. CT imaging has a much higher sensitivity of ~80-98%, but similar accuracy of 70%. To enhance the accuracy of CT imaging detection, we developed an open-source framework, CovidCTNet, composed of a set of deep learning algorithms that accurately differentiates Covid-19 from community-acquired pneumonia (CAP) and other lung diseases. CovidCTNet increases the accuracy of CT imaging detection to 95% compared to radiologists (70%). CovidCTNet is designed to work with heterogeneous and small sample sizes independent of the CT imaging hardware. To facilitate the detection of Covid-19 globally and assist radiologists and physicians in the screening process, we are releasing all algorithms and model parameter details as open-source. Open-source sharing of CovidCTNet enables developers to rapidly improve and optimize services while preserving user privacy and data ownership. | NPJ digital medicine | "2021-02-20T00:00:00" | [
"Tahereh Javaheri",
"Morteza Homayounfar",
"Zohreh Amoozgar",
"Reza Reiazi",
"Fatemeh Homayounieh",
"Engy Abbas",
"Azadeh Laali",
"Amir Reza Radmard",
"Mohammad Hadi Gharib",
"Seyed Ali Javad Mousavi",
"Omid Ghaemi",
"Rosa Babaei",
"Hadi Karimi Mobin",
"Mehdi Hosseinzadeh",
"Rana Jahanban-Esfahlan",
"Khaled Seidi",
"Mannudeep K Kalra",
"Guanglan Zhang",
"L T Chitkushev",
"Benjamin Haibe-Kains",
"Reza Malekzadeh",
"Reza Rawassizadeh"
] | 10.1038/s41746-021-00399-3
10.1126/science.1176062
10.3390/jcm9020580
10.2807/1560-7917.ES.2020.25.4.2000058
10.1016/S0140-6736(03)14630-2
10.1016/j.idc.2010.04.009
10.1371/journal.pone.0230548
10.1148/radiol.2020200230
10.1016/S1470-2045(19)30739-9
10.1038/s41598-019-48995-4
10.1038/s41598-019-55972-4
10.1038/s41591-019-0447-x
10.1016/j.jacr.2017.12.027
10.1117/1.JMI.3.4.044506
10.1145/325165.325247
10.1016/j.patcog.2018.07.031
10.1126/science.aba4456 |
Assisting scalable diagnosis automatically via CT images in the combat against COVID-19. | The pandemic of Coronavirus Disease 2019 (COVID-19) is causing enormous loss of life globally. Prompt case identification is critical. The reference method is the real-time reverse transcription PCR (RT-PCR) assay, whose limitations may curb its prompt large-scale application. COVID-19 manifests with chest computed tomography (CT) abnormalities, some even before the onset of symptoms. We tested the hypothesis that the application of deep learning (DL) to 3D CT images could help identify COVID-19 infections. Using data from 920 COVID-19 and 1,073 non-COVID-19 pneumonia patients, we developed a modified DenseNet-264 model, COVIDNet, to classify CT images to either class. When tested on an independent set of 233 COVID-19 and 289 non-COVID-19 pneumonia patients, COVIDNet achieved an accuracy rate of 94.3% and an area under the curve of 0.98. As of March 23, 2020, the COVIDNet system had been used 11,966 times with a sensitivity of 91.12% and a specificity of 88.50% in six hospitals with PCR confirmation. Application of DL to CT images may improve both efficiency and capacity of case detection and long-term surveillance. | Scientific reports | "2021-02-20T00:00:00" | [
"Bohan Liu",
"Pan Liu",
"Lutao Dai",
"Yanlin Yang",
"Peng Xie",
"Yiqing Tan",
"Jicheng Du",
"Wei Shan",
"Chenghui Zhao",
"Qin Zhong",
"Xixiang Lin",
"Xizhou Guan",
"Ning Xing",
"Yuhui Sun",
"Wenjun Wang",
"Zhibing Zhang",
"Xia Fu",
"Yanqing Fan",
"Meifang Li",
"Na Zhang",
"Lin Li",
"Yaou Liu",
"Lin Xu",
"Jingbo Du",
"Zhenhua Zhao",
"Xuelong Hu",
"Weipeng Fan",
"Rongpin Wang",
"Chongchong Wu",
"Yongkang Nie",
"Liuquan Cheng",
"Lin Ma",
"Zongren Li",
"Qian Jia",
"Minchao Liu",
"Huayuan Guo",
"Gao Huang",
"Haipeng Shen",
"Liang Zhang",
"Peifang Zhang",
"Gang Guo",
"Hao Li",
"Weimin An",
"Jianxin Zhou",
"Kunlun He"
] | 10.1038/s41598-021-83424-5
10.1056/NEJMoa2001017
10.1056/NEJMoa2001316
10.1002/jmv.25678
10.1016/S0140-6736(20)30154-9
10.1136/bmj.m1090
10.1126/science.abb5793
10.1148/radiol.2020200343
10.1148/radiol.2020200432
10.1148/radiol.2020200642
10.1148/ryct.2020200034
10.1016/S1473-3099(20)30086-4
10.1016/S2589-7500(19)30058-5
10.1016/S2589-7500(19)30159-1
10.1016/S2589-7500(20)30025-X
10.1016/S2213-2600(20)30003-5
10.1148/radiol.2020200236
10.1148/radiol.2020200905
10.1016/S2213-2600(18)30286-8
10.1056/NEJMp2015897
10.1093/nsr/nwaa036 |
Correlation between lung infection severity and clinical laboratory indicators in patients with COVID-19: a cross-sectional study based on machine learning. | Coronavirus disease 2019 (COVID-19) has caused a global pandemic that has raised worldwide concern. This study aims to investigate the correlation between the extent of lung infection and relevant clinical laboratory testing indicators in COVID-19 and to analyse its underlying mechanism.
Chest high-resolution computed tomography (CT) images and laboratory examination data of 31 patients with COVID-19 were extracted, and the lesion areas in CT images were quantitatively segmented and calculated using a deep learning (DL) system. A cross-sectional study method was carried out to explore the differences among the proportions of lung lobe infection and to correlate the percentage of infection (POI) of the whole lung in all patients with clinical laboratory examination values.
No significant difference in the proportion of infection was noted among various lung lobes (P > 0.05). The POI of total lung was negatively correlated with the peripheral blood lymphocyte percentage (L%) (r = - 0.633, P < 0.001) and lymphocyte (LY) count (r = - 0.555, P = 0.001) but positively correlated with the neutrophil percentage (N%) (r = 0.565, P = 0.001). Otherwise, the POI was not significantly correlated with the peripheral blood white blood cell (WBC) count, monocyte percentage (M%) or haemoglobin (HGB) content. In some patients, as the infection progressed, the L% and LY count decreased progressively accompanied by a continuous increase in the N%.
Lung lesions in COVID-19 patients are significantly correlated with the peripheral blood lymphocyte and neutrophil levels, both of which could serve as prognostic indicators that provide warning implications, and contribute to clinical interventions in patients. | BMC infectious diseases | "2021-02-20T00:00:00" | [
"XingruiWang",
"QinglinChe",
"XiaoxiaoJi",
"XinyiMeng",
"LangZhang",
"RongrongJia",
"HairongLyu",
"WeixianBai",
"LingjieTan",
"YanjunGao"
] | 10.1186/s12879-021-05839-9
10.1016/j.micinf.2020.01.004
10.1148/radiol.2020200432
10.1038/s41568-018-0016-5
10.1016/S0140-6736(20)30183-5
10.1016/S2213-2600(20)30076-X
10.1002/jmv.25709
10.1038/s41564-020-0688-y
10.1002/path.1570
10.1002/1521-4141(200209)32:9<2490::AID-IMMU2490>3.0.CO;2-G
10.4049/jimmunol.176.7.4284
10.1016/j.micinf.2005.06.007
10.1002/path.1711260307
10.7326/0003-4819-84-3-304
10.1016/0091-6749(78)90200-2
10.1016/S0140-6736(20)30317-2
10.1378/chest.129.6.1441
10.1016/S2213-2600(19)30417-5
10.1038/ni762
10.1111/j.1365-2249.2004.02415.x
10.1111/j.1440-1843.2007.01102.x
10.1164/rccm.201203-0508OC |
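The correlations reported in the abstract above (e.g. r = -0.633 between the percentage of infection and the lymphocyte percentage) are Pearson coefficients. A minimal pure-Python sketch of the computation, using invented values rather than the study's data:

```python
import math

# Minimal sketch (invented values, not the study's data): the Pearson
# correlation coefficient r used to relate the percentage of infection
# (POI) to laboratory indicators such as lymphocyte percentage.

def pearson_r(x, y):
    """Pearson r between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

poi = [5.0, 12.0, 20.0, 35.0, 50.0]   # hypothetical % of lung infected
lym = [30.0, 26.0, 22.0, 15.0, 9.0]   # hypothetical lymphocyte %
print(round(pearson_r(poi, lym), 3))  # strongly negative (close to -1)
```

A negative r here mirrors the paper's finding that lymphocyte levels fall as the infected fraction of the lung grows.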
Deep Learning-Based Measurement of Total Plaque Area in B-Mode Ultrasound Images. | Measurement of total-plaque-area (TPA) is important for determining long term risk for stroke and monitoring carotid plaque progression. Since delineation of carotid plaques is required, a deep learning method can provide automatic plaque segmentations and TPA measurements; however, it requires large datasets and manual annotations for training with unknown performance on new datasets. A UNet++ ensemble algorithm was proposed to segment plaques from 2D carotid ultrasound images, trained on three small datasets (n = 33, 33, 34 subjects) and tested on 44 subjects from the SPARC dataset (n = 144, London, Canada). The ensemble was also trained on the entire SPARC dataset and tested with a different dataset (n = 497, Zhongnan Hospital, China). Algorithm and manual segmentations were compared using Dice-similarity-coefficient (DSC), and TPAs were compared using the difference ( ∆TPA), Pearson correlation coefficient (r) and Bland-Altman analyses. Segmentation variability was determined using the intra-class correlation coefficient (ICC) and coefficient-of-variation (CoV). For 44 SPARC subjects, algorithm DSC was 83.3-85.7%, and algorithm TPAs were strongly correlated (r = 0.985-0.988; p < 0.001) with manual results with marginal biases (0.73-6.75) mm | IEEE journal of biomedical and health informatics | "2021-02-19T00:00:00" | [
"RanZhou",
"FuminGuo",
"M RezaAzarpazhooh",
"SaminehHashemi",
"XinyaoCheng",
"J DavidSpence",
"MingyueDing",
"AaronFenster"
] | 10.1109/JBHI.2021.3060163 |
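The DSC figures above compare algorithm and manual plaque outlines. As a minimal illustration (pure Python with toy 0/1 masks, not ultrasound data), the Dice similarity coefficient for binary masks is:

```python
# Minimal sketch (not the paper's code): Dice similarity coefficient
# between two binary segmentation masks, the overlap metric used to
# compare algorithm and manual segmentations. Toy 0/1 masks only.

def dice(a, b):
    """DSC = 2|A∩B| / (|A| + |B|) for equal-length binary masks."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect match

manual = [0, 1, 1, 1, 0, 0, 1, 0]  # hypothetical manual annotation
auto   = [0, 1, 1, 0, 0, 0, 1, 1]  # hypothetical algorithm output
print(dice(manual, auto))  # 0.75
```

A DSC of 1.0 means perfect overlap; the 83-86% values reported above indicate close but not exact agreement with the manual outlines.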
JCS: An Explainable COVID-19 Diagnosis System by Joint Classification and Segmentation. | Recently, the coronavirus disease 2019 (COVID-19) has caused a pandemic in over 200 countries, affecting billions of people. To control the infection, identifying and separating the infected people is the most crucial step. The main diagnostic tool is the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. Still, the sensitivity of the RT-PCR test is not high enough to effectively prevent the pandemic. The chest CT scan test provides a valuable complementary tool to the RT-PCR test, and it can identify patients in the early stage with high sensitivity. However, the chest CT scan test is usually time-consuming, requiring about 21.5 minutes per case. This paper develops a novel Joint Classification and Segmentation (JCS) system to perform real-time and explainable COVID-19 chest CT diagnosis. To train our JCS system, we construct a large-scale COVID-19 Classification and Segmentation (COVID-CS) dataset, with 144,167 chest CT images of 400 COVID-19 patients and 350 uninfected cases. 3,855 chest CT images of 200 patients are annotated with fine-grained pixel-level labels of opacifications, which are increased attenuation of the lung parenchyma. We have also annotated lesion counts, opacification areas, and locations, thus benefiting various diagnosis aspects. Extensive experiments demonstrate that the proposed JCS diagnosis system is very efficient for COVID-19 classification and segmentation. It obtains an average sensitivity of 95.0% and a specificity of 93.0% on the classification test set, and a 78.5% Dice score on the segmentation test set of our COVID-CS dataset. The COVID-CS dataset and code are available at https://github.com/yuhuan-wu/JCS. | IEEE transactions on image processing : a publication of the IEEE Signal Processing Society | "2021-02-19T00:00:00" | [
"Yu-HuanWu",
"Shang-HuaGao",
"JieMei",
"JunXu",
"Deng-PingFan",
"Rong-GuoZhang",
"Ming-MingCheng"
] | 10.1109/TIP.2021.3058783 |
A deep learning integrated radiomics model for identification of coronavirus disease 2019 using computed tomography. | Since its first outbreak, Coronavirus Disease 2019 (COVID-19) has been rapidly spreading worldwide and caused a global pandemic. Rapid and early detection is essential to contain COVID-19. Here, we first developed a deep learning (DL) integrated radiomics model for end-to-end identification of COVID-19 using CT scans and then validated its clinical feasibility. We retrospectively collected CT images of 386 patients (129 with COVID-19 and 257 with other community-acquired pneumonia) from three medical centers to train and externally validate the developed models. A pre-trained DL algorithm was utilized to automatically segment infected lesions (ROIs) on CT images which were used for feature extraction. Five feature selection methods and four machine learning algorithms were utilized to develop radiomics models. Trained with features selected by L1 regularized logistic regression, classifier multi-layer perceptron (MLP) demonstrated the optimal performance with AUC of 0.922 (95% CI 0.856-0.988) and 0.959 (95% CI 0.910-1.000), the same sensitivity of 0.879, and specificity of 0.900 and 0.887 on internal and external testing datasets, which was equivalent to the senior radiologist in a reader study. Additionally, diagnostic time of DL-MLP was more efficient than radiologists (38 s vs 5.15 min). With an adequate performance for identifying COVID-19, DL-MLP may help in screening of suspected cases. | Scientific reports | "2021-02-18T00:00:00" | [
"XiaoguoZhang",
"DaweiWang",
"JiangShao",
"SongTian",
"WeixiongTan",
"YanMa",
"QingnanXu",
"XiaomanMa",
"DashengLi",
"JunChai",
"DingjunWang",
"WenwenLiu",
"LingboLin",
"JiangfenWu",
"ChenXia",
"ZhongfaZhang"
] | 10.1038/s41598-021-83237-6
10.3348/kjr.2020.0146
10.3348/kjr.2020.0195
10.1016/j.ejrad.2020.108961
10.1016/j.acra.2020.09.004
10.1038/s41467-020-18685-1
10.7150/thno.46428
10.1007/s00330-020-07032-z
10.21037/atm-20-3026
10.1136/bmj.m1328
10.1038/nrclinonc.2017.141
10.1016/j.canlet.2017.06.004
10.3389/fonc.2016.00071
10.1038/srep13087
10.1016/j.compbiomed.2020.104037
10.5152/dir.2019.19321
10.3389/fnhum.2015.00353
10.1148/radiology.143.1.7063747
10.7189/jogh.10.010347
10.1016/j.tmaid.2020.101627
10.1016/j.rmed.2020.105980
10.1038/s41467-020-17971-2
10.1016/j.ejrad.2020.109041
10.1007/s11548-020-02286-w
10.1007/s11548-013-0913-8
10.1016/j.ejrad.2020.109402
10.1038/s41598-020-76282-0 |
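AUC values like the 0.922 and 0.959 reported in the abstract above can be computed directly from raw classifier scores via the Mann-Whitney formulation of the ROC area. A small pure-Python sketch with invented labels and scores:

```python
# Minimal sketch (illustrative, not the study's code): ROC AUC from
# raw classifier scores via the Mann-Whitney formulation -- the AUC
# equals the probability that a random positive outscores a random
# negative (ties counted as half).

def auc(labels, scores):
    """labels: 1 = COVID-19, 0 = other pneumonia; scores: model outputs."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]               # invented labels
s = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]   # invented scores
print(round(auc(y, s), 3))  # 0.889
```

Unlike sensitivity and specificity, the AUC summarizes performance over every possible decision threshold, which is why the abstracts report it alongside the fixed operating point.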
A novel multiple instance learning framework for COVID-19 severity assessment via data augmentation and self-supervised learning. | How to quickly and accurately assess the severity level of COVID-19 is an essential problem when millions of people around the world are suffering from the pandemic. Currently, the chest CT is regarded as a popular and informative imaging tool for COVID-19 diagnosis. However, we observe that there are two issues - weak annotation and insufficient data - that may obstruct automatic COVID-19 severity assessment with CT images. To address these challenges, we propose a novel three-component method, i.e., 1) a deep multiple instance learning component with instance-level attention to jointly classify the bag and also weigh the instances, 2) a bag-level data augmentation component to generate virtual bags by reorganizing high-confidence instances, and 3) a self-supervised pretext component to aid the learning process. We have systematically evaluated our method on the CT images of 229 COVID-19 cases, including 50 severe and 179 non-severe cases. Our method could obtain an average accuracy of 95.8%, with 93.6% sensitivity and 96.4% specificity, which outperformed previous works. | Medical image analysis | "2021-02-16T00:00:00" | [
"ZekunLi",
"WeiZhao",
"FengShi",
"LeiQi",
"XingzhiXie",
"YingWei",
"ZhongxiangDing",
"YangGao",
"ShangjieWu",
"JunLiu",
"YinghuanShi",
"DinggangShen"
] | 10.1016/j.media.2021.101978 |
DON: Deep Learning and Optimization-Based Framework for Detection of Novel Coronavirus Disease Using X-ray Images. | In the hospital, a limited number of COVID-19 test kits are available due to the spike in cases every day. For this reason, a rapid alternative diagnostic option should be introduced as an automated detection method to prevent COVID-19 spreading among individuals. This article proposes multi-objective optimization and a deep-learning methodology for the detection of infected coronavirus patients with X-rays. The J48 decision tree method classifies the deep characteristics of affected X-ray images to detect contaminated patients effectively. Eleven different convolutional neural network-based (CNN) models were developed in this study to detect infected patients with coronavirus pneumonia using X-ray images (AlexNet, VGG16, VGG19, GoogleNet, ResNet18, ResNet50, ResNet101, InceptionV3, InceptionResNetV2, DenseNet201 and XceptionNet). In addition, the parameters of the CNN deep learning model are tuned using a multi-objective emperor penguin optimizer (MOEPO). A broad review reveals that the proposed model can categorise the X-ray images at the correct rates of precision, accuracy, recall, specificity and F1-score. Extensive test results show that the proposed model outperforms competitive models with well-known efficiency metrics. The proposed model is, therefore, useful for the real-time classification of X-ray chest images of COVID-19 disease. | Interdisciplinary sciences, computational life sciences | "2021-02-16T00:00:00" | [
"GauravDhiman",
"VVinoth Kumar",
"AmandeepKaur",
"AshutoshSharma"
] | 10.1007/s12539-021-00418-7
10.1101/2020.02.27.20028027
10.1016/S0140-6736(20)30183-5
10.1136/bmj.m641
10.1148/radiol.2020200642
10.1101/2020.02.14.20023028
10.1016/j.ejrnm.2015.11.004
10.1016/S0140-6736(20)30183-5
10.1016/S0140-6736(20)30154-9
10.3390/jcm9020388
10.3390/jcm9020419
10.3390/jcm9020462
10.3390/jcm9020498
10.3390/jcm9020523
10.1016/j.engappai.2020.104008
10.1016/j.advengsoft.2017.05.014
10.1016/j.knosys.2018.03.011
10.1016/j.knosys.2018.11.024
10.1016/j.engappai.2019.03.021
10.1016/j.engappai.2020.103541
10.1016/j.knosys.2018.06.001
10.1016/j.compbiomed.2019.103387
10.1016/j.patrec.2020.03.011 |
Current limitations to identify COVID-19 using artificial intelligence with chest X-ray imaging. | The scientific community has joined forces to mitigate the scope of the current COVID-19 pandemic. The early identification of the disease, as well as the evaluation of its evolution is a primary task for the timely application of medical protocols. The use of medical images of the chest provides valuable information to specialists. Specifically, chest X-ray images have been the focus of many investigations that apply artificial intelligence techniques for the automatic classification of this disease. The results achieved to date on the subject are promising. However, some results of these investigations contain errors that must be corrected to obtain appropriate models for clinical use. This research discusses some of the problems found in the current scientific literature on the application of artificial intelligence techniques in the automatic classification of COVID-19. It is evident that in most of the reviewed works an incorrect evaluation protocol is applied, which leads to overestimating the results. | Health and technology | "2021-02-16T00:00:00" | [
"José DanielLópez-Cabrera",
"RubénOrozco-Morales",
"Jorge ArmandoPortal-Diaz",
"OrlandoLovelle-Enríquez",
"MarlénPérez-Díaz"
] | 10.1007/s12553-021-00520-2
10.1016/j.ijantimicag.2020.105924
10.1016/j.mehy.2020.109689
10.1016/j.ijid.2020.03.071
10.1016/j.cca.2020.03.009
10.1148/radiol.2020200642
10.1148/radiol.2020200527
10.1007/s13246-020-00865-4
10.1109/ACCESS.2020.3010287
10.1016/j.imu.2020.100412
10.1016/S2589-7500(20)30079-0
10.1016/j.jiph.2020.06.028
10.15212/bioi-2020-0015
10.1109/JBHI.2020.3037127
10.1109/ACCESS.2018.2830661
10.1371/journal.pmed.1002683
10.1148/ryai.2019180031
10.2196/19673
10.1148/radiol.2020200847
10.1148/ryct.2020200034
10.1016/j.mehy.2020.109761
10.1016/j.compbiomed.2020.103805
10.1109/ACCESS.2020.2990893
10.1148/ryai.2020200053
10.1371/journal.pone.0235187
10.1016/j.bbe.2020.08.008
10.3892/etm.2020.8797
10.1016/S0734-189X(87)80186-X
10.1007/s11263-015-0816-y
10.1016/j.cell.2018.02.010
10.1148/ryai.2019180041
10.1016/j.media.2020.101797 |
Transfer learning for establishment of recognition of COVID-19 on CT imaging using small-sized training datasets. | Coronavirus disease 2019 (COVID-19) has been spreading rapidly worldwide since the end of 2019 and has become a challenging global pandemic. As of 27 May 2020, it had infected more than 5.6 million individuals throughout the world and caused more than 348,145 deaths. Hospitals have tried CT image-based classification techniques to identify COVID-19, aiming to minimize the possibility of virus transmission and alleviate the burden on clinicians and radiologists. Early diagnosis of COVID-19 not only prevents the disease from spreading further but also allows a more reasonable allocation of limited medical resources. CT images therefore play an essential role in identifying cases of COVID-19 that are in great need of intensive clinical care. Unfortunately, the current public health emergency has caused great difficulties in collecting a large set of precise data for training neural networks. To tackle this challenge, our first thought is transfer learning, a technique that aims to transfer knowledge from one or more source tasks to a target task when the latter has fewer training data. Since the training data are relatively limited, a transfer learning-based DenseNet-121 approach for the identification of COVID-19 is established. The proposed method is inspired by prior work such as CheXNet for identifying common pneumonia, which was trained on the large ChestX-ray14 dataset of 112,120 frontal chest X-rays individually labeled with 14 different chest diseases (including pneumonia) and achieved good performance. CheXNet was therefore used as the pre-trained network for the target task (COVID-19 classification) by fine-tuning the network weights on the small-sized dataset of the target task.
Finally, we evaluated the proposed method on the COVID-19-CT dataset. Experimentally, our method achieves state-of-the-art accuracy (ACC) and F1-score. The quantitative indicators show that the proposed method, using only a single GPU, reaches the best performance, up to 0.87 and 0.86, respectively, compared with some widely used and recent deep learning methods, which is helpful for COVID-19 diagnosis and patient triage. The codes used in this manuscript are publicly available on GitHub at (https://github.com/lichun0503/CT-Classification). | Knowledge-based systems | "2021-02-16T00:00:00" | [
"ChunLi",
"YunyunYang",
"HuiLiang",
"BoyingWu"
] | 10.1016/j.knosys.2021.106849 |
An Uncertainty-Aware Transfer Learning-Based Framework for COVID-19 Diagnosis. | The early and reliable detection of COVID-19 infected patients is essential to prevent and limit its outbreak. The PCR tests for COVID-19 detection are not available in many countries, and also, there are genuine concerns about their reliability and performance. Motivated by these shortcomings, this article proposes a deep uncertainty-aware transfer learning framework for COVID-19 detection using medical images. Four popular convolutional neural networks (CNNs), including VGG16, ResNet50, DenseNet121, and InceptionResNetV2, are first applied to extract deep features from chest X-ray and computed tomography (CT) images. Extracted features are then processed by different machine learning and statistical modeling techniques to identify COVID-19 cases. We also calculate and report the epistemic uncertainty of classification results to identify regions where the trained models are not confident about their decisions (out of distribution problem). Comprehensive simulation results for X-ray and CT image data sets indicate that linear support vector machine and neural network models achieve the best results as measured by accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC). Also, it is found that predictive uncertainty estimates are much higher for CT images compared to X-ray images. | IEEE transactions on neural networks and learning systems | "2021-02-12T00:00:00" | [
"AfsharShamsi",
"HamzehAsgharnezhad",
"Shirin ShamsiJokandan",
"AbbasKhosravi",
"Parham MKebria",
"DariusNahavandi",
"SaeidNahavandi",
"DiptiSrinivasan"
] | 10.1109/TNNLS.2021.3054306 |
Artificial intelligence and cardiac surgery during COVID-19 era. | The coronavirus disease 2019 (COVID-19) pandemic has increased the burden on hospital staff world-wide. Through the redistribution of scarce resources to these high-priority cases, the cardiac sector has fallen behind. In efforts to reduce transmission, reduction in direct patient-physician contact has led to a backlog of cardiac cases. However, this accumulation of postponed or cancelled nonurgent cardiac care seems to be resolvable with the assistance of technology. From telemedicine to artificial intelligence (AI), technology has transformed healthcare systems nationwide. Telemedicine enables patient monitoring from a distance, while AI unveils a whole new realm of possibilities in clinical practice, examples include: traditional systems replacement with more efficient and accurate processing machines; automation of clerical process; and triage assistance through risk predictions. These possibilities are driven by deep and machine learning. The two subsets of AI are explored and limitations regarding "big data" are discussed. The aims of this review are to explore AI: the advancements in methodology; current integration in cardiac surgery or other clinical scenarios; and potential future roles, which are innately nearing as the COVID-19 era urges alternative approaches for care. | Journal of cardiac surgery | "2021-02-11T00:00:00" | [
"Raveena KKhalsa",
"ArwaKhashkhusha",
"SaraZaidi",
"AmerHarky",
"MohamadBashir"
] | 10.1111/jocs.15417
10.1016/j.jcmg.2020.05.004
10.1007/s11886-013-0441-8
10.1177/2047487320922926
10.1080/00015385.2020.1787636
10.1016/j.jtcvs.2018.09.124
10.1007/s10916-018-1029-z
10.1007/s00146-020-00978-0
10.1007/s00068-020-01444-8 |
Classification of COVID-19 by Compressed Chest CT Image through Deep Learning on a Large Patients Cohort. | Corona Virus Disease (COVID-19) has spread globally quickly and has resulted in a large number of casualties and insufficient medical resources in many countries. Reverse-transcriptase polymerase chain reaction (RT-PCR) testing is adopted as the standard tool for confirming virus infection. However, its accuracy is as low as 60-70%, which is insufficient to uncover all infected individuals. In comparison, chest CT has been considered the preferred choice for diagnosing and monitoring the progress of COVID-19 infection. Although COVID-19 diagnostic systems based on artificial intelligence have been developed for assisting doctors in diagnosis, small sample sizes and excessive time consumption limit their applications. To this end, this paper proposes a diagnosis prototype system for COVID-19 infection testing. The proposed deep learning model is trained and tested on 2267 CT sequences from 1357 patients clinically confirmed with COVID-19 and 1235 CT sequences from non-infected people. The main highlights of the prototype system are: (1) no data augmentation is needed to accurately discriminate COVID-19 from normal controls, with a specificity of 0.92 and a sensitivity of 0.93; (2) the raw DICOM image is not necessary in testing; a highly compressed image such as JPEG can be used to allow a quick diagnosis; and (3) it discriminates the virus infection within 6 seconds and thus allows an online test at low cost. We also applied our model to 48 asymptomatic patients diagnosed with COVID-19. We found that: (1) the positive rate of the RT-PCR assay is 63.5% (687/1082); and (2) the RT-PCR assay is negative for 45.8% (22/48) of asymptomatic patients, yet the accuracy of CT scans is 95.8%. The online detection system is available at: http://212.64.70.65/covid . | Interdisciplinary sciences, computational life sciences | "2021-02-11T00:00:00" | [
"ZiweiZhu",
"ZhangXingming",
"GuihuaTao",
"TingtingDan",
"JiaoLi",
"XijieChen",
"YangLi",
"ZhichaoZhou",
"XiangZhang",
"JinzhaoZhou",
"DongpeiChen",
"HanchunWen",
"HongminCai"
] | 10.1007/s12539-020-00408-1
10.1148/radiol.2020200642
10.1109/TIP.2014.2298981
10.1148/radiol.2020200230
10.1002/ppul.24885
10.1148/radiol.2020200432
10.1016/j.jinf.2020.03.041
10.1056/NEJMoa2002032
10.1148/radiol.2020200330
10.1148/radiol.2020200527
10.1016/j.cmi.2020.04.040
10.1016/s1473-3099(20)30134-1
10.1148/radiol.2020200905
10.1109/TMI.2020.2976825
10.1109/TNSRE.2020.2973434
10.1109/TNSRE.2019.2915621
10.1001/jamanetworkopen.2020.10182 |
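The specificity/sensitivity pair reported in the abstract above (0.92/0.93) comes straight from confusion-matrix counts. A tiny sketch with illustrative counts (not the study's data):

```python
# Minimal sketch with made-up counts (not the study's data): sensitivity
# and specificity are read straight off the confusion matrix, as in the
# 0.93 / 0.92 operating point reported above.

def sens_spec(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(tp=93, fn=7, tn=92, fp=8)  # illustrative counts
print(sens, spec)  # 0.93 0.92
```

Sensitivity measures how many infected cases the model catches; specificity measures how many non-infected cases it correctly clears, which is why both are quoted together.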
Deep Learning Algorithm Trained with COVID-19 Pneumonia Also Identifies Immune Checkpoint Inhibitor Therapy-Related Pneumonitis. | Coronavirus disease 2019 (COVID-19) pneumonia and immune checkpoint inhibitor (ICI) therapy-related pneumonitis share common features. The aim of this study was to determine on chest computed tomography (CT) images whether a deep convolutional neural network algorithm is able to solve the challenge of differential diagnosis between COVID-19 pneumonia and ICI therapy-related pneumonitis.
We enrolled three groups: a pneumonia-free group (
The algorithm showed low specificity in distinguishing COVID-19 from ICI therapy-related pneumonitis (sensitivity 97.1%, specificity 14.3%, area under the curve (AUC) = 0.62). ICI therapy-related pneumonitis was identified by the AI when compared to pneumonia-free controls (sensitivity = 85.7%, specificity 100%, AUC = 0.97).
The deep learning algorithm is not able to distinguish between COVID-19 pneumonia and ICI therapy-related pneumonitis. Awareness must be increased among clinicians about imaging similarities between COVID-19 and ICI therapy-related pneumonitis. ICI therapy-related pneumonitis can be applied as a challenge population for cross-validation to test the robustness of AI models used to analyze interstitial pneumonias of variable etiology. | Cancers | "2021-02-11T00:00:00" | [
"Carlo AugustoMallio",
"AndreaNapolitano",
"GennaroCastiello",
"Francesco MariaGiordano",
"PasqualeD'Alessio",
"MarioIozzino",
"YipengSun",
"SilviaAngeletti",
"MarcoRussano",
"DanieleSantini",
"GiuseppeTonini",
"Bruno BeomonteZobel",
"BrunoVincenzi",
"Carlo CosimoQuattrocchi"
] | 10.3390/cancers13040652
10.1056/NEJMe2002387
10.1101/2020.02.07.937862
10.1148/radiol.2020200230
10.1148/radiol.2020201237
10.1148/radiol.2020201365
10.1148/radiol.2020200905
10.1148/rg.2019190036
10.1136/esmoopen-2017-000213
10.1158/1078-0432.CCR-15-0685
10.1200/JCO.2016.68.2005
10.1001/jamaoncol.2016.2453
10.1053/j.seminoncol.2010.09.003
10.3892/ol.2017.6919
10.2217/imt-2020-0067
10.1002/jmv.25897
10.1177/1078155217745144
10.1016/S0140-6736(13)60250-0
10.1016/j.cell.2018.02.010
10.1097/RLI.0000000000000127
10.1109/TMI.2016.2535865
10.1148/ryct.2020200075
10.21037/qims-20-782
10.1016/S2589-7500(20)30199-0
10.1097/RTI.0000000000000524
10.1148/ryct.2020200047
10.1148/ryct.2020204001
10.1148/radiol.2020200843
10.1136/jclinpath-2020-206522
10.3390/cancers11030305
10.1080/14712598.2020.1789097
10.21037/qims-20-1306 |
Multiscale Attention Guided Network for COVID-19 Diagnosis Using Chest X-Ray Images. | Coronavirus disease 2019 (COVID-19) is one of the most destructive pandemics of the millennium, forcing the world to tackle a health crisis. Automated classification of lung infections using chest X-ray (CXR) images could strengthen diagnostic capability when handling COVID-19. However, classifying COVID-19 from pneumonia cases using CXR images is a difficult task because of shared spatial characteristics, high feature variation and contrast diversity between cases. Moreover, massive data collection is impractical for a newly emerged disease, which limits the performance of data-thirsty deep learning models. To address these challenges, a Multiscale Attention Guided deep network with Soft Distance regularization (MAG-SD) is proposed to automatically classify COVID-19 from pneumonia CXR images. In MAG-SD, MA-Net is used to produce a prediction vector and attention from multiscale feature maps. To improve the robustness of the trained model and relieve the shortage of training data, attention-guided augmentations along with a soft distance regularization are proposed, which aim at generating meaningful augmentations and reducing noise. Our multiscale attention model achieves better classification performance on our pneumonia CXR image dataset. Extensive experiments demonstrate MAG-SD's unique advantage in pneumonia classification over cutting-edge models. The code is available at https://github.com/JasonLeeGHub/MAG-SD. | IEEE journal of biomedical and health informatics | "2021-02-10T00:00:00" | [
"JingxiongLi",
"YaqiWang",
"ShuaiWang",
"JunWang",
"JunLiu",
"QunJin",
"LinglingSun"
] | 10.1109/JBHI.2021.3058293
10.1109/TCBB.2019.2911947 |
COVID-19 lung CT image segmentation using deep learning methods: U-Net versus SegNet. | Currently, there is an urgent need for efficient tools to assess the diagnosis of COVID-19 patients. In this paper, we present feasible solutions for detecting and labeling infected tissues on CT lung images of such patients. Two structurally-different deep learning techniques, SegNet and U-NET, are investigated for semantically segmenting infected tissue regions in CT lung images.
We propose to use two known deep learning networks, SegNet and U-NET, for image tissue classification. SegNet is characterized as a scene segmentation network and U-NET as a medical segmentation tool. Both networks were exploited as binary segmentors to discriminate between infected and healthy lung tissue, and also as multi-class segmentors to learn the infection type in the lung. Each network is trained using seventy-two images, validated on ten images, and tested against the remaining eighteen images. Several statistical scores are calculated for the results and tabulated accordingly.
The results show the superior ability of SegNet in classifying infected/non-infected tissues compared to the other methods (with 0.95 mean accuracy), while the U-NET shows better results as a multi-class segmentor (with 0.91 mean accuracy).
Semantically segmenting CT scan images of COVID-19 patients is a crucial goal because it would not only assist in disease diagnosis but also help quantify the severity of the illness and hence prioritize treatment accordingly. We propose computer-based techniques that prove to be reliable as detectors for infected tissue in lung CT scans. The availability of such a method in today's pandemic would help automate, prioritize, accelerate, and broaden the treatment of COVID-19 patients globally. | BMC medical imaging | "2021-02-10T00:00:00" | [
"AdnanSaood",
"IyadHatem"
] | 10.1186/s12880-020-00529-5
10.1016/S1473-3099(20)30086-4
10.1007/s00330-020-06801-0
10.1016/j.procs.2020.03.295
10.1016/j.eswa.2019.112855
10.1016/j.imu.2020.100357
10.1016/j.procs.2018.01.104
10.13053/cys-22-3-2526
10.3390/s20051516
10.1016/j.neucom.2018.12.085
10.1016/j.neunet.2020.01.005
10.1016/j.ultras.2019.03.014
10.1016/j.compbiomed.2020.103792
10.1016/j.imu.2020.100360
10.1007/s13246-020-00865-4
10.1016/j.compbiomed.2020.104037
10.1186/s41747-020-00173-2
10.1109/TPAMI.2016.2644615
10.1016/j.icte.2020.04.010 |
COVIDetection-Net: A tailored COVID-19 detection from chest radiography images using deep learning. | In this study, a Deep Learning (DL)-based medical system, which we call "COVIDetection-Net", is proposed for automatic detection of novel coronavirus disease 2019 (COVID-19) infection from chest radiography images (CRIs). The proposed system uses the ShuffleNet and SqueezeNet architectures to extract deep learned features and Multiclass Support Vector Machines (MSVM) for detection and classification. Our dataset contains 1200 CRIs collected from two different publicly available databases. Extensive experiments were carried out using the proposed model. Detection accuracies as high as 100 % for COVID/NonCOVID, 99.72 % for COVID/Normal/pneumonia and 94.44 % for COVID/Normal/Bacterial pneumonia/Viral pneumonia have been obtained. The proposed system outperforms all published methods in recall, specificity, precision, F1-Score and accuracy. Confusion Matrix (CM) and Receiver Operating Characteristic (ROC) analyses are also used to depict the performance of the proposed model. Hence, the proposed COVIDetection-Net can serve as an efficient system in the current state of the COVID-19 pandemic and can be used wherever test kits are in short supply. | Optik | "2021-02-09T00:00:00" | [
"Ahmed S Elkorany",
"Zeinab F Elsharkawy"
] | 10.1016/j.ijleo.2021.166405 |
A narrative review on characterization of acute respiratory distress syndrome in COVID-19-infected lungs using artificial intelligence. | COVID-19 has infected 77.4 million people worldwide and has caused 1.7 million fatalities as of December 21, 2020. The primary cause of death due to COVID-19 is Acute Respiratory Distress Syndrome (ARDS). According to the World Health Organization (WHO), people who are at least 60 years old or have comorbidities are at the highest risk from SARS-CoV-2. Medical imaging provides a non-invasive, touch-free, and relatively safer alternative tool for diagnosis during the current ongoing pandemic. Artificial intelligence (AI) scientists are developing several intelligent computer-aided diagnosis (CAD) tools in multiple imaging modalities, i.e., lung computed tomography (CT), chest X-rays, and lung ultrasounds. These AI tools assist the pulmonary and critical care clinicians through (a) faster detection of the presence of a virus, (b) classifying pneumonia types, and (c) measuring the severity of viral damage in COVID-19-infected patients. Thus, it is of the utmost importance to fully understand the requirements for a fast, successful, and timely analysis of lung scans. This narrative review first presents the pathological layout of the lungs in the COVID-19 scenario and then explains the comorbid statistical distributions in the ARDS framework. The novelty of this review is the approach to classifying the AI models by school of thought (SoTs), based on segregation of techniques and their characteristics. The study also discusses the identification of AI models and their extension from non-ARDS lungs (pre-COVID-19) to ARDS lungs (post-COVID-19). Furthermore, it presents AI workflow considerations for medical imaging modalities in the COVID-19 framework. Finally, clinical AI design considerations will be discussed.
We conclude that the design of current AI models can be improved by considering comorbidity as an independent factor. Furthermore, ARDS post-processing clinical systems must include (i) clinical validation and verification of AI models, (ii) reliability and stability criteria, (iii) ease of adoption, and (iv) generalization assessments of AI systems for their use in pulmonary, critical care, and radiological settings. | Computers in biology and medicine | "2021-02-08T00:00:00" | [
"Jasjit S Suri",
"Sushant Agarwal",
"Suneet K Gupta",
"Anudeep Puvvula",
"Mainak Biswas",
"Luca Saba",
"Arindam Bit",
"Gopal S Tandel",
"Mohit Agarwal",
"Anubhav Patrick",
"Gavino Faa",
"Inder M Singh",
"Ronald Oberleitner",
"Monika Turk",
"Paramjit S Chadha",
"Amer M Johri",
"J Miguel Sanches",
"Narendra N Khanna",
"Klaudija Viskovic",
"Sophie Mavrogeni",
"John R Laird",
"Gyan Pareek",
"Martin Miner",
"David W Sobel",
"Antonella Balestrieri",
"Petros P Sfikakis",
"George Tsoulfas",
"Athanasios Protogerou",
"Durga Prasanna Misra",
"Vikas Agarwal",
"George D Kitas",
"Puneet Ahluwalia",
"Jagjit Teji",
"Mustafa Al-Maini",
"Surinder K Dhanjil",
"Meyypan Sockalingam",
"Ajit Saxena",
"Andrew Nicolaides",
"Aditya Sharma",
"Vijay Rathore",
"Janet N A Ajuluchukwu",
"Mostafa Fatemi",
"Azra Alizad",
"Vijay Viswanathan",
"P K Krishnan",
"Subbaram Naidu"
] | 10.1016/j.compbiomed.2021.104210
10.1016/j.irbm.2020.1007.1001 |
SARS-CoV-2 diagnosis using medical imaging techniques and artificial intelligence: A review. | SARS-CoV-2 is a worldwide health emergency with unrecognized clinical features. This paper aims to review the most recent medical imaging techniques used for the diagnosis of SARS-CoV-2 and their potential contributions to attenuating the pandemic. Recent research, including artificial intelligence tools, will be described.
We review the main clinical features of SARS-CoV-2 revealed by different medical imaging techniques. First, we present the clinical findings of each technique. Then, we describe several artificial intelligence approaches introduced for the SARS-CoV-2 diagnosis.
CT is the most accurate diagnostic modality for SARS-CoV-2. Additionally, ground-glass opacities and consolidation are the most common signs of SARS-CoV-2 in CT images, though other findings such as reticular pattern and crazy paving can also be observed. We also found that pleural effusion and pneumothorax are less common features in SARS-CoV-2. According to the literature, B-line artifacts and pleural line irregularities are the common signs of SARS-CoV-2 in ultrasound images. We have also surveyed the different studies focusing on artificial intelligence tools to evaluate SARS-CoV-2 severity. We found that most of the reported deep learning works focused on the detection of SARS-CoV-2 from medical images, while the challenge for radiologists is how to differentiate between SARS-CoV-2 and other viral infections with the same clinical features.
The identification of SARS-CoV-2 manifestations on medical images is a key step in radiological workflow for the diagnosis of the virus and could be useful for researchers working on computer-aided diagnosis of pulmonary infections. | Clinical imaging | "2021-02-06T00:00:00" | [
"Narjes Benameur",
"Ramzi Mahmoudi",
"Soraya Zaid",
"Younes Arous",
"Badii Hmida",
"Mohamed Hedi Bedoui"
] | 10.1016/j.clinimag.2021.01.019
10.1038/s41591-020-0820-9
10.1071/MA20013
10.3201/eid2606.200239
10.1007/s11606-020-05762-w
10.1016/S2468-1253(20)30048-0
10.1053/j.gastro.2020.02.054
10.1038/s41591-020-0817-4
10.1016/j.jinf.2020.03.041
10.1007/s00428-020-02829-1
10.1007/s00330-020-06854-1
10.1053/j.gastro.2020.04.008
10.1053/j.gastro.2020.02.055
10.1007/s00134-020-05985-9
10.1128/JVI.79.23.14614-14621.2005
10.1007/s10067-020-05073-9
10.1097/01.ju.0000047363.03411.6b
10.1016/j.cmi.2020.04.001
10.1007/s42058-020-00030-6
10.1007/s00259-020-04735-9
10.1016/j.clinimag.2020.04.001
10.21037/tlcr.2017.01.02
10.1148/radiol.2020201160
10.1016/j.ejim.2020.04.037
10.1148/ryct.2020200034
10.1007/s00330-020-06823-8
10.1016/j.ajem.2020.04.016
10.1016/j.jinf.2020.03.035
10.1016/j.jinf.2020.04.004
10.1148/radiol.2020200432
10.1148/radiol.2020200642
10.2214/AJR.20.22959
10.2214/AJR.20.22954
10.1016/S0140-6736(20)30211-7
10.1007/s00330-020-06801-0
10.1007/s13244-012-0207-7
10.1007/s13244-010-0060-5
10.2214/AJR.20.22975
10.2214/AJR.08.1286
10.1016/j.ejro.2020.100231
10.1148/radiol.2020200236
10.2214/AJR.20.22969
10.1007/s00247-003-1081-8
10.1016/j.jrid.2020.04.001
10.1007/s00134-020-05996-6
10.1186/s13089-020-00171-w
10.1016/j.jcrc.2015.08.021
10.1016/j.ultrasmedbio.2014.10.002
10.1016/j.jen.2020.07.010
10.1097/ALN.0000000000003303
10.1148/radiol.2020200905
10.1101/2020.03.20.20039834
10.1101/2020.02.25.20021568
10.1101/2020.03.12.20027185
10.1101/2020.02.23.20026930
10.1016/j.compbiomed.2020.103792 |
Anam-Net: Anamorphic Depth Embedding-Based Lightweight CNN for Segmentation of Anomalies in COVID-19 Chest CT Images. | Chest computed tomography (CT) imaging has become indispensable for staging and managing coronavirus disease 2019 (COVID-19), and current evaluation of anomalies/abnormalities associated with COVID-19 has been performed mostly by visual scoring. The development of automated methods for quantifying COVID-19 abnormalities in these CT images is invaluable to clinicians. The hallmark of COVID-19 in chest CT images is the presence of ground-glass opacities in the lung region, which are tedious to segment manually. We propose an anamorphic depth embedding-based lightweight CNN, called Anam-Net, to segment anomalies in COVID-19 chest CT images. The proposed Anam-Net has 7.8 times fewer parameters compared to the state-of-the-art UNet (or its variants), making it lightweight and capable of providing inferences on mobile or resource-constrained (point-of-care) platforms. The results from chest CT images (test cases) across different experiments showed that the proposed method could provide good Dice similarity scores for abnormal and normal regions in the lung. We have benchmarked Anam-Net against other state-of-the-art architectures, such as ENet, LEDNet, UNet++, SegNet, Attention UNet, and DeepLabV3+. The proposed Anam-Net was also deployed on embedded systems, such as the Raspberry Pi 4 and NVIDIA Jetson Xavier, and in a mobile-based Android application (CovSeg) embedded with Anam-Net to demonstrate its suitability for point-of-care platforms. The generated codes, models, and the mobile application are available for enthusiastic users at https://github.com/NaveenPaluru/Segmentation-COVID-19. | IEEE transactions on neural networks and learning systems | "2021-02-06T00:00:00" | [
"Naveen Paluru",
"Aveen Dayal",
"Havard Bjorke Jenssen",
"Tomas Sakinis",
"Linga Reddy Cenkeramaddi",
"Jaya Prakash",
"Phaneendra K Yalavarthy"
] | 10.1109/TNNLS.2021.3054746 |
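The Dice similarity score used to evaluate Anam-Net above is straightforward to compute from a pair of binary masks; the following is a minimal NumPy sketch for illustration (our own, not the authors' released code), with toy 4x4 masks standing in for lung/lesion segmentations:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy masks: the prediction recovers 2 of the 3 ground-truth pixels.
gt   = np.array([[0,0,0,0],[0,1,1,0],[0,1,0,0],[0,0,0,0]])
pred = np.array([[0,0,0,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]])
print(round(dice_score(pred, gt), 3))  # 2*2/(2+3) = 0.8
```

The `eps` term is a common convention so that two empty masks score 1.0 instead of dividing by zero.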
Six artificial intelligence paradigms for tissue characterisation and classification of non-COVID-19 pneumonia against COVID-19 pneumonia in computed tomography lungs. | The COVID-19 pandemic currently has no vaccine. Thus, the only feasible solution for prevention relies on the detection of COVID-19-positive cases through quick and accurate testing. Since artificial intelligence (AI) offers a powerful mechanism to automatically extract tissue features and characterise the disease, we hypothesise that AI-based strategies can provide quick detection and classification, especially for radiological computed tomography (CT) lung scans.
Six models, two traditional machine learning (ML)-based (k-NN and RF), two transfer learning (TL)-based (VGG19 and InceptionV3), and the last two were our custom-designed deep learning (DL) models (CNN and iCNN), were developed for classification between COVID pneumonia (CoP) and non-COVID pneumonia (NCoP). K10 cross-validation (90% training: 10% testing) protocol on an Italian cohort of 100 CoP and 30 NCoP patients was used for performance evaluation and bispectrum analysis for CT lung characterisation.
Using the K10 protocol, our results showed accuracy in the order DL > TL > ML, with the six accuracies for k-NN, RF, VGG19, IV3, CNN, and iCNN being 74.58 ± 2.44%, 96.84 ± 2.6%, 94.84 ± 2.85%, 99.53 ± 0.75%, 99.53 ± 1.05%, and 99.69 ± 0.66%, respectively. The corresponding AUCs were 0.74, 0.94, 0.96, 0.99, 0.99, and 0.99 (p-values < 0.0001), respectively. Our bispectrum-based characterisation system suggested CoP can be separated from NCoP using AI models. COVID risk severity stratification also showed a high correlation of 0.7270 (p < 0.0001) with clinical scores such as ground-glass opacities (GGO), further validating our AI models.
We prove our hypothesis by demonstrating that all the six AI models successfully classified CoP against NCoP due to the strong presence of contrasting features such as ground-glass opacities (GGO), consolidations, and pleural effusion in CoP patients. Further, our online system takes < 2 s for inference. | International journal of computer assisted radiology and surgery | "2021-02-04T00:00:00" | [
"Luca Saba",
"Mohit Agarwal",
"Anubhav Patrick",
"Anudeep Puvvula",
"Suneet K Gupta",
"Alessandro Carriero",
"John R Laird",
"George D Kitas",
"Amer M Johri",
"Antonella Balestrieri",
"Zeno Falaschi",
"Alessio Paschè",
"Vijay Viswanathan",
"Ayman El-Baz",
"Iqbal Alam",
"Abhinav Jain",
"Subbaram Naidu",
"Ronald Oberleitner",
"Narendra N Khanna",
"Arindam Bit",
"Mostafa Fatemi",
"Azra Alizad",
"Jasjit S Suri"
] | 10.1007/s11548-021-02317-0
10.1164/rccm.202003-0527LE
10.1097/ALN.0000000000003296
10.1016/S0140-6736(20)30183-5
10.1016/j.cpcardiol.2020.100618
10.1109/ACCESS.2020.3005510
10.1007/s00330-020-07042-x
10.1038/s41598-019-56847-4
10.1148/ryai.2020200048
10.1080/07391102.2020.1788642
10.3233/XST-190545
10.1007/s10916-017-0797-1
10.1016/j.measurement.2017.01.016
10.1016/j.patrec.2020.10.001
10.3390/e22050517
10.21595/chs.2020.21263
10.1016/j.patrec.2018.07.026
10.1097/RTI.0000000000000534
10.1016/j.compbiomed.2020.103804
10.2741/4876 |
Fast and Accurate Detection of COVID-19 Along With 14 Other Chest Pathologies Using a Multi-Level Classification: Algorithm Development and Validation Study. | COVID-19 has spread very rapidly, and it is important to build a system that can detect it in order to help an overwhelmed health care system. Many research studies on chest diseases rely on the strengths of deep learning techniques. Although some of these studies used state-of-the-art techniques and were able to deliver promising results, these techniques are not very useful if they can detect only one type of disease without detecting the others.
The main objective of this study was to achieve a fast and more accurate diagnosis of COVID-19. This study proposes a diagnostic technique that classifies COVID-19 x-ray images from normal x-ray images and those specific to 14 other chest diseases.
In this paper, we propose a novel, multilevel pipeline, based on deep learning models, to detect COVID-19 along with other chest diseases based on x-ray images. This pipeline reduces the burden of a single network to classify a large number of classes. The deep learning models used in this study were pretrained on the ImageNet dataset, and transfer learning was used for fast training. The lungs and heart were segmented from the whole x-ray images and passed onto the first classifier that checks whether the x-ray is normal, COVID-19 affected, or characteristic of another chest disease. If it is neither a COVID-19 x-ray image nor a normal one, then the second classifier comes into action and classifies the image as one of the other 14 diseases.
We show how our model uses state-of-the-art deep neural networks to achieve classification accuracy for COVID-19 along with 14 other chest diseases and normal cases based on x-ray images, which is competitive with currently used state-of-the-art models. Due to the lack of data in some classes such as COVID-19, we applied 10-fold cross-validation through the ResNet50 model. Our classification technique thus achieved an average training accuracy of 96.04% and test accuracy of 92.52% for the first level of classification (ie, 3 classes). For the second level of classification (ie, 14 classes), our technique achieved a maximum training accuracy of 88.52% and test accuracy of 66.634% by using ResNet50. We also found that when all the 16 classes were classified at once, the overall accuracy for COVID-19 detection decreased, which in the case of ResNet50 was 88.92% for training data and 71.905% for test data.
Our proposed pipeline can detect COVID-19 with a higher accuracy along with detecting 14 other chest diseases based on x-ray images. This is achieved by dividing the classification task into multiple steps rather than classifying them collectively. | Journal of medical Internet research | "2021-02-03T00:00:00" | [
"Saleh Albahli",
"Ghulam Nabi Ahmad Hassan Yar"
] | 10.2196/23693
10.1016/s2213-2600(20)30079-5
10.2174/1573405616666200604163954
10.1007/s12553-018-0244-4
10.1148/radiol.2019181960
10.1002/mp.13245
10.1109/icassp.2018.8461430
10.1109/cvpr.2017.369
10.18653/v1/2020.emnlp-main.117
10.3390/app9194130
10.1007/s13246-020-00865-4
10.1007/s10489-020-01829-7
10.1101/2020.02.25.20021568v2
10.1101/2020.02.25.20021568
10.1148/radiol.2020200905
10.20944/preprints202003.0300.v1
10.1101/2020.02.14.20023028v3.full.pdf
10.1016/j.cmpb.2020.105608
10.1038/s41598-020-76550-z
10.1038/s41598-020-76550-z
10.1101/2020.02.23.20026930v1
10.1101/2020.02.23.20026930
10.1101/2020.03.12.20027185v2
10.1101/2020.03.12.20027185
10.1016/j.compbiomed.2020.103792
10.1016/j.compbiomed.2020.103795
10.1109/cvpr.2018.00907
10.1109/cvpr.2017.195
10.1109/CVPR.2016.308
10.5555/3298023.3298188
10.1109/cvpr.2016.90 |
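The two-level routing that the Albahli et al. record describes (first classifier: normal / COVID-19 / other; second classifier consulted only for the remaining 14 chest diseases) can be sketched independently of any particular network. The `level1` and `level2` callables below are hypothetical stand-ins for the paper's fine-tuned CNN classifiers, and the `opacity` feature is purely illustrative:

```python
def route(image, level1, level2):
    """Two-level pipeline: level 1 separates normal / COVID-19 / other;
    level 2 is consulted only for the 14 remaining chest diseases."""
    label = level1(image)          # "normal" | "covid19" | "other"
    if label in ("normal", "covid19"):
        return label
    return level2(image)           # one of the 14 other disease labels

# Toy stand-ins (the real classifiers are CNNs over segmented lung/heart):
level1 = lambda img: "other" if img["opacity"] > 0.5 else "normal"
level2 = lambda img: "effusion"

print(route({"opacity": 0.8}, level1, level2))  # effusion
print(route({"opacity": 0.1}, level1, level2))  # normal
```

Splitting the decision this way is what reduces the burden on any single network: each classifier only ever discriminates among the classes it was trained on.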
Quantitative Assessment of Chest CT Patterns in COVID-19 and Bacterial Pneumonia Patients: a Deep Learning Perspective. | It is difficult to distinguish subtle differences shown in computed tomography (CT) images of coronavirus disease 2019 (COVID-19) and bacterial pneumonia patients, which often leads to an inaccurate diagnosis. It is desirable to design and evaluate interpretable feature extraction techniques to describe the patient's condition.
This is a retrospective cohort study of 170 confirmed patients with COVID-19 or bacterial pneumonia acquired at Yeungnam University Hospital in Daegu, Korea. The lung and lesion regions were segmented, and each lesion was cropped into 2D patches to train a classifier model that could differentiate between COVID-19 and bacterial pneumonia. The K-means algorithm was used to cluster deep features extracted by the trained model into 20 groups. Each lesion patch cluster was described by a characteristic imaging term for comparison. For each CT image containing multiple lesions, a histogram of lesion types was constructed using the cluster information. Finally, a Support Vector Machine classifier was trained with the histogram and radiomics features to distinguish diseases and severity.
The 20 clusters constructed from 170 patients were reviewed based on common radiographic appearance types. Two clusters showed typical findings of COVID-19, with two other clusters showing typical findings related to bacterial pneumonia. Notably, there is one cluster that showed bilateral diffuse ground-glass opacities (GGOs) in the central and peripheral lungs and was considered to be a key factor for severity classification. The proposed method achieved an accuracy of 91.2% for classifying COVID-19 and bacterial pneumonia patients with 95% reported for severity classification. The CT quantitative parameters represented by the values of cluster 8 were correlated with existing laboratory data and clinical parameters.
Deep chest CT analysis with constructed lesion clusters revealed well-known COVID-19 CT manifestations comparable to manual CT analysis. The constructed histogram features improved accuracy for both diseases and severity classification, and showed correlations with laboratory data and clinical parameters. The constructed histogram features can provide guidance for improved analysis and treatment of COVID-19. | Journal of Korean medical science | "2021-02-03T00:00:00" | [
"Myeongkyun Kang",
"Kyung Soo Hong",
"Philip Chikontwe",
"Miguel Luna",
"Jong Geol Jang",
"Jongsoo Park",
"Kyeong Cheol Shin",
"Sang Hyun Park",
"June Hong Ahn"
] | 10.3346/jkms.2021.36.e46
10.1109/ICCV.2017.89
10.1101/2020.05.20.20100362
10.1109/CVPR.2016.90
10.21037/atm-20-3026 |
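The clustering-to-histogram step in the Kang et al. record (deep features of lesion patches → K-means clusters → per-image histogram of lesion types fed to an SVM) can be sketched with a minimal NumPy K-means. The random toy features and k = 2 below are our illustration only; the paper uses CNN-extracted features and k = 20:

```python
import numpy as np

def kmeans_labels(X, k, iters=50):
    """Minimal Lloyd's K-means with farthest-point initialisation."""
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])      # farthest point so far
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def lesion_histogram(labels, k):
    """Per-image histogram over the k lesion-cluster types."""
    return np.bincount(labels, minlength=k)

# Toy stand-in for deep features of one CT image's 16 lesion patches,
# drawn from two well-separated blobs:
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (10, 8)), rng.normal(5, 0.1, (6, 8))])
hist = lesion_histogram(kmeans_labels(feats, k=2), k=2)
print(sorted(hist.tolist()))  # [6, 10]: two patch types recovered
```

The resulting fixed-length histogram is what makes images with varying numbers of lesions comparable as classifier inputs.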
Machine learning applied on chest x-ray can aid in the diagnosis of COVID-19: a first experience from Lombardy, Italy. | We aimed to train and test a deep learning classifier to support the diagnosis of coronavirus disease 2019 (COVID-19) using chest x-ray (CXR) on a cohort of subjects from two hospitals in Lombardy, Italy.
We used for training and validation an ensemble of ten convolutional neural networks (CNNs) with mainly bedside CXRs of 250 COVID-19 and 250 non-COVID-19 subjects from two hospitals (Centres 1 and 2). We then tested such system on bedside CXRs of an independent group of 110 patients (74 COVID-19, 36 non-COVID-19) from one of the two hospitals. A retrospective reading was performed by two radiologists in the absence of any clinical information, with the aim to differentiate COVID-19 from non-COVID-19 patients. Real-time polymerase chain reaction served as the reference standard.
At 10-fold cross-validation, our deep learning model classified COVID-19 and non-COVID-19 patients with 0.78 sensitivity (95% confidence interval [CI] 0.74-0.81), 0.82 specificity (95% CI 0.78-0.85), and 0.89 area under the curve (AUC) (95% CI 0.86-0.91). For the independent dataset, deep learning showed 0.80 sensitivity (95% CI 0.72-0.86) (59/74), 0.81 specificity (29/36) (95% CI 0.73-0.87), and 0.81 AUC (95% CI 0.73-0.87). Radiologists' reading obtained 0.63 sensitivity (95% CI 0.52-0.74) and 0.78 specificity (95% CI 0.61-0.90) in Centre 1 and 0.64 sensitivity (95% CI 0.52-0.74) and 0.86 specificity (95% CI 0.71-0.95) in Centre 2.
This preliminary experience, based on ten CNNs trained on a limited training dataset, shows the interesting potential of deep learning for COVID-19 diagnosis. The tool is being further trained on new CXRs to increase its performance. | European radiology experimental | "2021-02-03T00:00:00" | [
"Isabella Castiglioni",
"Davide Ippolito",
"Matteo Interlenghi",
"Caterina Beatrice Monti",
"Christian Salvatore",
"Simone Schiaffino",
"Annalisa Polidori",
"Davide Gandola",
"Cristina Messa",
"Francesco Sardanelli"
] | 10.1186/s41747-020-00203-z
10.1016/S0140-6736(20)30183-5
10.1016/j.clinimag.2020.06.031
10.3348/kjr.2020.0132
10.1007/s40846-020-00529-4
10.3233/XST-200715
10.1016/j.mehy.2020.109761
10.1148/radiol.2020200642
10.1186/s41747-018-0061-6 |
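The sensitivity, specificity, and AUC figures reported in the Castiglioni et al. record are easy to recompute from raw counts. A dependency-free sketch (our illustration) of the three metrics, using the rank-based definition of ROC AUC:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(y_true, scores):
    """AUC = P(random positive outscores random negative); ties count 1/2."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Classic 4-sample example: 3 of 4 positive/negative pairs ranked correctly.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75

# The independent-test counts above (59/74 sensitivity, 29/36 specificity):
print(round(59 / 74, 2), round(29 / 36, 2))  # 0.8 0.81
```

The pairwise-comparison form of AUC is equivalent to the area under the ROC curve and avoids having to sweep thresholds explicitly.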
TLCoV- An automated Covid-19 screening model using Transfer Learning from chest X-ray images. | The Coronavirus disease (Covid-19) has been declared a pandemic by the World Health Organisation (WHO) and has to date caused 585,727 deaths all over the world. The only way to minimize the number of deaths is to quarantine patients who test Corona positive. The quick spread of this disease can be reduced by automatic screening to cover the lack of radiologists. Though researchers have already done extremely well in designing pioneering deep learning models for the screening of Covid-19, most of these models result in low accuracy rates. In addition, the over-fitting problem makes it difficult for those models to learn on existing Covid-19 datasets. In this paper, an automated Covid-19 screening model is designed to identify patients suffering from this disease by using their chest X-ray images. The model classifies the images into three categories - Covid-19 positive, other pneumonia infection, and no infection. Three learning schemes, CNN, VGG-16, and ResNet-50, are separately used to learn the model. A standard Covid-19 radiography dataset from the repository of Kaggle is used to get the chest X-ray images. The performance of the model with all three learning schemes has been evaluated, and it shows that VGG-16 performed better than CNN and ResNet-50. The model with VGG-16 gives an accuracy of 97.67%, precision of 96.65%, recall of 96.54%, and F1 score of 96.59%. The performance evaluation also shows that our model outperforms two existing models for screening Covid-19. | Chaos, solitons, and fractals | "2021-02-03T00:00:00" | [
"Ayan Kumar Das",
"Sidra Kalam",
"Chiranjeev Kumar",
"Ditipriya Sinha"
] | 10.1016/j.chaos.2021.110713
10.1016/j.compbiomed.2017.08.022
10.1016/j.chaos.2020.110176
10.1007/s10489-020-01714-3
10.1016/j.patrec.2020.03.011
10.1016/j.chaos.2020.110245
10.1147/JRD.2017.2708299
10.1117/12.2043872
10.1109/BHI.2017.7897215
10.1038/nature21056
10.1002/jbio.201700003
10.1109/TMI.2020.2996256
10.1038/s41591-018-0268-3
10.1109/CVPR.2016.90
10.1016/S0140-6736(20)30183-5
10.1101/2020.03.20.20039834
10.1101/2020.03.19.20039354
10.1109/TMI.2020.2992546
10.1016/j.ijantimicag.2020.105924
10.1002/jmv.25678
10.1016/j.chaos.2020.109853
10.1101/2020.04.08.20057679
10.1016/j.chaos.2020.110027
10.1109/ACCESS.2020.2997311
10.1007/s12098-020-03263-6
10.1016/j.cmpb.2019.06.005
10.1016/j.chaos.2020.110122
10.1016/j.chaos.2020.110072
10.1016/j.compmedimag.2019.101673
10.1016/j.ins.2017.08.050
10.1016/j.chaos.2020.110153
10.1001/jama.2020.1585
10.1109/TMI.2020.2995965
10.1016/j.compbiomed.2018.09.009
10.1101/2020.03.12.20027185
10.1016/j.chaos.2020.110137 |
COVID-19 Chest CT Image Segmentation Network by Multi-Scale Fusion and Enhancement Operations. | A novel coronavirus disease 2019 (COVID-19) was detected and has spread rapidly across various countries around the world since the end of the year 2019. Computed Tomography (CT) images have been used as a crucial alternative to the time-consuming RT-PCR test. However, pure manual segmentation of CT images faces a serious challenge with the increase of suspected cases, resulting in urgent requirements for accurate and automatic segmentation of COVID-19 infections. Unfortunately, since the imaging characteristics of the COVID-19 infection are diverse and similar to the backgrounds, existing medical image segmentation methods cannot achieve satisfactory performance. In this article, we try to establish a new deep convolutional neural network tailored for segmenting the chest CT images with COVID-19 infections. We first maintain a large and new chest CT image dataset consisting of 165,667 annotated chest CT images from 861 patients with confirmed COVID-19. Inspired by the observation that the boundary of the infected lung can be enhanced by adjusting the global intensity, in the proposed deep CNN, we introduce a feature variation block which adaptively adjusts the global properties of the features for segmenting COVID-19 infection. The proposed FV block can enhance the capability of feature representation effectively and adaptively for diverse cases. We fuse features at different scales by proposing Progressive Atrous Spatial Pyramid Pooling to handle the sophisticated infection areas with diverse appearance and shapes. The proposed method achieves state-of-the-art performance. Dice similarity coefficients are 0.987 and 0.726 for lung and COVID-19 segmentation, respectively. We conducted experiments on the data collected in China and Germany and show that the proposed deep CNN can produce impressive performance effectively. 
The proposed network enhances the segmentation of COVID-19 infections, establishes connections with other techniques, and contributes to the development of treatments for COVID-19 infection. | IEEE transactions on big data | "2021-02-02T00:00:00" | [
"Qingsen Yan",
"Bo Wang",
"Dong Gong",
"Chuan Luo",
"Wei Zhao",
"Jianhu Shen",
"Jingyang Ai",
"Qinfeng Shi",
"Yanning Zhang",
"Shuo Jin",
"Liang Zhang",
"Zheng You"
] | 10.1109/TBDATA.2021.3056564
10.1109/JBHI.2020.3042069 |
Diagnosis of COVID-19 using CT scan images and deep learning techniques. | Early diagnosis of coronavirus disease 2019 (COVID-19) is essential for controlling this pandemic. COVID-19 has been spreading rapidly all over the world. There is no vaccine available for this virus yet. Fast and accurate COVID-19 screening is possible using computed tomography (CT) scan images. The deep learning techniques used in the proposed method are based on a convolutional neural network (CNN). Our manuscript focuses on differentiating CT scan images of COVID-19 from non-COVID-19 CT scans using different deep learning techniques. A self-developed model named CTnet-10 was designed for COVID-19 diagnosis, having an accuracy of 82.1%. Other models that we tested are DenseNet-169, VGG-16, ResNet-50, InceptionV3, and VGG-19. VGG-19 proved to be superior, with an accuracy of 94.52%, as compared to all other deep learning models. Automated diagnosis of COVID-19 from CT scan images can be used by doctors as a quick and efficient method for COVID-19 screening. | Emergency radiology | "2021-02-02T00:00:00" | [
"Vruddhi Shah",
"Rinkal Keniya",
"Akanksha Shridharani",
"Manav Punjabi",
"Jainam Shah",
"Ninad Mehendale"
] | 10.1007/s10140-020-01886-y
10.1016/S0140-6736(20)30185-9
10.1016/j.ijid.2020.03.070
10.1128/JCM.00512-20
10.1007/s00392-020-01626-9
10.1002/jmv.25726
10.1016/S0140-6736(13)61492-0
10.1146/annurev-bioeng-071516-044442
10.1109/ACCESS.2017.2788044
10.1164/rccm.201705-0860OC
10.2214/AJR.20.22976
10.1097/RLI.0000000000000670
10.2214/AJR.07.5212 |
Convolutional neural network using chest radiography images for identification of COVID-19. | The novel coronavirus disease 2019 (COVID-19) has become the most significant threat to humankind in the year 2020. The pandemic COVID-19 outbreak has affected more than 2.7 million people and caused around 187 thousand fatalities worldwide [1] within a few months of its first appearance in the Wuhan city of China, and the number is growing rapidly in various parts of the world. As researchers all over the world are struggling to find a cure and treatment for COVID-19, the crucial step in fighting against COVID-19 is the screening of the huge number of suspected cases for isolation and quarantine of the patients. One of the key approaches in screening for COVID-19 can be chest radiological imaging. Early investigations of patients affected by COVID-19 show characteristic abnormalities in chest radiography images. This presented an opportunity to utilize different artificial intelligence (AI) systems based on deep learning using chest radiology images for the detection of COVID-19, and many such systems were proposed, showing promising results. In this paper, we propose a deep-learning-based convolutional neural network to classify COVID-19, pneumonia, and normal cases from chest radiology images. The proposed convolutional neural network (CNN) classification model was able to achieve an accuracy of 94.85% on the test dataset. The experiment was carried out using a subset of the data available on GitHub and Kaggle. | Materials today. Proceedings | "2021-02-02T00:00:00" | [
"D Murali",
"E Bhuvaneswari",
"S Parvathi",
"A N Sanjeev Kumar"
] | 10.1016/j.matpr.2020.10.866 |
Automatic Screening of COVID-19 Using an Optimized Generative Adversarial Network. | The quick spread of coronavirus disease (COVID-19) has resulted in a global pandemic and more than fifteen million confirmed cases. To battle this spread, clinical imaging techniques, for example, computed tomography (CT), can be utilized for diagnosis. Automatic identification software tools are essential for helping to screen COVID-19 using CT images. However, there are few datasets available, making it difficult to train deep learning (DL) networks. To address this issue, a generative adversarial network (GAN) is proposed in this work to generate more CT images. The Whale Optimization Algorithm (WOA) is used to optimize the hyperparameters of the GAN's generator. The proposed method is tested and validated with different classification and meta-heuristic algorithms using the SARS-CoV-2 CT-Scan dataset, consisting of COVID-19 and non-COVID-19 images. The performance metrics of the proposed optimized model, including accuracy (99.22%), sensitivity (99.78%), specificity (97.78%), F1-score (98.79%), positive predictive value (97.82%), and negative predictive value (99.77%), as well as its confusion matrix and receiver operating characteristic (ROC) curves, indicate that it performs better than state-of-the-art methods. This proposed model will help in the automatic screening of COVID-19 patients and decrease the burden on health care systems. | Cognitive computation | "2021-02-02T00:00:00" | [
"Tripti Goel",
"R Murugan",
"Seyedali Mirjalili",
"Deba Kumar Chakrabartty"
] | 10.1007/s12559-020-09785-7
10.1001/jama.2020.2565
10.1016/j.chaos.2020.110170
10.1016/j.compbiomed.2020.103795
10.1177/0846537120913033
10.3348/kjr.2020.0112
10.1016/j.advengsoft.2016.01.008
10.1038/scientificamerican0792-66
10.1145/321062.321069
10.1016/j.advengsoft.2013.12.007
10.1186/s40779-019-0229-2 |
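The core "encircling prey" update of the Whale Optimization Algorithm used in the Goel et al. record (Mirjalili and Lewis, 2016) moves each candidate toward the best solution found so far. A minimal sketch of that single move follows; the two-element candidate vector is a hypothetical hyperparameter pair for illustration, not the authors' actual search space:

```python
import random

def woa_encircle(x, best, a):
    """One WOA encircling-prey move: X(t+1) = X* - A * |C * X* - X(t)|,
    with A = 2*a*r1 - a and C = 2*r2 for random r1, r2 in [0, 1)."""
    new = []
    for xi, bi in zip(x, best):
        r1, r2 = random.random(), random.random()
        A, C = 2 * a * r1 - a, 2 * r2
        new.append(bi - A * abs(C * bi - xi))
    return new

random.seed(0)
# The control parameter `a` shrinks linearly from 2 to 0 over iterations,
# so candidates first explore and then converge on the best solution.
x, best = [4.0, 64.0], [3.0, 32.0]   # hypothetical hyperparameter pairs
for t in range(101):
    x = woa_encircle(x, best, a=2 - t / 50)
print(x)  # [3.0, 32.0]: with a = 0 the move lands exactly on the best
```

In the full algorithm this move alternates with a spiral update and a random-search phase; only the encircling step is shown here.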
COVID-19 classification by CCSHNet with deep fusion using transfer learning and discriminant correlation analysis. | : COVID-19 is a disease caused by a new strain of coronavirus. Up to 18th October 2020, worldwide there have been 39.6 million confirmed cases resulting in more than 1.1 million deaths. To improve diagnosis, we aimed to design and develop a novel advanced AI system for COVID-19 classification based on chest CT (CCT) images.
: Our dataset from local hospitals consisted of 284 COVID-19 images, 281 community-acquired pneumonia images, 293 secondary pulmonary tuberculosis images; and 306 healthy control images. We first used pretrained models (PTMs) to learn features, and proposed a novel (L, 2) transfer feature learning algorithm to extract features, with a hyperparameter of number of layers to be removed (NLR, symbolized as
: On the test set, CCSHNet achieved sensitivities of four classes of 95.61%, 96.25%, 98.30%, and 97.86%, respectively. The precision values of four classes were 97.32%, 96.42%, 96.99%, and 97.38%, respectively. The F1 scores of four classes were 96.46%, 96.33%, 97.64%, and 97.62%, respectively. The MA F1 score was 97.04%. In addition, CCSHNet outperformed 12 state-of-the-art COVID-19 detection methods.
: CCSHNet is effective in detecting COVID-19 and other lung infectious diseases using first-line clinical imaging and can therefore assist radiologists in making accurate diagnoses based on CCTs. | An international journal on information fusion | "2021-02-02T00:00:00" | [
"Shui-Hua Wang",
"Deepak Ranjan Nayak",
"David S Guttery",
"Xin Zhang",
"Yu-Dong Zhang"
] | 10.1016/j.inffus.2020.11.005
10.1109/JSEN.2020.3025855
10.1111/srt.12891
10.1016/j.inffus.2020.10.004
10.1109/TNSE.2020.2990963
10.1109/TITS.2020.2990214 |
Deep COVID DeteCT: an international experience on COVID-19 lung detection and prognosis using chest CT. | The Coronavirus disease 2019 (COVID-19) presents open questions in how we clinically diagnose and assess disease course. Recently, chest computed tomography (CT) has shown utility for COVID-19 diagnosis. In this study, we developed Deep COVID DeteCT (DCD), a deep learning convolutional neural network (CNN) that uses the entire chest CT volume to automatically distinguish COVID-19 (COVID+) from non-COVID-19 (COVID-) pneumonia and normal controls. We discuss training strategies and differences in performance across 13 international institutions and 8 countries. The inclusion of non-China sites in training significantly improved classification performance, with areas under the curve (AUCs) and accuracies above 0.8 on most test sites. Furthermore, using available follow-up scans, we investigate methods to track patient disease course and predict prognosis. | NPJ digital medicine | "2021-01-31T00:00:00" | [
"Edward H Lee",
"Jimmy Zheng",
"Errol Colak",
"Maryam Mohammadzadeh",
"Golnaz Houshmand",
"Nicholas Bevins",
"Felipe Kitamura",
"Emre Altinmakas",
"Eduardo Pontes Reis",
"Jae-Kwang Kim",
"Chad Klochko",
"Michelle Han",
"Sadegh Moradian",
"Ali Mohammadzadeh",
"Hashem Sharifian",
"Hassan Hashemi",
"Kavous Firouznia",
"Hossien Ghanaati",
"Masoumeh Gity",
"Hakan Doğan",
"Hojjat Salehinejad",
"Henrique Alves",
"Jayne Seekins",
"Nitamar Abdala",
"Çetin Atasoy",
"Hamidreza Pouraliakbar",
"Majid Maleki",
"S Simon Wong",
"Kristen W Yeom"
] | 10.1038/s41746-020-00369-1
10.1038/s41586-020-2012-7
10.1148/radiol.2020202439
10.1148/radiol.2020200230 |
Chest Imaging of Patients with Sarcoidosis and SARS-CoV-2 Infection. Current Evidence and Clinical Perspectives. | The recent COVID-19 pandemic has dramatically changed the world in the last months, leading to a serious global emergency related to a novel coronavirus infection that affects people of all ages and both sexes worldwide. Advanced age, cardiovascular comorbidity, and viral load have been hypothesized as some of the risk factors for severity, but their role in patients affected by other diseases, in particular immune disorders such as sarcoidosis, and the specific interaction between these two diseases remains unclear. The two conditions might share similar imaging findings but have distinctive features that are here described. The recent development of advanced image-analysis software based on deep learning techniques opens new scenarios for diagnosis and management. | Diagnostics (Basel, Switzerland) | "2021-01-31T00:00:00" | [
"Claudio Tana",
"Cesare Mantini",
"Francesco Cipollone",
"Maria Adele Giamberardino"
] | 10.3390/diagnostics11020183
10.4103/jgid.jgid_86_20
10.1016/j.diagmicrobio.2020.115094
10.1016/j.ejim.2014.10.009
10.2174/1573405614666180522074320
10.1136/bmj.2.5261.1165
10.1148/rg.306105512
10.1007/s11547-017-0830-y
10.2174/1573405614666180806141415
10.1097/MCP.0000000000000705
10.1155/2020/6175964
10.1007/s11547-020-01200-3
10.1007/s00330-020-06801-0
10.1148/radiol.2020200370
10.1016/S1473-3099(20)30086-4
10.1183/09031936.00047908
10.1148/radiol.2020200241
10.1148/rg.236035101
10.1259/bjr/77712845
10.1186/s13244-020-00933-z
10.1259/bjr/29049682
10.1016/S2213-2600(20)30120-X
10.1111/anae.15082
10.1002/jum.15415
10.1016/j.it.2020.08.001
10.1016/j.ccm.2015.08.001
10.3949/ccjm.87a.ccc026
10.1016/j.ijid.2020.11.184
10.1016/j.cell.2020.02.052
10.3389/fcvm.2020.585866
10.1056/NEJMoa2028836
10.1136/bmj.m1432
10.1016/j.antiviral.2020.104762
10.1016/j.jcrc.2020.03.005
10.3389/fmed.2020.588527
10.1136/annrheumdis-2020-217681
10.2147/TCRM.S192922
10.1002/14651858.CD001114.pub2
10.1056/nejmoa2021436
10.1177/039463201402700302
10.1186/s13054-020-03240-7
10.1056/NEJMsa2011686
10.21037/atm-20-5731
10.1177/039463201302600204
10.1007/s10354-014-0269-x
10.1093/cvr/cvaa325
10.1080/14787210.2020.1822737
10.3390/jcm9093028
10.1016/j.ejrad.2020.109217
10.1038/s41568-018-0016-5
10.1016/S2213-2600(15)00140-X
10.1016/j.chaos.2020.110495
10.1371/journal.pone.0242535
10.1038/s41598-020-76282-0
10.1097/RLI.0000000000000748 |
Integrating deep learning CT-scan model, biological and clinical variables to predict severity of COVID-19 patients. | The SARS-COV-2 pandemic has put pressure on intensive care units, so that identifying predictors of disease severity is a priority. We collect 58 clinical and biological variables, and chest CT scan data, from 1003 coronavirus-infected patients from two French hospitals. We train a deep learning model based on CT scans to predict severity. We then construct the multimodal AI-severity score that includes 5 clinical and biological variables (age, sex, oxygenation, urea, platelet) in addition to the deep learning model. We show that neural network analysis of CT-scans brings unique prognosis information, although it is correlated with other markers of severity (oxygenation, LDH, and CRP) explaining the measurable but limited 0.03 increase of AUC obtained when adding CT-scan information to clinical variables. Here, we show that when comparing AI-severity with 11 existing severity scores, we find significantly improved prognosis performance; AI-severity can therefore rapidly become a reference scoring approach. | Nature communications | "2021-01-29T00:00:00" | [
"Nathalie Lassau",
"Samy Ammari",
"Emilie Chouzenoux",
"Hugo Gortais",
"Paul Herent",
"Matthieu Devilder",
"Samer Soliman",
"Olivier Meyrignac",
"Marie-Pauline Talabard",
"Jean-Philippe Lamarque",
"Remy Dubois",
"Nicolas Loiseau",
"Paul Trichelair",
"Etienne Bendjebbar",
"Gabriel Garcia",
"Corinne Balleyguier",
"Mansouria Merad",
"Annabelle Stoclin",
"Simon Jegou",
"Franck Griscelli",
"Nicolas Tetelboum",
"Yingping Li",
"Sagar Verma",
"Matthieu Terris",
"Tasnim Dardouri",
"Kavya Gupta",
"Ana Neacsu",
"Frank Chemouni",
"Meriem Sefta",
"Paul Jehanno",
"Imad Bousaid",
"Yannick Boursin",
"Emmanuel Planchet",
"Mikael Azoulay",
"Jocelyn Dachary",
"Fabien Brulport",
"Adrian Gonzalez",
"Olivier Dehaene",
"Jean-Baptiste Schiratti",
"Kathryn Schutte",
"Jean-Christophe Pesquet",
"Hugues Talbot",
"Elodie Pronier",
"Gilles Wainrib",
"Thomas Clozel",
"Fabrice Barlesi",
"Marie-France Bellin",
"Michael G B Blum"
] | 10.1038/s41467-020-20657-4
10.1136/bmj.m1985
10.1016/S2213-2600(20)30161-2
10.1038/s42256-020-0180-7
10.1136/bmj.m1328
10.1136/bmj.m3339
10.1016/j.mayocp.2020.04.006
10.1016/S0140-6736(20)30566-3
10.2214/AJR.20.22976
10.1097/RLI.0000000000000670
10.1038/s41591-019-0583-3
10.1186/s12916-019-1466-7
10.1038/s41467-020-18786-x
10.1007/s00134-020-05991-x
10.18632/aging.103000
10.1016/j.dsx.2020.03.002
10.1016/S1470-2045(20)30096-6
10.1097/RLI.0000000000000672
10.1007/s00277-020-04019-0
10.1371/journal.pone.0230548
10.1016/j.ejrad.2020.108941
10.1016/S2213-2600(20)30076-X
10.1016/j.jtho.2020.02.010
10.1016/j.crad.2020.03.004
10.1148/radiol.2462070712
10.2307/2531595
10.1136/thorax.58.5.377
10.1038/s41467-020-17280-8 |
A Novel Block Imaging Technique Using Nine Artificial Intelligence Models for COVID-19 Disease Classification, Characterization and Severity Measurement in Lung Computed Tomography Scans on an Italian Cohort. | Computed Tomography (CT) is currently being adapted for visualization of COVID-19 lung damage. Manual classification and characterization of COVID-19 may be biased depending on the expert's opinion. Artificial intelligence, especially deep learning, has recently been applied to COVID-19. There are nine kinds of classification systems in this study, namely one deep learning-based CNN; five kinds of transfer learning (TL) systems, namely VGG16, DenseNet121, DenseNet169, DenseNet201 and MobileNet; and three kinds of machine-learning (ML) systems, namely artificial neural network (ANN), decision tree (DT), and random forest (RF), designed for classification of segmented COVID-19 CT lungs against controls. Three kinds of characterization systems were developed, namely (a) Block imaging for COVID-19 severity index (CSI); (b) Bispectrum analysis; and (c) Block Entropy. A cohort of Italian patients with 30 controls (990 slices) and 30 COVID-19 patients (705 slices) was used to test the performance of the three types of classifiers. Using the K10 protocol (90% training and 10% testing), the best accuracy and AUC were obtained by the DCNN and RF pairs: 99.41 ± 5.12%, 0.991 (p < 0.0001), and 99.41 ± 0.62%, 0.988 (p < 0.0001), respectively, followed by the other ML and TL classifiers. We show that the diagnostic odds ratio (DOR) was higher for DL compared to ML, and both Bispectrum and Block Entropy show higher values for COVID-19 patients. CSI shows an association with Ground Glass Opacities (0.9146, p < 0.0001). Our hypothesis holds true that deep learning shows superior performance compared to machine learning models. Block imaging is a powerful novel approach for pinpointing COVID-19 severity and is clinically validated. | Journal of medical systems | "2021-01-27T00:00:00" | [
"Mohit Agarwal",
"Luca Saba",
"Suneet K Gupta",
"Alessandro Carriero",
"Zeno Falaschi",
"Alessio Paschè",
"Pietro Danna",
"Ayman El-Baz",
"Subbaram Naidu",
"Jasjit S Suri"
] | 10.1007/s10916-021-01707-w
10.1186/s13578-020-00404-4
10.1016/j.healun.2020.04.004
10.1088/0031-9155/53/20/N03
10.1016/j.compbiomed.2020.103958
10.1016/j.media.2012.02.005
10.1109/MSP.2010.936730
10.5121/ijsc.2011.2103
10.1148/rg.2017160130
10.1016/j.cmpb.2017.09.004
10.1007/s10916-018-0940-7
10.1016/j.cmpb.2019.04.008
10.1109/29.9037
10.1016/S0895-4356(03)00177-X
10.1007/s10916-015-0214-6
10.1016/j.compbiomed.2017.08.014
10.1016/j.cmpb.2016.03.016
10.1118/1.4725759
10.1007/s10916-017-0862-9
10.1016/j.ultrasmedbio.2010.07.011
10.7785/tcrt.2012.500381
10.1177/1533034614547445
10.7785/tcrtexpress.2013.600273
10.1007/s11517-012-1019-0
10.1109/TIM.2011.2174897
10.1007/s10916-017-0745-0
10.1016/j.cmpb.2012.09.008
10.1177/0954411913480622
10.7785/tcrt.2012.500272
10.1016/j.bspc.2013.08.008
10.7785/tcrt.2012.500346
10.1016/j.cmpb.2013.07.012
10.1016/j.cmpb.2015.11.013
10.1016/j.cmpb.2013.08.017
10.1002/jcu.22183
10.1007/s10916-010-9645-2
10.21037/cdt.2020.01.07
10.1016/j.ihj.2020.06.004
10.2741/4850
10.4103/0974-7788.59946
10.1038/nrmicro.2016.81
10.1074/jbc.M111.325803
10.1161/CIRCRESAHA.116.307708
10.1002/path.1570
10.1016/S0140-6736(20)30211-7
10.1111/j.1365-2362.2009.02153.x
10.1007/s00234-019-02327-5
10.1007/s00234-018-2142-x
10.1016/j.cmpb.2015.10.022
10.1016/j.compbiomed.2017.10.019
10.1016/j.cmpb.2016.02.004
10.1007/s10916-015-0407-z
10.1007/s10916-016-0504-7
10.1142/S0219519409003115 |
Synergistic learning of lung lobe segmentation and hierarchical multi-instance classification for automated severity assessment of COVID-19 in CT images. | Understanding chest CT imaging of the coronavirus disease 2019 (COVID-19) will help detect infections early and assess the disease progression. Especially, automated severity assessment of COVID-19 in CT images plays an essential role in identifying cases that are in great need of intensive clinical care. However, it is often challenging to accurately assess the severity of this disease in CT images, due to variable infection regions in the lungs, similar imaging biomarkers, and large inter-case variations. To this end, we propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images, by jointly performing lung lobe segmentation and multi-instance classification. Considering that only a few infection regions in a CT image are related to the severity assessment, we first represent each input image by a bag that contains a set of 2D image patches (with each cropped from a specific slice). A multi-task multi-instance deep network (called M | Pattern recognition | "2021-01-27T00:00:00" | [
"Kelei He",
"Wei Zhao",
"Xingzhi Xie",
"Wen Ji",
"Mingxia Liu",
"Zhenyu Tang",
"Yinghuan Shi",
"Feng Shi",
"Yang Gao",
"Jun Liu",
"Junfeng Zhang",
"Dinggang Shen"
] | 10.1016/j.patcog.2021.107828
10.1148/ryct.2020200047 |
Computer aid screening of COVID-19 using X-ray and CT scan images: An inner comparison. | The objective of this study is to conduct a critical analysis to investigate and compare a group of computer aid screening methods of COVID-19 using chest X-ray images and computed tomography (CT) images. The computer aid screening methods include deep feature extraction, transfer learning, and machine learning image classification approaches. The deep feature extraction and transfer learning methods considered 13 pre-trained CNN models. The machine learning approach includes three sets of handcrafted features and three classifiers. The pre-trained CNN models include AlexNet, GoogleNet, VGG16, VGG19, Densenet201, Resnet18, Resnet50, Resnet101, Inceptionv3, Inceptionresnetv2, Xception, MobileNetv2 and ShuffleNet. The handcrafted features are GLCM, LBP & HOG, and the machine learning based classifiers are KNN, SVM & Naive Bayes. In addition, the different paradigms of classifiers are also analyzed. Overall, the comparative analysis is carried out over 65 classification models, i.e., 13 in deep feature extraction, 13 in transfer learning, and 39 in the machine learning approaches. All classification models performed better on the chest X-ray image set than on the CT scan image set. Among the 65 classification models, VGG19 with SVM achieved the highest accuracy of 99.81% when applied to the chest X-ray images. In conclusion, the findings of this analysis study are beneficial for researchers working towards designing computer aid tools for screening COVID-19 infection. | Journal of X-ray science and technology | "2021-01-26T00:00:00" | [
"Prabira Kumar Sethy",
"Santi Kumari Behera",
"Komma Anitha",
"Chanki Pandey",
"M R Khan"
] | 10.3233/XST-200784 |
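One of the handcrafted features compared in the study above is the Local Binary Pattern (LBP). Below is a minimal sketch of the basic 3×3 LBP and the histogram feature vector that would be fed to a KNN/SVM/Naive Bayes classifier; the neighbor ordering and the `>=` comparison convention are illustrative assumptions, not the paper's exact variant.

```python
import numpy as np

def local_binary_pattern(img):
    """Basic 3x3 Local Binary Pattern (LBP) sketch.

    Each interior pixel is encoded as an 8-bit code: one bit per neighbor,
    set when the neighbor is >= the center pixel. Returns codes for the
    interior (H-2, W-2) region.
    """
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    # 8 neighbors in clockwise order starting at the top-left corner
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[dy:dy + center.shape[0], dx:dx + center.shape[1]]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalized LBP histogram: the feature vector a classifier sees."""
    hist = np.bincount(local_binary_pattern(img).ravel(), minlength=256)
    return hist / hist.sum()
```

On a perfectly uniform image every neighbor ties with its center, so all interior pixels receive code 255 and the histogram collapses to a single bin.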
Deep Ensemble Model for Classification of Novel Coronavirus in Chest X-Ray Images. | The novel coronavirus, SARS-CoV-2, can be deadly to people, causing COVID-19. The ease of its propagation, coupled with its high capacity for illness and death in infected individuals, makes it a hazard to the community. Chest X-rays are among the most common, yet most difficult to interpret, radiographic examinations for early diagnosis of coronavirus-related infections. They carry a considerable amount of anatomical and physiological information, but it is sometimes difficult even for the expert radiologist to derive the related information they contain. Automatic classification using deep learning models can help in assessing these infections swiftly. Deep CNN models, namely MobileNet, ResNet50, and InceptionV3, were applied with different variations, including training the model from scratch, fine-tuning along with adjusting learned weights of all layers, and fine-tuning with learned weights along with augmentation. Fine-tuning with augmentation produced the best results in pretrained models. Of these, the two best-performing models (MobileNet and InceptionV3) selected for ensemble learning produced accuracy and FScore of 95.18% and 90.34%, and 95.75% and 91.47%, respectively. The proposed hybrid ensemble model generated by merging these deep models produced a classification accuracy and FScore of 96.49% and 92.97%. For the test dataset, which was kept separate, the model generated accuracy and FScore of 94.19% and 88.64%. Automatic classification using deep ensemble learning can help radiologists in the correct identification of coronavirus-related infections in chest X-rays. Consequently, this swift and computer-aided diagnosis can help in saving precious human lives and minimizing the social and economic impact on society. | Computational intelligence and neuroscience | "2021-01-26T00:00:00" | [
"Fareed Ahmad",
"Amjad Farooq",
"Muhammad Usman Ghani"
] | 10.1155/2021/8890226
10.3201/eid2313.170418
10.3389/fpubh.2014.00144
10.1016/S0140-6736(16)31012-1
10.1186/1471-2458-14-509
10.1371/journal.pntd.0003257
10.20506/rst.33.2.2292
10.1016/j.jmii.2020.02.001
10.1056/NEJMc2004973
10.1038/d41586-020-00974-w
10.1101/2020.03.08.982637
10.1017/ice.2020.58
10.1001/jama.2020.4756
10.1016/S0140-6736(20)30788-1
10.1038/d41586-020-00548-w
10.3390/vetsci7010028
10.1016/j.coviro.2017.01.002
10.1148/radiol.2020200642
10.1148/ryct.2020200034
10.1016/s0140-6736(20)30183-5
10.1148/radiol.2020200432
10.1371/journal.pone.0184554
10.1002/ima.22469
10.1007/978-3-030-55258-9_17
10.1155/2018/4168538
10.1155/2017/3048181
10.1109/ICABME.2015.7323305
10.1155/2019/4629859
10.1109/CVPR.2015.7298594
10.1109/CVPR.2017.195
10.1109/CVPR.2016.90
10.1007/s11263-015-0816-y
10.1016/j.eswa.2017.11.028
10.1109/ICAIT47043.2019.8987286
10.1007/3-540-57233-3_76
10.1148/radiol.2017162326
10.1148/radiol.2017162725
10.1155/2019/4180949
10.1109/tmi.2014.2350539
10.1109/CCECE.2019.8861969
10.1109/EBBT.2019.8741582
10.1109/ICECCT.2019.8869364
10.1109/CVPR.2017.369
10.3390/app10020559
10.1016/j.cell.2018.02.010
10.1093/jamia/ocv080
10.1101/2020.02.14.20023028
10.1016/j.eng.2020.04.010
10.1007/s13246-020-00865-4
10.1016/j.patrec.2020.09.010
10.1007/s10489-020-01829-7
10.1016/j.mehy.2020.109761
10.1016/j.cmpb.2020.105581
10.1016/j.cmpb.2020.105532
10.1109/TMI.2020.2993291
10.1016/j.media.2020.101794
10.1109/ACCESS.2020.3010287
10.1109/ACCESS.2020.3003810
10.17632/2fxz4px6d8.4
10.1109/access.2020.2971257
10.3390/app8101715
10.1007/s11042-019-08453-9
10.1186/s12859-017-1898-z
10.1109/tmi.2016.2528162
10.21037/atm.2019.08.54
10.1109/access.2019.2946000
10.1007/978-3-319-93000-8_92
10.1088/1742-6596/1237/2/022026
10.1007/s13246-019-00807-9 |
Association of AI quantified COVID-19 chest CT and patient outcome. | Severity scoring is a key step in managing patients with COVID-19 pneumonia. However, manual quantitative analysis by radiologists is a time-consuming task, while qualitative evaluation may be fast but highly subjective. This study aims to develop artificial intelligence (AI)-based methods to quantify disease severity and predict COVID-19 patient outcome.
We develop an AI-based framework that employs deep neural networks to efficiently segment lung lobes and pulmonary opacities. The volume ratio of pulmonary opacities inside each lung lobe gives the severity scores of the lobes, which are then used to predict ICU admission and mortality with three different machine learning methods. The developed methods were evaluated on datasets from two hospitals (site A: Firoozgar Hospital, Iran, 105 patients; site B: Massachusetts General Hospital, USA, 88 patients).
AI-based severity scores are strongly associated with those evaluated by radiologists (Spearman's rank correlation 0.837, [Formula: see text]). Using AI-based scores produced significantly higher ([Formula: see text]) area under the ROC curve (AUC) values. The developed AI method achieved the best performance of AUC = 0.813 (95% CI [0.729, 0.886]) in predicting ICU admission and AUC = 0.741 (95% CI [0.640, 0.837]) in mortality estimation on the two datasets.
Accurate severity scores can be obtained using the developed AI methods over chest CT images. The computed severity scores achieved better performance than radiologists in predicting COVID-19 patient outcome by consistently quantifying image features. Such developed techniques of severity assessment may be extended to other lung diseases beyond the current pandemic. | International journal of computer assisted radiology and surgery | "2021-01-24T00:00:00" | [
"Xi Fang",
"Uwe Kruger",
"Fatemeh Homayounieh",
"Hanqing Chao",
"Jiajin Zhang",
"Subba R Digumarthy",
"Chiara D Arru",
"Mannudeep K Kalra",
"Pingkun Yan"
] | 10.1007/s11548-020-02299-5
10.1148/radiol.2020200642
10.1016/j.media.2020.101844
10.1109/TMI.2020.3001036
10.1038/s41467-020-17971-2
10.1186/s41747-020-00173-2
10.1097/RLI.0000000000000672
10.1148/radiol.2020200905
10.1148/ryai.2020200079
10.1088/1361-6560/abbf9e
10.1148/radiol.2020201754
10.1148/radiol.2020201160
10.1148/radiol.2020200343
10.1097/RLI.0000000000000674
10.1148/ryct.2020200047
10.1371/journal.pone.0178944
10.2214/AJR.20.22976
10.1016/j.media.2020.101824 |
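The severity scoring step described above — the volume ratio of pulmonary opacities inside each lung lobe — reduces to a simple mask computation once the segmentations exist. A minimal sketch, assuming an integer-labeled lobe map and a boolean opacity mask (the function and argument names are hypothetical):

```python
import numpy as np

def lobe_severity_scores(lobe_labels, opacity_mask):
    """Per-lobe severity as the opacity/lobe volume ratio.

    `lobe_labels`: integer array where voxels of lobe k carry label k
    (0 = background). `opacity_mask`: boolean array marking segmented
    pulmonary opacities. Returns {lobe_id: fraction of the lobe volume
    occupied by opacities}, the per-lobe scores fed to the outcome models.
    """
    lobe_labels = np.asarray(lobe_labels)
    opacity_mask = np.asarray(opacity_mask, dtype=bool)
    scores = {}
    for lobe in np.unique(lobe_labels):
        if lobe == 0:          # skip background
            continue
        lobe_mask = lobe_labels == lobe
        scores[int(lobe)] = float((opacity_mask & lobe_mask).sum() / lobe_mask.sum())
    return scores
```

In the study, scores like these (one per lobe) were the inputs to the ICU-admission and mortality predictors.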
Automated Detection and Quantification of COVID-19 Airspace Disease on Chest Radiographs: A Novel Approach Achieving Expert Radiologist-Level Performance Using a Deep Convolutional Neural Network Trained on Digital Reconstructed Radiographs From Computed Tomography-Derived Ground Truth. | The aim of this study was to leverage volumetric quantification of airspace disease (AD) derived from a superior modality (computed tomography [CT]) serving as ground truth, projected onto digitally reconstructed radiographs (DRRs) to (1) train a convolutional neural network (CNN) to quantify AD on paired chest radiographs (CXRs) and CTs, and (2) compare the DRR-trained CNN to expert human readers in the CXR evaluation of patients with confirmed COVID-19.
We retrospectively selected a cohort of 86 COVID-19 patients (with positive reverse transcriptase-polymerase chain reaction test results) from March to May 2020 at a tertiary hospital in the northeastern United States, who underwent chest CT and CXR within 48 hours. The ground-truth volumetric percentage of COVID-19-related AD (POv) was established by manual AD segmentation on CT. The resulting 3-dimensional masks were projected into 2-dimensional anterior-posterior DRR to compute area-based AD percentage (POa). A CNN was trained with DRR images generated from a larger-scale CT dataset of COVID-19 and non-COVID-19 patients, automatically segmenting lungs, AD, and quantifying POa on CXR. The CNN POa results were compared with POa quantified on CXR by 2 expert readers and to the POv ground truth, by computing correlations and mean absolute errors.
Bootstrap mean absolute error and correlations between POa and POv were 11.98% (11.05%-12.47%) and 0.77 (0.70-0.82) for average of expert readers and 9.56% to 9.78% (8.83%-10.22%) and 0.78 to 0.81 (0.73-0.85) for the CNN, respectively.
Our CNN trained with DRR using CT-derived airspace quantification achieved expert radiologist level of accuracy in the quantification of AD on CXR in patients with positive reverse transcriptase-polymerase chain reaction test results for COVID-19. | Investigative radiology | "2021-01-23T00:00:00" | [
"Eduardo J Mortani Barbosa",
"Warren B Gefter",
"Florin C Ghesu",
"Siqi Liu",
"Boris Mailhe",
"Awais Mansoor",
"Sasa Grbic",
"Sebastian Vogt"
] | 10.1097/RLI.0000000000000763 |
ADOPT: automatic deep learning and optimization-based approach for detection of novel coronavirus COVID-19 disease using X-ray images. | Because of the daily rise in cases, hospitals have only a small number of COVID-19 test kits available. A rapid, automatic detection method is therefore needed as an alternative diagnostic option to prevent COVID-19 spread among individuals. In this article, a multi-objective optimization and deep learning-based technique for identifying patients infected with coronavirus using X-rays is proposed. A J48 decision tree classifies the deep features of coronavirus-affected X-ray images for efficient detection of infected patients. In this study, 11 different convolutional neural network-based (CNN) models (AlexNet, VGG16, VGG19, GoogleNet, ResNet18, ResNet50, ResNet101, InceptionV3, InceptionResNetV2, DenseNet201 and XceptionNet) are developed for detection of patients infected with coronavirus pneumonia using X-ray images. The efficiency of the proposed model is tested using the k-fold cross-validation method. Moreover, the parameters of the CNN deep learning model are tuned using a multi-objective spotted hyena optimizer (MOSHO). Extensive analysis shows that the proposed model can classify the X-ray images at good accuracy, precision, recall, specificity and F1-score rates. Extensive experimental results reveal that the proposed model outperforms competitive models in terms of well-known performance metrics. Hence, the proposed model is useful for real-time COVID-19 disease classification from chest X-ray images. Communicated by Ramaswamy H. Sarma. | Journal of biomolecular structure & dynamics | "2021-01-22T00:00:00" | [
"Gaurav Dhiman",
"Victor Chang",
"Krishna Kant Singh",
"Achyut Shankar"
] | 10.1080/07391102.2021.1875049
10.1148/radiol.2020200642
10.1016/j.patrec.2020.03.011
10.1007/s10489-019-01522-4
10.1016/j.engappai.2019.03.021
10.1016/j.advengsoft.2017.05.014
10.1016/j.knosys.2018.06.001
10.1016/j.knosys.2018.03.011
10.1016/j.knosys.2018.11.024
10.1016/j.knosys.2020.106560
10.1016/j.eswa.2020.114150
10.1016/j.ejrnm.2015.11.004
10.1016/S0140-6736(20)30183-5
10.1016/S0140-6736(20)30183-5
10.3390/jcm9020523
10.1016/j.engappai.2020.104008
10.1016/j.engappai.2020.103541
10.1136/bmj.m641
10.3390/jcm9020419
10.1016/j.idm.2020.02.002
10.3390/jcm9020462
10.1101/2020.02.14.20023028
10.1101/2020.02.27.20028027
10.1016/j.compbiomed.2019.103387
10.3390/jcm9020388 |
Automated processing of social media content for radiologists: applied deep learning to radiological content on twitter during COVID-19 pandemic. | The purpose of this study was to develop an automated process to analyze multimedia content on Twitter during the COVID-19 outbreak and classify content for radiological significance using deep learning (DL).
Using Twitter search features, all tweets containing keywords from both "radiology" and "COVID-19" were collected for the period January 01, 2020 up to April 24, 2020. The resulting dataset comprised 8354 tweets. Images were classified as (i) images with text, (ii) radiological content (e.g., CT scan snapshots, X-ray images), and (iii) non-medical content like personal images or memes. We trained our deep learning model using Convolutional Neural Networks (CNN) on a training dataset of 1040 labeled images drawn from all three classes. We then trained another DL classifier for segmenting images into categories based on human anatomy. All software used is open-source and was adapted for this research. The diagnostic performance of the algorithm was assessed by comparing results on a test set of 1885 images.
Our analysis shows that in COVID-19 related tweets on radiology, nearly 32% had textual images, another 24% had radiological content, and 44% were not of radiological significance. Our results indicated a 92% accuracy in classifying images originally labeled as chest X-ray or chest CT and a nearly 99% accurate classification of images containing medically relevant text. With larger training dataset and algorithmic tweaks, the accuracy can be further improved.
Applying DL on rich textual images and other metadata in tweets we can process and classify content for radiological significance in real time. | Emergency radiology | "2021-01-19T00:00:00" | [
"Shikhar Khurana",
"Rohan Chopra",
"Bharti Khurana"
] | 10.1007/s10140-020-01885-z
10.1016/j.jacr.2013.07.015
10.1016/j.jmir.2016.09.001
10.1016/j.jacr.2019.03.014
10.1371/journal.pone.0210689
10.1002/j.0022-0337.2016.80.2.tb06070.x
10.1017/cem.2020.361
10.1016/j.jacr.2017.09.010 |
COVID-19 diagnosis from chest X-ray images using transfer learning: Enhanced performance by debiasing dataloader. | Chest X-ray imaging has proven to be a powerful diagnostic method for detecting and diagnosing COVID-19 cases due to its easy accessibility, lower cost, and rapid imaging time.
This study aims to improve efficacy of screening COVID-19 infected patients using chest X-ray images with the help of a developed deep convolutional neural network model (CNN) entitled nCoV-NET.
To train and evaluate the performance of the developed model, three datasets were collected from the resources "ChestX-ray14", "COVID-19 image data collection", and "Chest X-ray collection from Indiana University," respectively. Overall, 299 COVID-19 pneumonia cases and 1,522 non-COVID-19 cases were included in this study. To overcome probable bias due to the unbalanced classes in the datasets, ResNet, DenseNet, and VGG architectures were re-trained in the fine-tuning stage of the process to distinguish COVID-19 classes using a transfer learning method. Lastly, the optimized final nCoV-NET model was applied to the testing dataset to verify the performance of the proposed model.
Although the performance parameters of all re-trained architectures were close to each other, the final nCoV-NET model, optimized using the DenseNet-161 architecture in the transfer learning stage, exhibited the highest performance for classification of COVID-19 cases, with an accuracy of 97.1%. The activation mapping method was used to create activation maps that highlight the crucial areas of the radiograph to improve causality and intelligibility.
This study demonstrated that the proposed CNN model called nCoV-NET can be utilized for reliably detecting COVID-19 cases using chest X-ray images to accelerate the triaging and save critical time for disease control as well as assisting the radiologist to validate their initial diagnosis. | Journal of X-ray science and technology | "2021-01-19T00:00:00" | [
"Çağín Polat",
"Onur Karaman",
"Ceren Karaman",
"Güney Korkmaz",
"Mehmet Can Balcı",
"Sevim Ercan Kelek"
] | 10.3233/XST-200757 |
Deep Learning Models for Predicting Severe Progression in COVID-19-Infected Patients: Retrospective Study. | Many COVID-19 patients rapidly progress to respiratory failure with a broad range of severities. Identification of high-risk cases is critical for early intervention.
The aim of this study is to develop deep learning models that can rapidly identify high-risk COVID-19 patients based on computed tomography (CT) images and clinical data.
We analyzed 297 COVID-19 patients from five hospitals in Daegu, South Korea. A mixed artificial convolutional neural network (ACNN) model, combining an artificial neural network for clinical data and a convolutional neural network for 3D CT imaging data, was developed to classify these cases as either high risk of severe progression (ie, event) or low risk (ie, event-free).
Using the mixed ACNN model, we were able to obtain high classification performance using novel coronavirus pneumonia lesion images (ie, 93.9% accuracy, 80.8% sensitivity, 96.9% specificity, and 0.916 area under the curve [AUC] score) and lung segmentation images (ie, 94.3% accuracy, 74.7% sensitivity, 95.9% specificity, and 0.928 AUC score) for event versus event-free groups.
Our study successfully differentiated high-risk cases among COVID-19 patients using imaging and clinical features. The developed model can be used as a predictive tool for interventions in aggressive therapies. | JMIR medical informatics | "2021-01-19T00:00:00" | [
"Thao Thi Ho",
"Jongmin Park",
"Taewoo Kim",
"Byunggeon Park",
"Jaehee Lee",
"Jin Young Kim",
"Ki Beom Kim",
"Sooyoung Choi",
"Young Hwan Kim",
"Jae-Kwang Lim",
"Sanghun Choi"
] | 10.2196/24973
10.1056/NEJMp2000929
10.1016/j.coviro.2018.01.001
10.1073/pnas.2009637117
10.1056/NEJMoa2002032
10.1164/rccm.201908-1581ST
10.1080/22221751.2020.1745095
10.1080/22221751.2020.1745095
10.1016/j.ejrad.2020.108961
10.2196/19569
10.7150/thno.45985
10.1148/radiol.2020201365
10.1101/2020.03.28.20045997
10.1101/2020.02.29.20029603
10.1101/2020.02.27.20028027
10.1371/journal.pone.0230548
10.1371/journal.pone.0230548
10.1101/2020.03.25.20043331
10.1136/bmj.m1328
10.2196/24018
10.1016/j.cell.2020.04.045
10.1364/OE.18.015256
10.1148/radiol.2020201433
10.1109/cvpr.2016.90
10.1109/cvpr.2016.308
10.1109/cvpr.2017.243
10.1145/3065386
10.1038/nature14539
10.1007/s11263-019-01228-7.pdf
10.1007/s11263-019-01228-7
10.1016/j.patcog.2007.04.009
10.1016/j.ipm.2009.03.002
10.1038/s41592-019-0686-2
10.12688/f1000research.7035.2
10.1016/s0140-6736(20)30183-5
10.1016/j.ejrad.2004.01.005
10.1016/j.media.2019.101628 |
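The mixed ACNN described above fuses a CNN branch (3D CT imaging) with an ANN branch (clinical data). A minimal late-fusion sketch of the final step — concatenating the two branches' feature vectors and applying a logistic output unit to produce a high-risk probability — is shown below; the weights, dimensions, and function name are hypothetical, not the paper's trained model.

```python
import numpy as np

def fused_risk_score(img_features, clinical_features, w_img, w_clin, bias):
    """Late-fusion sketch of a mixed imaging + clinical model.

    CNN-derived imaging features and clinical variables are concatenated
    and passed through a single logistic unit, yielding the probability
    of the 'event' (severe progression) class.
    """
    x = np.concatenate([img_features, clinical_features])  # fused feature vector
    w = np.concatenate([w_img, w_clin])                    # matching weights
    z = float(x @ w + bias)
    return 1.0 / (1.0 + np.exp(-z))                        # sigmoid output
```

With all weights and bias at zero the unit is uninformative and outputs 0.5; a large positive pre-activation pushes the probability toward 1.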
Multi-window back-projection residual networks for reconstructing COVID-19 CT super-resolution images. | As coronavirus disease 2019 (COVID-19) continues to spread worldwide, improving the resolution of COVID-19 computed tomography (CT) images has become an important task. At present, single-image super-resolution (SISR) models based on convolutional neural networks (CNN) generally suffer from problems such as the loss of high-frequency information and large model size due to the deep network structure.
In this work, we propose an optimization model based on multi-window back-projection residual network (MWSR), which outperforms most of the state-of-the-art methods. Firstly, we use multi-window to refine the same feature map at the same time to obtain richer high/low frequency information, and fuse and filter out the features needed by the deep network. Then, we develop a back-projection network based on the dilated convolution, using up-projection and down-projection modules to extract image features. Finally, we merge several repeated and continuous residual modules with global features, merge the information flow through the network, and input them to the reconstruction module.
The proposed method shows the superiority over the state-of-the-art methods on the benchmark dataset, and generates clear COVID-19 CT super-resolution images.
Both subjective visual effects and objective evaluation indicators are improved, and the model specifications are optimized. Therefore, the MWSR method can improve the clarity of CT images of COVID-19 and effectively assist the diagnosis and quantitative assessment of COVID-19. | Computer methods and programs in biomedicine | "2021-01-18T00:00:00" | [
"Defu Qiu",
"Yuhu Cheng",
"Xuesong Wang",
"Xiaoqiang Zhang"
] | 10.1016/j.cmpb.2021.105934 |
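The up-projection and down-projection modules described in the abstract above echo classical iterative back-projection for super-resolution. As a conceptual illustration only (this is not the MWSR architecture, and all names are made up), a 1-D sketch:

```python
# Iterative back-projection for 2x super-resolution of a 1-D signal.
# Conceptual analogue of the up-/down-projection idea described above;
# NOT the MWSR network -- purely illustrative.

def downsample(x):
    """Average adjacent pairs (simulates the low-resolution acquisition)."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def upsample(x):
    """Nearest-neighbour 2x upsampling."""
    out = []
    for v in x:
        out.extend([v, v])
    return out

def back_project(low_res, iterations=5):
    """Refine a high-res estimate so it stays consistent with low_res."""
    estimate = [0.0] * (2 * len(low_res))    # blank high-res estimate
    for _ in range(iterations):
        residual = [lo - est for lo, est in zip(low_res, downsample(estimate))]
        estimate = [e + c for e, c in zip(estimate, upsample(residual))]
    return estimate

low = [1.0, 3.0, 2.0]
high = back_project(low)
print(downsample(high))   # -> [1.0, 3.0, 2.0]: consistent with the observation
```

The loop repeatedly projects the reconstruction error back into the high-resolution estimate, which is the intuition behind the learned up-/down-projection pairs in back-projection networks.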
Distant Domain Transfer Learning for Medical Imaging. | Medical image processing is one of the most important topics in the Internet of Medical Things (IoMT). Recently, deep learning methods have achieved state-of-the-art performance on medical imaging tasks. In this paper, we propose a novel transfer learning framework for medical image classification. Moreover, we apply our method to COVID-19 diagnosis with lung Computed Tomography (CT) images. However, well-labeled training data sets cannot be easily accessed due to the disease's novelty and privacy policies. The proposed method has two components: a reduced-size U-Net segmentation model and a Distant Feature Fusion (DFF) classification model. This study is related to an important but not well-investigated transfer learning problem, termed Distant Domain Transfer Learning (DDTL). In this study, we develop a DDTL model for COVID-19 diagnosis using unlabeled Office-31, Caltech-256, and chest X-ray image data sets as the source data, and a small set of labeled COVID-19 lung CT scans as the target data. The main contributions of this study are: 1) the proposed method benefits from unlabeled data in distant domains, which can be easily accessed, 2) it can effectively handle the distribution shift between the training data and the testing data, and 3) it achieves 96% classification accuracy, which is 13% higher than "non-transfer" algorithms and 8% higher than existing transfer and distant transfer algorithms. | IEEE journal of biomedical and health informatics | "2021-01-16T00:00:00" | [
"Shuteng Niu",
"Meryl Liu",
"Yongxin Liu",
"Jian Wang",
"Houbing Song"
] | 10.1109/JBHI.2021.3051470 |
Explainable COVID-19 Detection Using Chest CT Scans and Deep Learning. | This paper explores how well deep learning models trained on chest CT images can diagnose COVID-19 infected people in a fast and automated process. To this end, we adopted advanced deep network architectures and proposed a transfer learning strategy using custom-sized input tailored for each deep architecture to achieve the best performance. We conducted extensive sets of experiments on two CT image datasets, namely, the SARS-CoV-2 CT-scan and the COVID19-CT. The results show superior performances for our models compared with previous studies. Our best models achieved average accuracy, precision, sensitivity, specificity, and F1-score values of 99.4%, 99.6%, 99.8%, 99.6%, and 99.4% on the SARS-CoV-2 dataset, and 92.9%, 91.3%, 93.7%, 92.2%, and 92.5% on the COVID19-CT dataset, respectively. For better interpretability of the results, we applied visualization techniques to provide visual explanations for the models' predictions. Feature visualizations of the learned features show well-separated clusters representing CT images of COVID-19 and non-COVID-19 cases. Moreover, the visualizations indicate that our models are not only capable of identifying COVID-19 cases but also provide accurate localization of the COVID-19-associated regions, as indicated by well-trained radiologists. | Sensors (Basel, Switzerland) | "2021-01-15T00:00:00" | [
"Hammam Alshazly",
"Christoph Linse",
"Erhardt Barth",
"Thomas Martinetz"
] | 10.3390/s21020455
10.3201/eid2606.200239
10.1016/S0140-6736(20)30607-3
10.1016/S0140-6736(20)30211-7
10.1016/j.ejrad.2020.108961
10.1148/radiol.2020200432
10.1148/radiol.2020200642
10.1148/radiol.2020200241
10.1007/s10916-020-01562-1
10.1007/s11263-015-0816-y
10.1016/j.sysarc.2020.101830
10.1109/ACCESS.2020.3024116
10.1109/TMI.2016.2528162
10.1148/radiol.2020200905
10.1038/s41598-020-76282-0
10.1016/j.eng.2020.04.010
10.1016/j.imu.2020.100427
10.1007/s13246-020-00865-4
10.1371/journal.pone.0235187
10.1016/j.ejrad.2020.109041
10.1007/s12652-020-02669-6
10.1038/s41598-020-76550-z
10.1109/JBHI.2020.3023246
10.1007/s10489-020-01943-6
10.1016/j.cmpb.2020.105581
10.1016/j.chaos.2020.110122
10.1016/j.media.2020.101794
10.1016/j.cmpb.2020.105608
10.1101/2020.03.24.20043117
10.1101/2020.02.23.20026930
10.1080/07391102.2020.1788642
10.1016/j.cmpb.2020.105532
10.1101/2020.04.13.20063479
10.3390/s19194139
10.1101/2020.04.24.20078584
10.1101/2020.04.13.20063941
10.1016/j.chaos.2020.110190
10.1155/2020/8843664
10.36227/techrxiv.12476426.v1
10.1007/s00330-020-06801-0
10.1016/j.diii.2020.03.014 |
An Efficient Method for Coronavirus Detection Through X-rays Using Deep Neural Network. | Coronavirus (COVID-19) is a group of infectious diseases caused by related viruses called coronaviruses. In humans, the seriousness of infection caused by a coronavirus in the respiratory tract can vary from mild to lethal. Serious illness can develop in older people and those with underlying medical problems such as diabetes, cardiovascular disease, cancer, and chronic respiratory disease. For the diagnosis of coronavirus disease, due to the growing number of cases, only a limited number of COVID-19 test kits are available in hospitals. Hence, it is important to implement an automated system as an immediate alternative diagnostic option to curb the spread of COVID-19 in the population.
This paper proposes a deep learning model for the classification of coronavirus infected patient detection using chest X-ray radiographs.
A fully connected convolutional neural network model is developed to classify healthy and diseased X-ray radiographs. The proposed neural network model consists of seven convolutional layers with the rectified linear unit, softmax (last layer) activation functions, and max-pooling layers which were trained using the publicly available COVID-19 dataset.
For validation of the proposed model, the publicly available chest X-ray radiograph dataset consisting of COVID-19 and normal patients' images was used. Evaluated on various metrics such as precision, recall, MSE, RMSE, and accuracy, the proposed CNN model achieves an accuracy of 98.07%. | Current medical imaging | "2021-01-14T00:00:00" | [
"P Srinivasa Rao",
"Pradeep Bheemavarapu",
"P S Latha Kalyampudi",
"T V Madhusudhana Rao"
] | 10.2174/1573405617999210112193220 |
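A single convolution, ReLU, and max-pooling stage of the kind the abstract above describes can be sketched in pure Python. This is a didactic toy (one 2-D input, one hand-picked filter, a two-class softmax head), not the authors' seven-layer trained model:

```python
import math

# One convolution -> ReLU -> 2x2 max-pool stage plus a softmax head, in the
# spirit of the architecture described above. Didactic sketch only; the
# paper's seven-layer model and its trained weights are not reproduced here.

def conv2d_valid(image, kernel):
    """Valid (no-padding) 2-D cross-correlation."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def relu(fm):
    return [[max(0.0, v) for v in row] for row in fm]

def maxpool2x2(fm):
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, len(fm[0]) - 1, 2)]
            for i in range(0, len(fm) - 1, 2)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

image = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]        # toy 4x4 "radiograph"
kernel = [[1, 0], [0, 1]]         # toy hand-picked filter
feature = maxpool2x2(relu(conv2d_valid(image, kernel)))
probs = softmax([feature[0][0], 0.0])   # hypothetical [diseased, healthy] head
```

In the real model, many such stages are stacked and the kernels are learned from the labeled X-ray data rather than fixed by hand.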
Diabetic Retinopathy Screening Using Artificial Intelligence and Handheld Smartphone-Based Retinal Camera. | Portable retinal cameras and deep learning (DL) algorithms are novel tools adopted by diabetic retinopathy (DR) screening programs. Our objective is to evaluate the diagnostic accuracy of a DL algorithm and the performance of portable handheld retinal cameras in the detection of DR in a large and heterogeneous type 2 diabetes population in a real-world, high burden setting.
Participants underwent fundus photographs of both eyes with a portable retinal camera (Phelcom Eyer). Classification of DR was performed by human reading and a DL algorithm (PhelcomNet), consisting of a convolutional neural network trained on a dataset of fundus images captured exclusively with the portable device; both methods were compared. We calculated the area under the curve (AUC), sensitivity, and specificity for more than mild DR.
A total of 824 individuals with type 2 diabetes were enrolled at Itabuna Diabetes Campaign, a subset of 679 (82.4%) of whom could be fully assessed. The algorithm sensitivity/specificity was 97.8% (95% CI 96.7-98.9)/61.4% (95% CI 57.7-65.1); AUC was 0.89. All false negative cases were classified as moderate non-proliferative diabetic retinopathy (NPDR) by human grading.
The DL algorithm reached a good diagnostic accuracy for more than mild DR in a real-world, high burden setting. The performance of the handheld portable retinal camera was adequate, with over 80% of individuals presenting with images of sufficient quality. Portable devices and artificial intelligence tools may increase coverage of DR screening programs. | Journal of diabetes science and technology | "2021-01-14T00:00:00" | [
"Fernando Korn Malerbi",
"Rafael Ernane Andrade",
"Paulo Henrique Morales",
"José Augusto Stuchi",
"Diego Lencione",
"Jean Vitor de Paulo",
"Mayana Pereira Carvalho",
"Fabrícia Silva Nunes",
"Roseanne Montargil Rocha",
"Daniel A Ferraz",
"Rubens Belfort"
] | 10.1177/1932296820985567
10.1177/1932296820906212
10.5935/0004-2749.20200070
A novel deep learning-based quantification of serial chest computed tomography in Coronavirus Disease 2019 (COVID-19). | This study aims to explore and compare a novel deep learning-based quantification with the conventional semi-quantitative computed tomography (CT) scoring for the serial chest CT scans of COVID-19. 95 patients with confirmed COVID-19 and a total of 465 serial chest CT scans were involved, including 61 moderate patients (moderate group, 319 chest CT scans) and 34 severe patients (severe group, 146 chest CT scans). Conventional CT scoring and deep learning-based quantification were performed for all chest CT scans for two study goals: (1) Correlation between these two estimations; (2) Exploring the dynamic patterns using these two estimations between moderate and severe groups. The Spearman's correlation coefficient between these two estimation methods was 0.920 (p < 0.001). Predicted pulmonary involvement (CT score and percent of pulmonary lesions calculated using deep learning-based quantification) increased more rapidly and reached a higher peak on the 23rd day from symptom onset in the severe group, whereas it peaked on the 18th day in the moderate group, with faster absorption of the lesions. The deep learning-based quantification for COVID-19 showed a good correlation with the conventional CT scoring and demonstrated a potential benefit in the estimation of disease severities of COVID-19. | Scientific reports | "2021-01-13T00:00:00" | [
"Feng Pan",
"Lin Li",
"Bo Liu",
"Tianhe Ye",
"Lingli Li",
"Dehan Liu",
"Zezhen Ding",
"Guangfeng Chen",
"Bo Liang",
"Lian Yang",
"Chuansheng Zheng"
] | 10.1038/s41598-020-80261-w
10.1007/s00330-020-06731-x
10.1148/radiol.2020200432
10.1148/radiol.2020200343
10.1148/radiol.2020200642
10.1001/jama.2020.1585
10.1148/radiol.2020200370
10.1016/S1473-3099(20)30086-4
10.1148/radiol.2020200843
10.2214/AJR.20.22976
10.1097/RLI.0000000000000670
10.1097/RLI.0000000000000672
10.1148/radiol.11092149
10.1148/rg.2018170048
10.1148/radiol.2462070712
10.1007/s11604-020-01010-7
10.1148/radiol.2363040958
10.1016/j.media.2017.06.014
10.9734/BJMCS/2017/32229
10.1007/s00330-016-4317-3
10.1016/j.compmedimag.2008.04.005
10.1109/TMI.2012.2219881
10.3760/cma.j.issn.0254-6450.2020.02.003
10.1056/NEJMoa2002032
10.1016/S2213-2600(20)30370-2
10.7150/ijms.46614
10.1038/s41598-020-68057-4
10.1016/S2213-2600(20)30453-7
10.1016/j.ejrad.2020.109233
10.3389/fpubh.2020.587937
10.1038/s41551-020-00633-5 |
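The Spearman rank correlation used above to compare the conventional CT score with the deep learning-based quantification can be computed as follows. The paired values in the example are invented for illustration, not data from the study:

```python
# Spearman's rank correlation, the statistic used above to compare the
# conventional CT score with the deep learning-based quantification.
# Pure-Python sketch; the paired values below are hypothetical.

def ranks(values):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation applied to the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

ct_score = [2, 5, 8, 11, 15]               # hypothetical semi-quantitative scores
lesion_pct = [1.0, 4.2, 9.8, 20.5, 34.0]   # hypothetical DL lesion percentages
print(spearman(ct_score, lesion_pct))      # -> 1.0 (both strictly increasing)
```

Because it works on ranks, the statistic captures monotone agreement between the two estimations even when their scales differ, which is why it suits comparing a coarse score with a continuous percentage.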
COV19-CNNet and COV19-ResNet: Diagnostic Inference Engines for Early Detection of COVID-19. | Chest CT is used in the COVID-19 diagnosis process as a significant complement to the reverse transcription polymerase chain reaction (RT-PCR) technique. However, it has several drawbacks, including long disinfection and ventilation times, excessive radiation effects, and high costs. While X-ray radiography is more useful for detecting COVID-19, it is insensitive to the early stages of the disease. We have developed inference engines that will turn X-ray machines into powerful diagnostic tools by using deep learning technology to detect COVID-19. We named these engines COV19-CNNet and COV19-ResNet. The former is based on convolutional neural network architecture; the latter is on residual neural network (ResNet) architecture. This research is a retrospective study. The database consists of 210 COVID-19, 350 viral pneumonia, and 350 normal (healthy) chest X-ray (CXR) images that were created using two different data sources. This study was focused on the problem of multi-class classification (COVID-19, viral pneumonia, and normal), which is a rather difficult task for the diagnosis of COVID-19. The classification accuracy levels for COV19-ResNet and COV19-CNNet were 97.61% and 94.28%, respectively. The inference engines were developed from scratch using new and special deep neural networks without pre-trained models, unlike other studies in the field. These powerful diagnostic engines allow for the early detection of COVID-19 as well as distinguish it from viral pneumonia with similar radiological appearances. Thus, they can help in fast recovery at the early stages, prevent the COVID-19 outbreak from spreading, and contribute to reducing pressure on health-care systems worldwide. | Cognitive computation | "2021-01-12T00:00:00" | [
"Ayturk Keles",
"Mustafa Berk Keles",
"Ali Keles"
] | 10.1007/s12559-020-09795-5
10.1016/S2213-2600(20)30076-X
10.1148/radiol.2017161659
10.1148/radiol.2020200370
10.1148/radiol.2481071451
10.1016/j.tibtech.2018.08.005
10.1016/j.zemedi.2018.11.002
10.1016/j.zemedi.2018.12.003
10.1007/s13246-020-00865-4
10.3390/sym12040651
10.1016/j.compbiomed.2020.103792
10.1155/2019/4180949
10.7717/peerj.6201
10.1109/TMI.2016.2538465
10.1016/j.cogsys.2018.12.007
10.1038/nature21056
10.1148/radiol.2020200463
10.1148/radiol.2020200241
10.1148/ryct.2020200034 |
Chest X-ray image phase features for improved diagnosis of COVID-19 using convolutional neural network. | Recently, the outbreak of the novel coronavirus disease 2019 (COVID-19) pandemic has seriously endangered human health and life. In fighting against COVID-19, effective diagnosis of infected patients is critical for preventing the spread of the disease. Due to the limited availability of test kits, the need for auxiliary diagnostic approaches has increased. Recent research has shown that radiography of COVID-19 patients, such as CT and X-ray, contains salient information about the COVID-19 virus and could be used as an alternative diagnosis method. Chest X-ray (CXR), due to its faster imaging time, wide availability, low cost, and portability, gains much attention and becomes very promising. In order to reduce intra- and inter-observer variability during radiological assessment, computer-aided diagnostic tools have been used to supplement medical decision making and subsequent management. Computational methods with high accuracy and robustness are required for rapid triaging of patients and aiding radiologists in the interpretation of the collected data.
In this study, we design a novel multi-feature convolutional neural network (CNN) architecture for multi-class improved classification of COVID-19 from CXR images. CXR images are enhanced using a local phase-based image enhancement method. The enhanced images, together with the original CXR data, are used as an input to our proposed CNN architecture. Using ablation studies, we show the effectiveness of the enhanced images in improving the diagnostic accuracy. We provide quantitative evaluation on two datasets and qualitative results for visual inspection. Quantitative evaluation is performed on data consisting of 8851 normal (healthy), 6045 pneumonia, and 3323 COVID-19 CXR scans.
In Dataset-1, our model achieves 95.57% average accuracy for three-class classification, and 99% precision, recall, and F1-scores for COVID-19 cases. For Dataset-2, we obtained 94.44% average accuracy, and 95% precision, recall, and F1-scores for the detection of COVID-19.
Our proposed multi-feature-guided CNN achieves improved results compared to single-feature CNN proving the importance of the local phase-based CXR image enhancement. Future work will involve further evaluation of the proposed method on a larger-size COVID-19 dataset as they become available. | International journal of computer assisted radiology and surgery | "2021-01-10T00:00:00" | [
"Xiao Qi",
"Lloyd G Brown",
"David J Foran",
"John Nosher",
"Ilker Hacihaliloglu"
] | 10.1007/s11548-020-02305-w
10.1016/S1473-3099(20)30120-1
10.1128/AEM.69.7.4116-4122.2003
10.1016/j.compbiomed.2020.103792
10.1016/j.mehy.2020.109761
10.1109/78.969520
10.1016/j.array.2019.100004
10.1007/s11548-019-01934-0
10.1109/TMI.2017.2712367
10.1016/j.media.2017.07.005
10.1007/s11263-019-01228-7 |
Cascaded deep transfer learning on thoracic CT in COVID-19 patients treated with steroids. | Journal of medical imaging (Bellingham, Wash.) | "2021-01-09T00:00:00" | [
"Jordan D Fuhrman",
"Jun Chen",
"Zegang Dong",
"Fleming Y M Lure",
"Zhe Luo",
"Maryellen L Giger"
] | 10.1117/1.JMI.8.S1.014501
10.1056/NEJMsb2005114
10.1016/S2213-2600(20)30076-X
10.3332/ecancer.2020.1023
10.1001/jama.2020.17023
10.1056/NEJMoa2021436
10.1038/s41392-020-0158-2
10.1093/cid/ciaa601
10.1148/ryct.2020200152
10.1148/ryct.2020200034
10.1148/radiol.2020200642
10.1146/annurev-bioeng-071516-044442
10.1016/j.jacr.2017.12.028
10.1016/j.media.2017.07.005
10.1002/mp.13264
10.1002/mp.12453
10.1109/TMI.2016.2535302
10.1080/14786440109462720
10.1016/j.neucom.2011.06.026
10.1148/radiol.2533090280
10.1006/jmps.1998.1218
10.1080/01621459.1958.10501452
10.1001/jamainternmed.2020.0994
10.1016/S0140-6736(20)30566-3
10.1056/NEJM199701233360402
10.1148/radiology.143.1.7063747 |
Accurately Differentiating Between Patients With COVID-19, Patients With Other Viral Infections, and Healthy Individuals: Multimodal Late Fusion Learning Approach. | Effectively identifying patients with COVID-19 using non-polymerase chain reaction biomedical data is critical for achieving optimal clinical outcomes. Currently, there is a lack of comprehensive understanding of various biomedical features and appropriate analytical approaches for enabling the early detection and effective diagnosis of patients with COVID-19.
We aimed to combine low-dimensional clinical and lab testing data, as well as high-dimensional computed tomography (CT) imaging data, to accurately differentiate between healthy individuals, patients with COVID-19, and patients with non-COVID viral pneumonia, especially at the early stage of infection.
In this study, we recruited 214 patients with nonsevere COVID-19, 148 patients with severe COVID-19, 198 noninfected healthy participants, and 129 patients with non-COVID viral pneumonia. The participants' clinical information (ie, 23 features), lab testing results (ie, 10 features), and CT scans upon admission were acquired and used as 3 input feature modalities. To enable the late fusion of multimodal features, we constructed a deep learning model to extract a 10-feature high-level representation of CT scans. We then developed 3 machine learning models (ie, k-nearest neighbor, random forest, and support vector machine models) based on the combined 43 features from all 3 modalities to differentiate between the following 4 classes: nonsevere, severe, healthy, and viral pneumonia.
Multimodal features provided a substantial performance gain over the use of any single feature modality. All 3 machine learning models had high overall prediction accuracy (95.4%-97.7%) and high class-specific prediction accuracy (90.6%-99.9%).
Compared to the existing binary classification benchmarks that are often focused on single-feature modality, this study's hybrid deep learning-machine learning framework provided a novel and effective breakthrough for clinical applications. Our findings, which come from a relatively large sample size, and analytical workflow will supplement and assist with clinical decision support for current COVID-19 diagnostic methods and other clinical applications with high-dimensional multimodal biomedical features. | Journal of medical Internet research | "2021-01-07T00:00:00" | [
"Ming Xu",
"Liu Ouyang",
"Lei Han",
"Kai Sun",
"Tingting Yu",
"Qian Li",
"Hua Tian",
"Lida Safarnejad",
"Hengdong Zhang",
"Yue Gao",
"Forrest Sheng Bao",
"Yuanfang Chen",
"Patrick Robinson",
"Yaorong Ge",
"Baoli Zhu",
"Jie Liu",
"Shi Chen"
] | 10.2196/25535
10.1002/jmv.25702
10.1002/jmv.25721
10.1016/j.ajic.2020.07.011
10.1109/TMI.2020.3001810
10.1002/hep.31446
10.1016/S1473-3099(20)30086-4
10.1007/s00330-020-06975-7
10.1097/RLI.0000000000000689
10.1016/S0262-4079(20)30909-X
10.1001/jama.2020.2648
10.1056/NEJMp2005689
10.1148/radiol.2020200432
10.1038/s41591-020-0916-2
10.2196/19822
10.1016/j.crad.2020.03.008
10.1093/cid/ciaa322
10.1007/s10916-020-01597-4
10.33321/cdi.2020.44.55
10.1056/NEJMe2009758
10.3348/kjr.2020.0181
10.1007/s00330-020-06925-3
10.1007/s00330-020-07018-x
10.1080/17843286.2020.1798668
10.1016/S1473-3099(20)30460-6
10.1038/s41598-019-42294-8
10.1148/radiol.2020200642
10.1093/ofid/ofaa171
10.1097/RLI.0000000000000670
10.1016/j.jinf.2020.04.004
10.2214/AJR.20.22959
10.1016/j.jinf.2020.02.022
10.1109/TPAMI.2018.2798607
10.1164/rccm.201908-1581ST
10.1016/j.compbiomed.2020.103795
10.2196/19569
10.1109/ACCESS.2020.3005510
10.1148/radiol.2020200905
10.1101/2020.05.18.20105841
10.1371/journal.pone.0235187
10.1093/cid/ciaa538
10.1515/cclm-2020-0398
10.1038/s41591-020-0931-3
10.1148/radiol.2020200823
10.1002/cyto.a.23990
10.1089/omi.2020.0073
10.1089/omi.2020.0093
10.1016/j.dsx.2020.06.070
10.1016/j.diabres.2020.108347
10.1001/jamacardio.2020.1286
10.3390/jcm9051407 |
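The late-fusion step described above (concatenating clinical, lab, and CT-derived features into one vector before classification) can be sketched with a toy k-nearest-neighbor classifier. The feature values below are hypothetical, and the real model fused 43 features across the three modalities:

```python
# Late fusion of multimodal features followed by k-nearest-neighbour
# classification, mirroring the workflow described above. Pure-Python
# sketch with tiny made-up feature vectors.

def fuse(clinical, lab, ct_repr):
    """Late fusion: concatenate per-modality feature vectors."""
    return clinical + lab + ct_repr

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label). Majority vote of k nearest."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda fv_lbl: dist(fv_lbl[0], query))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical fused samples: [clinical..., lab..., CT representation...]
train = [
    (fuse([0.1, 0.2], [0.9], [0.8, 0.7]), "severe"),
    (fuse([0.2, 0.1], [0.8], [0.9, 0.6]), "severe"),
    (fuse([0.9, 0.8], [0.1], [0.1, 0.2]), "healthy"),
    (fuse([0.8, 0.9], [0.2], [0.2, 0.1]), "healthy"),
]
query = fuse([0.15, 0.15], [0.85], [0.85, 0.65])
print(knn_predict(train, query, k=3))   # -> "severe"
```

The key design point is that fusion happens after each modality has been reduced to a compact representation (here, the 10-feature CT summary from the deep model), so low-dimensional clinical data and high-dimensional imaging data can be handled by one conventional classifier.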
Fast automated detection of COVID-19 from medical images using convolutional neural networks. | Coronavirus disease 2019 (COVID-19) is a global pandemic posing significant health risks. The diagnostic test sensitivity of COVID-19 is limited due to irregularities in specimen handling. We propose a deep learning framework that identifies COVID-19 from medical images as an auxiliary testing method to improve diagnostic sensitivity. We use pseudo-coloring methods and a platform for annotating X-ray and computed tomography images to train the convolutional neural network, which achieves a performance similar to that of experts and provides high scores for multiple statistical indices (F1 scores > 96.72% (0.9307, 0.9890) and specificity >99.33% (0.9792, 1.0000)). Heatmaps are used to visualize the salient features extracted by the neural network. The neural network-based regression provides strong correlations between the lesion areas in the images and five clinical indicators, resulting in high accuracy of the classification framework. The proposed method represents a potential computer-aided diagnosis method for COVID-19 in clinical practice. | Communications biology | "2021-01-06T00:00:00" | [
"Shuang Liang",
"Huixiang Liu",
"Yu Gu",
"Xiuhua Guo",
"Hongjun Li",
"Li Li",
"Zhiyuan Wu",
"Mengyang Liu",
"Lixin Tao"
] | 10.1038/s42003-020-01535-7
10.1101/2020.02.11.20021493
10.1016/j.tvjl.2005.12.014
10.1377/hlthaff.27.6.1491
10.1021/acsnano.0c02624
10.1162/neco_a_00990
10.1109/TEVC.2019.2916183
10.1109/TNNLS.2018.2876865
10.1038/nature24270
10.1109/MCI.2018.2840738
10.1016/j.cmpb.2013.10.011
10.1001/jama.2016.17216
10.1145/358669.358692
10.1007/s003300101100
10.1038/nature14539
10.1111/1467-9639.00050
10.1007/BF02295996
10.1016/j.media.2017.06.015
10.1007/s10278-013-9622-7
10.1007/s00259-020-04929-1 |
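The pseudo-coloring preprocessing mentioned above maps grayscale intensities to color triples before network training. The study's exact color map is not specified here, so this piecewise blue-to-green-to-red ramp is a generic stand-in:

```python
# A simple pseudo-colouring of 8-bit grayscale intensities, in the spirit of
# the preprocessing described above. The study's actual colour map is not
# reproduced; this generic ramp is illustrative only.

def pseudo_color(gray):
    """Map a grayscale value in [0, 255] to an (r, g, b) triple."""
    if not 0 <= gray <= 255:
        raise ValueError("grayscale value out of range")
    if gray < 128:                        # blue -> green ramp
        t = gray / 127
        return (0, round(255 * t), round(255 * (1 - t)))
    t = (gray - 128) / 127                # green -> red ramp
    return (round(255 * t), round(255 * (1 - t)), 0)

print(pseudo_color(0))     # -> (0, 0, 255)  dark regions rendered blue
print(pseudo_color(255))   # -> (255, 0, 0)  bright regions rendered red
```

Spreading a single intensity channel across three color channels can make subtle density differences more salient, both to annotators and to networks pretrained on RGB images.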
Development and Validation of an Automated Radiomic CT Signature for Detecting COVID-19. | The coronavirus disease 2019 (COVID-19) outbreak has reached pandemic status. Drastic measures of social distancing are enforced in society and healthcare systems are being pushed to and beyond their limits. To help in the fight against this threat on human health, a fully automated AI framework was developed to extract radiomics features from volumetric chest computed tomography (CT) exams. The detection model was developed on a dataset of 1381 patients (181 COVID-19 patients plus 1200 non COVID control patients). A second, independent dataset of 197 RT-PCR confirmed COVID-19 patients and 500 control patients was used to assess the performance of the model. Diagnostic performance was assessed by the area under the receiver operating characteristic curve (AUC). The model had an AUC of 0.882 (95% CI: 0.851-0.913) in the independent test dataset (641 patients). The optimal decision threshold, considering the cost of false negatives twice as high as the cost of false positives, resulted in an accuracy of 85.18%, a sensitivity of 69.52%, a specificity of 91.63%, a negative predictive value (NPV) of 94.46% and a positive predictive value (PPV) of 59.44%. Benchmarked against RT-PCR confirmed cases of COVID-19, our AI framework can accurately differentiate COVID-19 from routine clinical conditions in a fully automated fashion. Thus, providing rapid accurate diagnosis in patients suspected of COVID-19 infection, facilitating the timely implementation of isolation procedures and early intervention. | Diagnostics (Basel, Switzerland) | "2021-01-06T00:00:00" | [
"Julien Guiot",
"Akshayaa Vaidyanathan",
"Louis Deprez",
"Fadila Zerka",
"Denis Danthine",
"Anne-Noëlle Frix",
"Marie Thys",
"Monique Henket",
"Gregory Canivet",
"Stephane Mathieu",
"Evanthia Eftaxia",
"Philippe Lambin",
"Nathan Tsoutzidis",
"Benjamin Miraglio",
"Sean Walsh",
"Michel Moutschen",
"Renaud Louis",
"Paul Meunier",
"Wim Vos",
"Ralph T H Leijenaar",
"Pierre Lovinfosse"
] | 10.3390/diagnostics11010041
10.1016/S0140-6736(20)30260-9
10.1183/13993003.00407-2020
10.1148/radiol.2020200642
10.1148/radiol.2020200432
10.1016/S0140-6736(20)30183-5
10.1016/S0140-6736(20)30728-5
10.1016/S1473-3099(20)30241-3
10.1038/ncomms5006
10.1016/S2213-2600(18)30286-8
10.1038/s41586-019-1799-6
10.1038/s41591-019-0447-x
10.1002/mp.12967
10.1016/j.ejca.2011.11.036
10.1038/nrclinonc.2017.141
10.1371/journal.pone.0102107
10.1038/srep13087
10.1007/s11263-019-01228-7
10.1148/radiol.2020200343
10.1038/d41587-020-00002-2
10.1007/s00330-020-06865-y
10.1007/s00259-020-04795-x
10.1101/2020.02.23.20026930
10.1101/2020.02.14.20023028
10.1101/2020.02.25.20021568
10.1259/bjr.20170498
10.1016/j.tmaid.2020.101673
10.1109/RBME.2020.2987975
10.1148/ryai.2020200053
10.1038/s41591-020-0931-3
10.1038/s41467-020-17971-2
10.1155/2020/9756518
10.1016/j.diii.2020.06.001
10.7326/M20-1382
10.1148/radiol.2020201491
10.1148/radiol.2020200905
10.1016/j.ejro.2020.100271
10.1007/s11432-020-2849-3
10.1097/RTI.0000000000000544
10.18383/j.tom.2016.00208 |
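The cost-sensitive operating point described above (a false negative costed twice as much as a false positive) can be found by scanning candidate thresholds over the model's scores. The scores and labels below are made up for illustration:

```python
# Choosing a decision threshold when a false negative costs twice as much as
# a false positive, as in the operating point described above. Sketch with
# hypothetical scores; 1 = COVID-19 positive, 0 = control.

def best_threshold(scores, labels, fn_cost=2.0, fp_cost=1.0):
    """Scan the observed scores as candidate thresholds; minimise total cost.

    A sample is called positive when its score >= threshold.
    """
    best_t, best_cost = None, float("inf")
    for t in sorted(set(scores)):
        fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < t)
        fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= t)
        cost = fn_cost * fn + fp_cost * fp
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

scores = [0.95, 0.80, 0.70, 0.40, 0.30, 0.20]   # hypothetical model outputs
labels = [1,    1,    0,    1,    0,    0]
t, cost = best_threshold(scores, labels)
print(t, cost)   # -> 0.4 1.0
```

Weighting false negatives more heavily pushes the chosen threshold down, trading specificity for sensitivity, which matches the asymmetry between missing an infection and triggering an unnecessary isolation.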
Deployment of artificial intelligence for radiographic diagnosis of COVID-19 pneumonia in the emergency department. | The coronavirus disease 2019 pandemic has inspired new innovations in diagnosing, treating, and dispositioning patients during high census conditions with constrained resources. Our objective is to describe first experiences of physician interaction with a novel artificial intelligence (AI) algorithm designed to enhance physician abilities to identify ground-glass opacities and consolidation on chest radiographs.
During the first wave of the pandemic, we deployed a previously developed and validated deep-learning AI algorithm for assisted interpretation of chest radiographs for use by physicians at an academic health system in Southern California. The algorithm overlays radiographs with "heat" maps that indicate pneumonia probability alongside standard chest radiographs at the point of care. Physicians were surveyed in real time regarding ease of use and impact on clinical decisionmaking.
Of the 5125 total visits and 1960 chest radiographs obtained in the emergency department (ED) during the study period, 1855 were analyzed by the algorithm. Among these, emergency physicians were surveyed for their experiences on 202 radiographs. Overall, 86% either strongly agreed or somewhat agreed that the intervention was easy to use in their workflow. Of the respondents, 20% reported that the algorithm impacted clinical decisionmaking.
To our knowledge, this is the first published literature evaluating the impact of medical imaging AI on clinical decisionmaking in the emergency department setting. Urgent deployment of a previously validated AI algorithm clinically was easy to use and was found to have an impact on clinical decision making during the predicted surge period of a global pandemic. | Journal of the American College of Emergency Physicians open | "2021-01-05T00:00:00" | [
"Morgan Carlile",
"Brian Hurt",
"Albert Hsiao",
"Michael Hogarth",
"Christopher A Longhurst",
"Christian Dameff"
] | 10.1002/emp2.12297 |
A machine learning-based framework for diagnosis of COVID-19 from chest X-ray images. | Coronavirus disease (COVID-19) has been acknowledged as a pandemic by the WHO, and mankind all over the world is vulnerable to this virus. Alternative tools are needed that can help in the diagnosis of the coronavirus. The researchers of this article investigated the potential of machine learning methods for automatic diagnosis of coronavirus with high accuracy from X-ray images. The two most commonly used classifiers were selected: logistic regression (LR) and convolutional neural networks (CNN). The main reason was to make the system fast and efficient. Moreover, a dimensionality reduction approach based on principal component analysis (PCA) was also investigated to further speed up the learning process and improve the classification accuracy by selecting the most discriminative features. Deep learning-based methods demand a large amount of training samples compared to conventional approaches, yet an adequate amount of labelled training samples was not available for COVID-19 X-ray images. Therefore, a data augmentation technique using a generative adversarial network (GAN) was employed to further increase the training samples and reduce the overfitting problem. We used the online available dataset and incorporated the GAN to have 500 X-ray images in total for this study. Both CNN and LR showed encouraging results for COVID-19 patient identification. The LR and CNN models showed 95.2-97.6% overall accuracy without PCA and 97.6-100% with PCA for positive case identification, respectively. | Interdisciplinary sciences, computational life sciences | "2021-01-03T00:00:00" | [
"Jawad Rasheed",
"Alaa Ali Hameed",
"Chawki Djeddi",
"Akhtar Jamil",
"Fadi Al-Turjman"
] | 10.1007/s12539-020-00403-6
10.1016/S0140-6736(66)92364-6
10.1097/01.inf.0000188166.17324.60
10.1007/s00038-020-01390-7
10.1007/s10916-020-01585-8
10.1186/s12916-020-01533-w
10.3389/fdgth.2020.00008
10.1109/UBMYK48245.2019.8965556
10.1109/TSP.2019.8769040
10.1109/access.2019.2928975
10.1016/j.media.2016.06.032
10.1016/j.patcog.2017.05.025
10.1109/EBBT.2019.8741582
10.1109/RIVF.2019.8713648
10.1109/TMI.2016.2526687
10.1109/ACCESS.2018.2831280
10.1109/TMI.2016.2528162
10.1007/978-981-13-8300-7_8
10.1007/s11263-015-0816-y
10.1109/TMI.2018.2881415
10.1186/s40537-019-0197-0
10.1109/DICTA.2018.8615771
10.1109/TMI.2017.2743464
10.1109/TMI.2017.2760978
10.1109/TMI.2016.2528120
10.1109/TMI.2016.2538465
10.1109/TII.2019.2891738
10.1109/JBHI.2017.2787595
10.1002/wics.101
10.1007/BF02293599
10.1109/ICEngTechnol.2017.8308186
10.1080/00220670209598786
10.1016/j.compbiomed.2020.103805
10.1016/j.chemolab.2020.104054
10.1007/s13246-020-00865-4
10.1016/j.cmpb.2020.105608
10.1080/07391102.2020.1788642
10.1007/s11356-020-10133-3
10.1016/j.mehy.2020.109761
10.2139/ssrn.3557984
10.1016/j.chaos.2020.110122
10.1016/j.chaos.2020.110071 |
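The logistic regression stage of the pipeline above can be sketched with plain gradient descent on a toy one-dimensional feature, standing in for a PCA-reduced image descriptor. This is illustrative only; the study's actual features, hyperparameters, and data are not reproduced:

```python
import math

# The logistic regression stage of the pipeline described above, trained by
# plain gradient descent. A toy 1-D "feature" stands in for a PCA-reduced
# image descriptor; illustrative only.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_lr(xs, ys, lr=0.5, epochs=500):
    """Minimise the logistic loss for weight w and bias b."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        gb = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Hypothetical 1-D feature: higher values ~ COVID-19 positive (label 1).
xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_lr(xs, ys)
predictions = [1 if sigmoid(w * x + b) >= 0.5 else 0 for x in xs]
print(predictions)   # -> [0, 0, 0, 1, 1, 1]
```

Running PCA first shrinks the input dimension, so this optimisation has far fewer parameters to fit, which is the speed and accuracy benefit the abstract reports.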
The usage of deep neural network improves distinguishing COVID-19 from other suspected viral pneumonia by clinicians on chest CT: a real-world study. | Based on the current clinical routine, we aimed to develop a novel deep learning model to distinguish coronavirus disease 2019 (COVID-19) pneumonia from other types of pneumonia and validate it with a real-world dataset (RWD).
A total of 563 chest CT scans of 380 patients (227/380 were diagnosed with COVID-19 pneumonia) from 5 hospitals were collected to train our deep learning (DL) model. Lung regions were extracted by U-net, then transformed and fed to a pre-trained ResNet-50-based IDANNet (Identification and Analysis of New covid-19 Net) to produce a diagnostic probability. Fivefold cross-validation was employed to validate the application of our model. Another 318 scans of 316 patients (243/316 were diagnosed with COVID-19 pneumonia) from 2 other hospitals were enrolled prospectively as the RWDs to test our DL model's performance, which was compared with that of 3 experienced radiologists.
A three-dimensional DL model was successfully established. The diagnostic threshold to differentiate COVID-19 and non-COVID-19 pneumonia was 0.685 with an AUC of 0.906 (95% CI: 0.886-0.913) in the internal validation group. In the RWD cohort, our model achieved an AUC of 0.868 (95% CI: 0.851-0.876) with the sensitivity of 0.811 and the specificity of 0.822, non-inferior to the performance of 3 experienced radiologists, suggesting promising clinical practical usage.
The established DL model was able to achieve accurate identification of COVID-19 pneumonia from other suspected ones in the real-world situation, which could become a reliable tool in clinical routine.
• In an internal validation set, our DL model achieved the best performance to differentiate COVID-19 from non-COVID-19 pneumonia with a sensitivity of 0.836, a specificity of 0.800, and an AUC of 0.906 (95% CI: 0.886-0.913) when the threshold was set at 0.685. • In the prospective RWD cohort, our DL diagnostic model achieved a sensitivity of 0.811, a specificity of 0.822, and an AUC of 0.868 (95% CI: 0.851-0.876), non-inferior to the performance of 3 experienced radiologists. • The attention heatmaps were fully generated by the model without additional manual annotation, and the attention regions were highly aligned with the ROIs acquired by human radiologists for diagnosis. | European radiology | "2020-12-30T00:00:00" | [
"Qiuchen Xie",
"Yiping Lu",
"Xiancheng Xie",
"Nan Mei",
"Yun Xiong",
"Xuanxuan Li",
"Yangyong Zhu",
"Anling Xiao",
"Bo Yin"
] | 10.1007/s00330-020-07553-7
10.1056/NEJMoa2002032
10.1002/jmv.25689
10.1002/jmv.25681
10.1016/j.ijid.2020.03.071
10.1016/S1473-3099(20)30086-4
10.1148/radiol.2020200370
10.3390/s20041214
10.1109/JBHI.2018.2841992
10.1007/s10278-019-00254-8
10.1001/jama.2016.17216
10.1007/s10096-020-03901-z
10.1136/bmj.m689
10.1056/NEJMsb1609216
10.14236/jhi.v22i3.177 |
Automatic COVID-19 CT segmentation using U-Net integrated spatial and channel attention mechanism. | The coronavirus disease (COVID-19) pandemic has had a devastating effect on global public health. Computed Tomography (CT) is an effective tool in the screening of COVID-19. It is of great importance to rapidly and accurately segment COVID-19 lesions from CT to help diagnosis and patient monitoring. In this paper, we propose a U-Net based segmentation network using an attention mechanism. As not all the features extracted from the encoders are useful for segmentation, we propose to incorporate an attention mechanism, including a spatial attention module and a channel attention module, into a U-Net architecture to re-weight the feature representation spatially and channel-wise and capture rich contextual relationships for better feature representation. In addition, the focal Tversky loss is introduced to deal with small lesion segmentation. The experimental results, evaluated on a COVID-19 CT segmentation dataset where 473 CT slices are available, demonstrate that the proposed method can achieve accurate and rapid segmentation of COVID-19. The method takes only 0.29 seconds to segment a single CT slice. The obtained Dice Score and Hausdorff Distance are 83.1% and 18.8, respectively. | International journal of imaging systems and technology | "2020-12-29T00:00:00" | [
"Tongxue Zhou",
"Stéphane Canu",
"Su Ruan"
] | 10.1002/ima.22527 |
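The focal Tversky loss the entry above uses for small-lesion segmentation can be written in a few lines. This is a generic numpy sketch of the published loss form; the hyperparameter values below are common illustrative defaults, not necessarily the paper's:

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss on flattened probability maps.

    alpha > beta weights false negatives more heavily, favouring recall
    on small lesions; gamma < 1 shifts the focus toward hard,
    poorly-segmented examples.
    """
    tp = np.sum(y_true * y_pred)
    fn = np.sum(y_true * (1.0 - y_pred))
    fp = np.sum((1.0 - y_true) * y_pred)
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma

y = np.array([1.0, 0.0, 1.0, 1.0])
print(focal_tversky_loss(y, y))  # 0.0 for a perfect prediction
```

With alpha = beta = 0.5 and gamma = 1 this reduces to the ordinary Dice-style loss, which is why it is often described as a generalization of Dice.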
DeepTracer for fast de novo cryo-EM protein structure modeling and special studies on CoV-related complexes. | Information about macromolecular structure of protein complexes and related cellular and molecular mechanisms can assist the search for vaccines and drug development processes. To obtain such structural information, we present DeepTracer, a fully automated deep learning-based method for fast de novo multichain protein complex structure determination from high-resolution cryoelectron microscopy (cryo-EM) maps. We applied DeepTracer on a previously published set of 476 raw experimental cryo-EM maps and compared the results with a current state of the art method. The residue coverage increased by over 30% using DeepTracer, and the rmsd value improved from 1.29 Å to 1.18 Å. Additionally, we applied DeepTracer on a set of 62 coronavirus-related cryo-EM maps, among them 10 with no deposited structure available in EMDataResource. We observed an average residue match of 84% with the deposited structures and an average rmsd of 0.93 Å. Additional tests with related methods further exemplify DeepTracer's competitive accuracy and efficiency of structure modeling. DeepTracer allows for exceptionally fast computations, making it possible to trace around 60,000 residues in 350 chains within only 2 h. The web service is globally accessible at https://deeptracer.uw.edu. | Proceedings of the National Academy of Sciences of the United States of America | "2020-12-29T00:00:00" | [
"Jonas Pfab",
"Nhut Minh Phan",
"Dong Si"
] | 10.1073/pnas.2017525118 |
Lightweight deep learning models for detecting COVID-19 from chest X-ray images. | Deep learning methods have already enjoyed unprecedented success in medical imaging problems. Similar success has been evidenced in the detection of COVID-19 from medical images; therefore, deep learning approaches are considered good candidates for detecting this disease, in collaboration with radiologists and/or physicians. In this paper, we propose a new approach to detect COVID-19 by exploiting a conditional generative adversarial network to generate synthetic images for augmenting the limited amount of data available. Additionally, we propose two deep learning models following a lightweight architecture, commensurate with the overall amount of data available. Our experiments focused on both binary classification for COVID-19 vs Normal cases and multi-class classification that includes a third class for bacterial pneumonia. Our models achieved a competitive performance compared to other studies in the literature and also a ResNet8 model. Our best performing binary model achieved 98.7% accuracy, 100% sensitivity and 98.3% specificity, while our three-class model achieved 98.3% accuracy, 99.3% sensitivity and 98.1% specificity. Moreover, by adopting a testing protocol proposed in the literature, our models proved to be more robust and reliable in COVID-19 detection than a baseline ResNet8, making them good candidates for detecting COVID-19 from posteroanterior chest X-ray images. | Computers in biology and medicine | "2020-12-29T00:00:00" | [
"Stefanos Karakanis",
"Georgios Leontidis"
] | 10.1016/j.compbiomed.2020.104181 |
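The conditional-GAN augmentation the entry above relies on hinges on one mechanism: the generator receives the desired class label alongside the noise vector, so synthetic samples can be requested per class. A minimal numpy sketch of that input construction (function name and sizes are made up for illustration):

```python
import numpy as np

def generator_input(labels, n_classes, z_dim, rng):
    """Standard cGAN conditioning: concatenate a noise vector with a
    one-hot encoding of the target class, so the generator can be asked
    for a specific class (e.g. COVID-19 vs Normal) at sampling time."""
    z = rng.normal(size=(len(labels), z_dim))
    onehot = np.eye(n_classes)[labels]
    return np.concatenate([z, onehot], axis=1)

rng = np.random.default_rng(0)
# request one sample each of classes 0, 2, 1 with a 100-dim noise code
batch = generator_input(np.array([0, 2, 1]), n_classes=3, z_dim=100, rng=rng)
print(batch.shape)  # (3, 103)
```

The discriminator is conditioned the same way, which is what ties each synthetic image to its intended label during training.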
Toward data-efficient learning: A benchmark for COVID-19 CT lung and infection segmentation. | Accurate segmentation of lung and infection in COVID-19 computed tomography (CT) scans plays an important role in the quantitative management of patients. Most of the existing studies are based on large and private annotated datasets that are impractical to obtain from a single institution, especially when radiologists are busy fighting the coronavirus disease. Furthermore, it is hard to compare current COVID-19 CT segmentation methods as they are developed on different datasets, trained in different settings, and evaluated with different metrics.
To promote the development of data-efficient deep learning methods, in this paper, we built three benchmarks for lung and infection segmentation based on 70 annotated COVID-19 cases, which contain current active research areas, for example, few-shot learning, domain generalization, and knowledge transfer. For a fair comparison among different segmentation methods, we also provide standard training, validation and testing splits, evaluation metrics, and the corresponding code.
Based on the state-of-the-art network, we provide more than 40 pretrained baseline models, which not only serve as out-of-the-box segmentation tools but also save computational time for researchers who are interested in COVID-19 lung and infection segmentation. We achieve average dice similarity coefficient (DSC) scores of 97.3%, 97.7%, and 67.3% and average normalized surface dice (NSD) scores of 90.6%, 91.4%, and 70.0% for left lung, right lung, and infection, respectively.
To the best of our knowledge, this work presents the first data-efficient learning benchmark for medical image segmentation, and the largest number of pretrained models up to now. All these resources are publicly available, and our work lays the foundation for promoting the development of deep learning methods for efficient COVID-19 CT segmentation with limited data. | Medical physics | "2020-12-24T00:00:00" | [
"Jun Ma",
"Yixin Wang",
"Xingle An",
"Cheng Ge",
"Ziqi Yu",
"Jianan Chen",
"Qiongjie Zhu",
"Guoqiang Dong",
"Jian He",
"Zhiqiang He",
"Tianjia Cao",
"Yuntao Zhu",
"Ziwei Nie",
"Xiaoping Yang"
] | 10.1002/mp.14676 |
Artificial Intelligence of COVID-19 Imaging: A Hammer in Search of a Nail. | null | Radiology | "2020-12-23T00:00:00" | [
"Ronald M Summers"
] | 10.1148/radiol.2020204226 |
IoMT-Based Automated Detection and Classification of Leukemia Using Deep Learning. | For the last few years, computer-aided diagnosis (CAD) has been advancing rapidly. Numerous machine learning algorithms have been developed to identify different diseases, e.g., leukemia. Leukemia is a white blood cell- (WBC-) related illness affecting the bone marrow and/or blood. A quick, safe, and accurate early-stage diagnosis of leukemia plays a key role in curing and saving patients' lives. Based on how it develops, leukemia consists of two primary forms, i.e., acute and chronic leukemia. Each form can be subcategorized as myeloid and lymphoid. There are, therefore, four leukemia subtypes. Various approaches have been developed to identify leukemia with respect to its subtypes. However, in terms of effectiveness, learning process, and performance, these methods require improvements. This study provides an Internet of Medical Things- (IoMT-) based framework to enhance and provide a quick and safe identification of leukemia. In the proposed IoMT system, with the help of cloud computing, clinical gadgets are linked to network resources. The system allows real-time coordination for testing, diagnosis, and treatment of leukemia among patients and healthcare professionals, which may save both the time and effort of patients and clinicians. Moreover, the presented framework is also helpful for resolving the problems of patients in critical condition during pandemics such as COVID-19. The methods used for the identification of leukemia subtypes in the suggested framework are the Dense Convolutional Neural Network (DenseNet-121) and the Residual Convolutional Neural Network (ResNet-34). Two publicly available datasets for leukemia, i.e., ALL-IDB and the ASH image bank, are used in this study. The results demonstrated that the suggested models outperform the other well-known machine learning algorithms used for healthy-versus-leukemia-subtypes identification.
| Journal of healthcare engineering | "2020-12-22T00:00:00" | [
"Nighat Bibi",
"Misba Sikandar",
"Ikram Ud Din",
"Ahmad Almogren",
"Sikandar Ali"
] | 10.1155/2020/6648574
10.1109/access.2020.3006040
10.1109/access.2020.2968948
10.1016/j.jisa.2020.102615
10.1109/access.2020.3030192
10.3390/s20092468
10.3390/su12083088
10.1109/access.2020.2985851
10.3390/s20072081
10.1109/access.2017.2757844
10.1016/j.future.2018.07.050
10.1016/j.future.2019.04.017
10.1016/j.future.2020.02.054
10.1016/j.future.2020.03.054
10.1016/j.future.2019.05.059
10.3390/electronics9071172
10.1016/j.future.2019.01.033
10.1109/access.2019.2960633
10.1016/j.patrec.2019.03.022
10.1109/access.2019.2963797
10.1038/nature14539
10.1007/978-981-10-3773-3_64
10.1016/j.bspc.2018.08.012
10.1016/j.engappai.2018.04.024
10.3390/diagnostics9030104
10.1016/j.patrec.2019.03.024
10.1016/j.cmpb.2019.104987
10.7763/ijcte.2018.v10.1198
10.1007/s10278-018-0074-y
10.20532/cit.2018.1004123
10.1080/21681163.2016.1234948
10.1016/j.bbe.2017.07.003
10.1016/j.aca.2011.12.069 |
Analysis of COVID-19 Infections on a CT Image Using DeepSense Model. | In this paper, a data mining model on a hybrid deep learning framework is designed to diagnose the medical conditions of patients infected with the coronavirus disease 2019 (COVID-19) virus. The hybrid deep learning model is designed as a combination of a convolutional neural network (CNN) and a recurrent neural network (RNN) and is named the DeepSense method. It is designed as a series of layers to extract and classify the related features of COVID-19 infections from the lungs. The computerized tomography image is used as input data, and hence the classifier is designed to ease the process of classification by learning the multidimensional input data using the Expert Hidden layers. The validation of the model is conducted against medical image datasets to predict the infections using deep learning classifiers. The results show that the DeepSense classifier offers improved accuracy over the conventional deep and machine learning classifiers. The proposed method is validated against three different datasets, where models trained with 70%, 80%, and 90% of the training data are compared. It specifically quantifies the quality of the diagnostic method adopted for the prediction of COVID-19 infections in a patient. | Frontiers in public health | "2020-12-18T00:00:00" | [
"Adil Khadidos",
"Alaa O Khadidos",
"Srihari Kannan",
"Yuvaraj Natarajan",
"Sachi Nandan Mohanty",
"Georgios Tsaramirsis"
] | 10.3389/fpubh.2020.599550
10.1016/j.irbm.2020.05.003
10.1007/s10096-020-03901-z
10.1016/j.neucom.2018.12.086
10.1109/TMI.2018.2858202
10.1109/JBHI.2019.2891049
10.1109/TMI.2018.2833385
10.1002/clen.201700162
10.1109/ACCESS.2019.2929270
10.1109/TMI.2018.2876510
10.1109/TMI.2018.2883807
10.12688/wellcomeopenres.15819.2
10.1016/j.cmpb.2020.105532
10.1109/TMI.2020.2995965
10.1007/s00521-018-3688-6
10.1016/j.procs.2016.02.042
10.1007/s00034-019-01041-0
10.1007/s00521-018-3896-0
10.1016/j.jclepro.2018.12.096
10.3390/app9112212
10.1109/ACCESS.2020.2981337
10.3390/ijerph17010267
10.1109/ACCESS.2020.3000322
10.18201/ijisae.2019252786
10.1109/ICSEngT.2019.8906408
10.3390/app9142921
10.1007/s12553-018-00284-2
10.1016/j.knosys.2018.08.036 |
Novel Deep Learning Technique Used in Management and Discharge of Hospitalized Patients with COVID-19 in China. | The low sensitivity and false-negative results of nucleic acid testing greatly affect its performance in diagnosing and discharging patients with coronavirus disease (COVID-19). Chest computed tomography (CT)-based evaluation of pneumonia may indicate a need for isolation. Therefore, this radiologic modality plays an important role in managing patients with suspected COVID-19. Meanwhile, deep learning (DL) technology has been successful in detecting various imaging features of chest CT. This study applied a novel DL technique to standardize the discharge criteria of COVID-19 patients with consecutive negative respiratory pathogen nucleic acid test results at a "square cabin" hospital.
DL was used to evaluate the chest CT scans of 270 hospitalized COVID-19 patients who had two consecutive negative nucleic acid tests (sampling interval >1 day). The CT scans evaluated were obtained after the patients' second negative test result. The standard criterion determined by DL for patient discharge was a total volume ratio of lesion to lung <50%.
The mean number of days between hospitalization and DL was 14.3 (± 2.4). The average intersection over union was 0.7894. Two hundred and thirteen (78.9%) patients exhibited pneumonia, of whom 54.0% (115/213) had mild interstitial fibrosis. Twenty-one, 33, and 4 cases exhibited vascular enlargement, pleural thickening, and mediastinal lymphadenopathy, respectively. Of the latter, 18.8% (40/213) had a total volume ratio of lesions to lung ≥50% according to our severity scale and were monitored continuously in the hospital. Three cases had a positive follow-up nucleic acid test during hospitalization. None of the 230 discharged cases later tested positive or exhibited pneumonia progression.
The novel DL enables the accurate management of hospitalized patients with COVID-19 and can help avoid cluster transmission or exacerbation in patients with false-negative nucleic acid tests. | Therapeutics and clinical risk management | "2020-12-17T00:00:00" | [
"Qingcheng Meng",
"Wentao Liu",
"Pengrui Gao",
"Jiaqi Zhang",
"Anlan Sun",
"Jia Ding",
"Hao Liu",
"Ziqiao Lei"
] | 10.2147/TCRM.S280726
10.1056/NEJMe2001126
10.1056/NEJMoa2001017
10.1016/S0140-6736(20)30183-5
10.1001/jama.2020.1585
10.2214/AJR.20.22954
10.1016/j.compmedimag.2019.101688
10.1007/s00330-019-06163-2
10.1007/s00330-020-07042-x
10.1109/TPAMI.2016.2577031
10.1016/j.jtho.2020.02.010
10.1016/S2213-2600(20)30076-X
10.1148/radiol.2462070712
10.1109/EMBC.2018.8512337
10.1016/S1473-3099(20)30086-4 |
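The discharge criterion in the entry above reduces to one number: the volume ratio of the DL-segmented lesion to the segmented lung, thresholded at 50%. Given the two binary masks, that ratio is a one-liner; a hedged sketch (the segmentation itself is the hard part and is not shown):

```python
import numpy as np

def lesion_lung_ratio(lesion_mask, lung_mask):
    """Volume ratio of segmented lesion to lung -- the quantity the
    study thresholds (< 50%) as its DL-derived discharge criterion."""
    lung_vox = np.count_nonzero(lung_mask)
    if lung_vox == 0:
        raise ValueError("empty lung mask")
    overlap = np.logical_and(lesion_mask, lung_mask)
    return np.count_nonzero(overlap) / lung_vox

# toy 3D masks: lesion occupies a corner octant of a fully-lung volume
lung = np.ones((4, 4, 2), dtype=bool)
lesion = np.zeros_like(lung)
lesion[:2, :2, :] = True
print(lesion_lung_ratio(lesion, lung))  # 8/32 = 0.25
```

On real CT volumes, voxel counts should be multiplied by the voxel spacing to get physical volumes, but the ratio is unchanged when spacing is uniform.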
Optimised genetic algorithm-extreme learning machine approach for automatic COVID-19 detection. | Coronavirus disease (COVID-19) is an ongoing global pandemic caused by severe acute respiratory syndrome coronavirus 2. Chest Computed Tomography (CT) is an effective method for detecting lung illnesses, including COVID-19. However, the CT scan is expensive and time-consuming. Therefore, this work focuses on detecting COVID-19 using chest X-ray images, because X-ray imaging is widely available, faster, and cheaper than a CT scan. Many machine learning approaches, such as Deep Learning, Neural Networks, and Support Vector Machines, have used X-rays for detecting COVID-19. Although the performance of those approaches is acceptable in terms of accuracy, they require high computational time and more memory space. Therefore, this work employs an Optimised Genetic Algorithm-Extreme Learning Machine (OGA-ELM) with three selection criteria (i.e., random, K-tournament, and roulette wheel) to detect COVID-19 using X-ray images. The most crucial strength factors of the Extreme Learning Machine (ELM) are: (i) the high capability of the ELM in avoiding overfitting; (ii) its usability for binary and multi-class classifiers; and (iii) the ELM can work as a kernel-based support vector machine with the structure of a neural network. These advantages make the ELM efficient in achieving an excellent learning performance. ELMs have successfully been applied in many domains, including medical domains such as breast cancer detection, pathological brain detection, and ductal carcinoma in situ detection, but had not yet been tested on detecting COVID-19. Hence, this work aims to identify the effectiveness of employing OGA-ELM in detecting COVID-19 using chest X-ray images. In order to reduce the dimensionality of the histogram of oriented gradients features, we use principal component analysis.
The performance of OGA-ELM is evaluated on a benchmark dataset containing 188 chest X-ray images with two classes: healthy and COVID-19 infected. The experimental results show that the OGA-ELM achieves 100.00% accuracy with fast computation time. This demonstrates that OGA-ELM is an efficient method for COVID-19 detection using chest X-ray images. | PloS one | "2020-12-16T00:00:00" | [
"Musatafa Abbas Abbood Albadr",
"Sabrina Tiun",
"Masri Ayob",
"Fahad Taha Al-Dhief",
"Khairuddin Omar",
"Faizal Amri Hamzah"
] | 10.1371/journal.pone.0242899
10.1001/jama.2020.2565
10.1016/S0140-6736(20)30360-3
10.1148/radiol.2020200432
10.1183/09031936.01.00213501
10.1007/s13246-020-00865-4
10.1109/RBME.2020.2987975
10.1109/TSMCB.2011.2168604
10.1371/journal.pone.0194770
10.1109/TNN.2006.875977
10.1109/TIP.2018.2847035
10.1364/OL.43.001107
10.1016/j.neunet.2009.11.009
10.1016/j.cmpb.2020.105581
10.1016/j.compbiomed.2020.103792
10.1109/72.788640
10.1016/j.asoc.2020.106580
10.1109/TCYB.2020.2983860 |
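The speed advantage of the ELM claimed in the entry above comes from its training procedure: input weights are random and fixed, and only the output weights are solved in closed form. A minimal numpy sketch of a single-hidden-layer ELM (the genetic-algorithm optimisation of the paper is not shown; data and sizes are made up):

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme Learning Machine: random, fixed input weights; output
    weights found in closed form via the Moore-Penrose pseudoinverse,
    with no iterative backpropagation."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = _sigmoid(X @ W + b)          # hidden-layer activations
    beta = np.linalg.pinv(H) @ y     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return _sigmoid(X @ W + b) @ beta

# toy demo: a linearly separable binary problem
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
W, b, beta = elm_train(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == y)
```

Because training is a single pseudoinverse, fitting takes milliseconds even for thousands of hidden units, which is what makes GA-driven hyperparameter search over ELMs affordable.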
Hybrid-COVID: a novel hybrid 2D/3D CNN based on cross-domain adaptation approach for COVID-19 screening from chest X-ray images. | The novel Coronavirus disease (COVID-19), which first appeared at the end of December 2019, continues to spread rapidly in most countries of the world. Respiratory infections occur in the majority of patients treated for COVID-19. In light of the growing number of COVID-19 cases, the need for diagnostic tools to identify COVID-19 infection at early stages is of vital importance. For decades, chest X-ray (CXR) technologies have proven their ability to accurately detect respiratory diseases. More recently, with the availability of COVID-19 CXR scans, deep learning algorithms have played a critical role in the healthcare arena by allowing radiologists to recognize COVID-19 patients from their CXR images. However, the majority of screening methods for COVID-19 reported in recent studies are based on 2D convolutional neural networks (CNNs). Although 3D CNNs are capable of capturing contextual information compared to their 2D counterparts, their use is limited due to their increased computational cost (i.e., they require much more memory and computing power). In this study, a transfer learning-based hybrid 2D/3D CNN architecture for COVID-19 screening using CXRs has been developed. The proposed architecture consists of the incorporation of a pre-trained deep model (VGG16) and a shallow 3D CNN, combined with a depth-wise separable convolution layer and a spatial pyramid pooling module (SPP). Specifically, the depth-wise separable convolution helps to preserve the useful features while reducing the computational burden of the model. The SPP module is designed to extract multi-level representations from intermediate ones. Experimental results show that the proposed framework can achieve reasonable performance when evaluated on a collected dataset (3 classes to be predicted: COVID-19, Pneumonia, and Normal).
Notably, it achieved a sensitivity of 98.33%, a specificity of 98.68% and an overall accuracy of 96.91%. | Physical and engineering sciences in medicine | "2020-12-11T00:00:00" | [
"Khaled Bayoudh",
"Fayçal Hamdaoui",
"Abdellatif Mtibaa"
] | 10.1007/s13246-020-00957-1
10.1001/jama.2020.2565
10.1016/j.earlhumdev.2020.105026
10.1016/j.bios.2020.112455
10.1148/radiol.2020200463
10.1148/radiol.2020200642
10.1148/ryct.2020200196
10.1148/radiol.2020200432
10.1148/radiol.2020200823
10.1007/s00330-018-5810-7
10.1016/j.clinimag.2020.04.001
10.1007/s13246-020-00899-8
10.1148/ryct.2020200034
10.1093/cid/ciaa247
10.1007/s42058-020-00031-5
10.1007/s10489-020-01714-3
10.1007/s13246-020-00865-4
10.1016/j.compbiomed.2020.103792
10.1016/j.compbiomed.2020.103805
10.1007/s13246-020-00888-x
10.1007/s11042-018-6912-6
10.1109/TPAMI.2012.59
10.3390/s20185097
10.1007/s10489-020-01801-5
10.1007/s10462-020-09825-6
10.1038/nature14539
10.1007/s11263-019-01228-7
10.1186/s40537-019-0197-0
10.1016/j.cmpb.2020.105581
10.1016/j.chaos.2020.110122
10.1016/j.jksuci.2019.09.014 |
COVID-AL: The diagnosis of COVID-19 with deep active learning. | The efficient diagnosis of COVID-19 plays a key role in preventing the spread of this disease. The computer-aided diagnosis with deep learning methods can perform automatic detection of COVID-19 using CT scans. However, large scale annotation of CT scans is impossible because of limited time and heavy burden on the healthcare system. To meet the challenge, we propose a weakly-supervised deep active learning framework called COVID-AL to diagnose COVID-19 with CT scans and patient-level labels. The COVID-AL consists of the lung region segmentation with a 2D U-Net and the diagnosis of COVID-19 with a novel hybrid active learning strategy, which simultaneously considers sample diversity and predicted loss. With a tailor-designed 3D residual network, the proposed COVID-AL can diagnose COVID-19 efficiently and it is validated on a large CT scan dataset collected from the CC-CCII. The experimental results demonstrate that the proposed COVID-AL outperforms the state-of-the-art active learning approaches in the diagnosis of COVID-19. With only 30% of the labeled data, the COVID-AL achieves over 95% accuracy of the deep learning method using the whole dataset. The qualitative and quantitative analysis proves the effectiveness and efficiency of the proposed COVID-AL framework. | Medical image analysis | "2020-12-08T00:00:00" | [
"Xing Wu",
"Cheng Chen",
"Mingyu Zhong",
"Jianjia Wang",
"Jun Shi"
] | 10.1016/j.media.2020.101913 |
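The hybrid acquisition idea in the entry above — scoring unlabelled scans by both predicted loss and sample diversity — can be sketched with a greedy selector. This is a simplified stand-in for COVID-AL's strategy, not the paper's algorithm; function name, weighting, and normalisation are illustrative assumptions:

```python
import numpy as np

def hybrid_select(features, scores, budget, w=0.5):
    """Greedy hybrid acquisition: each pick maximises a weighted sum of
    an informativeness score (e.g. a predicted-loss estimate) and
    diversity (distance to the samples already selected)."""
    selected = [int(np.argmax(scores))]       # seed with the top-scored sample
    while len(selected) < budget:
        rest = [i for i in range(len(features)) if i not in selected]
        # diversity: distance from each candidate to its nearest selected sample
        div = np.array([min(np.linalg.norm(features[i] - features[j])
                            for j in selected) for i in rest])
        util = w * scores[rest] + (1 - w) * div / (div.max() + 1e-9)
        selected.append(rest[int(np.argmax(util))])
    return selected

rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 2))   # stand-in for learned scan embeddings
scores = rng.random(20)            # stand-in for predicted-loss scores
sel = hybrid_select(feats, scores, budget=5)
```

Pure uncertainty sampling tends to pick near-duplicate scans from the same hard region; the diversity term spreads the labelling budget across the feature space, which is the intuition the paper's ablation supports.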
Three-Dimensional Analysis of Particle Distribution on Filter Layers inside N95 Respirators by Deep Learning. | The global COVID-19 pandemic has changed many aspects of daily lives. Wearing personal protective equipment, especially respirators (face masks), has become common for both the public and medical professionals, proving to be effective in preventing spread of the virus. Nevertheless, a detailed understanding of respirator filtration-layer internal structures and their physical configurations is lacking. Here, we report three-dimensional (3D) internal analysis of N95 filtration layers via X-ray tomography. Using deep learning methods, we uncover how the distribution and diameters of fibers within these layers directly affect contaminant particle filtration. The average porosity of the filter layers is found to be 89.1%. Contaminants are more efficiently captured by denser fiber regions, with fibers <1.8 μm in diameter being particularly effective, presumably because of the stronger electric field gradient on smaller diameter fibers. This study provides critical information for further development of N95-type respirators that combine high efficiency with good breathability. | Nano letters | "2020-12-08T00:00:00" | [
"Hye Ryoung Lee",
"Lei Liao",
"Wang Xiao",
"Arturas Vailionis",
"Antonio J Ricco",
"Robin White",
"Yoshio Nishi",
"Wah Chiu",
"Steven Chu",
"Yi Cui"
] | 10.1021/acs.nanolett.0c04230
10.1146/annurev-micro-020518-115759
10.1038/s41586-020-2008-3
10.1038/s41586-020-2012-7
10.1056/NEJMc2007800
10.3390/ijerph17082932
10.1136/bmj.m3223
10.1021/acsnano.0c03597
10.1021/acs.nanolett.0c02211
10.1021/acsnano.0c03972
10.1021/acsnano.0c03252
10.1109/ACCESS.2019.2912200
10.1038/nature14539
10.1109/ISBI.2011.5872394
10.1007/978-3-319-24574-4_28
10.1038/s41592-018-0261-2
10.1016/j.media.2014.10.012
10.1002/pen.760302202
10.1002/1097-4628(20000829)77:9<1921::AID-APP8>3.0.CO;2-1
10.1007/s10853-012-6742-2
10.1109/27.125038
10.1016/0169-4332(93)90025-7
10.1002/1098-2329(200024)19:4<312::AID-ADV7>3.0.CO;2-X
10.1016/j.elstat.2007.05.002
10.1016/j.compositesa.2011.12.025
10.1038/nmeth.2089
10.1017/S143192761800315X
10.1007/s10853-020-05148-7 |
COVID-19 CT Image Synthesis With a Conditional Generative Adversarial Network. | Coronavirus disease 2019 (COVID-19) is an ongoing global pandemic that has spread rapidly since December 2019. Real-time reverse transcription polymerase chain reaction (rRT-PCR) and chest computed tomography (CT) imaging both play an important role in COVID-19 diagnosis. Chest CT imaging offers the benefits of quick reporting, a low cost, and high sensitivity for the detection of pulmonary infection. Recently, deep-learning-based computer vision methods have demonstrated great promise for use in medical imaging applications, including X-rays, magnetic resonance imaging, and CT imaging. However, training a deep-learning model requires large volumes of data, and medical staff face a high risk when collecting COVID-19 CT data due to the high infectivity of the disease. Another issue is the lack of experts available for data labeling. In order to meet the data requirements for COVID-19 CT imaging, we propose a CT image synthesis approach based on a conditional generative adversarial network that can effectively generate high-quality and realistic COVID-19 CT images for use in deep-learning-based medical imaging tasks. Experimental results show that the proposed method outperforms other state-of-the-art image synthesis methods with the generated COVID-19 CT images and shows promise for various machine learning applications, including semantic segmentation and classification. | IEEE journal of biomedical and health informatics | "2020-12-05T00:00:00" | [
"Yifan Jiang",
"Han Chen",
"Murray Loew",
"Hanseok Ko"
] | 10.1109/JBHI.2020.3042523 |
StackNet-DenVIS: a multi-layer perceptron stacked ensembling approach for COVID-19 detection using X-ray images. | The highly contagious nature of Coronavirus disease 2019 (Covid-19) resulted in a global pandemic. Due to the relatively slow and taxing nature of conventional testing for Covid-19, a faster method needs to be in place. Current research suggests that visible irregularities found in the chest X-rays of Covid-19-positive patients are indicative of the presence of the disease. Hence, Deep Learning and Image Classification techniques can be employed to learn from these irregularities and classify accordingly with high accuracy. This research presents an approach to create a classifier model named StackNet-DenVIS, which is designed to act as a screening process before conducting the existing swab tests. Using a novel approach, which incorporates Transfer Learning and Stacked Generalization, the model aims to lower the False Negative rate of classification, compensating for the 30% False Negative rate of the swab tests. A dataset gathered from multiple reliable sources consisting of 9953 Chest X-rays (868 Covid and 9085 Non-Covid) was used. Also, this research demonstrates handling data imbalance using various techniques involving Generative Adversarial Networks and sampling techniques. The accuracy, sensitivity, and specificity obtained on our proposed model were 95.07%, 99.40% and 94.61% respectively. To the best of our knowledge, the combination of accuracy and false negative rate obtained by this paper outperforms the current implementations. We must also highlight that our proposed architecture also considers other types of viral pneumonia. Given the unprecedented sensitivity of our model, we are optimistic that it can contribute to better Covid-19 detection. | Physical and engineering sciences in medicine | "2020-12-05T00:00:00" | [
"Pratik Autee",
"Sagar Bagwe",
"Vimal Shah",
"Kriti Srivastava"
] | 10.1007/s13246-020-00952-6
10.1109/TMI.2016.2553401
10.1109/ACCESS.2020.2994762
10.1109/TMI.2016.2528162
10.33889/IJMEMS.2020.5.4.052
10.1145/1007730.1007735
10.1109/TMI.2013.2290491
10.1109/42.929615
10.1109/TMI.2014.2337057
10.3978/j.issn.2223-4292.2014.11.20
10.1109/34.58871
10.1016/S0893-6080(05)80023-1
10.1016/j.neunet.2018.07.011 |
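The stacked-generalization step in the entry above rests on one mechanism: the meta-learner is trained on out-of-fold predictions, i.e. each base learner only ever predicts for samples it was not trained on, so the meta-features carry no leakage. A minimal numpy sketch with a toy base learner (learner choice and fold scheme are illustrative, not the paper's):

```python
import numpy as np

def oof_predictions(fit_predict, X, y, n_folds=5):
    """Out-of-fold predictions for one base learner -- the core step of
    stacked generalization. fit_predict(Xtr, ytr, Xval) trains on the
    fold's training split and predicts on its held-out split."""
    idx = np.arange(len(X))
    oof = np.zeros(len(X))
    for k in range(n_folds):
        val = idx[k::n_folds]                 # every n_folds-th sample held out
        trn = np.setdiff1d(idx, val)
        oof[val] = fit_predict(X[trn], y[trn], X[val])
    return oof

def centroid_learner(Xtr, ytr, Xval):
    """Toy base learner: nearest class centroid."""
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    return (np.linalg.norm(Xval - c1, axis=1)
            < np.linalg.norm(Xval - c0, axis=1)).astype(float)

# two well-separated Gaussian blobs
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 1, size=(50, 2)), rng.normal(2, 1, size=(50, 2))])
y = np.repeat([0.0, 1.0], 50)
oof = oof_predictions(centroid_learner, X, y)
```

Stacking several such out-of-fold columns side by side yields the meta-feature matrix on which a multi-layer perceptron (as in the entry above) or any other meta-learner can be trained.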
A Deep-Learning Diagnostic Support System for the Detection of COVID-19 Using Chest Radiographs: A Multireader Validation Study. | Five publicly available databases comprising normal CXR, confirmed COVID-19 pneumonia cases, and other pneumonias were used. After the harmonization of the data, the training set included 7966 normal cases, 5451 with other pneumonia, and 258 CXRs with COVID-19 pneumonia, whereas in the testing data set, each category was represented by 100 cases. Eleven blinded radiologists with various levels of expertise independently read the testing data set. The data were analyzed separately with the newly proposed artificial intelligence-based system and by consultant radiologists and residents, with respect to positive predictive value (PPV), sensitivity, and F-score (harmonic mean for PPV and sensitivity). The χ2 test was used to compare the sensitivity, specificity, accuracy, PPV, and F-scores of the readers and the system.
The proposed system achieved higher overall diagnostic accuracy (94.3%) than the radiologists (61.4% ± 5.3%). The radiologists reached average sensitivities for normal CXR, other type of pneumonia, and COVID-19 pneumonia of 85.0% ± 12.8%, 60.1% ± 12.2%, and 53.2% ± 11.2%, respectively, which were significantly lower than the results achieved by the algorithm (98.0%, 88.0%, and 97.0%; P < 0.00032). The mean PPVs for all 11 radiologists for the 3 categories were 82.4%, 59.0%, and 59.0% for the healthy, other pneumonia, and COVID-19 pneumonia, respectively, resulting in an F-score of 65.5% ± 12.4%, which was significantly lower than the F-score of the algorithm (94.3% ± 2.0%, P < 0.00001). When other pneumonia and COVID-19 pneumonia cases were pooled, the proposed system reached an accuracy of 95.7% for any pathology and the radiologists, 88.8%. The overall accuracy of consultants did not vary significantly compared with residents (65.0% ± 5.8% vs 67.4% ± 4.2%); however, consultants detected significantly more COVID-19 pneumonia cases (P = 0.008) and less healthy cases (P < 0.00001).
The system showed robust accuracy for COVID-19 pneumonia detection on CXR and surpassed radiologists at various training levels. | Investigative radiology | "2020-12-02T00:00:00" | [
"Matthias Fontanellaz",
"Lukas Ebner",
"Adrian Huber",
"Alan Peters",
"Laura Löbelenz",
"Cynthia Hourscht",
"Jeremias Klaus",
"Jaro Munz",
"Thomas Ruder",
"Dionysios Drakopoulos",
"Dominik Sieron",
"Elias Primetis",
"Johannes T Heverhagen",
"Stavroula Mougiakakou",
"Andreas Christe"
] | 10.1097/RLI.0000000000000748
10.1097/RLI.0000000000000716
10.1016/j.eng.2020.04.010
10.1101/2020.02.25.20021568 |
COVID-CheXNet: hybrid deep learning framework for identifying COVID-19 virus in chest X-rays images. | The outbreaks of the Coronavirus (COVID-19) epidemic have increased the pressure on healthcare and medical systems worldwide. The timely diagnosis of infected patients is a critical step to limit the spread of the COVID-19 epidemic. Chest radiography imaging has been shown to be an effective screening technique for diagnosing COVID-19. To reduce the pressure on radiologists and help control the epidemic, a fast and accurate hybrid deep learning framework for diagnosing COVID-19 in chest X-ray images, termed the COVID-CheXNet system, is developed. First, the contrast of the X-ray image was enhanced and the noise level was reduced using contrast-limited adaptive histogram equalization and a Butterworth bandpass filter, respectively. This was followed by fusing the results obtained from two different pre-trained deep learning models, based on the incorporation of a ResNet34 and a high-resolution network model trained using a large-scale dataset. Herein, a parallel architecture was considered, which provides radiologists with a high degree of confidence to discriminate between healthy and COVID-19-infected people. The proposed COVID-CheXNet system has managed to correctly and accurately diagnose COVID-19 patients with a detection accuracy rate of 99.99%, sensitivity of 99.98%, specificity of 100%, precision of 100%, F1-score of 99.99%, MSE of 0.011%, and RMSE of 0.012% using the weighted sum rule at the score level. The efficiency and usefulness of the proposed COVID-CheXNet system are established, along with the possibility of using it in real clinical centers for fast diagnosis and treatment supplementation, taking less than 2 s per image to get the prediction result. | Soft computing | "2020-12-01T00:00:00" | [
"Alaa S Al-Waisy",
"Shumoos Al-Fahdawi",
"Mazin Abed Mohammed",
"Karrar Hameed Abdulkareem",
"Salama A Mostafa",
"Mashael S Maashi",
"Muhammad Arif",
"Begonya Garcia-Zapirain"
] | 10.1007/s00500-020-05424-3
10.1007/s00138-017-0870-2
10.1007/s10044-017-0656-1
10.1007/s13244-016-0534-1
10.1016/j.chaos.2020.110242
10.1016/S0140-6736(20)30211-7
10.1007/s00500-020-05275-y
10.1038/s41591-018-0107-6
10.1016/j.jds.2020.02.002
10.1016/S0140-6736(20)30183-5
10.1016/j.jocs.2018.11.008
10.1109/ACCESS.2020.2995597
10.14358/PERS.80.2.000
10.1016/j.compbiomed.2020.103792
10.1016/j.icte.2018.10.007
10.1109/tpami.2020.2983686
10.1001/jama.2020.3786
10.1023/B:VLSI.0000028532.53893.82 |
Viral Pneumonia Screening on Chest X-Rays Using Confidence-Aware Anomaly Detection. | Clusters of viral pneumonia occurrences over a short period may be a harbinger of an outbreak or pandemic. Rapid and accurate detection of viral pneumonia using chest X-rays can be of significant value for large-scale screening and epidemic prevention, particularly when other more sophisticated imaging modalities are not readily accessible. However, the emergence of novel mutated viruses causes a substantial dataset shift, which can greatly limit the performance of classification-based approaches. In this paper, we formulate the task of differentiating viral pneumonia from non-viral pneumonia and healthy controls into a one-class classification-based anomaly detection problem. We therefore propose the confidence-aware anomaly detection (CAAD) model, which consists of a shared feature extractor, an anomaly detection module, and a confidence prediction module. If the anomaly score produced by the anomaly detection module is large enough, or the confidence score estimated by the confidence prediction module is small enough, the input will be accepted as an anomaly case (i.e., viral pneumonia). The major advantage of our approach over binary classification is that we avoid modeling individual viral pneumonia classes explicitly and treat all known viral pneumonia cases as anomalies to improve the one-class model. The proposed model outperforms binary classification models on the clinical X-VIRAL dataset that contains 5,977 viral pneumonia (no COVID-19) cases, 37,393 non-viral pneumonia or healthy cases. Moreover, when directly testing on the X-COVID dataset that contains 106 COVID-19 cases and 107 normal controls without any fine-tuning, our model achieves an AUC of 83.61% and sensitivity of 71.70%, which is comparable to the performance of radiologists reported in the literature. | IEEE transactions on medical imaging | "2020-11-28T00:00:00" | [
"Jianpeng Zhang",
"Yutong Xie",
"Guansong Pang",
"Zhibin Liao",
"Johan Verjans",
"Wenxing Li",
"Zongji Sun",
"Jian He",
"Yi Li",
"Chunhua Shen",
"Yong Xia"
] | 10.1109/TMI.2020.3040950 |
Machine-learning classification of texture features of portable chest X-ray accurately classifies COVID-19 lung infection. | The large volume and suboptimal image quality of portable chest X-rays (CXRs) as a result of the COVID-19 pandemic could pose significant challenges for radiologists and frontline physicians. Deep-learning artificial intelligence (AI) methods have the potential to help improve diagnostic efficiency and accuracy for reading portable CXRs.
The study aimed at developing an AI imaging analysis tool to classify COVID-19 lung infection based on portable CXRs.
Public datasets of COVID-19 (N = 130), bacterial pneumonia (N = 145), non-COVID-19 viral pneumonia (N = 145), and normal (N = 138) CXRs were analyzed. Texture and morphological features were extracted. Five supervised machine-learning AI algorithms were used to classify COVID-19 from other conditions. Two-class and multi-class classification were performed. Statistical analysis was done using unpaired two-tailed t tests with unequal variance between groups. The performance of the classification models was evaluated using receiver-operating characteristic (ROC) curve analysis.
For the two-class classification, the accuracy, sensitivity and specificity were, respectively, 100%, 100%, and 100% for COVID-19 vs normal; 96.34%, 95.35% and 97.44% for COVID-19 vs bacterial pneumonia; and 97.56%, 97.44% and 97.67% for COVID-19 vs non-COVID-19 viral pneumonia. For the multi-class classification, the combined accuracy and AUC were 79.52% and 0.87, respectively.
AI classification of texture and morphological features of portable CXRs accurately distinguishes COVID-19 lung infection in patients in multi-class datasets. Deep-learning methods have the potential to improve diagnostic efficiency and accuracy for portable CXRs. | Biomedical engineering online | "2020-11-27T00:00:00" | [
"Lal Hussain",
"Tony Nguyen",
"Haifang Li",
"Adeel A Abbasi",
"Kashif J Lone",
"Zirun Zhao",
"Mahnoor Zaib",
"Anne Chen",
"Tim Q Duong"
] | 10.1186/s12938-020-00831-x
10.1002/jmv.25678
10.1038/nrmicro3143
10.1016/S0140-6736(20)30211-7
10.1016/j.tmaid.2020.101567
10.1164/rccm.2014P7
10.1056/NEJMp2000929
10.1148/radiol.2020200230
10.1148/radiol.2020200280
10.1109/TMI.2005.862753
10.1016/S0140-6736(20)30154-9
10.1073/pnas.1505935112
10.1016/j.neubiorev.2012.01.004
10.1038/srep44196
10.1038/s41598-017-01931-w
10.1177/117693510600200030
10.1109/TMI.2017.2655486
10.1146/annurev-bioeng-071516-044442
10.1007/s10278-017-9955-8
10.1016/j.cmpb.2016.10.007
10.1007/s10278-017-9945-x
10.1016/j.ultras.2016.08.004
10.1142/S0129065716500258
10.1016/j.jalz.2015.01.010
10.1109/TMI.2016.2535865
10.1109/TMI.2020.2996645
10.1118/1.4944498
10.1016/j.nicl.2017.01.033
10.1109/JBHI.2016.2631401
10.1007/s10278-016-9914-9
10.1118/1.4967345
10.1016/S0031-3203(96)00142-2
10.1186/s12880-015-0069-9
10.1186/s40644-017-0106-8
10.1007/s00261-017-1144-1
10.1016/j.crad.2004.07.008
10.1007/978-3-540-69139-6_157
10.1109/4233.992163
10.1016/0169-2607(93)90068-V
10.1016/j.cmpb.2006.07.010
10.1214/aos/1013203451
10.1142/S0218339007002076
10.1109/TIFS.2012.2223675
10.1016/j.aap.2016.02.002
10.1016/j.knosys.2013.10.016
10.1016/j.patrec.2013.10.017
10.3233/CBM-170643
10.1016/j.compbiomed.2015.03.004 |
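Nearly every record above reports AUC, sensitivity, and specificity from ROC analysis. A minimal sketch of that evaluation on synthetic scores — the Youden's J threshold rule used here is an assumption for illustration, since the papers do not all state how their operating point was chosen:

```python
# ROC evaluation sketch: AUC over all thresholds, then one operating point
# (via Youden's J) giving sensitivity = TPR and specificity = 1 - FPR.
# Labels and scores are synthetic placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(100), np.zeros(100)])   # 1 = diseased
scores = np.concatenate([rng.normal(1.5, 1.0, 100),      # diseased scores
                         rng.normal(0.0, 1.0, 100)])     # healthy scores

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)
j = np.argmax(tpr - fpr)               # Youden's J = TPR - FPR, maximized
sensitivity = tpr[j]
specificity = 1.0 - fpr[j]
```

With the 1.5-sigma class separation simulated here, the AUC lands around the mid-0.8s, comparable to several of the external-test results reported above.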
Deep-learning algorithms for the interpretation of chest radiographs to aid in the triage of COVID-19 patients: A multicenter retrospective study. | The recent medical applications of deep-learning (DL) algorithms have demonstrated their clinical efficacy in improving speed and accuracy of image interpretation. If the DL algorithm achieves a performance equivalent to that achieved by physicians in chest radiography (CR) diagnoses with Coronavirus disease 2019 (COVID-19) pneumonia, the automatic interpretation of the CR with DL algorithms can significantly reduce the burden on clinicians and radiologists in sudden surges of suspected COVID-19 patients. The aim of this study was to evaluate the efficacy of the DL algorithm for detecting COVID-19 pneumonia on CR compared with formal radiology reports. This is a retrospective study of adult patients that were diagnosed as positive COVID-19 cases based on the reverse transcription polymerase chain reaction among all the patients who were admitted to five emergency departments and one community treatment center in Korea from February 18, 2020 to May 1, 2020. The CR images were evaluated with a publicly available DL algorithm. For reference, CR images without chest computed tomography (CT) scans classified as positive for COVID-19 pneumonia were used given that the radiologist identified ground-glass opacity, consolidation, or other infiltration in retrospectively reviewed CR images. Patients with evidence of pneumonia on chest CT scans were also classified as COVID-19 pneumonia positive outcomes. The overall sensitivity and specificity of the DL algorithm for detecting COVID-19 pneumonia on CR were 95.6%, and 88.7%, respectively. The area under the curve value of the DL algorithm for the detection of COVID-19 with pneumonia was 0.921. The DL algorithm demonstrated a satisfactory diagnostic performance comparable with that of formal radiology reports in the CR-based diagnosis of pneumonia in COVID-19 patients. 
The DL algorithm may offer fast and reliable examinations that can facilitate patient screening and isolation decisions, which can reduce the medical staff workload during COVID-19 pandemic situations. | PloS one | "2020-11-25T00:00:00" | [
"Se Bum Jang",
"Suk Hee Lee",
"Dong Eun Lee",
"Sin-Youl Park",
"Jong Kun Kim",
"Jae Wan Cho",
"Jaekyung Cho",
"Ki Beom Kim",
"Byunggeon Park",
"Jongmin Park",
"Jae-Kwang Lim"
] | 10.1371/journal.pone.0242759
10.1056/NEJMoa2001017
10.3346/jkms.2020.35.e189
10.1148/radiol.2019191225
10.1001/jamanetworkopen.2019.1095
10.1148/radiol.2017162326
10.1148/radiol.2018180237
10.3390/jcm9061981
10.3346/jkms.2020.35.e140
10.2214/AJR.18.20490
10.3348/kjr.2020.0536
10.1097/RTI.0000000000000512
10.1148/radiol.2020200642
10.2214/AJR.20.22954
10.1148/radiol.2020203173
10.1148/radiol.2020201365
10.1148/radiol.2020201160
10.1007/s00330-020-06827-4 |
DeepCOVID-XR: An Artificial Intelligence Algorithm to Detect COVID-19 on Chest Radiographs Trained and Tested on a Large U.S. Clinical Data Set. | Background There are characteristic findings of coronavirus disease 2019 (COVID-19) on chest images. An artificial intelligence (AI) algorithm to detect COVID-19 on chest radiographs might be useful for triage or infection control within a hospital setting, but prior reports have been limited by small data sets, poor data quality, or both. Purpose To present DeepCOVID-XR, a deep learning AI algorithm to detect COVID-19 on chest radiographs, that was trained and tested on a large clinical data set. Materials and Methods DeepCOVID-XR is an ensemble of convolutional neural networks developed to detect COVID-19 on frontal chest radiographs, with reverse-transcription polymerase chain reaction test results as the reference standard. The algorithm was trained and validated on 14 788 images (4253 positive for COVID-19) from sites across the Northwestern Memorial Health Care System from February 2020 to April 2020 and was then tested on 2214 images (1192 positive for COVID-19) from a single hold-out institution. Performance of the algorithm was compared with interpretations from five experienced thoracic radiologists on 300 random test images using the McNemar test for sensitivity and specificity and the DeLong test for the area under the receiver operating characteristic curve (AUC). Results A total of 5853 patients (mean age, 58 years ± 19 [standard deviation]; 3101 women) were evaluated across data sets. For the entire test set, accuracy of DeepCOVID-XR was 83%, with an AUC of 0.90. For 300 random test images (134 positive for COVID-19), accuracy of DeepCOVID-XR was 82%, compared with that of individual radiologists (range, 76%-81%) and the consensus of all five radiologists (81%). DeepCOVID-XR had a significantly higher sensitivity (71%) than one radiologist (60%, | Radiology | "2020-11-25T00:00:00" | [
"Ramsey M Wehbe",
"Jiayue Sheng",
"Shinjan Dutta",
"Siyuan Chai",
"Amil Dravid",
"Semih Barutcu",
"Yunan Wu",
"Donald R Cantrell",
"Nicholas Xiao",
"Bradley D Allen",
"Gregory A MacNealy",
"Hatice Savas",
"Rishi Agrawal",
"Nishant Parekh",
"Aggelos K Katsaggelos"
] | 10.1148/radiol.2020203511
10.1109/CVPR.2016.90
10.1109/CVPR.2016.308
10.1109/CVPR.2017.369
10.1109/ICCV.2017.74
10.1101/2020.09.13.20193565 |
Deep Transfer Learning for COVID-19 Prediction: Case Study for Limited Data Problems. | Automatic prediction of COVID-19 using deep convolutional neural network-based pre-trained transfer models and chest X-ray images.
This research employs the advantages of computer vision and medical image analysis to develop an automated model that has the clinical potential for early detection of the disease. Using Deep Learning models, the research aims at evaluating the effectiveness and accuracy of different convolutional neural networks models in the automatic diagnosis of COVID-19 from X-ray images as compared to diagnosis performed by experts in the medical community.
Due to the fact that the dataset available for COVID-19 is still limited, the best model to use is the InceptionNetV3. Performance results show that the InceptionNetV3 model yielded the highest accuracy of 98.63% (with data augmentation) and 98.90% (without data augmentation) among the three models designed. However, as the dataset gets bigger, the Inception ResNetV2 and NASNetLarge will do a better job of classification. All the trained networks tend to over-fit when data augmentation is not used; this is due to the small amount of data used for training and validation.
A deep transfer learning approach is proposed to detect COVID-19 automatically from chest X-rays by training it with X-ray images obtained from both COVID-19 patients and people with normal chest X-rays. The study is aimed at helping doctors make decisions in their clinical practice owing to its high performance and effectiveness; the study also gives insight into how transfer learning was used to automatically detect COVID-19. | Current medical imaging | "2020-11-25T00:00:00" | [
"Saleh Albahli",
"Waleed Albattah"
] | 10.2174/1573405616666201123120417
10.1109/ACCESS.2020.3031614
10.1148/radiol.2020200642
10.1109/TMI.2016.2553401
10.1039/C8SC00148K
10.1016/j.catena.2019.104426
10.1021/acs.molpharmaceut.7b00578
10.1023/A:1007379606734
10.1109/TPAMI.2013.50
10.1016/j.cell.2018.02.010
10.1016/j.irbm.2020.05.003
10.1109/IIPHDW.2018.8388338
10.1118/1.1487426
10.1007/s10278-003-1655-x
10.1016/j.acra.2006.01.009
10.1007/s13089-009-0003-x
10.1007/BF03167768
10.1007/BF03167769
10.1088/0031-9155/56/24/004
10.1109/TITB.2005.859872
10.1109/CHASE.2017.59
10.1016/j.neucom.2018.12.086
10.1016/j.compbiomed.2020.103795
10.1109/BigComp48618.2020.00-25 |
COVID-19 pneumonia accurately detected on chest radiographs with artificial intelligence. | To investigate the diagnostic performance of an Artificial Intelligence (AI) system for detection of COVID-19 in chest radiographs (CXR), and compare results to those of physicians working alone, or with AI support.
An AI system was fine-tuned to discriminate confirmed COVID-19 pneumonia from other viral and bacterial pneumonia and non-pneumonia patients, and used to review 302 CXR images from adult patients retrospectively sourced from nine different databases. Fifty-four physicians, blinded to diagnosis, were invited to interpret images under identical conditions in a test set, and randomly assigned either to receive or not receive support from the AI system. Comparisons were then made between the diagnostic performance of physicians working with and without AI support. AI system performance was evaluated using the area under the receiver operating characteristic curve (AUROC), and the sensitivity and specificity of physician performance were compared to those of the AI system.
Discrimination of COVID-19 pneumonia by the AI system yielded an AUROC of 0.96 in the validation set and 0.83 in the external test set. The AI system outperformed physicians in the AUROC overall (70% increase in sensitivity and 1% increase in specificity, p < 0.0001). When working with AI support, physicians increased their diagnostic sensitivity from 47% to 61% (p < 0.001), although specificity decreased from 79% to 75% (p = 0.007).
Our results suggest interpreting chest radiographs (CXR) supported by AI, increases physician diagnostic sensitivity for COVID-19 detection. This approach involving a human-machine partnership may help expedite triaging efforts and improve resource allocation in the current crisis. | Intelligence-based medicine | "2020-11-25T00:00:00" | [
"Francisco Dorr",
"Hernán Chaves",
"María Mercedes Serra",
"Andrés Ramirez",
"Martín Elías Costa",
"Joaquín Seia",
"Claudia Cejas",
"Marcelo Castro",
"Eduardo Eyheremendy",
"Diego Fernández Slezak",
"Mauricio F Farez"
] | 10.1016/j.ibmed.2020.100014
10.1016/j.ijid.2020.01.009
10.1002/jmv.25678
10.1016/S0140-6736(20)30183-5
10.1101/2020.02.07.937862
10.1056/NEJMoa2001316
10.1016/S0140-6736(20)30211-7
10.1056/NEJMoa2002032
10.1016/j.chest.2020.04.003
10.1101/2020.02.11.20021493
10.1371/journal.pone.0204155
10.1371/journal.pmed.1002686
10.1097/RTI.0000000000000387
10.1016/j.cell.2018.02.010
10.1016/j.crad.2018.12.015
10.2214/AJR.20.23034
10.1016/j.jclinepi.2009.11.009
10.1148/radiol.2020200642
10.1148/radiol.2020200432
10.1038/s41591-020-0931-3
10.1148/radiol.2020201326
10.1136/pmj.79.930.214
10.1148/radiol.2020201160
10.1016/j.ejrad.2020.109272
10.1148/radiol.2020201874
10.1016/j.dsx.2020.04.012
10.1038/s41746-019-0189-7 |
Abnormal lung quantification in chest CT images of COVID-19 patients with deep learning and its application to severity prediction. | Computed tomography (CT) provides rich diagnosis and severity information of COVID-19 in clinical practice. However, there is no computerized tool to automatically delineate COVID-19 infection regions in chest CT scans for quantitative assessment in advanced applications such as severity prediction. The aim of this study was to develop a deep learning (DL)-based method for automatic segmentation and quantification of infection regions as well as the entire lungs from chest CT scans.
The DL-based segmentation method employs the "VB-Net" neural network to segment COVID-19 infection regions in CT scans. The developed DL-based segmentation system is trained by CT scans from 249 COVID-19 patients, and further validated by CT scans from other 300 COVID-19 patients. To accelerate the manual delineation of CT scans for training, a human-involved-model-iterations (HIMI) strategy is also adopted to assist radiologists to refine automatic annotation of each training case. To evaluate the performance of the DL-based segmentation system, three metrics, that is, Dice similarity coefficient, the differences of volume, and percentage of infection (POI), are calculated between automatic and manual segmentations on the validation set. Then, a clinical study on severity prediction is reported based on the quantitative infection assessment.
The proposed DL-based segmentation system yielded Dice similarity coefficients of 91.6% ± 10.0% between automatic and manual segmentations, and a mean POI estimation error of 0.3% for the whole lung on the validation dataset. Moreover, compared with the cases with fully manual delineation that often takes hours, the proposed HIMI training strategy can dramatically reduce the delineation time to 4 min after three iterations of model updating. Besides, the best accuracy of severity prediction was 73.4% ± 1.3% when the mass of infection (MOI) of multiple lung lobes and bronchopulmonary segments were used as features for severity prediction, indicating the potential clinical application of our quantification technique on severity prediction.
A DL-based segmentation system has been developed to automatically segment and quantify infection regions in CT scans of COVID-19 patients. Quantitative evaluation indicated high accuracy in automatic infection delineation and severity prediction. | Medical physics | "2020-11-24T00:00:00" | [
"Fei Shan",
"Yaozong Gao",
"Jun Wang",
"Weiya Shi",
"Nannan Shi",
"Miaofei Han",
"Zhong Xue",
"Dinggang Shen",
"Yuxin Shi"
] | 10.1002/mp.14609
10.1109/rbme.2020.2990959
10.2139/ssrn.3546089
10.2214/AJR.20.23202 |
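The Dice similarity coefficient that the segmentation record above uses to compare automatic and manual delineations is straightforward to compute; the masks below are toy arrays, not the paper's segmentations:

```python
# Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks, as used above to
# score automatic vs manual infection-region segmentations.
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

auto = np.zeros((8, 8), dtype=int)
auto[2:6, 2:6] = 1      # 16-pixel "automatic" mask
manual = np.zeros((8, 8), dtype=int)
manual[3:7, 3:7] = 1    # 16-pixel "manual" mask; 9 pixels overlap
score = dice_coefficient(auto, manual)  # 2*9 / (16+16) = 0.5625
```

A score of ~0.92, as the paper reports, corresponds to masks whose overlap is far tighter than this toy example.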
Rapid COVID-19 diagnosis using ensemble deep transfer learning models from chest radiographic images. | The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused a novel coronavirus disease (COVID-19) outbreak in more than 200 countries around the world. The early diagnosis of infected patients is needed to halt this outbreak. The diagnosis of coronavirus infection from radiography images is the fastest method. In this paper, two different ensemble deep transfer learning models have been designed for COVID-19 diagnosis utilizing chest X-rays. Both models utilize pre-trained models for better performance and are able to differentiate COVID-19, viral pneumonia, and bacterial pneumonia. Both models have been developed to improve the generalization capability of the classifier for binary and multi-class problems. The proposed models have been tested on two well-known datasets. Experimental results reveal that the proposed framework outperforms the existing techniques in terms of sensitivity, specificity, and accuracy. | Journal of ambient intelligence and humanized computing | "2020-11-24T00:00:00" | [
"Neha Gianchandani",
"Aayush Jaiswal",
"Dilbag Singh",
"Vijay Kumar",
"Manjit Kaur"
] | 10.1007/s12652-020-02669-6
10.3390/s19194139
10.1049/trit.2019.0028
10.1016/j.radi.2020.04.017
10.3390/app10020559
10.2807/1560-7917.ES.2020.25.3.2000045
10.1007/s13246-020-00888-x
10.1016/j.irbm.2020.07.001
10.1049/trit.2019.0051
10.1049/trit.2018.1006
10.1186/s40708-018-0080-3
10.1016/j.imu.2020.100412
10.1016/j.cmpb.2020.105581
10.3390/sym12040651
10.1016/j.compbiomed.2020.103869
10.1504/IJHM.2019.098951
10.1016/j.compbiomed.2020.103792
10.1016/j.imu.2020.100360
10.1101/2020.06.21.20136598
10.1146/annurev-bioeng-071516-044442
10.2478/v10006-009-0026-2
10.1016/j.compbiomed.2020.103805
10.1016/j.chemolab.2020.104054
10.1109/ACCESS.2020.2994762
10.1007/s10916-018-0932-7
10.1504/IJHM.2019.102893
10.1504/IJHM.2019.098949 |
Automatic detection of COVID-19 from chest radiographs using deep learning. | The outbreak of a deadly infectious disease caused by a newly discovered coronavirus (named SARS n-CoV2) back in December 2019 has shown no sign of slowing or stopping in general. This contagious disease has spread across different lengths and breadths of the globe, taking the death toll to nearly 700 k by the start of August 2020. The number is well expected to rise even more significantly. In the absence of a thoroughly tested and approved vaccine, the onus primarily lies on adhering to standard operating procedures and the timely detection and isolation of infected persons. The detection of SARS n-CoV2 has been one of the core concerns during the fight against this pandemic. To keep up with the scale of the outbreak, testing needs to be scaled at par with it. With conventional PCR testing, most countries have struggled to minimize the gap between the scale of the outbreak and the scale of testing.
One way of expediting the scale of testing is to shift to a rigorous computational model driven by deep neural networks, as proposed here in this paper. The proposed model is a non-contact process of determining whether a subject is infected or not and is achieved by using chest radiographs, one of the most widely used imaging techniques for clinical diagnosis due to fast imaging and low cost. The dataset used in this work contains 1428 chest radiographs with confirmed COVID-19-positive, common bacterial pneumonia, and healthy cases (no infection). We explored the pre-trained VGG-16 model for classification tasks in this work. Transfer learning with fine-tuning was used in this study to train the network effectively on relatively small chest radiograph datasets.
Initial experiments showed that the model achieved promising results and can be significantly used to expedite COVID-19 detection. The experimentation showed an accuracy of 96% and 92.5% in two and three output class cases, respectively.
We believe that this study could be used for initial screening, which can help healthcare professionals to treat COVID patients by detecting and screening for the presence of the disease in a timely manner.
The proposed deep neural network model's simplicity, its capability to work on a small image dataset, and its non-contact method with acceptable accuracy make it a potential alternative for rapid COVID-19 testing that can be adopted by the medical fraternity, considering the criticality of the time along with the magnitude of the outbreak. | Radiography (London, England : 1995) | "2020-11-24T00:00:00" | [
"M K Pandit",
"S A Banday",
"R Naaz",
"M A Chishti"
] | 10.1016/j.radi.2020.10.018 |
Epicardial adipose tissue is associated with extent of pneumonia and adverse outcomes in patients with COVID-19. | We sought to examine the association of epicardial adipose tissue (EAT) quantified on chest computed tomography (CT) with the extent of pneumonia and adverse outcomes in patients with coronavirus disease 2019 (COVID-19).
We performed a post-hoc analysis of a prospective international registry comprising 109 consecutive patients (age 64 ± 16 years; 62% male) with laboratory-confirmed COVID-19 and noncontrast chest CT imaging. Using semi-automated software, we quantified the burden (%) of lung abnormalities associated with COVID-19 pneumonia. EAT volume (mL) and attenuation (Hounsfield units) were measured using deep learning software. The primary outcome was clinical deterioration (intensive care unit admission, invasive mechanical ventilation, or vasopressor therapy) or in-hospital death.
In multivariable linear regression analysis adjusted for patient comorbidities, the total burden of COVID-19 pneumonia was associated with EAT volume (β = 10.6, p = 0.005) and EAT attenuation (β = 5.2, p = 0.004). EAT volume correlated with serum levels of lactate dehydrogenase (r = 0.361, p = 0.001) and C-reactive protein (r = 0.450, p < 0.001). Clinical deterioration or death occurred in 23 (21.1%) patients at a median of 3 days (IQR 1-13 days) following the chest CT. In multivariable logistic regression analysis, EAT volume (OR 5.1 [95% CI 1.8-14.1] per doubling p = 0.011) and EAT attenuation (OR 3.4 [95% CI 1.5-7.5] per 5 Hounsfield unit increase, p = 0.003) were independent predictors of clinical deterioration or death, as was total pneumonia burden (OR 2.5, 95% CI 1.4-4.6, p = 0.002), chronic lung disease (OR 1.3 [95% CI 1.1-1.7], p = 0.011), and history of heart failure (OR 3.5 [95% 1.1-8.2], p = 0.037).
EAT measures quantified from chest CT are independently associated with extent of pneumonia and adverse outcomes in patients with COVID-19, lending support to their use in clinical risk stratification. | Metabolism: clinical and experimental | "2020-11-23T00:00:00" | [
"Kajetan Grodecki",
"Andrew Lin",
"Aryabod Razipour",
"Sebastien Cadet",
"Priscilla A McElhinney",
"Cato Chan",
"Barry D Pressman",
"Peter Julien",
"Pal Maurovich-Horvat",
"Nicola Gaibazzi",
"Udit Thakur",
"Elisabetta Mancini",
"Cecilia Agalbato",
"Robert Menè",
"Gianfranco Parati",
"Franco Cernigliaro",
"Nitesh Nerlekar",
"Camilla Torlasco",
"Gianluca Pontone",
"Piotr J Slomka",
"Damini Dey"
] | 10.1016/j.metabol.2020.154436
10.1111/dom.14125
10.1002/dmrr.3325
10.1002/oby.23019
10.1056/NEJMoa2021436 |
Deep learning applications to combat the dissemination of COVID-19 disease: a review. | The recent Coronavirus (COVID-19) is a respiratory disease known for its fast infectious ability. Its dissemination can be decelerated by diagnosing and quarantining patients with COVID-19 at early stages, thereby saving numerous lives. Reverse transcription-polymerase chain reaction (RT-PCR) is known as one of the primary diagnostic tools. However, RT-PCR tests are costly and time-consuming; they also require specific materials, equipment, and instruments. Moreover, most countries are suffering from a lack of testing kits because of limitations on budget and techniques. Thus, this standard method is not suitable to meet the requirements of fast detection and tracking during the COVID-19 pandemic, which motivated employing deep learning (DL)/convolutional neural network (CNN) technology with X-ray and CT scans for efficient analysis and diagnosis. This study provides insight into the literature that discusses deep learning technology and its various techniques recently developed to combat the dissemination of COVID-19 disease. | European review for medical and pharmacological sciences | "2020-11-21T00:00:00" | [
"M H Alsharif",
"Y H Alsharif",
"K Yahya",
"O A Alomari",
"M A Albreem",
"A Jahid"
] | 10.26355/eurrev_202011_23640 |
Open resource of clinical data from patients with pneumonia for the prediction of COVID-19 outcomes via deep learning. | Data from patients with coronavirus disease 2019 (COVID-19) are essential for guiding clinical decision making, for furthering the understanding of this viral disease, and for diagnostic modelling. Here, we describe an open resource containing data from 1,521 patients with pneumonia (including COVID-19 pneumonia) consisting of chest computed tomography (CT) images, 130 clinical features (from a range of biochemical and cellular analyses of blood and urine samples) and laboratory-confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) clinical status. We show the utility of the database for prediction of COVID-19 morbidity and mortality outcomes using a deep learning algorithm trained with data from 1,170 patients and 19,685 manually labelled CT slices. In an independent validation cohort of 351 patients, the algorithm discriminated between negative, mild and severe cases with areas under the receiver operating characteristic curve of 0.944, 0.860 and 0.884, respectively. The open database may have further uses in the diagnosis and management of patients with COVID-19. | Nature biomedical engineering | "2020-11-20T00:00:00" | [
"Wanshan Ning",
"Shijun Lei",
"Jingjing Yang",
"Yukun Cao",
"Peiran Jiang",
"Qianqian Yang",
"Jiao Zhang",
"Xiaobei Wang",
"Fenghua Chen",
"Zhi Geng",
"Liang Xiong",
"Hongmei Zhou",
"Yaping Guo",
"Yulan Zeng",
"Heshui Shi",
"Lin Wang",
"Yu Xue",
"Zheng Wang"
] | 10.1038/s41551-020-00633-5
10.1056/NEJMoa2001017
10.1038/s41586-020-2008-3
10.1016/S0140-6736(20)30627-9
10.1148/radiol.2020200490
10.1038/s41586-020-2012-7
10.1097/RLI.0000000000000670
10.1056/NEJMoa2001316
10.1016/S0140-6736(20)30251-8
10.1016/S0140-6736(20)30183-5
10.1016/S0140-6736(20)30154-9
10.1016/j.cca.2020.03.009
10.1016/j.tmaid.2020.101623
10.1016/S1473-3099(20)30086-4
10.1136/bmj.m1443
10.1136/bmj.m1091
10.1016/S0140-6736(20)30566-3
10.1016/j.ejrad.2020.108941
10.1056/NEJMoa2002032
10.1016/S2213-2600(20)30079-5
10.1001/jamainternmed.2020.0994
10.1016/j.cell.2020.04.045
10.1007/s10916-020-01562-1
10.3348/kjr.2020.0146
10.1148/radiol.2020200905
10.21037/atm-20-3026
10.1109/TMI.2020.2995965
10.1007/s13246-020-00888-x
10.7717/peerj.453 |
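The per-class AUC values reported above (0.944, 0.860 and 0.884 for negative, mild and severe cases) follow the usual one-vs-rest construction. As a minimal illustration of how such figures are computed (not the paper's implementation; the helper names `auc_mann_whitney` and `one_vs_rest_auc` are hypothetical), ROC AUC equals the Mann-Whitney probability that a randomly chosen positive sample is scored above a randomly chosen negative one, with ties counting half:

```python
def auc_mann_whitney(scores, labels):
    """ROC AUC as the probability that a random positive outranks a
    random negative sample (ties contribute 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def one_vs_rest_auc(prob_rows, labels, n_classes):
    """Per-class AUC: class k's predicted probability is scored against
    the binary label 'sample belongs to class k'."""
    return [auc_mann_whitney([row[k] for row in prob_rows],
                             [1 if y == k else 0 for y in labels])
            for k in range(n_classes)]
```

For three classes (negative/mild/severe) this yields three AUCs, one per class, matching the reporting style of the abstract.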
An efficient mixture of deep and machine learning models for COVID-19 diagnosis in chest X-ray images. | A newly emerged coronavirus (COVID-19) seriously threatens human life and health worldwide. In coping with and fighting against COVID-19, the most critical step is to effectively screen and diagnose infected patients. Among the available tools, chest X-ray imaging is a valuable diagnostic method. Using computer-aided diagnosis to screen X-ray images of COVID-19 cases can provide experts with auxiliary diagnostic suggestions, which can reduce their burden to a certain extent. In this study, we first applied conventional transfer learning with five pre-trained deep learning models, of which the Xception model showed a relatively ideal effect, reaching a diagnostic accuracy of 96.75%. To further improve the diagnostic accuracy, we propose an efficient diagnostic method that combines deep features with machine learning classification, implementing an end-to-end diagnostic model. The proposed method was tested on two datasets and performed exceptionally well on both. We first evaluated the model on 1102 chest X-ray images. The experimental results show that the diagnostic accuracy of Xception + SVM is as high as 99.33%. Compared with the baseline Xception model, the diagnostic accuracy is improved by 2.58%. The sensitivity, specificity and AUC of this model reached 99.27%, 99.38% and 99.32%, respectively. To further illustrate the robustness of our method, we also tested the proposed model on another dataset and again achieved good results. Compared with related research, our proposed method has higher classification accuracy and more efficient diagnostic performance. Overall, the proposed method substantially advances the current radiology-based methodology; it can be a very helpful tool for clinical practitioners and radiologists to aid them in the diagnosis and follow-up of COVID-19 cases. | PloS one | "2020-11-18T00:00:00" | [
"Dingding Wang",
"Jiaqing Mo",
"Gang Zhou",
"Liang Xu",
"Yajun Liu"
] | 10.1371/journal.pone.0242535
10.1016/j.ijsu.2020.02.034
10.1148/radiol.2020200330
10.1148/ryct.2020200034
10.1016/j.jinf.2020.03.007
10.1038/nature14539
10.1007/s13246-020-00865-4
10.1016/j.compbiomed.2020.103792
10.1016/j.chaos.2020.109944
10.1016/j.cmpb.2020.105581
10.1007/BF00116251
10.1007/BF00994018
10.1101/2020.02.14.20023028
10.1101/2020.03.12.20027185 |
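The sensitivity, specificity and accuracy figures quoted in the abstract above all reduce to confusion-matrix counts on a binary test set. A minimal sketch of that bookkeeping (the function name `binary_metrics` is illustrative, not from the paper):

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity (recall on positives), specificity (recall on
    negatives) and accuracy from paired binary labels (1 = COVID-19)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / len(y_true)}
```

Sensitivity and specificity are independent of class prevalence, which is why papers on imbalanced screening datasets report them alongside raw accuracy.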
AI-assisted CT imaging analysis for COVID-19 screening: Building and deploying a medical AI system. | The sudden outbreak of the novel coronavirus disease 2019 (COVID-19) increased the diagnostic burden on radiologists. In a time of epidemic crisis, we hope artificial intelligence (AI) can reduce physician workload in regions with an outbreak and improve diagnostic accuracy for physicians before they have acquired enough experience with the new disease. In this paper, we present our experience in building and deploying an AI system that automatically analyzes CT images and provides the probability of infection to rapidly detect COVID-19 pneumonia. The proposed system, which consists of classification and segmentation, will save about 30%-40% of the detection time for physicians and improve the performance of COVID-19 detection. Specifically, working in an interdisciplinary team of over 30 people with medical and/or AI backgrounds, geographically distributed in Beijing and Wuhan, we were able to overcome a series of challenges ( | Applied soft computing | "2020-11-18T00:00:00" | [
"Bo Wang",
"Shuo Jin",
"Qingsen Yan",
"Haibo Xu",
"Chuan Luo",
"Lai Wei",
"Wei Zhao",
"Xuexue Hou",
"Wenshuo Ma",
"Zhengqing Xu",
"Zhuozhao Zheng",
"Wenbo Sun",
"Lan Lan",
"Wei Zhang",
"Xiangdong Mu",
"Chenxi Shi",
"Zhongxiao Wang",
"Jihae Lee",
"Zijian Jin",
"Minggui Lin",
"Hongbo Jin",
"Liang Zhang",
"Jun Guo",
"Benqi Zhao",
"Zhizhong Ren",
"Shuhao Wang",
"Wei Xu",
"Xinghuan Wang",
"Jianming Wang",
"Zheng You",
"Jiahong Dong"
] | 10.1016/j.asoc.2020.106897
10.1016/j.cviu.2020.103079 |
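The last abstract describes a deployed system whose classification and segmentation branches jointly produce an infection probability. The paper does not specify how the two outputs are fused; purely as an illustration, one simple fusion is a weighted blend of the classifier's probability and the lesion fraction from the segmentation mask (the function name, the `alpha` weight, and the blending rule are all assumptions, not the authors' method):

```python
def infection_probability(cls_prob, lesion_mask, alpha=0.7):
    """Hypothetical fusion: blend a classifier's COVID-19 probability
    with the lesion fraction of a lung segmentation mask.
    cls_prob: classifier output in [0, 1].
    lesion_mask: iterable of 0/1 pixel labels inside the lung region.
    alpha: weight given to the classifier branch."""
    mask = list(lesion_mask)
    lesion_fraction = sum(mask) / len(mask) if mask else 0.0
    return alpha * cls_prob + (1 - alpha) * lesion_fraction
```

In a real deployment the fusion rule would itself be validated against labeled cases; this sketch only shows why exposing both branch outputs (probability plus an explainable lesion extent) is useful to a radiologist reviewing the result.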