Columns: title (string, 2–287 chars) · abstract (string, 0–5.14k chars) · journal (string, 4–184 chars) · date (unknown) · authors (sequence, 1–57 entries) · doi (string, 16–6.63k chars)
Predictors of venous thromboembolism in COVID-19 patients: results of the COVID-19 Brazilian Registry.
Previous studies that assessed risk factors for venous thromboembolism (VTE) in COVID-19 patients have shown inconsistent results. Our aim was to investigate VTE predictors by both logistic regression (LR) and machine learning (ML) approaches, due to their potential complementarity. This cohort study of a large Brazilian COVID-19 Registry included 4,120 COVID-19 adult patients from 16 hospitals. Symptomatic VTE was confirmed by objective imaging. LR analysis, tree-based boosting, and bagging were used to investigate the association of variables upon hospital presentation with VTE. Among the 4,120 patients (55.5% men, 39.3% critical patients), VTE was confirmed in 6.7%. In multivariate LR analysis, obesity (OR 1.50, 95% CI 1.11-2.02); being an ex-smoker (OR 1.44, 95% CI 1.03-2.01); surgery ≤ 90 days (OR 2.20, 95% CI 1.14-4.23); axillary temperature (OR 1.41, 95% CI 1.22-1.63); D-dimer ≥ 4 times above the upper limit of reference value (OR 2.16, 95% CI 1.26-3.67); lactate (OR 1.10, 95% CI 1.02-1.19); C-reactive protein levels (CRP; OR 1.09, 95% CI 1.01-1.18); and neutrophil count (OR 1.04, 95% CI 1.005-1.075) were independent predictors of VTE. Atrial fibrillation, peripheral oxygen saturation/inspired oxygen fraction (SF) ratio, and prophylactic use of anticoagulants were protective. Temperature at admission, SF ratio, neutrophil count, D-dimer, CRP, and lactate levels were also identified as predictors by ML methods. By using ML and LR analyses, we showed that D-dimer, axillary temperature, neutrophil count, CRP, and lactate levels are risk factors for VTE in COVID-19 patients.
Internal and emergency medicine
"2022-06-02T00:00:00"
[ "Warley Cezar da Silveira", "Lucas Emanuel Ferreira Ramos", "Rafael Tavares Silva", "Bruno Barbosa Miranda de Paiva", "Polianna Delfino Pereira", "Alexandre Vargas Schwarzbold", "Andresa Fontoura Garbini", "Bruna Schettino Morato Barreira", "Bruno Mateus de Castro", "Carolina Marques Ramos", "Caroline Danubia Gomes", "Christiane Corrêa Rodrigues Cimini", "Elayne Crestani Pereira", "Eliane Würdig Roesch", "Emanuele Marianne Souza Kroger", "Felipe Ferraz Martins Graça Aranha", "Fernando Anschau", "Fernando Antonio Botoni", "Fernando Graça Aranha", "Gabriela Petry Crestani", "Giovanna Grunewald Vietta", "Gisele Alsina Nader Bastos", "Jamille Hemétrio Salles Martins Costa", "Jéssica Rayane Corrêa Silva da Fonseca", "Karen Brasil Ruschel", "Leonardo Seixas de Oliveira", "Lílian Santos Pinheiro", "Liliane Souto Pacheco", "Luciana Borges Segala", "Luciana Siuves Ferreira Couto", "Luciane Kopittke", "Maiara Anschau Floriani", "Majlla Magalhães Silva", "Marcelo Carneiro", "Maria Angélica Pires Ferreira", "Maria Auxiliadora Parreiras Martins", "Marina Neves Zerbini de Faria", "Matheus Carvalho Alves Nogueira", "Milton Henriques Guimarães Júnior", "Natália da Cunha Severino Sampaio", "Neimy Ramos de Oliveira", "Nicole de Moraes Pertile", "Pedro Guido Soares Andrade", "Pedro Ledic Assaf", "Reginaldo Aparecido Valacio", "Rochele Mosmann Menezes", "Saionara Cristina Francisco", "Silvana Mangeon Meirelles Guimarães", "Silvia Ferreira Araújo", "Suely Meireles Rezende", "Susany Anastácia Pereira", "Tatiana Kurtz", "Tatiani Oliveira Fereguetti", "Carísi Anne Polanczyk", "Magda Carvalho Pires", "Marcos André Gonçalves", "Milena Soriano Marcolino" ]
10.1007/s11739-022-03002-z 10.1016/S2352-3026(15)00202-1 10.1001/archinte.162.11.1245 10.1016/j.amjmed.2013.02.024 10.1378/chest.09-0959 10.1093/eurheartj/ehaa623 10.1111/jth.14854 10.1161/CIRCULATIONAHA.120.050354 10.1007/s11739-020-02601-y 10.1148/radiol.2020203557 10.1177/1076029620967083 10.1182/bloodadvances.2020003083 10.1007/s00134-020-06062-x 10.1111/jth.14869 10.1001/jama.2020.13372 10.1016/j.eclinm.2020.100639 10.15585/mmwr.mm6924e2 10.1001/jama.2020.1585 10.1007/s11739-021-02891-w 10.1016/j.chest.2020.08.2064 10.1016/j.chest.2020.07.031 10.1093/eurheartj/ehaa500 10.1016/j.ajem.2021.09.004 10.1016/j.ijid.2021.11.038 10.1016/j.tru.2021.100037 10.1177/10760296211040868 10.1016/j.ijid.2021.01.019 10.1371/journal.pone.0243533 10.1055/a-1366-9656 10.1016/j.jbi.2008.08.010 10.1016/j.jbi.2019.103208 10.1016/j.apnr.2010.02.004 10.1111/jth.14929 10.1093/eurheartj/ehz405 10.1515/cclm-2020-0573 10.1111/jth.15261 10.1016/j.jacc.2020.04.031 10.1001/jamasurg.2019.3742 10.1016/j.ijid.2021.06.005 10.1046/j.1467-789X.2002.00056.x 10.1016/j.jacc.2006.08.040 10.1046/j.1538-7836.2003.00279.x 10.1038/oby.2002.98 10.1016/S1262-3636(07)70251-3 10.1038/ijo.2011.19 10.1007/s11739-020-02355-7 10.1378/chest.15-0287 10.1002/rth2.12065 10.1097/01.CCM.0000201882.23917.B8 10.1097/CRD.0000000000000347 10.1016/j.ijid.2021.07.049 10.1378/chest.07-0617 10.1177/10760296211008999 10.1001/jamainternmed.2021.6203 10.1515/spp-2019-0010 10.1214/aos/1013203451 10.1161/CIRCULATIONAHA.120.047407 10.1111/j.1538-7836.2010.04034.x 10.1093/cid/ciq125
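The abstract above pairs logistic regression (for odds ratios) with tree-based ensembles. The LR half can be sketched in Python on synthetic data — a minimal illustration, not the registry analysis; the single binary predictor and its effect size are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort (hypothetical): one binary risk factor, e.g. elevated D-dimer
n = 4000
x = rng.integers(0, 2, n)                      # predictor: 0 = absent, 1 = present
logit = -2.6 + 0.77 * x                        # true log-odds (simulated OR ≈ 2.16)
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # VTE outcome drawn from the model

# Fit logistic regression by Newton-Raphson on [intercept, slope]
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))            # predicted probabilities
    W = p * (1 - p)                            # IRLS weights
    grad = X.T @ (y - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

odds_ratio = np.exp(beta[1])                   # estimate should land near the simulated value
print(round(odds_ratio, 2))
```

For the ML half of the comparison, a boosting or bagging model from any standard library could be fit to the same `X` and `y`.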
An interpretable multi-task system for clinically applicable COVID-19 diagnosis using CXR.
With the emergence of continuously mutating variants of coronavirus, it is urgent to develop a deep learning model for automatic COVID-19 diagnosis at early stages from chest X-ray images. Since laboratory testing is time-consuming and requires trained laboratory personnel, diagnosis using chest X-ray (CXR) is a befitting option. In this study, we proposed an interpretable multi-task system for automatic lung detection and COVID-19 screening in chest X-rays, to find an alternate testing method that is reliable, fast, easily accessible, and able to generate interpretable predictions that are strongly correlated with radiological findings. The proposed system consists of image preprocessing and an unsupervised machine learning (UML) algorithm for lung region detection, as well as a truncated CNN model based on deep transfer learning (DTL) to classify chest X-rays into the three classes of COVID-19, pneumonia, and normal. The Grad-CAM technique was applied to create class-specific heatmap images in order to establish trust in the medical AI system. Experiments were performed with 15,884 frontal CXR images to show that the proposed system achieves an accuracy of 91.94% on a test dataset with 2,680 images, including a sensitivity of 94.48% on COVID-19 cases, a specificity of 88.46% on normal cases, and a precision of 88.01% on pneumonia cases. Our system also produced state-of-the-art outcomes with a sensitivity of 97.40% on public test data and 88.23% on previously unseen clinical data (1,000 cases) for binary classification of COVID-19-positive and COVID-19-negative films. Our automatic computerized evaluation for grading lung infections exhibited sensitivity comparable to that of radiologist interpretation in clinical applicability. Therefore, the proposed solution can be used as one element of patient evaluation along with gold-standard clinical and laboratory testing.
Journal of X-ray science and technology
"2022-06-01T00:00:00"
[ "Yan Zhuang", "Md Fashiar Rahman", "Yuxin Wen", "Michael Pokojovy", "Peter McCaffrey", "Alexander Vo", "Eric Walser", "Scott Moen", "Honglun Xu", "Tzu-Liang Bill Tseng" ]
10.3233/XST-221151
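The accuracy, per-class sensitivity, and precision figures quoted in abstracts like the one above all derive from a confusion matrix; a minimal sketch with made-up labels for the three classes:

```python
import numpy as np

# Hypothetical 3-class predictions (0 = COVID-19, 1 = pneumonia, 2 = normal)
y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2, 0, 1])
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 0, 0, 1])

k = 3
cm = np.zeros((k, k), dtype=int)      # rows = true class, columns = predicted class
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

accuracy = np.trace(cm) / cm.sum()            # fraction of correct predictions
sensitivity = cm.diagonal() / cm.sum(axis=1)  # per-class recall
precision = cm.diagonal() / cm.sum(axis=0)    # per-class positive predictive value
print(accuracy, sensitivity[0], precision[1])
```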
An Analysis of New Feature Extraction Methods Based on Machine Learning Methods for Classification Radiological Images.
The lungs are the most important focus of COVID-19, as the disease induces inflammatory changes in the lungs that can lead to respiratory insufficiency. A reduced oxygen supply to human cells is harmful, and in certain circumstances multiorgan failure with a high mortality rate may occur. Radiological pulmonary evaluation is a vital part of therapy for the critically ill patient with COVID-19, but evaluating radiological imagery is a specialized activity that requires a radiologist. Applying artificial intelligence to radiological images is therefore an essential topic. Using a deep machine learning technique to identify morphological differences in the lungs of COVID-19-infected patients could yield promising results on digital chest X-ray images: minor differences that are not detectable or apparent to the human eye may be detected using computer vision algorithms. This paper uses machine learning methods to diagnose COVID-19 on chest X-rays, and the findings have been very promising. The dataset includes COVID-19-enhanced X-ray images for disease detection using chest X-ray images, gathered from two publicly accessible datasets. Feature extraction is performed using gray level co-occurrence matrix methods.
Computational intelligence and neuroscience
"2022-06-01T00:00:00"
[ "Firoozeh Abolhasani Zadeh", "Mohammadreza Vazifeh Ardalani", "Ali Rezaei Salehi", "Roza Jalali Farahani", "Mandana Hashemi", "Adil Hussein Mohammed" ]
10.1155/2022/3035426 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1001/jama.2020.4861 10.5858/arpa.2020-0901-sa 10.1016/j.compbiomed.2020.103792 10.1016/j.acra.2020.03.003 10.1109/RBME.2020.2987975 10.1101/2020.04.04.20052092v2 10.1101/2020.04.04.20052092 10.1136/amiajnl-2012-001145 10.1038/nrg3208 10.4137/bii.s31559 10.1007/s11356-020-11644-9 10.1016/j.chaos.2020.110170 10.1007/s11356-021-13249-2 10.1016/j.compbiomed.2021.104454 10.1016/j.asoc.2021.107449 10.1155/2021/9995073 10.1016/j.asoc.2020.106912 10.1016/j.compbiomed.2021.104425 10.1109/access.2021.3058537 10.1038/nbt.4233 10.1038/s41591-018-0316-z 10.1146/annurev-bioeng-071516-044442 10.1016/j.drudis.2018.01.039 10.3390/s21155137 10.1016/j.media.2017.07.005 10.1109/ICCSRE.2019.8807741 10.1109/JBHI.2020.2986376 10.2174/1574893615999200607173829 10.3390/electronics8030292 10.1002/jctb.4820 10.1109/icip.2018.8451355 10.1109/cvpr.2017.243 10.1007/978-3-319-24574-4_28 10.1016/j.ejrad.2020.109041 10.1080/24749508.2019.1585657 10.24869/psyd.2020.570 10.1101/2020.05.04.20082081v1 10.1038/s41467-020-18685-1 10.24869/psyd.2020.262 10.1101/2020.04.01.20049825v1 10.1109/JIOT.2020.3007518 10.1007/s00521-020-05687-9 10.1016/s2468-2667(20)30073-6 10.21203/rs.3.rs-17715/v1 10.1111/exsy.12759 10.1148/radiol.2020200905 10.1007/s13246-020-00865-4 10.33889/IJMEMS.2020.5.4.052 10.1101/2020.04.04.20052092v2 10.1101/2020.03.30.20047787v1 10.1097/rli.0000000000000672 10.1101/2020.04.13.20063461v1 10.14299/ijser.2020.03.02 10.1101/2020.04.09.20058594v1 10.7717/peerj-cs.564 10.1007/s10044-020-00950-0 10.3390/s21217286 10.3390/app11199023 10.5589/m02-004 10.1080/00207454.2021.1883602 10.1109/tsmc.1985.6313426 10.1016/s0925-2312(03)00431-4 10.1080/10255842.2021.1921164 10.1016/j.cell.2018.02.010 10.1016/j.rxeng.2020.11.002 10.1016/j.idm.2020.04.001
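The gray level co-occurrence matrix (GLCM) features named above can be hand-rolled in a few lines; this sketch (with a made-up 4×4 quantised patch) builds one offset's normalised GLCM and computes the classic contrast and homogeneity statistics:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray level co-occurrence matrix for one pixel offset, normalised."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

# Hypothetical quantised 4x4 "radiograph patch" with 4 gray levels
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])

p = glcm(patch, levels=4)
idx = np.arange(4)
diff = idx[:, None] - idx[None, :]
contrast = (diff ** 2 * p).sum()              # weights dissimilar co-occurring levels
homogeneity = (p / (1 + np.abs(diff))).sum()  # weights similar co-occurring levels
print(round(contrast, 3), round(homogeneity, 3))
```

Texture vectors of such statistics, computed over several offsets, are what a downstream classifier consumes.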
Multiclass Classification of Chest X-Ray Images for the Prediction of COVID-19 Using Capsule Network.
It is critical to establish a reliable method for detecting people infected with COVID-19, since the pandemic has numerous harmful consequences worldwide. A chest X-ray can be used to determine whether a patient is infected with COVID-19. In this work, chest X-rays showing COVID-19 infection are classified by a capsule neural network (CapsNet) model trained to recognise them. The models were trained on 6310 chest X-ray images separated into three categories: normal, pneumonia, and COVID-19. This work presents an improved deep learning model for the classification of COVID-19 disease through X-ray images. Viewpoint invariance, fewer parameters, and better generalisation are among the advantages of CapsNet over classic convolutional neural network (CNN) models. The proposed model achieved an accuracy greater than 95% during training, which is better than other state-of-the-art algorithms. Furthermore, the model could provide extra information to aid in detecting COVID-19 in a chest X-ray.
Computational intelligence and neuroscience
"2022-06-01T00:00:00"
[ "Mahmoud Ragab", "Samah Alshehri", "Nabil A Alhakamy", "Romany F Mansour", "Deepika Koundal" ]
10.1155/2022/6185013 10.1108/WJE-10-2020-0529 10.1007/s12195-020-00642-z 10.4269/ajtmh.20-0280 10.3390/su12177090 10.3906/elk-2105-243 10.1155/2020/1289408 10.1038/s41598-020-76550-z 10.1111/exsy.12749 10.1080/17512433.2020.1832889 10.3390/info11120548 10.1016/j.bspc.2022.103778 10.1016/j.cmpb.2020.105581 10.1016/j.mehy.2020.109761 10.1007/s13246-020-00865-4 10.1016/b978-0-12-824536-1.00003-4 10.1101/2020.05.10.20097063 10.1007/s10489-020-01900-3 10.1038/s41597-021-00900-3 10.1016/j.cell.2018.02.010 10.1108/WJE-03-2021-0174 10.1155/2021/1233166 10.1016/j.jksuci.2019.09.014
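CapsNet's distinguishing nonlinearity is the "squash" function, which preserves a capsule vector's direction while mapping its length into (0, 1) so that length can act as a class-presence probability; a minimal numpy sketch (the example vectors are hypothetical):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule 'squash' nonlinearity: keeps direction, maps length into (0, 1)."""
    sq_norm = (s ** 2).sum(axis=axis, keepdims=True)
    return (sq_norm / (1 + sq_norm)) * s / np.sqrt(sq_norm + eps)

# A vector of length 5 squashes to length 25/26 ≈ 0.96
v = squash(np.array([[3.0, 4.0]]))
print(np.linalg.norm(v))

# Class prediction: the capsule with the greatest output length wins
caps = np.array([[0.2, 0.1, 0.0],    # weak "normal" capsule
                 [1.5, -2.0, 0.5]])  # strong "COVID-19" capsule (hypothetical)
lengths = np.linalg.norm(squash(caps), axis=-1)
pred = int(np.argmax(lengths))
```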
COVLIAS 1.0
Background: COVID-19 is a disease with multiple variants, and is quickly spreading throughout the world. It is crucial to identify patients who are suspected of having COVID-19 early, because the vaccine is not readily available in certain parts of the world. Methodology: Lung computed tomography (CT) imaging can be used to diagnose COVID-19 as an alternative to the RT-PCR test in some cases. The occurrence of ground-glass opacities in the lung region is a characteristic of COVID-19 in chest CT scans, and these are daunting to locate and segment manually. The proposed study consists of a combination of solo deep learning (DL) and hybrid DL (HDL) models to tackle lesion location and segmentation more quickly. One DL and four HDL models—namely, PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet—were trained on annotations from an expert radiologist. The training scheme adopted a fivefold cross-validation strategy on a cohort of 3000 images selected from a set of 40 COVID-19-positive individuals. Results: The proposed variability study uses tracings from two trained radiologists as part of the validation. Five artificial intelligence (AI) models were benchmarked against MedSeg. The best AI model, ResNet-UNet, was superior to MedSeg by 9% and 15% for Dice and Jaccard, respectively, when compared against MD 1, and by 4% and 8%, respectively, when compared against MD 2. Statistical tests—namely, the Mann−Whitney test, paired t-test, and Wilcoxon test—demonstrated its stability and reliability, with p < 0.0001. The online system processed each slice in <1 s. Conclusions: The AI models reliably located and segmented COVID-19 lesions in CT scans. The COVLIAS 1.0Lesion lesion locator passed the intervariability test.
Diagnostics (Basel, Switzerland)
"2022-05-29T00:00:00"
[ "Jasjit S Suri", "Sushant Agarwal", "Gian Luca Chabert", "Alessandro Carriero", "Alessio Paschè", "Pietro S C Danna", "Luca Saba", "Armin Mehmedović", "Gavino Faa", "Inder M Singh", "Monika Turk", "Paramjit S Chadha", "Amer M Johri", "Narendra N Khanna", "Sophie Mavrogeni", "John R Laird", "Gyan Pareek", "Martin Miner", "David W Sobel", "Antonella Balestrieri", "Petros P Sfikakis", "George Tsoulfas", "Athanasios D Protogerou", "Durga Prasanna Misra", "Vikas Agarwal", "George D Kitas", "Jagjit S Teji", "Mustafa Al-Maini", "Surinder K Dhanjil", "Andrew Nicolaides", "Aditya Sharma", "Vijay Rathore", "Mostafa Fatemi", "Azra Alizad", "Pudukode R Krishnan", "Ferenc Nagy", "Zoltan Ruzsa", "Mostafa M Fouda", "Subbaram Naidu", "Klaudija Viskovic", "Manudeep K Kalra" ]
10.3390/diagnostics12051283 10.26355/eurrev_202012_24058 10.1007/s10554-020-02089-9 10.4239/wjd.v12.i3.215 10.1016/j.clinimag.2021.05.016 10.26355/eurrev_202108_26464 10.1101/gr.6.10.995 10.1677/jme.1.01755 10.1148/radiol.2020200432 10.1016/j.pbiomolbio.2006.07.026 10.1016/j.clinimag.2020.04.001 10.1109/TMI.2005.862753 10.1007/s11547-020-01269-w 10.4081/jphr.2021.2270 10.1148/ryct.2020200196 10.1016/j.ajem.2020.04.016 10.1148/radiol.2020200230 10.1016/j.ejrad.2020.109041 10.1016/j.irbm.2020.05.003 10.2214/AJR.20.23034 10.1007/s11604-021-01120-w 10.1148/radiol.2020200343 10.13140/RG 10.1016/j.asoc.2020.106912 10.1109/TIP.2021.3058783 10.1016/j.eng.2020.04.010 10.1016/j.metabol.2017.01.011 10.1308/147870804290 10.1007/s10916-010-9645-2 10.1177/0954411913483637 10.1016/j.cmpb.2012.09.008 10.1007/s11517-012-1019-0 10.1016/j.cmpb.2017.12.016 10.7785/tcrt.2012.500346 10.21037/atm-20-7676 10.1007/s11517-021-02322-0 10.21037/cdt.2019.09.01 10.1007/s11517-018-1897-x 10.1007/s10278-019-00227-x 10.1109/ACCESS.2017.2788044 10.1016/j.media.2017.07.005 10.1146/annurev-bioeng-071516-044442 10.1016/j.array.2019.100004 10.1016/j.jormas.2019.06.002 10.1109/JBHI.2021.3103839 10.1016/j.compbiomed.2021.104721 10.1016/j.compbiomed.2021.104803 10.3390/diagnostics11112025 10.3390/diagnostics11081405 10.1007/s11548-021-02317-0 10.1049/el.2020.2102 10.1049/el.2018.0989 10.2174/1573405616666201231100623 10.1109/TPAMI.2016.2644615 10.3390/sym11010001 10.1117/1.JMI.8.S1.014502 10.1109/TIT.1981.1056373 10.1007/s10479-005-5724-z 10.3390/e22010045 10.1364/BOE.449314 10.3390/s22072724 10.1109/TMI.2020.3002417 10.1093/clinchem/48.5.799 10.11613/BM.2015.015 10.1093/clinchem/39.4.561 10.1177/070674370705200210 10.20982/tqmp.04.1.p013 10.1002/sim.4780040112 10.1016/0169-2607(95)01703-8 10.1007/s10916-016-0504-7 10.1148/ryct.2020200034 10.3389/fmed.2020.00526 10.1016/j.dsx.2020.03.013 10.1177/2048872620974605 10.1080/17476348.2020.1787835 10.3389/fmed.2020.608525 10.7717/peerj-cs.368 
10.1155/2021/5544742 10.1109/TCYB.2021.3123173 10.1155/2021/5208940 10.3390/diagnostics11020158 10.1016/j.cmpb.2021.106406 10.1109/TNNLS.2021.3054746 10.3390/diagnostics10110901 10.1007/s10278-021-00434-5 10.1016/j.compbiomed.2020.104037 10.1016/j.acra.2020.09.004 10.1002/mp.14676 10.1007/s11042-020-10010-8 10.1109/TPAMI.2019.2938758 10.1016/S0031-3203(00)00023-6 10.1007/s11263-010-0392-0 10.1109/TMI.2020.2996645 10.1186/s12880-020-00543-7 10.1109/TEM.2021.3094544 10.1088/0031-9155/41/1/009 10.1007/s00066-013-0464-5 10.1137/0733060 10.1109/TIP.2011.2169270 10.1016/j.ihj.2018.01.024 10.1016/j.compbiomed.2017.08.014 10.1007/s10916-015-0214-6 10.3390/diagnostics11112109 10.1016/j.cmpb.2012.05.008 10.1109/TCSVT.2022.3142771 10.1109/TPAMI.2021.3050918 10.1007/s00521-015-1916-x 10.1016/j.patrec.2017.05.018 10.23736/S0392-9590.21.04771-4 10.21037/cdt.2020.01.07 10.1016/j.compbiomed.2020.103804
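The Dice and Jaccard scores used above to benchmark the segmentation models are simple overlap ratios between binary masks; a minimal sketch with toy 3×3 masks standing in for lesion segmentations:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index (IoU): |A∩B| / |A∪B|."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

# Hypothetical binary lesion masks: AI output vs. radiologist tracing
ai = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]], dtype=bool)
md = np.array([[0, 1, 0], [0, 1, 1], [0, 0, 0]], dtype=bool)
print(dice(ai, md), jaccard(ai, md))
```

Dice is always at least as large as Jaccard for the same pair of masks, which is why both are usually reported together.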
Towards robust diagnosis of COVID-19 using vision self-attention transformer.
The outbreak of COVID-19 has, since its appearance, affected about 200 countries and endangered millions of lives. COVID-19 is an extremely contagious disease, and it can quickly incapacitate healthcare systems if infected cases are not handled in a timely manner. Several convolutional neural network (CNN)-based techniques have been developed to diagnose COVID-19. These techniques require a large labelled dataset to train the algorithm fully, but few such labelled datasets are available. To mitigate this problem and facilitate the diagnosis of COVID-19, we developed a transformer-based approach with a self-attention mechanism that uses CT slices. The transformer architecture can exploit ample unlabelled datasets through pre-training. The paper aims to compare the performance of the self-attention transformer-based approach with CNN and ensemble classifiers for diagnosis of COVID-19, using the binary Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection dataset and the multi-class Hybrid-learning for UnbiaSed predicTion of COVID-19 (HUST-19) CT scan dataset. To perform this comparison, we tested deep learning-based classifiers and ensemble classifiers against the proposed approach on CT scan images. The proposed approach is more effective in detecting COVID-19, with an accuracy of 99.7% on the multi-class HUST-19 dataset and 98% on the binary-class SARS-CoV-2 dataset. Cross-corpus evaluation achieves 93% accuracy by training the model on the HUST-19 dataset and testing on the Brazilian COVID dataset.
Scientific reports
"2022-05-27T00:00:00"
[ "Fozia Mehboob", "Abdul Rauf", "Richard Jiang", "Abdul Khader Jilani Saudagar", "Khalid Mahmood Malik", "Muhammad Badruddin Khan", "Mozaherul Hoque Abdul Hasnat", "Abdullah AlTameem", "Mohammed AlKhathami" ]
10.1038/s41598-022-13039-x 10.1016/j.bea.2021.100003 10.2214/AJR.20.22954 10.1038/s41598-020-79139-8 10.1148/radiol.2020200905 10.1101/2020.02.14.20023028 10.1038/s41598-019-56847-4 10.1007/s11547-020-01232-9 10.3389/fbioe.2020.00670 10.1016/j.cmrp.2020.03.011 10.1007/s43465-020-00129-z 10.1016/j.susoc.2021.02.001 10.1016/j.compbiomed.2020.103795 10.1109/ACCESS.2021.3058854 10.1038/s41551-020-00633-5 10.1016/j.imu.2020.100427
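The self-attention mechanism at the core of the transformer approach above is scaled dot-product attention; a single-head numpy sketch, with random projection weights standing in for learned ones and four token embeddings standing in for CT-slice patches:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a token sequence."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])            # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over tokens
    return weights @ v                                 # attention-weighted mix of values

rng = np.random.default_rng(0)
tokens, d = 4, 8                 # e.g. 4 patch embeddings of dimension 8
x = rng.normal(size=(tokens, d))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)
```

Each output row is a context-dependent mixture of all value vectors, which is what lets the model relate distant regions of a slice in one step.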
An Interpretable Chest CT Deep Learning Algorithm for Quantification of COVID-19 Lung Disease and Prediction of Inpatient Morbidity and Mortality.
The burden of coronavirus disease 2019 (COVID-19) airspace opacities is time-consuming and challenging to quantify on computed tomography. The purpose of this study was to evaluate the ability of a deep convolutional neural network (dCNN) to predict inpatient outcomes associated with COVID-19 pneumonia. A previously trained dCNN was tested on an external validation cohort of 241 patients who presented to the emergency department and received a chest computed tomography scan, 93 with COVID-19 and 168 without. Airspace opacity scoring systems were defined by the extent of airspace opacity in each lobe, totaled across the entire lungs. Expert and dCNN scores were concurrently evaluated for interobserver agreement, while both dCNN-identified airspace opacity scores and raw opacity values were used in the prediction of COVID-19 diagnosis and inpatient outcomes. Interobserver agreement for airspace opacity scoring was 0.892 (95% CI 0.834-0.930). The probability of each outcome behaved as a logistic function of the opacity score (25% intensive care unit admission at a score of 13/25, 25% intubation at 17/25, and 25% mortality at 20/25). Length of hospitalization, intensive care unit stay, and intubation were associated with larger airspace opacity scores (p = 0.032, 0.039, and 0.036, respectively). The tested dCNN was highly predictive of inpatient outcomes, performs at a near-expert level, and provides added value for clinicians in terms of prognostication and disease severity.
Academic radiology
"2022-05-25T00:00:00"
[ "Jordan H Chamberlin", "Gilberto Aquino", "Uwe Joseph Schoepf", "Sophia Nance", "Franco Godoy", "Landin Carson", "Vincent M Giovagnoli", "Callum E Gill", "Liam J McGill", "Jim O'Doherty", "Tilman Emrich", "Jeremy R Burt", "Dhiraj Baruah", "Akos Varga-Szemes", "Ismail M Kabakus" ]
10.1016/j.acra.2022.03.023
SPIE Computer-Aided Diagnosis conference anniversary review.
The SPIE Computer-Aided Diagnosis conference has been held for 16 consecutive years at the annual SPIE Medical Imaging symposium. The conference remains vibrant, with a core group of submitters as well as new submitters and attendees each year. Recent developments include a marked shift in submissions relating to the artificial intelligence revolution in medical image analysis. This review describes the topics and trends observed in research presented at the Computer-Aided Diagnosis conference as part of the 50th-anniversary celebration of SPIE Medical Imaging.
Journal of medical imaging (Bellingham, Wash.)
"2022-05-25T00:00:00"
[ "Ronald M Summers", "Maryellen L Giger" ]
10.1117/1.JMI.9.S1.012208 10.1117/12.911612 10.1117/12.2254262 10.1117/12.2293719 10.1117/12.2083124 10.1117/12.2216307 10.1117/12.2293699 10.1117/12.2293725 10.1117/12.2293408 10.1117/12.2277121 10.1117/12.2254423 10.1117/12.2580948 10.1117/12.2582102 10.1117/12.2582130 10.1117/12.2581977 10.1117/12.2581057 10.1117/12.2582179 10.1117/12.2580892 10.1117/12.2581873 10.1117/12.2582318 10.1117/12.2580738 10.1117/12.772298 10.1117/12.2216747 10.1117/12.967022 10.1117/12.713672 10.1117/12.2081489 10.1117/12.2082820 10.1117/12.91090 10.1117/12.878180 10.1117/12.911398 10.1117/12.2253905 10.1117/12.2083128 10.1117/12.2217480 10.1117/12.2008083 10.1117/12.2216958 10.1117/12.844932 10.1117/12.2217752 10.1117/12.2217587 10.1117/12.2512208 10.1117/12.2550868 10.1117/12.2511787 10.1117/12.2293140 10.1117/12.2293140 10.1117/12.2007822 10.1117/12.2044333 10.1117/12.2043737 10.1117/12.912420 10.1117/12.2082488 10.1117/12.713857 10.1117/12.773016 10.1117/12.2582115 10.1117/12.713851 10.1117/12.844511 10.1117/12.713640 10.1117/12.708819 10.1117/12.910531 10.1117/12.812971 10.1117/12.2043343 10.1117/12.2082691 10.1117/12.2217775 10.1117/12.2254128 10.1117/12.2254516 10.1117/12.2513561 10.1117/12.710088 10.1117/12.2043648 10.1117/12.811654 10.1117/12.769824 10.1117/12.878196 10.1117/12.844571 10.1117/12.911708 10.1117/12.709780 10.1117/12.844352 10.1117/12.2217681 10.1117/12.911177 10.1117/12.912847 10.1117/12.2081977 10.1117/12.2081480 10.1117/12.2254187 10.1117/12.2007546 10.1117/12.2254476 10.1117/12.2007738 10.1117/12.2008034 10.1117/12.2551093 10.1117/12.2255626 10.1117/12.2549873 10.1117/12.2007979 10.1117/12.2043751 10.1117/12.771970 10.1117/12.2007927 10.1117/12.2217906 10.1117/12.2082309 10.1117/12.911836 10.1117/12.844406 10.1117/12.2008282 10.1117/12.2080871 10.1117/12.2208583 10.1117/12.2216645 10.1117/12.2582203 10.1117/12.2216978 10.1117/12.811639 10.1117/12.911216 10.1117/12.2513134 10.1117/12.2254136 10.1117/12.2250910 10.1117/12.2255247 10.1117/12.2513228 
10.1117/12.2083596 10.1117/12.709410 10.1117/12.845530 10.1117/12.2007829 10.1117/12.2081600 10.1117/12.2217084 10.1117/12.2255553 10.1117/12.2251322 10.1117/12.2254212 10.1117/12.2293199 10.1117/12.2295374 10.1117/12.878055 10.1117/12.911703 10.1117/12.911169 10.1117/12.911700 10.1117/12.911335 10.1117/12.2043755 10.1117/12.2216173 10.1117/12.713732 10.1117/12.2007970 10.1117/12.2548857 10.1117/12.2293495 10.1117/12.812968 10.1117/12.769858 10.1117/12.812088 10.1117/12.813468 10.1117/12.2081521 10.1117/12.2216917 10.1117/12.2216929 10.1117/12.2512584 10.1117/12.2293207 10.1117/12.2293661 10.1117/12.2293764 10.1117/12.2513567 10.1117/12.2292573 10.1117/12.2295178 10.1117/12.2082811 10.1117/12.813892 10.1117/12.2217382 10.1117/12.911500 10.1117/12.2082226 10.1117/12.2043791 10.1117/12.2293334 10.1117/12.2293297 10.1117/12.2082204 10.1117/12.2292962 10.1117/12.2042469
Deep learning based model for classification of COVID -19 images for healthcare research progress.
As imaging technology plays an important role in the diagnosis and evaluation of the novel coronavirus pneumonia (COVID-19), COVID-19-related datasets have been published one after another, yet there is relatively little literature organizing and reviewing them. To this end, drawing on COVID-19-related journal papers, reports, and open-source dataset websites, we organize and analyze the COVID-19 datasets and the deep learning models involved, including computed tomography (CT) image datasets and chest X-ray (CXR) image datasets. We analyze the characteristics of the medical images presented in these datasets, focusing on the open-source datasets as well as on classification and segmentation models that perform well on them. Finally, the future development trend of lung imaging technology is discussed.
Materials today. Proceedings
"2022-05-24T00:00:00"
[ "Saroj Kumar", "L Chandra Sekhar Redd", "Susheel George Joseph", "Vinay Kumar Sharma", "Sabireen H" ]
10.1016/j.matpr.2022.04.884 10.1109/NSS/MIC42677.2020.9507847 10.1109/TNNLS.2021.3099165 10.1109/JBHI.2021.3067465 10.1109/JBHI.2020.3042523 10.1109/TBDATA.2020.3035935 10.1109/ICSPIS51611.2020.9349605 10.1108/WJE-09-2020-0450 10.1108/WJE-12-2020-0631 10.1109/TMI.2020.2995965 10.1109/ITCA52113.2020.00146 10.1109/TII.2021.3059023 10.1007/s11277-021-08565-2 10.1007/s11277-021-08767-8
Automatic detection of pneumonia in chest X-ray images using textural features.
Fast and accurate diagnosis is critical for the triage and management of pneumonia, particularly in the current scenario of a COVID-19 pandemic, where this pathology is a major symptom of the infection. With the objective of providing tools for that purpose, this study assesses the potential of three textural image characterisation methods: radiomics, fractal dimension and the recently developed superpixel-based histon, as biomarkers to be used for training Artificial Intelligence (AI) models in order to detect pneumonia in chest X-ray images. Models generated from three different AI algorithms have been studied: K-Nearest Neighbors, Support Vector Machine and Random Forest. Two open-access image datasets were used in this study. In the first one, a dataset composed of paediatric chest X-rays, the best performing generated models achieved an 83.3% accuracy with 89% sensitivity for radiomics, 89.9% accuracy with 93.6% sensitivity for fractal dimension and 91.3% accuracy with 90.5% sensitivity for the superpixel-based histon. Second, a dataset derived from an image repository developed primarily as a tool for studying COVID-19 was used. For this dataset, the best performing generated models resulted in a 95.3% accuracy with 99.2% sensitivity for radiomics, 99% accuracy with 100% sensitivity for fractal dimension and 99% accuracy with 98.6% sensitivity for the superpixel-based histon. The results confirm the validity of the tested methods as reliable and easy-to-implement automatic diagnostic tools for pneumonia.
Computers in biology and medicine
"2022-05-20T00:00:00"
[ "César Ortiz-Toro", "Angel García-Pedrero", "Mario Lillo-Saavedra", "Consuelo Gonzalo-Martín" ]
10.1016/j.compbiomed.2022.105466 10.1109/NAFIPS.2000.877448 10.1109/ICCV.2003.1238308
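The fractal dimension biomarker mentioned above is typically estimated by box counting: count the boxes that contain foreground at several scales and fit a line in log-log space. A minimal sketch (a filled square should come out near dimension 2; real use would apply it to a thresholded texture):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a square binary image by box counting."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # Tile the image into s x s boxes and count boxes containing any foreground
        boxed = mask[:n - n % s, :n - n % s].reshape(n // s, s, -1, s)
        counts.append(boxed.any(axis=(1, 3)).sum())
    # Slope of log(count) vs log(1/size) is the dimension estimate
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope

square = np.ones((16, 16), dtype=bool)   # a filled region, dimension 2
d = box_counting_dimension(square)
print(round(d, 2))
```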
Automated COVID-19 Grading With Convolutional Neural Networks in Computed Tomography Scans: A Systematic Comparison.
Amidst the ongoing pandemic, the assessment of computed tomography (CT) images for COVID-19 presence can exceed the workload capacity of radiologists. Several studies addressed this issue by automating COVID-19 classification and grading from CT images with convolutional neural networks (CNNs). Many of these studies reported initial results of algorithms that were assembled from commonly used components. However, the choice of the components of these algorithms was often pragmatic rather than systematic and systems were not compared to each other across papers in a fair manner. We systematically investigated the effectiveness of using 3-D CNNs instead of 2-D CNNs for seven commonly used architectures, including DenseNet, Inception, and ResNet variants. For the architecture that performed best, we furthermore investigated the effect of initializing the network with pretrained weights, providing automatically computed lesion maps as additional network input, and predicting a continuous instead of a categorical output. A 3-D DenseNet-201 with these components achieved an area under the receiver operating characteristic curve of 0.930 on our test set of 105 CT scans and an AUC of 0.919 on a publicly available set of 742 CT scans, a substantial improvement in comparison with a previously published 2-D CNN. This article provides insights into the performance benefits of various components for COVID-19 classification and grading systems. We have created a challenge on grand-challenge.org to allow for a fair comparison between the results of this and future research.
IEEE transactions on artificial intelligence
"2022-05-19T00:00:00"
[ "Coen de Vente", "Luuk H Boulogne", "Kiran Vaidhya Venkadesh", "Cheryl Sital", "Nikolas Lessmann", "Colin Jacobs", "Clara I Sanchez", "Bram van Ginneken" ]
10.1109/TAI.2021.3115093
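The area under the ROC curve reported above has a direct Mann-Whitney interpretation: the probability that a randomly chosen positive scan receives a higher score than a randomly chosen negative one. A minimal sketch with hypothetical per-scan probabilities:

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: P(random positive outscores
    random negative), with ties counting half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical per-scan COVID-19 probabilities from a grading CNN
scores = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2])
labels = np.array([1, 1, 0, 1, 0, 0])
print(auc(scores, labels))
```

This pairwise formulation makes clear why AUC is insensitive to any monotone rescaling of the model's output scores.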
A Deep Learning Framework Integrating the Spectral and Spatial Features for Image-Assisted Medical Diagnostics.
The development of a computer-aided disease detection system to ease the long and arduous manual diagnostic process is an emerging research interest. Living through the recent outbreak of the COVID-19 virus, we propose a machine learning and computer vision algorithms-based automatic diagnostic solution for detecting the COVID-19 infection. Our proposed method applies to chest radiographs and uses readily available infrastructure. No studies in this direction have considered the spectral aspect of the medical images. This motivates us to investigate the role of spectral-domain information of medical images, along with the spatial content, towards improved disease detection ability. Successful integration of spatial and spectral features is demonstrated on the COVID-19 infection detection task. Our proposed method comprises three stages: feature extraction, dimensionality reduction via projection, and prediction. At first, images are transformed into the spectral and spatio-spectral domains by using the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT), two powerful image processing algorithms. Next, features from the spatial, spectral, and spatio-spectral domains are projected into a lower dimension through a Convolutional Neural Network (CNN), and those three types of projected features are then fed to a Multilayer Perceptron (MLP) for final prediction. The combination of the three types of features yielded superior performance over any of the features used individually, indicating the presence of complementary information in the spectral domain of the chest radiograph to characterize the considered medical condition. Moreover, saliency maps corresponding to classes representing different medical conditions demonstrate the reliability of the proposed method. The study is further extended to identify different medical conditions using diverse medical image datasets and shows the efficiency of leveraging the combined features. Altogether, the proposed method exhibits potential as a generalized and robust medical image-assisted diagnostic solution.
IEEE access : practical innovations, open solutions
"2022-05-19T00:00:00"
[ "SusmitaGhosh", "SwagatamDas", "RammohanMallipeddi" ]
10.1109/ACCESS.2021.3133338
Deep learning model for the automatic classification of COVID-19 pneumonia, non-COVID-19 pneumonia, and the healthy: a multi-center retrospective study.
This retrospective study aimed to develop and validate a deep learning model for the classification of coronavirus disease-2019 (COVID-19) pneumonia, non-COVID-19 pneumonia, and the healthy using chest X-ray (CXR) images. One private and two public datasets of CXR images were included. The private dataset included CXR from six hospitals. A total of 14,258 and 11,253 CXR images were included in the two public datasets, and 455 in the private dataset. A deep learning model based on EfficientNet with noisy student was constructed using the three datasets. The test set of 150 CXR images in the private dataset was evaluated by the deep learning model and six radiologists. Three-category classification accuracy and class-wise area under the curve (AUC) for each of COVID-19 pneumonia, non-COVID-19 pneumonia, and the healthy were calculated. The consensus of the six radiologists was used for calculating class-wise AUC. The three-category classification accuracy of our model was 0.8667, while those of the six radiologists ranged from 0.5667 to 0.7733. For our model and the consensus of the six radiologists, the class-wise AUCs for the healthy, non-COVID-19 pneumonia, and COVID-19 pneumonia were 0.9912, 0.9492, and 0.9752 and 0.9656, 0.8654, and 0.8740, respectively. The difference in class-wise AUC between our model and the consensus of the six radiologists was statistically significant for COVID-19 pneumonia (p value = 0.001334). Thus, an accurate deep learning model for the three-category classification could be constructed; the diagnostic performance of our model was significantly better than that of the consensus interpretation by the six radiologists for COVID-19 pneumonia.
Scientific reports
"2022-05-18T00:00:00"
[ "MizuhoNishio", "DaigoKobayashi", "EikoNishioka", "HidetoshiMatsuo", "YasuyoUrase", "KojiOnoue", "ReiichiIshikura", "YuriKitamura", "EiroSakai", "MasaruTomita", "AkihiroHamanaka", "TakamichiMurakami" ]
10.1038/s41598-022-11990-3 10.1148/radiol.2020200432 10.1148/radiol.2020200823 10.1007/s13244-018-0639-9 10.1016/j.patcog.2020.107700 10.1109/ACCESS.2021.3058537 10.1016/j.compbiomed.2020.103792 10.1038/s41598-019-56847-4 10.3389/fmed.2021.629134 10.1148/radiol.2020203511 10.1016/S0140-6736(18)31645-3 10.1038/nature21056 10.1001/jama.2016.17216 10.1016/j.compbiomed.2021.104375 10.1016/j.compbiomed.2020.104181 10.1007/s11263-019-01228-7 10.1016/j.media.2020.101797 10.1038/s41746-021-00399-3 10.1186/1471-2105-12-77 10.3348/kjr.2021.0048 10.1016/S1071-5819(03)00038-7
The state of the art for artificial intelligence in lung digital pathology.
Lung diseases carry a significant burden of morbidity and mortality worldwide. The advent of digital pathology (DP) and an increase in computational power have led to the development of artificial intelligence (AI)-based tools that can assist pathologists and pulmonologists in improving clinical workflow and patient management. While previous works have explored the advances in computational approaches for breast, prostate, and head and neck cancers, there has been a growing interest in applying these technologies to lung diseases as well. The application of AI tools on radiology images for better characterization of indeterminate lung nodules, fibrotic lung disease, and lung cancer risk stratification has been well documented. In this article, we discuss methodologies used to build AI tools in lung DP, describing the various hand-crafted and deep learning-based unsupervised feature approaches. Next, we review AI tools across a wide spectrum of lung diseases including cancer, tuberculosis, idiopathic pulmonary fibrosis, and COVID-19. We discuss the utility of novel imaging biomarkers for different types of clinical problems including quantification of biomarkers like PD-L1, lung disease diagnosis, risk stratification, and prediction of response to treatments such as immune checkpoint inhibitors. We also look briefly at some emerging applications of AI tools in lung DP such as multimodal data analysis, 3D pathology, and transplant rejection. Lastly, we discuss the future of DP-based AI tools, describing the challenges with regulatory approval, developing reimbursement models, planning clinical deployment, and addressing AI biases.
The Journal of pathology
"2022-05-18T00:00:00"
[ "Vidya SankarViswanathan", "PaulaToro", "GermánCorredor", "SanjayMukhopadhyay", "AnantMadabhushi" ]
10.1002/path.5966 10.23919/SpliTech.2019.8783041 10.1117/12.2296646 10.1117/12.2542360 10.1117/12.2293147 10.1109/ICCVW.2017.15 10.1016/j.semcancer.2021.02.011
Effective multiscale deep learning model for COVID19 segmentation tasks: A further step towards helping radiologist.
Infection by SARS-CoV-2 leading to COVID-19 disease is still rising, and techniques to either diagnose or evaluate the disease are still thoroughly investigated. The use of CT as a complementary tool to other biological tests is still under scrutiny, as CT scans are prone to many false positives, with other lung diseases displaying similar characteristics on CT scans. However, fully investigating CT images is of tremendous interest to better understand the disease progression, and therefore thousands of scans need to be segmented by radiologists to study infected areas. Over the last year, many deep learning models for segmenting CT-lungs were developed. Unfortunately, the lack of large and shared annotated multicentric datasets led to models that were either under-tested (small dataset) or not properly compared (own metrics, no shared dataset), often leading to poor generalization performance. To address these issues, we developed a model that uses a multiscale and multilevel feature extraction strategy for COVID-19 segmentation and extensively validated it on several datasets to assess its generalization capability for other segmentation tasks on similar organs. The proposed model uses a novel encoder and decoder with a kernel-based atrous spatial pyramid pooling module at the bottom of the model to extract small features with a multistage skip connection concatenation approach. The results proved that our proposed model could be applied on a small-scale dataset and still produce generalizable performance on other segmentation tasks. The proposed model produced a Dice score of 90% on a 100-case dataset, 95% on the NSCLC dataset, 88.49% on the COVID-19 dataset, and 97.33% on the StructSeg 2019 dataset, as compared to existing state-of-the-art models. The proposed solution could be used for COVID-19 segmentation in clinical applications. The source code is publicly available at https://github.com/RespectKnowledge/Mutiscale-based-Covid-_segmentation-usingDeep-Learning-models.
Neurocomputing
"2022-05-18T00:00:00"
[ "AbdulQayyum", "AlainLalande", "FabriceMeriaudeau" ]
10.1016/j.neucom.2022.05.009 10.1148/radiol.2020200905 10.1109/TPAMI.2017.2699184
A lightweight CNN-based network on COVID-19 detection using X-ray and CT images.
The traditional method of detecting COVID-19 disease mainly relies on the interpretation of computed tomography (CT) or X-ray images by doctors or professional researchers to identify whether it is COVID-19 disease, which is prone to identification mistakes. In this study, convolutional neural network technology is expected to efficiently and accurately identify COVID-19 disease. This study uses and fine-tunes seven convolutional neural networks, including InceptionV3, ResNet50V2, Xception, DenseNet121, MobileNetV2, EfficientNet-B0, and EfficientNetV2, for COVID-19 detection. In addition, we propose a lightweight convolutional neural network, LightEfficientNetV2, trained on a small number of chest X-ray and CT images. Five-fold cross-validation was used to evaluate the performance of each model. To confirm the performance of the proposed model, LightEfficientNetV2 was evaluated on three different datasets (NIH Chest X-rays, SARS-CoV-2 and COVID-CT). On the chest X-ray image dataset, the highest accuracy before fine-tuning was 96.50%, from InceptionV3; the highest accuracy after fine-tuning was 97.73%, from EfficientNetV2. The accuracy of the LightEfficientNetV2 model proposed in this study is 98.33% on chest X-ray images. On CT images, the best transfer learning model before fine-tuning is MobileNetV2, with an accuracy of 94.46%; the best transfer learning model after fine-tuning is Xception, with an accuracy of 96.78%. The accuracy of the LightEfficientNetV2 model proposed in this study is 97.48% on CT images. Compared with the SOTA, LightEfficientNetV2 proposed in this study demonstrates promising performance on chest X-ray images, CT images and three different datasets.
Computers in biology and medicine
"2022-05-17T00:00:00"
[ "Mei-LingHuang", "Yu-ChiehLiao" ]
10.1016/j.compbiomed.2022.105604 10.1016/j.imu.2020.100405 10.1016/j.asoc.2020.106912 10.1016/j.compmedimag.2019.05.005 10.1016/j.bspc.2021.103182 10.1016/j.imu.2020.100505 10.1016/j.mlwa.2021.100138 10.1016/j.chaos.2020.110071 10.1016/j.bbe.2021.09.004 10.1016/j.imu.2021.100620 10.1016/j.compbiomed.2022.105244 10.1016/j.eswa.2021.114883 10.1007/s13246-020-00865-4 10.1016/j.asoc.2020.106691 10.1016/j.asoc.2020.106859 10.1016/j.imu.2020.100360 10.1016/j.asoc.2021.107675 10.1016/j.compbiomed.2021.105134 10.1016/j.iot.2021.100377 10.1016/j.compbiomed.2020.103795 10.1016/j.compbiomed.2021.104857 10.1016/j.patrec.2021.08.035 10.1016/j.compbiomed.2021.104575 10.1016/j.jiph.2021.07.015 10.1016/j.displa.2022.102150 10.32604/cmc.2021.018040 10.1016/j.ibmed.2021.100027 10.1016/j.asoc.2020.106885 10.1016/j.compbiomed.2021.104608 10.1007/s11760-021-01991-6 10.1016/j.compbiomed.2021.104729 10.1016/j.patcog.2021.107848 10.1016/j.bspc.2021.102920 10.1016/j.bbe.2021.06.011 10.1016/j.measurement.2021.110289 10.1016/j.patrec.2021.06.021 10.1016/j.compbiomed.2021.104742 10.1016/j.imu.2021.100687 10.1016/j.ultrasmedbio.2022.01.023 10.1016/j.compbiomed.2021.105002 10.1016/j.aej.2021.01.011 10.1016/j.compbiomed.2021.104348 10.1016/j.chaos.2020.110495 10.1016/j.chaos.2020.110190 10.1016/j.bspc.2021.102987 10.1016/j.neucom.2021.06.012 10.1016/j.compbiomed.2021.105014 10.1109/JPROC.2020.3004555 10.1016/j.compag.2021.106184 10.1109/CVPR.2015.7298594 10.1109/CVPR.2016.308 10.1109/CVPR.2016.90 10.1109/CVPR.2017.195 10.1109/CVPR.2017.243 10.1016/j.chaos.2020.109944 10.1016/j.patrec.2018.10.027 10.1016/j.compbiomed.2021.105127 10.1101/2020.04.13.20063941 10.1016/j.patrec.2020.10.001 10.1109/CVPR.2017.369 10.1109/CVPR.2018.00865 10.1016/j.imu.2020.100391 10.1101/2020.04.24.20078584 10.1080/07391102.2020.1788642
The effect of machine learning explanations on user trust for automated diagnosis of COVID-19.
Recent years have seen deep neural networks (DNN) gain widespread acceptance for a range of computer vision tasks that include medical imaging. Motivated by their performance, multiple studies have focused on designing deep convolutional neural network architectures tailored to detect COVID-19 cases from chest computerized tomography (CT) images. However, a fundamental challenge of DNN models is their inability to explain the reasoning for a diagnosis. Explainability is essential for medical diagnosis, where understanding the reason for a decision is as important as the decision itself. A variety of algorithms have been proposed that generate explanations and strive to enhance users' trust in DNN models. Yet, the influence of the generated machine learning explanations on clinicians' trust for complex decision tasks in healthcare has not been understood. This study evaluates the quality of explanations generated for a deep learning model that detects COVID-19 based on CT images and examines the influence of the quality of these explanations on clinicians' trust. First, we collect radiologist-annotated explanations of the CT images for the diagnosis of COVID-19 to create the ground truth. We then compare ground truth explanations with machine learning explanations. Our evaluation shows that the explanations produced by different algorithms were often correct (high precision) when compared to the radiologist-annotated ground truth, but a significant number of explanations were missed (significantly lower recall). We further conduct a controlled experiment to study the influence of machine learning explanations on clinicians' trust for the diagnosis of COVID-19. Our findings show that while the clinicians' trust in automated diagnosis increases with the explanations, their reliance on the diagnosis decreases, as clinicians are less likely to rely on algorithms that are not close to human judgement. Clinicians want higher recall of the explanations for a better understanding of an automated diagnosis system.
Computers in biology and medicine
"2022-05-14T00:00:00"
[ "KanikaGoel", "RenukaSindhgatta", "SumitKalra", "RohanGoel", "PreetiMutreja" ]
10.1016/j.compbiomed.2022.105587 10.1148/radiol.2020200432 10.3389/fmed.2020.608525 10.3390/electronics10050593 10.1145/2939672.2939778 10.1109/ICCV.2019.00304 10.3390/jimaging6060052 10.1117/12.2549298 10.1038/s41598-020-76550-z 10.1109/re.2019.00032 10.1145/3290607.3312962 10.2139/ssrn.3064761 10.1007/978-3-030-29726-8_7 10.1016/j.cell.2020.04.045 10.1007/s11263-017-1059-x 10.1109/CVPR.2015.7298640 10.1016/j.media.2020.101857 10.1016/j.inffus.2021.01.008
SSA-Net: Spatial self-attention network for COVID-19 pneumonia infection segmentation with semi-supervised few-shot learning.
Coronavirus disease (COVID-19) broke out at the end of 2019, and has resulted in an ongoing global pandemic. Segmentation of pneumonia infections from chest computed tomography (CT) scans of COVID-19 patients is significant for accurate diagnosis and quantitative analysis. Deep learning-based methods can be developed for automatic segmentation and offer great potential to strengthen timely quarantine and medical treatment. Unfortunately, due to the urgent nature of the COVID-19 pandemic, a systematic collection of CT data sets for deep neural network training is quite difficult, and high-quality annotations of multi-category infections are especially limited. In addition, it is still a challenge to segment the infected areas from CT slices because of the irregular shapes and fuzzy boundaries. To solve these issues, we propose a novel COVID-19 pneumonia lesion segmentation network, called Spatial Self-Attention network (SSA-Net), to identify infected regions from chest CT images automatically. In our SSA-Net, a self-attention mechanism is utilized to expand the receptive field and enhance the representation learning by distilling useful contextual information from deeper layers without extra training time, and spatial convolution is introduced to strengthen the network and accelerate the training convergence. Furthermore, to alleviate the insufficiency of labeled multi-class data and the long-tailed distribution of training data, we present a semi-supervised few-shot iterative segmentation framework based on re-weighting the loss and selecting prediction values with high confidence, which can accurately classify different kinds of infections with a small number of labeled image data. Experimental results show that SSA-Net outperforms state-of-the-art medical image segmentation networks and provides clinically interpretable saliency maps, which are useful for COVID-19 diagnosis and patient triage. Meanwhile, our semi-supervised iterative segmentation model can improve learning ability on small and unbalanced training sets and can achieve higher performance.
Medical image analysis
"2022-05-12T00:00:00"
[ "XiaoyanWang", "YiwenYuan", "DongyanGuo", "XiaojieHuang", "YingCui", "MingXia", "ZhenhuaWang", "CongBai", "ShengyongChen" ]
10.1016/j.media.2022.102459 10.1148/radiol.2020200642 10.1148/radiol.2020200823 10.1016/S0140-6736(20)30211-7 10.1148/radiol.2020200230 10.1109/RBME.2020.2990959 10.1101/2020.04.22.20074948 10.1148/radiol.2020200432 10.1016/j.media.2020.101836 10.1109/TMI.2019.2903562 10.1109/cvpr.2016.90 10.1101/2020.04.13.20063941 10.1186/s41747-020-00173-2 10.1109/ICCV.2019.00110 10.1016/S0140-6736(20)30183-5 10.5281/zenodo.3757476 10.1109/TMI.2020.2992546 10.1016/j.media.2020.101851 10.1109/cvpr42600.2020.00487 10.1148/radiol.2020200236 10.1016/j.media.2020.101794 10.1109/tmi.2020.2993291 10.1109/isbi45749.2020.9098541 10.1007/978-3-319-24574-4_28 10.1148/radiol.2020201365 10.1109/iccv.2017.74 10.1002/mp.14609 10.1109/rbme.2020.2987975 10.1109/tmi.2020.3000314 10.1109/tmi.2020.2994908 10.1109/tmi.2020.2995965 10.1109/cvpr42600.2020.01229 10.1148/radiol.2020201160 10.1007/978-3-030-58548-8_10 10.1148/radiol.2020200343 10.1109/cvpr42600.2020.01308 10.1109/cvpr.2016.319 10.1109/tmi.2020.3001810
Study on transfer learning capabilities for pneumonia classification in chest-x-rays images.
Over the last year, the severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) and its variants have highlighted the importance of screening tools with high diagnostic accuracy for new illnesses such as COVID-19. In that regard, deep learning approaches have proven as effective solutions for pneumonia classification, especially when considering chest X-ray images. However, this lung infection can also be caused by other viral, bacterial or fungal pathogens. Consequently, efforts are being poured toward distinguishing the infection source to help clinicians diagnose the correct disease origin. Following this tendency, this study further explores the effectiveness of established neural network architectures on the pneumonia classification task through the transfer learning paradigm. To present a comprehensive comparison, 12 well-known ImageNet pre-trained models were fine-tuned and used to discriminate among chest X-rays of healthy people and those showing pneumonia symptoms derived from either a viral (i.e., generic or SARS-CoV-2) or bacterial source. Furthermore, since a common public collection distinguishing between such categories is currently not available, two distinct datasets of chest X-ray images, describing the aforementioned sources, were combined and employed to evaluate the various architectures. The experiments were performed using a total of 6330 images split between train, validation, and test sets. For all models, standard classification metrics were computed (e.g., precision, f1-score), and most architectures obtained significant performances, reaching, among the others, up to 84.46% average f1-score when discriminating the four identified classes. Moreover, execution times, areas under the receiver operating characteristic curve (AUROC), confusion matrices, activation maps computed via the Grad-CAM algorithm, and additional experiments to assess the robustness of each model using only 50%, 20%, and 10% of the training set were also reported to present an informed discussion on the networks' classifications. This paper examines the effectiveness of well-known architectures on a joint collection of chest X-rays presenting pneumonia cases derived from either viral or bacterial sources, with particular attention to SARS-CoV-2 contagions for viral pathogens, demonstrating that existing architectures can effectively diagnose pneumonia sources and suggesting that the transfer learning paradigm could be a crucial asset in diagnosing future unknown illnesses.
Computer methods and programs in biomedicine
"2022-05-11T00:00:00"
[ "DaniloAvola", "AndreaBacciu", "LuigiCinque", "AlessioFagioli", "Marco RaoulMarini", "RiccardoTaiello" ]
10.1016/j.cmpb.2022.106833 10.1001/jamainternmed.2014.4344 10.1001/jama.2017.9039 10.1259/bjr/31200593 10.1016/S0140-6736(20)30183-5 10.1016/j.scs.2020.102589 10.1038/d41586-021-00396-2 10.1016/S0140-6736(21)00370-6 10.1038/s41591-021-01345-2 10.1016/j.cmpb.2018.05.034 10.1016/j.cmpb.2020.105728 10.1016/j.cmpb.2020.105348 10.1145/3447243 10.1016/j.media.2020.101794 10.1016/j.cmpb.2020.105608 10.1016/j.asoc.2020.106906 10.1109/JBHI.2020.3037127 10.1109/TMI.2020.3040950 10.1109/TNNLS.2021.3054306 10.1016/j.asoc.2021.107160 10.1016/j.compbiomed.2020.103792 10.1016/j.asoc.2020.106859 10.1109/TII.2021.3057683 10.1016/j.asoc.2021.107645 10.1016/j.asoc.2020.106580 10.1016/j.cmpb.2020.105581 10.1109/TMI.2020.2993291 10.1109/JBHI.2021.3058293 10.1016/j.compbiomed.2021.104401 10.1016/j.compbiomed.2020.103869 10.1016/j.asoc.2020.106744 10.1016/j.patcog.2021.107826 10.1016/j.knosys.2020.106647 10.3390/s21062215 10.1038/s41591-021-01506-3 10.1109/JSEN.2021.3076767 10.1007/978-3-030-78618-2_4 10.1016/j.cmpb.2021.106004 10.1016/S2589-7500(20)30102-3 10.1016/S2589-7500(21)00076-5 10.1016/j.asoc.2020.106691 10.1145/3437120.3437300 10.1007/s13246-020-00865-4 10.1016/j.asoc.2020.106912 10.1016/j.irbm.2020.05.003 10.1109/ACCESS.2020.3016780 10.1016/j.cell.2018.02.010 10.1007/s11263-019-01228-7 10.1109/TKDE.2009.191 10.1016/j.media.2019.03.009 10.1145/3065386 10.1109/CVPR.2017.243 10.1109/CVPR.2015.7298594 10.1109/CVPR.2019.00293 10.1109/CVPR.2018.00474 10.1109/ICCV.2019.00140 10.1109/CVPR.2016.90 10.1109/CVPR.2017.634 10.1109/CVPR.2018.00716 10.1007/s11263-015-0816-y 10.1109/CVPR.2018.00745 10.1145/2487575.2487629
DMDF-Net: Dual multiscale dilated fusion network for accurate segmentation of lesions related to COVID-19 in lung radiographic scans.
The recent disaster of COVID-19 has brought the whole world to the verge of devastation because of its highly transmissible nature. In this pandemic, radiographic imaging modalities, particularly computed tomography (CT), have shown remarkable performance for the effective diagnosis of this virus. However, the diagnostic assessment of CT data is a human-dependent process that requires sufficient time by expert radiologists. Recent developments in artificial intelligence have substituted several personal diagnostic procedures with computer-aided diagnosis (CAD) methods that can make an effective diagnosis, even in real time. In response to COVID-19, various CAD methods have been developed in the literature, which can detect and localize infectious regions in chest CT images. However, most existing methods do not provide cross-data analysis, which is an essential measure for assessing the generality of a CAD method. A few studies have performed cross-data analysis in their methods. Nevertheless, these methods show limited results in real-world scenarios without addressing generality issues. Therefore, in this study, we attempt to address generality issues and propose a deep learning-based CAD solution for the diagnosis of COVID-19 lesions from chest CT images. We propose a dual multiscale dilated fusion network (DMDF-Net) for the robust segmentation of small lesions in a given CT image. The proposed network mainly utilizes the strength of multiscale deep feature fusion inside the encoder and decoder modules in a mutually beneficial manner to achieve superior segmentation performance. Additional pre- and post-processing steps are introduced in the proposed method to address the generality issues and further improve the diagnostic performance. Mainly, the concept of post-region-of-interest (ROI) fusion is introduced in the post-processing step, which reduces the number of false positives and provides a way to accurately quantify the infected area of the lung. Consequently, the proposed framework outperforms various state-of-the-art methods by accomplishing superior infection segmentation results with an average Dice similarity coefficient of 75.7%, Intersection over Union of 67.22%, Average Precision of 69.92%, Sensitivity of 72.78%, Specificity of 99.79%, Enhance-Alignment Measure of 91.11%, and Mean Absolute Error of 0.026.
Expert systems with applications
"2022-05-10T00:00:00"
[ "MuhammadOwais", "Na RaeBaek", "Kang RyoungPark" ]
10.1016/j.eswa.2022.117360
C-COVIDNet: A CNN Model for COVID-19 Detection Using Image Processing.
COVID-19 has become a global disaster that has disturbed the socioeconomic fabric of the world. Efficient and cost-effective diagnosis methods are very much required for better treatment and for eliminating false cases of COVID-19. COVID-19 disease is a type of respiratory syndrome; thus, lung X-ray analysis has gained attention for effective diagnosis. Hence, the proposed study introduces an image-processing-based COVID-19 detection model, C-COVIDNet, which is trained on a dataset of chest X-ray images belonging to three categories: COVID-19, Pneumonia, and Normal person. An image preprocessing pipeline is used for extracting the region of interest (ROI), so that the required features are present in the input. This lightweight convolutional neural network (CNN)-based approach has achieved an accuracy of 97.5% and an F1-score of 97.91%. Model input images are generated in batches using a custom data generator. The performance of C-COVIDNet has outperformed the state-of-the-art. The promising results will surely help in accelerating the development of deep learning-based COVID-19 diagnosis tools using radiography.
Arabian journal for science and engineering
"2022-05-10T00:00:00"
[ "NehaRajawat", "Bharat SinghHada", "MayankMeghawat", "SoniyaLalwani", "RajeshKumar" ]
10.1007/s13369-022-06841-2 10.1038/s41598-019-56847-4 10.1007/s10489-020-01829-7 10.1007/s00138-020-01119-9 10.1016/j.patrec.2018.08.010 10.1016/j.patrec.2018.07.026 10.1016/j.ins.2018.02.060 10.4018/IJSWIS.2020040101 10.1109/TIP.2015.2512108 10.1109/76.915354 10.1109/JSEN.2018.2828312
Learning COVID-19 Pneumonia Lesion Segmentation From Imperfect Annotations via Divergence-Aware Selective Training.
Automatic segmentation of COVID-19 pneumonia lesions is critical for quantitative measurement for diagnosis and treatment management. For this task, deep learning is the state-of-the-art method but requires a large set of accurately annotated images for training, which is difficult to obtain due to limited access to experts and the time-consuming annotation process. To address this problem, we aim to train the segmentation network from imperfect annotations, where the training set consists of a small clean set of accurately annotated images by experts and a large noisy set of inaccurate annotations by non-experts. To avoid labels of different quality corrupting the segmentation model, we propose a new approach to train segmentation networks to deal with noisy labels. We introduce a dual-branch network to separately learn from the accurate and noisy annotations. To fully exploit the imperfect annotations as well as suppressing the noise, we design a Divergence-Aware Selective Training (DAST) strategy, where a divergence-aware noisiness score is used to distinguish severely noisy annotations from slightly noisy annotations. For severely noisy samples we use regularization through dual-branch consistency between predictions from the two branches. We also refine slightly noisy samples and use them as supplementary data for the clean branch to avoid overfitting. Experimental results show that our method achieves higher performance than the standard training process for COVID-19 pneumonia lesion segmentation when learning from imperfect labels, and our framework outperforms state-of-the-art noise-tolerant methods significantly at various clean label percentages.
IEEE journal of biomedical and health informatics
"2022-05-07T00:00:00"
[ "ShuojueYang", "GuotaiWang", "HuiSun", "XiangdeLuo", "PengSun", "KangLi", "QijunWang", "ShaotingZhang" ]
10.1109/JBHI.2022.3172978
An externally validated fully automated deep learning algorithm to classify COVID-19 and other pneumonias on chest computed tomography.
In this study, we propose an artificial intelligence (AI) framework based on three-dimensional convolutional neural networks to classify computed tomography (CT) scans of patients with coronavirus disease 2019 (COVID-19), influenza/community-acquired pneumonia (CAP), and no infection, after automatic segmentation of the lungs and lung abnormalities. The AI classification model is based on the inflated three-dimensional Inception architecture and was trained and validated on retrospective data of CT images of 667 adult patients (no infection n=188, COVID-19 n=230, influenza/CAP n=249) and 210 adult patients (no infection n=70, COVID-19 n=70, influenza/CAP n=70), respectively. The model's performance was independently evaluated on an internal test set of 273 adult patients (no infection n=55, COVID-19 n=94, influenza/CAP n=124) and an external validation set from a different centre (305 adult patients: COVID-19 n=169, no infection n=76, influenza/CAP n=60). The model showed excellent performance in the external validation set with areas under the curve of 0.90, 0.92 and 0.92 for COVID-19, influenza/CAP and no infection, respectively. The selection of the input slices based on automatic segmentation of the abnormalities in the lung reduces analysis time (56 s per scan) and the computational burden of the model. The Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) score of the proposed model is 47% (15 out of 32 TRIPOD items). This AI solution provides rapid and accurate diagnosis in patients suspected of COVID-19 infection and influenza.
ERJ open research
"2022-05-06T00:00:00"
[ "AkshayaaVaidyanathan", "JulienGuiot", "FadilaZerka", "FloreBelmans", "IngridVan Peufflik", "LouisDeprez", "DenisDanthine", "GregoryCanivet", "PhilippeLambin", "SeanWalsh", "MariaelenaOcchipinti", "PaulMeunier", "WimVos", "PierreLovinfosse", "Ralph T HLeijenaar" ]
10.1183/23120541.00579-2021 10.1016/j.ejrad.2009.11.005 10.1016/S0140-6736(20)30211-7 10.1056/NEJMoa2002032 10.1148/radiol.2020200823 10.1016/j.media.2017.07.005 10.1016/j.eng.2020.04.010 10.1016/j.compbiomed.2020.103795 10.1056/NEJMsb2005114 10.1038/s41597-021-00900-3 10.1016/j.neunet.2018.07.011 10.5334/jbr-btr.1229 10.1109/CVPR.2017.502 10.48550/arXiv.1705.06950 10.1109/CVPR.2015.7298594 10.48550/arXiv.1412.6980 10.1186/s12916-014-0241-z 10.1016/j.compbiomed.2021.104348 10.1148/radiol.2020200905 10.1155/2021/6677314 10.3389/fcvm.2021.638011 10.1016/j.ejrad.2021.109602 10.1183/13993003.00775-2020 10.1007/s00259-020-05075-4 10.1186/s12967-020-02683-4 10.1016/j.cell.2020.04.045 10.1155/2020/9756518 10.3390/jimaging6060052 10.1002/widm.1312 10.1515/icom-2020-0024 10.1038/s41467-020-18685-1 10.1007/s00330-021-07715-1 10.1109/ACCESS.2018.2877890 10.1038/s41598-020-70479-z 10.1007/s11548-020-02286-w 10.1080/07391102.2020.1767212 10.1128/microbe.10.354.1 10.2147/RMHP.S269315 10.1109/ACCESS.2020.3029445 10.1200/CCI.19.00047
Sketch guided and progressive growing GAN for realistic and editable ultrasound image synthesis.
Ultrasound (US) imaging is widely used for anatomical structure inspection in clinical diagnosis. The training of new sonographers and of deep learning-based algorithms for US image analysis usually requires a large amount of data. However, obtaining and labeling large-scale US imaging data are not easy tasks, especially for diseases with low incidence. Realistic US image synthesis can alleviate this problem to a great extent. In this paper, we propose a generative adversarial network (GAN)-based image synthesis framework. Our main contributions include: (1) we present the first work that can synthesize realistic B-mode US images with high-resolution and customized texture editing features; (2) to enhance structural details of generated images, we propose to introduce auxiliary sketch guidance into a conditional GAN. We superpose the edge sketch onto the object mask and use the composite mask as the network input; (3) to generate high-resolution US images, we adopt a progressive training strategy to gradually generate high-resolution images from low-resolution images. In addition, a feature loss is proposed to minimize the difference of high-level features between the generated and real images, which further improves the quality of generated images; (4) the proposed US image synthesis method is quite universal and can also be generalized to the US images of other anatomical structures besides the three tested in our study (lung, hip joint, and ovary); (5) extensive experiments on three large US image datasets are conducted to validate our method. Ablation studies, customized texture editing, user studies, and segmentation tests demonstrate promising results of our method in synthesizing realistic US images.
Medical image analysis
"2022-05-06T00:00:00"
[ "JiaminLiang", "XinYang", "YuhaoHuang", "HaomingLi", "ShuangchiHe", "XindiHu", "ZejianChen", "WufengXue", "JunCheng", "DongNi" ]
10.1016/j.media.2022.102461
Diagnosis of COVID-19 Pneumonia via a Novel Deep Learning Architecture.
COVID-19 is a contagious infection that has severe effects on the global economy and our daily life. Accurate diagnosis of COVID-19 is of importance for consultants, patients, and radiologists. In this study, we use the deep learning network AlexNet as the backbone, and enhance it with the following two aspects: 1) adding batch normalization to help accelerate the training, reducing the internal covariance shift; 2) replacing the fully connected layer in AlexNet with three classifiers: SNN, ELM, and RVFL. Therefore, we have three novel models from the deep COVID network (DC-Net) framework, which are named DC-Net-S, DC-Net-E, and DC-Net-R, respectively. After comparison, we find the proposed DC-Net-R achieves an average accuracy of 90.91% on a private dataset (available upon email request) comprising of 296 images while the specificity reaches 96.13%, and has the best performance among all three proposed classifiers. In addition, we show that our DC-Net-R also performs much better than other existing algorithms in the literature. The online version contains supplementary material available at 10.1007/s11390-020-0679-8.
Journal of computer science and technology
"2022-05-03T00:00:00"
[ "XinZhang", "SiyuanLu", "Shui-HuaWang", "XiangYu", "Su-JingWang", "LunYao", "YiPan", "Yu-DongZhang" ]
10.1007/s11390-020-0679-8 10.1016/S0140-6736(20)30185-9 10.1001/jama.2020.1585 10.1166/jmihi.2016.1901 10.1007/s11042-016-3559-z 10.2174/1871527315666161019153259 10.3233/FI-2017-1492 10.1166/jmihi.2019.2804 10.3233/FI-2019-1829 10.1109/TPAMI.2005.159 10.1148/radiol.2020200230 10.17818/NM/2020/1.2 10.1007/s11356-020-07703-w 10.1162/NECO_a_00892 10.1016/j.jmapro.2020.03.006 10.1016/0925-2312(94)90053-1 10.1016/S0165-0114(02)00521-3 10.1038/s41591-019-0447-x 10.1016/j.acra.2019.05.018 10.1148/rg.2018170048
COVID-19 prognosis using limited chest X-ray images.
The COrona VIrus Disease 2019 (COVID-19) pandemic is an ongoing global pandemic that has claimed millions of lives till date. Detecting COVID-19 and isolating affected patients at an early stage is crucial to contain its rapid spread. Although accurate, the primary viral test 'Reverse Transcription Polymerase Chain Reaction' (RT-PCR) for COVID-19 diagnosis has an elaborate test kit, and the turnaround time is high. This has motivated the research community to develop CXR based automated COVID-19 diagnostic methodologies. However, COVID-19 being a novel disease, there is no annotated large-scale CXR dataset for this particular disease. To address the issue of limited data, we propose to exploit a large-scale CXR dataset collected in the pre-COVID era and train a deep neural network in a self-supervised fashion to extract CXR specific features. Further, we compute attention maps between the global and the local features of the backbone convolutional network while finetuning using a limited COVID-19 CXR dataset. We empirically demonstrate the effectiveness of the proposed method. We provide a thorough ablation study to understand the effect of each proposed component. Finally, we provide visualizations highlighting the critical patches instrumental to the predictive decision made by our model. These saliency maps are not only a stepping stone towards explainable AI but also aids radiologists in localizing the infected area.
Applied soft computing
"2022-05-03T00:00:00"
[ "Arnab KumarMondal" ]
10.1016/j.asoc.2022.108867 10.1145/3293353.3293408 10.1016/j.asoc.2020.107052 10.1109/JBHI.2021.3069798 10.1016/j.asoc.2021.107323 10.1016/j.cmpb.2020.105581 10.1016/j.asoc.2020.106859 10.1016/j.ijmedinf.2020.104284 10.1016/j.asoc.2020.106742 10.1148/radiol.2020202944 10.1109/JBHI.2020.3037127 10.1016/j.asoc.2021.107160 10.1016/j.asoc.2021.107645 10.1109/JBHI.2021.3058293 10.1109/JBHI.2021.3074893 10.1016/j.patcog.2020.107613 10.1016/j.asoc.2021.107330 10.1016/j.asoc.2021.107522 10.1016/j.chaos.2020.110122 10.1109/CVPR.2018.00745 10.1016/j.neucom.2020.07.144 10.1007/s11263-015-0816-y
Diagnosis of COVID-19 patients by adapting hyper parametertuned deep belief network using hosted cuckoo optimization algorithm.
COVID-19 is an infection caused by the recently discovered coronavirus. The symptoms of COVID-19 are fever, cough and shortness of breath. A quick and accurate identification is essential for an efficient fight against COVID-19. A machine learning technique is initiated for categorizing the chest x-ray images into two cases: COVID-19 positive case or negative case. In this manuscript, the categorization of COVID-19 can be determined by a hyper parameter tuned deep belief network using the hosted cuckoo optimization algorithm. At first, the input chest x-ray images are pre-processed for removing noises. In this manuscript, the deep belief network method is enhanced by the hosted cuckoo optimization approach to obtain optimum hyper tuning parameters. By this, exact categorization of COVID-19 is attained effectively. The proposed methodology is simulated in MATLAB. The proposed approach attains 28.3% and 23.5% higher accuracy for Normal and 32.3% and 31.5% higher accuracy for COVID-19, 19.3% and 28.5% higher precision for Normal and 45.3% and 28.5% higher precision for COVID-19, 20.3% and 21.5% higher F-score for Normal and 40.3% and 21.5% higher F-score for COVID-19. The proposed methodology is analyzed against two existing methodologies: Convolutional Neural Network with Social Mimic Optimization (CNN-SMO) and Support Vector Machine classifier using Bayesian Optimization algorithm (SVM-BOA).
Electromagnetic biology and medicine
"2022-05-03T00:00:00"
[ "VeerrajuGampala", "KarunyaRathan", "Christalin NelsonS", "Francis HShajin", "PRajesh" ]
10.1080/15368378.2022.2065679
Does imbalance in chest X-ray datasets produce biased deep learning approaches for COVID-19 screening?
The health crisis resulting from the global COVID-19 pandemic highlighted more than ever the need for rapid, reliable and safe methods of diagnosis and monitoring of respiratory diseases. To study pulmonary involvement in detail, one of the most common resources is the use of different lung imaging modalities (like chest radiography) to explore the possible affected areas. The study of patient characteristics like sex and age in pathologies of this type is crucial for gaining knowledge of the disease and for avoiding biases due to the clear scarcity of data when developing representative systems. In this work, we performed an analysis of these factors in chest X-ray images to identify biases. Specifically, 11 imbalance scenarios were defined with female and male COVID-19 patients present in different proportions for the sex analysis, and 6 scenarios where only one specific age range was used for training for the age factor. In each study, 3 different approaches for automatic COVID-19 screening were used: Normal vs COVID-19, Pneumonia vs COVID-19 and Non-COVID-19 vs COVID-19. The study was validated using two public chest X-ray datasets, allowing a reliable analysis to support the clinical decision-making process. The results for the sex-related analysis indicate this factor slightly affects the system in the Normal VS COVID-19 and Pneumonia VS COVID-19 approaches, although the identified differences are not relevant enough to worsen considerably the system. Regarding the age-related analysis, this factor was observed to be influencing the system in a more consistent way than the sex factor, as it was present in all considered scenarios. However, this worsening does not represent a major factor, as it is not of great magnitude. Multiple studies have been conducted in other fields in order to determine if certain patient characteristics such as sex or age influenced these deep learning systems. However, to the best of our knowledge, this study has not been done for COVID-19 despite the urgency and lack of COVID-19 chest X-ray images. The presented results evidenced that the proposed methodology and tested approaches allow a robust and reliable analysis to support the clinical decision-making process in this pandemic scenario.
BMC medical research methodology
"2022-04-29T00:00:00"
[ "LorenaÁlvarez-Rodríguez", "Joaquim deMoura", "JorgeNovo", "MarcosOrtega" ]
10.1186/s12874-022-01578-w 10.3389/fcvm.2021.638011 10.1038/s41598-020-76550-z 10.1109/TMI.2020.3040950 10.1016/j.compbiomed.2020.103792 10.1101/2020.06.21.20136598 10.3389/fmed.2020.00427 10.1016/j.eswa.2020.114054 10.1109/ACCESS.2020.2994762 10.1016/j.eswa.2021.115681 10.1109/ACCESS.2020.3033762 10.1073/pnas.1919012117 10.1016/j.eswa.2021.114677 10.1016/j.patcog.2020.107613 10.1016/j.media.2020.101794 10.1109/ACCESS.2020.3044858 10.1016/j.compbiomed.2021.104210
External COVID-19 Deep Learning Model Validation on ACR AI-LAB: It's a Brave New World.
Deploying external artificial intelligence (AI) models locally can be logistically challenging. We aimed to use the ACR AI-LAB software platform for local testing of a chest radiograph (CXR) algorithm for COVID-19 lung disease severity assessment. An externally developed deep learning model for COVID-19 radiographic lung disease severity assessment was loaded into the AI-LAB platform at an independent academic medical center, which was separate from the institution in which the model was trained. The data set consisted of CXR images from 141 patients with reverse transcription-polymerase chain reaction-confirmed COVID-19, which were routed to AI-LAB for model inference. The model calculated a Pulmonary X-ray Severity (PXS) score for each image. This score was correlated with the average of a radiologist-based assessment of severity, the modified Radiographic Assessment of Lung Edema score, independently interpreted by three radiologists. The associations between the PXS score and patient admission and intubation or death were assessed. The PXS score deployed in AI-LAB correlated with the radiologist-determined modified Radiographic Assessment of Lung Edema score (r = 0.80). PXS score was significantly higher in patients who were admitted (4.0 versus 1.3, P < .001) or intubated or died within 3 days (5.5 versus 3.3, P = .001). AI-LAB was successfully used to test an external COVID-19 CXR AI algorithm on local data with relative ease, showing generalizability of the PXS score model. For AI models to scale and be clinically useful, software tools that facilitate the local testing process, like the freely available AI-LAB, will be important to cross the AI implementation gap in health care systems.
Journal of the American College of Radiology : JACR
"2022-04-29T00:00:00"
[ "AliArdestani", "Matthew DLi", "PauleyChea", "Jeremy RWortman", "AdamMedina", "JayashreeKalpathy-Cramer", "ChristophWald" ]
10.1016/j.jacr.2022.03.013 10.1007/s00330-020-07269-8 10.1101/2020.09.15.20195453
Deep learning representations to support COVID-19 diagnosis on CT slices.
The coronavirus disease 2019 (COVID-19) has become a significant public health problem worldwide. In this context, CT-scan automatic analysis has emerged as a COVID-19 complementary diagnosis tool allowing for radiological finding characterization, patient categorization, and disease follow-up. However, this analysis depends on the radiologist's expertise, which may result in subjective evaluations. To explore deep learning representations, trained from thoracic CT-slices, to automatically distinguish COVID-19 disease from control samples. Two datasets were used: SARS-CoV-2 CT Scan (Set-1) and FOSCAL clinic's dataset (Set-2). The deep representations took advantage of supervised learning models previously trained on the natural image domain, which were adjusted following a transfer learning scheme. The deep classification was carried out: (a) via an end-to-end deep learning approach and (b) via random forest and support vector machine classifiers by feeding the deep representation embedding vectors into these classifiers. The end-to-end classification achieved an average accuracy of 92.33% (89.70% precision) for Set-1 and 96.99% (96.62% precision) for Set-2. The deep feature embedding with a support vector machine achieved an average accuracy of 91.40% (95.77% precision) and 96.00% (94.74% precision) for Set-1 and Set-2, respectively. Deep representations have achieved outstanding performance in the identification of COVID-19 cases on CT scans demonstrating good characterization of the COVID-19 radiological patterns. These representations could potentially support the COVID-19 diagnosis in clinical settings.
Biomedica : revista del Instituto Nacional de Salud
"2022-04-27T00:00:00"
[ "JosuéRuano", "JohnArcila", "DavidRomo-Bucheli", "CarlosVargas", "JeffersonRodríguez", "ÓscarMendoza", "MiguelPlazas", "LolaBautista", "JorgeVillamizar", "GabrielPedraza", "AlejandraMoreno", "DianaValenzuela", "LinaVázquez", "CarolinaValenzuela-Santos", "PaulCamacho", "DanielMantilla", "FabioMartínez Carrillo" ]
10.7705/biomedica.5927 10.1093/ajcp/aqaa029 10.1016/j.cca.2020.03.009 10.1001/jama.2020.12839 10.1148/radiol.2020200905 10.1038/s41562-020-0931-9 10.1001/jama.2020.3786 10.7326/M20-1495 10.1371/journal.pone.0251661 10.1038/s41598-020-68862-x 10.1148/ryct.2020200110 10.1177/0846537120913033 10.1148/radiol.2020200432 10.1016/S0140-6736(20)30728-5 10.1148/radiol.2020200823 10.1002/jmv.25855 10.1007/s00247-019-04593-0 10.1016/j.cell.2018.02.010 10.1016/j.imu.2020.100427 10.7717/peerj-cs.306 10.1109/CVPR.2009.5206848 10.1101/2020.04.24.20078584 10.1148/radiol.2462070712 10.1148/radiol.2020200370 10.1148/radiol.2020200463 10.1148/radiol.2020202504 10.1109/CVPR.2016.90 10.1109/CVPR.2016.308 10.1007/978-3-030-01424-7_27 10.1109/GlobalSIP.2017.8309150 10.1038/s41598-020-74164-z 10.1109/SCORED.2019.8896277 10.1023/A:1010933404324 10.1023/A:1010933404324 10.3390/s19235219 10.48550/arXiv.1507.06020
Artificial intelligence at the time of COVID-19: who does the lion's share?
The development and use of artificial intelligence (AI) methodologies, especially machine learning (ML) and deep learning (DL), have been considerably fostered during the ongoing coronavirus disease 2019 (COVID-19) pandemic. Several models and algorithms have been developed and applied for both identifying COVID-19 cases and for assessing and predicting the risk of developing unfavourable outcomes. Our aim was to summarize how AI is being currently applied to COVID-19. We conducted a PubMed search using as query MeSH major terms "Artificial Intelligence" AND "COVID-19", searching for articles published until December 31, 2021, which explored the possible role of AI in COVID-19. The dataset origin (internal dataset or public datasets available online) and data used for training and testing the proposed ML/DL model(s) were retrieved. Our analysis finally identified 292 articles in PubMed. These studies displayed large heterogeneity in terms of imaging test, laboratory parameters and clinical-demographic data included. Most models were based on imaging data, in particular CT scans or chest X-rays images. C-Reactive protein, leukocyte count, creatinine, lactate dehydrogenase, lymphocytes and platelets counts were found to be the laboratory biomarkers most frequently included in COVID-19 related AI models. The lion's share of AI applied to COVID-19 seems to be played by diagnostic imaging. However, AI in laboratory medicine is also gaining momentum, especially with digital tools characterized by low cost and widespread applicability.
Clinical chemistry and laboratory medicine
"2022-04-27T00:00:00"
[ "DavideNegrini", "ElisaDanese", "Brandon MHenry", "GiuseppeLippi", "MartinaMontagnana" ]
10.1515/cclm-2022-0306
COVID-opt-aiNet: A clinical decision support system for COVID-19 detection.
Coronavirus disease (COVID-19) has had a major and sometimes lethal effect on global public health. COVID-19 detection is a difficult task that necessitates the use of intelligent diagnosis algorithms. Numerous studies have suggested the use of artificial intelligence (AI) and machine learning (ML) techniques to detect COVID-19 infection in patients through chest X-ray image analysis. The use of medical imaging with different modalities for COVID-19 detection has become an important means of containing the spread of this disease. However, medical images are not sufficiently adequate for routine clinical use; there is, therefore, an increasing need for AI to be applied to improve the diagnostic performance of medical image analysis. Regrettably, due to the evolving nature of the COVID-19 global epidemic, the systematic collection of a large data set for deep neural network (DNN)/ML training is problematic. Inspired by these studies, and to aid in the medical diagnosis and control of this contagious disease, we suggest a novel approach that ensembles the feature selection capability of the optimized artificial immune networks (opt-aiNet) algorithm with deep learning (DL) and ML techniques for better prediction of the disease. In this article, we experimented with a DNN, a convolutional neural network (CNN), bidirectional long-short-term memory, a support vector machine (SVM), and logistic regression for the effective detection of COVID-19 in patients. We illustrate the effectiveness of this proposed technique by using COVID-19 image datasets with a variety of modalities. An empirical study using the COVID-19 image dataset demonstrates that the proposed hybrid approaches, named COVID-opt-aiNet, improve classification accuracy by up to 98%-99% for SVM, 96%-97% for DNN, and 70.85%-71% for CNN, to name a few examples. Furthermore, statistical analysis ensures the validity of our proposed algorithms. The source code can be downloaded from Github: https://github.com/faizakhan1925/COVID-opt-aiNet.
International journal of imaging systems and technology
"2022-04-26T00:00:00"
[ "SummrinaKanwal", "FaizaKhan", "SultanAlamri", "KiaDashtipur", "MandarGogate" ]
10.1002/ima.22695
Multimodal covid network: Multimodal bespoke convolutional neural network architectures for COVID-19 detection from chest X-ray's and computerized tomography scans.
AI-based tools were developed in the existing works, which focused on one type of image data, either CXRs or computerized tomography (CT) scans, for COVID-19 prediction. There is a need for an AI-based tool that predicts COVID-19 detection from chest images such as chest X-rays (CXRs) and CT scans given as inputs. This research gap is considered the core objective of the proposed work. In the proposed work, multimodal CNN architectures were developed based on the parameters and hyperparameters of neural networks. Nine experiments evaluate optimizers, learning rates, and the number of epochs. Based on the experimental results, suitable parameters are fixed for multimodal architecture development for COVID-19 detection. We have constructed a bespoke convolutional neural network (CNN) architecture named multimodal covid network (MMCOVID-NET) by varying the number of layers from two to seven, which can predict covid or normal images from both CXRs and CT scans. In the proposed work, we have experimented by constructing 24 models for COVID-19 prediction. Among them, four models named MMCOVID-NET-I, MMCOVID-NET-II, MMCOVID-NET-III, and MMCOVID-NET-IV performed well by producing an accuracy of 100%. We obtained these results from a small dataset. So we repeated these experiments in a larger dataset. We inferred that MMCOVID-NET-III outperformed all the state-of-the-art methods by producing an accuracy of 99.75%. The experiments carried out in this work conclude that the parameters and hyperparameters play a vital role in increasing or decreasing the model's performance.
International journal of imaging systems and technology
"2022-04-26T00:00:00"
[ "ThiyagarajanPadmapriya", "ThiruvenkatamKalaiselvi", "VenugopalPriyadharshini" ]
10.1002/ima.22712
Leveraging deep learning for COVID-19 diagnosis through chest imaging.
COVID-19 has taken a toll on the entire world, rendering serious illness and high mortality rate. In the present day, when the globe is hit by a pandemic, those suspected to be infected by the virus need to confirm its presence to seek immediate medical attention to avoid adverse outcomes and also to prevent further transmission of the virus in their close contacts by ensuring timely isolation. The most reliable laboratory testing currently available is the reverse transcription-polymerase chain reaction (RT-PCR) test. Although the test is considered gold standard, 20-25% of results can still be false negatives, which has lately led physicians to recommend medical imaging in specific cases. Our research examines the aspect of chest imaging as a method to diagnose COVID-19. This work is not directed to establish an alternative to RT-PCR, but to aid physicians in determining the presence of virus in medical images. As the disease presents lung involvement, it provides a basis to explore computer vision for classification in radiographic images. In this paper, authors compare the performance of various models, namely ResNet-50, EfficientNetB0, VGG-16 and a custom convolutional neural network (CNN) for detecting the presence of virus in chest computed tomography (CT) scan and chest X-ray images. The most promising results have been derived by using ResNet-50 on CT scans with an accuracy of 98.9% and ResNet-50 on X-rays with an accuracy of 98.7%, which offer an opportunity to further explore these methods for prospective use.
Neural computing & applications
"2022-04-26T00:00:00"
[ "YashikaKhurana", "UmangSoni" ]
10.1007/s00521-022-07250-0 10.1038/s41579-020-00459-7 10.1371/journal.pone.0249090 10.1371/journal.pone.0242958 10.1148/radiol.2020200642 10.1148/radiol.2020200463 10.1109/ACCESS.2020.3010287 10.1016/j.compbiomed.2021.104319 10.1148/radiol.2020203173 10.1038/s41467-020-20657-4 10.1371/journal.pone.0250952 10.1183/13993003.00775-2020 10.1007/s10489-020-01902-1 10.3389/fmed.2020.00427 10.1007/s00521-020-05410-8
Author Correction: Federated deep learning for detecting COVID-19 lung abnormalities in CT: a privacy-preserving multinational validation study.
null
NPJ digital medicine
"2022-04-26T00:00:00"
[ "QiDou", "Tiffany YSo", "MeiruiJiang", "QuandeLiu", "VarutVardhanabhuti", "GeorgiosKaissis", "ZejuLi", "WeixinSi", "Heather H CLee", "KevinYu", "ZuxinFeng", "LiDong", "EgonBurian", "FriederikeJungmann", "RickmerBraren", "MarcusMakowski", "BernhardKainz", "DanielRueckert", "BenGlocker", "Simon C HYu", "Pheng AnnHeng" ]
10.1038/s41746-022-00600-1
QUCoughScope: An Intelligent Application to Detect COVID-19 Patients Using Cough and Breath Sounds.
Problem-Since the outbreak of the COVID-19 pandemic, mass testing has become essential to reduce the spread of the virus. Several recent studies suggest that a significant number of COVID-19 patients display no physical symptoms whatsoever. Therefore, it is unlikely that these patients will undergo COVID-19 testing, which increases their chances of unintentionally spreading the virus. Currently, the primary diagnostic tool to detect COVID-19 is a reverse-transcription polymerase chain reaction (RT-PCR) test from the respiratory specimens of the suspected patient, which is invasive and a resource-dependent technique. It is evident from recent researches that asymptomatic COVID-19 patients cough and breathe in a different way than healthy people. Aim-This paper aims to use a novel machine learning approach to detect COVID-19 (symptomatic and asymptomatic) patients from the convenience of their homes so that they do not overburden the healthcare system and also do not spread the virus unknowingly by continuously monitoring themselves. Method-A Cambridge University research group shared such a dataset of cough and breath sound samples from 582 healthy and 141 COVID-19 patients. Among the COVID-19 patients, 87 were asymptomatic while 54 were symptomatic (had a dry or wet cough). In addition to the available dataset, the proposed work deployed a real-time deep learning-based backend server with a web application to crowdsource cough and breath datasets and also screen for COVID-19 infection from the comfort of the user's home. The collected dataset includes data from 245 healthy individuals and 78 asymptomatic and 18 symptomatic COVID-19 patients. Users can simply use the application from any web browser without installation and enter their symptoms, record audio clips of their cough and breath sounds, and upload the data anonymously. Two different pipelines for screening were developed based on the symptoms reported by the users: asymptomatic and symptomatic. An innovative and novel stacking CNN model was developed using three base learners from eight state-of-the-art deep learning CNN algorithms. The stacking CNN model is based on a logistic regression classifier meta-learner that uses the spectrograms generated from the breath and cough sounds of symptomatic and asymptomatic patients as input using the combined (Cambridge and collected) dataset. Results-The stacking model outperformed the other eight CNN networks with the best classification performance for binary classification using cough sound spectrogram images. The accuracy, sensitivity, and specificity for symptomatic and asymptomatic patients were 96.5%, 96.42%, and 95.47% and 98.85%, 97.01%, and 99.6%, respectively. For breath sound spectrogram images, the metrics for binary classification of symptomatic and asymptomatic patients were 91.03%, 88.9%, and 91.5% and 80.01%, 72.04%, and 82.67%, respectively. Conclusion-The web-application QUCoughScope records coughing and breathing sounds, converts them to a spectrogram, and applies the best-performing machine learning model to classify the COVID-19 patients and healthy subjects. The result is then reported back to the test user in the application interface. Therefore, this novel system can be used by patients in their premises as a pre-screening method to aid COVID-19 diagnosis by prioritizing the patients for RT-PCR testing and thereby reducing the risk of spreading of the disease.
Diagnostics (Basel, Switzerland)
"2022-04-24T00:00:00"
[ "TawsifurRahman", "NabilIbtehaz", "AmithKhandakar", "Md Sakib AbrarHossain", "Yosra Magdi SalihMekki", "MaymounaEzeddin", "Enamul HaqueBhuiyan", "Mohamed ArseleneAyari", "AnasTahir", "YazanQiblawey", "SakibMahmud", "Susu MZughaier", "TariqAbbas", "SomayaAl-Maadeed", "Muhammad E HChowdhury" ]
10.3390/diagnostics12040920 10.1002/rmv.2112 10.1016/S2665-9913(20)30212-5 10.1016/j.dsx.2020.06.060 10.1136/bmj.n1315 10.1016/j.dsx.2020.06.067 10.1016/j.jcv.2020.104455 10.3390/jcm10163493 10.1016/j.cmi.2020.11.004 10.1109/ACCESS.2020.3010287 10.1016/j.compbiomed.2021.104319 10.1007/s12559-021-09955-1 10.1016/j.compbiomed.2021.105002 10.3390/diagnostics11050893 10.1101/2020.04.13.20063941 10.1007/s13755-021-00169-1 10.1016/j.rinp.2021.105045 10.1111/exsy.12759 10.32604/cmc.2021.012955 10.1109/JIOT.2021.3050775 10.1088/1361-6579/ac1d59 10.3390/s19122781 10.1016/S0020-7373(86)80012-8 10.1007/s10044-020-00921-5 10.3390/s17010171 10.1109/JSEN.2016.2585039 10.1007/s00702-017-1676-0 10.1371/journal.pone.0182428 10.1016/j.mayocp.2017.12.025 10.1007/s10115-019-01337-2 10.1038/tp.2016.123 10.1101/2020.04.07.20051060 10.1016/j.imu.2020.100378 10.1109/ACCESS.2020.3018028 10.1007/s42979-020-00422-6 10.1016/j.compbiomed.2021.104765 10.1016/j.compbiomed.2021.104572 10.1109/OJEMB.2020.3026928 10.3389/fmed.2021.585578 10.1145/3421725 10.1136/bmjinnov-2021-000668 10.1038/s41597-021-00937-4 10.1109/ACCESS.2020.3031384 10.3390/app10093233 10.1007/s12559-020-09812-7 10.1016/j.compbiomed.2021.104838 10.1016/j.compbiomed.2021.104944 10.1016/j.bea.2022.100025
A Literature Review on the Use of Artificial Intelligence for the Diagnosis of COVID-19 on CT and Chest X-ray.
A COVID-19 diagnosis is primarily determined by RT-PCR or rapid lateral-flow testing, although chest imaging has been shown to detect manifestations of the virus. This article reviews the role of imaging (CT and X-ray), in the diagnosis of COVID-19, focusing on the published studies that have applied artificial intelligence with the purpose of detecting COVID-19 or reaching a differential diagnosis between various respiratory infections. In this study, ArXiv, MedRxiv, PubMed, and Google Scholar were searched for studies using the criteria terms 'deep learning', 'artificial intelligence', 'medical imaging', 'COVID-19' and 'SARS-CoV-2'. The identified studies were assessed using a modified version of the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD). Twenty studies fulfilled the inclusion criteria for this review. Out of those selected, 11 papers evaluated the use of artificial intelligence (AI) for chest X-ray and 12 for CT. The size of datasets ranged from 239 to 19,250 images, with sensitivities, specificities and AUCs ranging from 0.789-1.00, 0.843-1.00 and 0.850-1.00. While AI demonstrates excellent diagnostic potential, broader application of this method is hindered by the lack of relevant comparators in studies, sufficiently sized datasets, and independent testing.
Diagnostics (Basel, Switzerland)
"2022-04-24T00:00:00"
[ "CiaraMulrenan", "KawalRhode", "Barbara MaleneFischer" ]
10.3390/diagnostics12040869 10.1038/s41579-018-0118-9 10.1016/S0140-6736(03)14630-2 10.1016/S0140-6736(20)30211-7 10.7326/M20-1495 10.1136/bmj.m4469 10.1145/3466690 10.1007/s42979-021-00690-w 10.1016/j.imu.2020.100405 10.1186/s41747-018-0061-6 10.1016/j.neunet.2014.09.003 10.1186/s12916-014-0241-z 10.1136/bmj.m689 10.1053/j.semnuclmed.2020.09.001 10.12788/fp.0045 10.1007/s00330-021-08050-1 10.1101/2020.08.31.20175828 10.1155/2021/8828404 10.7717/peerj.10309 10.1016/j.compbiomed.2020.103792 10.1101/2020.05.16.20103408 10.1101/2020.06.08.20125963 10.1101/2020.08.14.20170290 10.1101/2020.03.12.20027185 10.1038/s41598-021-83424-5 10.3390/diagnostics11010041 10.2196/19569 10.1038/s41467-020-17971-2 10.1038/s41591-020-0931-3 10.1038/s41467-020-18685-1 10.1016/S2589-7500(20)30186-2 10.1038/s42256-021-00307-0
Assessing clinical applicability of COVID-19 detection in chest radiography with deep learning.
The coronavirus disease 2019 (COVID-19) pandemic has impacted healthcare systems across the world. Chest radiography (CXR) can be used as a complementary method for diagnosing/following COVID-19 patients. However, the experience level and workload of technicians and radiologists may affect the decision process. Recent studies suggest that deep learning can be used to assess CXRs, providing an important second opinion for radiologists and technicians in the decision process, and super-human performance in detection of COVID-19 has been reported in multiple studies. In this study, the clinical applicability of deep learning systems for COVID-19 screening was assessed by testing the performance of deep learning systems for the detection of COVID-19. Specifically, four datasets were used: (1) a collection of multiple public datasets (284,793 CXRs); (2) BIMCV dataset (16,631 CXRs); (3) COVIDGR (852 CXRs) and (4) a private dataset (6,361 CXRs). All datasets were collected retrospectively and consist of only frontal CXR views. A ResNet-18 was trained on each of the datasets for the detection of COVID-19. It is shown that a high dataset bias was present, leading to high performance in intradataset train-test scenarios (area under the curve > 0.98 on the collection of public datasets). Significantly lower performances were obtained in interdataset train-test scenarios, however (area under the curve 0.55-0.84). A subset of the data was then assessed by radiologists for comparison to the automatic systems. Finetuning with radiologist annotations significantly increased performance across datasets (area under the curve 0.61-0.88) and improved the attention on clinical findings in positive COVID-19 CXRs. Nevertheless, tests on CXRs from different hospital services indicate that the screening performance of CXR and automatic systems is limited (area under the curve < 0.6 on emergency service CXRs).
However, COVID-19 manifestations can be accurately detected when present, motivating the use of these tools for evaluating disease progression in mild to severe COVID-19 patients.
Scientific reports
"2022-04-23T00:00:00"
[ "JoãoPedrosa", "GuilhermeAresta", "CarlosFerreira", "CatarinaCarvalho", "JoanaSilva", "PedroSousa", "LucasRibeiro", "Ana MariaMendonça", "AurélioCampilho" ]
10.1038/s41598-022-10568-3 10.1001/jama.2020.12458 10.1186/s12879-021-06528-3 10.1016/j.tmaid.2020.101623 10.1148/radiol.2020201365 10.1016/j.radi.2014.02.007 10.1109/RBME.2020.2987975 10.1038/s41598-019-56847-4 10.1007/s13246-020-00865-4 10.1016/j.compbiomed.2020.103792 10.1148/ryct.2020200337 10.1038/s42256-021-00338-7 10.1016/j.media.2020.101797 10.11613/BM.2012.031 10.1007/BF02289261 10.1109/TMI.2020.3006437 10.1111/j.0006-341X.2000.01134.x 10.1080/01621459.1961.10482090
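The abstract above compares models almost entirely through the area under the ROC curve (AUC). For any classifier's scores, AUC can be computed with the rank-based (Mann-Whitney) formulation; a minimal plain-Python sketch, with hypothetical labels and scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formulation:
    the fraction of positive/negative pairs ranked correctly, ties counting half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical binary labels and model scores.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(round(auc(labels, scores), 3))  # 0.889
```

This pairwise definition is equivalent to integrating the ROC curve and is the quantity behind the 0.55-0.84 vs. > 0.98 gap discussed in the abstract.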
MS-ResNet: disease-specific survival prediction using longitudinal CT images and clinical data.
Medical imaging data of lung cancer in different stages contain a large amount of time information related to its evolution (emergence, development, or extinction). We explore the evolution of lung images in the time dimension to improve the prediction of lung cancer survival by using longitudinal CT images and clinical data jointly. In this paper, we propose an innovative multi-branch spatiotemporal residual network (MS-ResNet) for disease-specific survival (DSS) prediction by integrating longitudinal computed tomography (CT) images acquired at different times with clinical data. Specifically, we first extract deep features from the multi-period CT images with an improved residual network. Then, a feature selection algorithm is used to select the most relevant feature subset from the clinical data. Finally, we integrate the deep features and the feature subsets to take full advantage of the complementarity between the two types of data and generate the final prediction results. The experimental results demonstrate that our MS-ResNet model is superior to other methods, achieving a promising 86.78% accuracy in the classification of short-survivor, med-survivor, and long-survivor classes. In computer-aided prognostic analysis of cancer, the time-dimension features of the course of disease and the integration of patient clinical data and CT data can effectively improve prediction accuracy.
International journal of computer assisted radiology and surgery
"2022-04-22T00:00:00"
[ "JiahaoHan", "NingXiao", "WantingYang", "ShichaoLuo", "JunZhao", "YanQiang", "SumanChaudhary", "JuanjuanZhao" ]
10.1007/s11548-022-02625-z 10.1016/j.compbiomed.2021.104294 10.2174/0929867327666200204141952 10.1371/journal.pone.0123694 10.1109/TNB.2019.2936398 10.1016/j.ijmedinf.2020.104371 10.1007/s10151-019-01997-w 10.7150/jca.43268 10.1016/j.compbiomed.2020.104037 10.1109/JTEHM.2019.2955458 10.1049/iet-ipr.2020.0496 10.21037/jtd.2017.12.123 10.3389/fnins.2019.00966 10.3389/fnins.2019.00810 10.3390/cancers11081140 10.3390/s20092649 10.1016/j.inffus.2020.05.005 10.1007/s11227-020-03367-y 10.1016/j.ebiom.2018.12.028 10.1158/1078-0432.CCR-17-2236 10.1056/NEJMoa1208962 10.1016/j.compbiomed.2021.104348 10.1088/1361-6560/ab6e51
Think positive: An interpretable neural network for image recognition.
The COVID-19 pandemic is an ongoing pandemic and is placing additional burden on healthcare systems around the world. Timely and effective detection of the virus can help to reduce the spread of the disease. Although RT-PCR is still the gold standard for COVID-19 testing, deep learning models that identify the virus from medical images can also be helpful in certain circumstances, in particular when patients undergo routine X-ray and/or CT-scan tests and develop respiratory complications within a few days of such tests. Deep learning models can also be used for pre-screening prior to RT-PCR testing. However, the transparency/interpretability of the reasoning process behind the predictions made by such deep learning models is essential. In this paper, we propose an interpretable deep learning model that uses a positive reasoning process to make predictions. We trained and tested our model on a dataset of chest CT-scan images of COVID-19 patients, normal people and pneumonia patients. Our model achieves accuracy, precision, recall and F-score of 99.48%, 0.99, 0.99 and 0.99, respectively.
Neural networks : the official journal of the International Neural Network Society
"2022-04-20T00:00:00"
[ "GurmailSingh" ]
10.1016/j.neunet.2022.03.034 10.1007/s00500-020-05424-3 10.32604/cmc.2021.012955 10.1016/j.scs.2020.102589 10.1109/HEALTHCOM49281.2021.9398980 10.1155/2020/8828855 10.1155/2020/8889023 10.7759/cureus.9448 10.1007/s00500-020-05275-y 10.1007/s11063-019-10043-7 10.1109/CVPR.2014.81 10.3389/fmed.2020.608525 10.3389/fmed.2020.608525 10.1109/CVPR.2016.90 10.1007/978-3-642-35289-8_32 10.1109/CVPR.2017.243 10.1007/s10489-020-01902-1 10.1016/j.bbe.2020.08.008 10.1101/2020.04.13.20063461 10.1145/1553374.1553453 10.1109/ICCKE.2017.8167877 10.1016/j.compbiomed.2020.103792 10.1007/s00357-003-0003-7 10.1371/journal.pone.0242301 10.1109/ic-ETITE47903.2020.235 10.1109/ICCV.2015.136 10.1109/ACCESS.2021.3087583 10.3390/diagnostics11091732 10.1109/ACCESS.2021.3064838 10.1007/s11263-013-0620-5 10.1109/TPAMI.2020.2975798 10.1109/TCSVT.2021.3067449 10.1145/3404374 10.1145/3472810 10.1145/3468872 10.1007/s10489-020-01867-1 10.1007/978-3-319-10590-1_53 10.1007/978-3-319-10590-1_54 10.1109/ICCV.2017.557 10.1109/CVPR.2016.319
Diagnosis of Lumbar Spondylolisthesis Using Optimized Pretrained CNN Models.
Spondylolisthesis refers to the slippage of one vertebral body over the adjacent one. It is a chronic condition that requires early detection to prevent unpleasant surgery. The paper presents an optimized deep learning model for detecting spondylolisthesis in X-ray radiographs. The dataset contains a total of 299 X-ray radiographs, of which 156 images show a spine with spondylolisthesis and 143 images show a normal spine. An image augmentation technique is used to increase the number of data samples. In this study, VGG16 and InceptionV3 models were used for the image classification task. The developed model is optimized by utilizing the TFLite model optimization technique. The experimental results show that the VGG16 model achieved a 98% accuracy rate, higher than InceptionV3's 96% accuracy rate. The size of the implemented model is reduced up to four times so it can be used on small devices. The compressed VGG16 and InceptionV3 models achieved 100% and 96% accuracy rates, respectively. Our findings show that the implemented models outperformed the model suggested by Varcin et al. (which had a maximum accuracy rate of 93%) in the diagnosis of lumbar spondylolisthesis. The developed quantized model also achieved a higher accuracy rate than Zebin and Rezvy's (VGG16 + TFLite) model, which reached 90% accuracy. Furthermore, by evaluating the model's performance on other publicly available datasets, we have demonstrated the generalisability of our approach.
Computational intelligence and neuroscience
"2022-04-19T00:00:00"
[ "DeepikaSaravagi", "ShwetaAgrawal", "ManishaSaravagi", "Jyotir MoyChatterjee", "MohitAgarwal" ]
10.1155/2022/7459260 10.1002/jsp2.1044 10.5435/00124635-200607000-00004 10.1155/2014/182956 10.1007/978-3-030-40850-3_11 10.1038/s41591-018-0307-0 10.1038/s41568-018-0016-5 10.1504/IJESMS.2021.115534 10.12928/TELKOMNIKA.v18i3.14753 10.1016/j.ejrad.2019.02.038 10.1016/j.imu.2020.100391 10.1016/j.cmpb.2016.10.007 10.1109/CVPR.2016.90 10.14419/ijet.v7i2.7.10930 10.1109/TKDE.2009.191 10.1016/j.tice.2019.02.001 10.1109/IDAP.2019.8875988 10.1109/EHB50910.2020.9280227 10.17632/rscbjbr9sj.2 10.1007/s42600-021-00163-2 10.1007/S10489-020-01867-1/FIGURES/8 10.1109/ICESC51422.2021.9532711 10.1109/ICIMTech.2019.8843844 10.1109/CCWC.2018.8301729 10.1038/s41598-020-59108-x 10.1007/s42979-020-0114-9 10.1016/j.imu.2020.100505 10.1109/EMBC.2018.8512750 10.1109/CIBEC.2018.8641815 10.1016/j.neuroimage.2016.08.055 10.1109/SMARTCOMP50058.2020.00027 10.1177/2192568218770769 10.1016/j.measurement.2015.09.013 10.1016/j.patrec.2020.07.042 10.1504/IJESMS.2021.115532
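The fourfold size reduction reported above comes from TFLite model optimization. The underlying idea of post-training quantization — storing float32 weights as int8 plus a scale factor — can be sketched in NumPy. This is a conceptual illustration, not TFLite's actual implementation:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: float32 weights -> int8 plus one scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights for inference."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
print(w.nbytes // q.nbytes)  # 4: int8 storage is a quarter of float32
```

The maximum rounding error per weight is half a quantization step (scale / 2), which is why accuracy after compression can stay close to — or, as the abstract reports, sometimes match — the original model's.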
MA-Net: Mutex attention network for COVID-19 diagnosis on CT images.
COVID-19 is an infectious pneumonia caused by 2019-nCoV. The number of newly confirmed cases and confirmed deaths remains high. RT-PCR is the gold standard for COVID-19 diagnosis, but the computed tomography (CT) imaging technique is an important auxiliary diagnostic tool. In this paper, a deep learning network, the mutex attention network (MA-Net), is proposed for COVID-19 auxiliary diagnosis on CT images. Using positive and negative samples as mutex inputs, the proposed network combines a mutex attention block (MAB) and a fusion attention block (FAB) for the diagnosis of COVID-19. MAB uses the distance between mutex inputs as a weight to make features more distinguishable for preferable diagnostic results. FAB acts to fuse features to obtain more representative features. In particular, an adaptive-weight multi-loss function is proposed for better performance. The accuracy, specificity and sensitivity were as high as 98.17%, 97.25% and 98.79%, respectively, on COVID-19 dataset-A provided by the Affiliated Medical College of Qingdao University. State-of-the-art results have also been achieved on three other public COVID-19 datasets. The results show that, compared with other methods, the proposed network can provide effective auxiliary information for the diagnosis of COVID-19 on CT images.
Applied intelligence (Dordrecht, Netherlands)
"2022-04-19T00:00:00"
[ "BingBingZheng", "YuZhu", "QinShi", "DaweiYang", "YanmeiShao", "TaoXu" ]
10.1007/s10489-022-03431-5 10.1016/S0140-6736(20)30795-9 10.15585/mmwr.mm7003e2 10.1080/14737159.2020.1757437 10.1007/s00330-020-06801-0 10.1016/j.ejrad.2020.108961 10.1038/s41591-020-0931-3 10.1016/j.neucom.2020.09.068 10.1038/s41467-020-17280-8 10.1155/2020/9756518 10.1007/s10489-020-01770-9 10.1016/j.neucom.2021.03.122 10.1109/TMI.2020.3000314 10.1016/j.media.2020.101794 10.1007/s10489-020-01826-w 10.1109/ACCESS.2020.3001973 10.1007/s10489-020-01829-7 10.1109/JBHI.2020.3030853 10.1016/j.media.2020.101836 10.1016/j.ijleo.2021.167100 10.1148/radiol.2020200905 10.1007/s10489-020-02122-3 10.1109/JBHI.2020.3023246 10.1109/TIP.2021.3109518
CapsNet-COVID19: Lung CT image classification method based on CapsNet model.
The outbreak of the Corona Virus Disease 2019 (COVID-19) has posed a serious threat to human health and life around the world. As the number of COVID-19 cases continues to increase, many countries are facing problems such as errors in nucleic acid testing (RT-PCR), shortages of testing reagents, and a lack of testing personnel. To solve such problems, a more accurate and efficient method is needed as a supplement to the detection and diagnosis of COVID-19. This research uses a deep network model to classify a subset of the COVID-19, general pneumonia, and normal lung CT images in the 2019 Novel Coronavirus Information Database. The first level of the model uses convolutional neural networks to locate lung regions in lung CT images. The second level of the model uses a capsule network to classify and predict the segmented images. The accuracy of our method is 84.291% on the test set and 100% on the training set. Experiments show that our classification method is suitable for medical image classification with complex backgrounds, low recognition rates, blurred boundaries and large image noise. We believe that this classification method is of great value for monitoring and controlling the growth in patient numbers in COVID-19-infected areas.
Mathematical biosciences and engineering : MBE
"2022-04-19T00:00:00"
[ "XiaoQingZhang", "GuangYuWang", "Shu-GuangZhao" ]
10.3934/mbe.2022236
COV-DLS: Prediction of COVID-19 from X-Rays Using Enhanced Deep Transfer Learning Techniques.
In this paper, modifications to neoteric architectures such as VGG16, VGG19, ResNet50, and InceptionV3 are proposed for the classification of COVID-19 using chest X-rays. The proposed architectures, termed "COV-DLS", consist of two phases: heading model construction and classification. The heading model construction phase utilizes four modified deep learning architectures, namely Modified-VGG16, Modified-VGG19, Modified-ResNet50, and Modified-InceptionV3. These neoteric architectures are modified by incorporating average pooling and dense layers. A dropout layer is also added to prevent overfitting, along with two dense layers with different activation functions. Thereafter, the outputs of these modified models are used in the classification phase, in which COV-DLS is applied to a COVID-19 chest X-ray image data set. Classification accuracy of 98.61% is achieved by Modified-VGG16, 97.22% by Modified-VGG19, 95.13% by Modified-ResNet50, and 99.31% by Modified-InceptionV3. COV-DLS outperforms existing deep learning models in terms of accuracy and F1-score.
Journal of healthcare engineering
"2022-04-16T00:00:00"
[ "VijayKumar", "AnisZarrad", "RahulGupta", "OmarCheikhrouhou" ]
10.1155/2022/6216273 10.3390/su132413642 10.7717/peerj-cs.655 10.1007/s10044-021-00984-y 10.33889/ijmems.2020.5.4.052 10.1016/j.compbiomed.2020.103792 10.1007/s13246-020-00865-4 10.1007/s10096-020-03901-z 10.1016/j.cmpb.2020.105581 10.3390/jimaging7050081 10.1016/j.mehy.2020.109761 10.1016/b978-0-12-824536-1.00003-4 10.1016/j.patcog.2020.107613 10.1016/j.patcog.2020.107747 10.1016/j.patcog.2021.108341 10.1016/j.asoc.2021.107947 10.1109/access.2021.3120717 10.1155/2021/6621607 10.1109/tkde.2009.191 10.1162/neco_a_00990 10.1038/s41598-020-76550-z 10.1016/j.patrec.2020.09.010
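COV-DLS is compared to other models on accuracy and F1-score; both metrics follow directly from the confusion counts. A small self-contained helper (the example labels below are hypothetical, not from the paper's data):

```python
def accuracy_f1(y_true, y_pred, positive=1):
    """Accuracy and F1-score for binary labels, computed from confusion counts."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1

acc, f1 = accuracy_f1([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
print(round(acc, 3), round(f1, 3))  # 0.6 0.667
```

F1 is the harmonic mean of precision and recall, so it penalizes models whose high accuracy comes from class imbalance — which is why papers in this area report it alongside accuracy.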
Evaluation of AI-Based Segmentation Tools for COVID-19 Lung Lesions on Conventional and Ultra-low Dose CT Scans.
A reliable diagnosis and accurate monitoring are pivotal steps for the treatment and prevention of COVID-19. Chest computed tomography (CT) has been considered a crucial diagnostic imaging technique for the injury assessment of the viral pneumonia. Furthermore, the automation of segmentation methods for lung alterations helps to speed up the diagnosis and lighten radiologists' workload. Considering the need for assiduous pathology monitoring, ultra-low dose (ULD) chest CT protocols have been implemented to drastically reduce the radiation burden. Unfortunately, the available AI technologies have not been trained and validated on ULD-CT data, and their applicability deserves careful evaluation. Therefore, this work aims to compare the results of available AI tools (BCUnet, CORADS AI, NVIDIA CLARA Train SDK and CT Pneumonia Analysis) on a dataset of 73 CT examinations acquired with both conventional dose (CD) and ULD protocols. The COVID-19 volume percentage resulting from each tool was statistically compared. This study demonstrated high comparability of the results on CD-CT and ULD-CT data among the four AI tools, with high correlation between the results obtained on both protocols (R > .68, P < .001, for all AI tools).
Dose-response : a publication of International Hormesis Society
"2022-04-16T00:00:00"
[ "MarcoAiello", "DarioBaldi", "GiuseppinaEsposito", "MarikaValentino", "MarcoRandon", "MarcoSalvatore", "CarloCavaliere" ]
10.1177/15593258221082896 10.1080/14737159.2020.1757437 10.1148/radiol.2020200230 10.2214/AJR.20.23078 10.1148/radiol.2020201102 10.1148/radiol.2020201365 10.2214/ajr.176.2.1760289 10.1007/s003300050062 10.1007/s00247-002-0678-7 10.2214/ajr.177.2.1770289 10.1148/radiology.213.1.r99oc29289 10.1007/s003300050114 10.2214/ajr.164.3.7863879 10.1148/radiology.175.3.2343122 10.1148/radiology.209.1.9769838 10.1148/radiology.210.3.r99mr05645 10.1177/1559325820973131 10.1371/journal.pone.0168979 10.1148/ryct.2020200196 10.1038/s41746-021-00438-z 10.1016/j.zemedi.2018.11.002 10.1007/s11548-021-02317-0 10.1016/j.cell.2018.02.010 10.3390/app8101715 10.1148/radiol.2020200905 10.1038/s41467-020-17971-2 10.1007/s00330-017-4800-5 10.21203/rs.3.rs-571332/v1 10.1148/radiol.2020202439 10.1148/ryai.2020200048 10.2307/2987937 10.1109/ISBI.2019.8759468 10.1097/MD.0000000000026034 10.21037/qims-20-1176 10.1148/rg.2021200196 10.3390/app11062456 10.1093/rpd/ncy212 10.3348/kjr.2020.0237 10.1186/s41747-021-00210-8 10.1007/s10278-017-9988-z 10.1016/j.ejrad.2019.01.028 10.1159/000503996 10.1038/s41597-020-00715-8
Using artificial intelligence to improve the diagnostic efficiency of pulmonologists in differentiating COVID-19 pneumonia from community-acquired pneumonia.
Coronavirus disease 2019 (COVID-19) has quickly turned into a global health problem. Computed tomography (CT) findings of COVID-19 pneumonia and community-acquired pneumonia (CAP) may be similar. Artificial intelligence (AI) is a popular topic in medical imaging and has driven significant developments in diagnostic techniques. This retrospective study aims to analyze the contribution of AI to the diagnostic performance of pulmonologists in distinguishing COVID-19 pneumonia from CAP using CT scans. A deep learning-based AI model, which extracted visual data from volumetric CT scans, was created to be utilized in the detection of COVID-19. The final data set covered a total of 2496 scans (887 patients), which included 1428 (57.2%) from the COVID-19 group and 1068 (42.8%) from the CAP group. CT slices were divided into training, validation, and test datasets in an 8:1:1 ratio. The independent test data set was analyzed by comparing the performance of four pulmonologists in differentiating COVID-19 pneumonia both with and without the help of the AI. The accuracy, sensitivity, and specificity values of the proposed AI model for determining COVID-19 in the independent test data set were 93.2%, 85.8%, and 99.3%, respectively, with an area under the receiver operating characteristic curve of 0.984. With the assistance of the AI, the pulmonologists achieved a higher mean accuracy (88.9% vs. 79.9%, p < 0.001), sensitivity (79.1% vs. 70%, p < 0.001), and specificity (96.5% vs. 87.5%, p < 0.001). AI support significantly increases the diagnostic efficiency of pulmonologists in the diagnosis of COVID-19 via CT. Future studies should focus on real-time applications of AI to fight the COVID-19 infection.
Journal of medical virology
"2022-04-15T00:00:00"
[ "Erdalİn", "Ayşegül AGeçkil", "GürkanKavuran", "MahmutŞahin", "Nurcan KBerber", "MutluKuluöztürk" ]
10.1002/jmv.27777 10.1101/2020.03.20.20039834 10.1101/2020.02.14.20023028
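The 8:1:1 split of CT slices into training, validation, and test sets described above can be reproduced with a few lines of plain Python; a minimal sketch (the seed is arbitrary, and the 2496-item range stands in for the slice list):

```python
import random

def split_811(items, seed=42):
    """Shuffle and split a sequence into train/validation/test in an 8:1:1 ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

train, val, test = split_811(range(2496))  # total slice count from the abstract
print(len(train), len(val), len(test))  # 1996 249 251
```

Note that splitting at the slice level can leak slices from one patient across sets; in practice a patient-level split is safer, and the abstract does not state which was used.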
Deep learning of chest X-rays can predict mechanical ventilation outcome in ICU-admitted COVID-19 patients.
The COVID-19 pandemic has repeatedly overwhelmed healthcare system capacity and forced the development and implementation of ICU triage guidelines for scarce resources (e.g. mechanical ventilation). These guidelines were often based on known risk factors for COVID-19. It is proposed that image data, specifically bedside chest X-ray (CXR), provide additional predictive information on mortality following mechanical ventilation that can be incorporated in the guidelines. Deep transfer learning was used to extract convolutional features from a systematically collected, multi-institutional dataset of COVID-19 ICU patients. A model predicting the outcome of mechanical ventilation (remission or mortality) was trained on the extracted features and compared to a model based on known, aggregated risk factors. The model reached an area under the curve of 0.702 (95% CI 0.694-0.707) at predicting mechanical ventilation outcome from pre-intubation CXRs, higher than the risk factor model. Combining imaging data and risk factors increased model performance to 0.743 AUC (95% CI 0.732-0.746). Additionally, a post-hoc analysis showed increased performance on high-quality compared to low-quality CXRs, suggesting that using only high-quality images would result in an even stronger model.
Scientific reports
"2022-04-15T00:00:00"
[ "DanielGourdeau", "OlivierPotvin", "Jason HenryBiem", "FlorenceCloutier", "LynaAbrougui", "PatrickArchambault", "CarlChartrand-Lefebvre", "LouisDieumegarde", "ChristianGagné", "LouisGagnon", "RaphaelleGiguère", "AlexandreHains", "HuyLe", "SimonLemieux", "Marie-HélèneLévesque", "SimonNepveu", "LorneRosenbloom", "AnTang", "IssacYang", "NathalieDuchesne", "SimonDuchesne" ]
10.1038/s41598-022-10136-9 10.1001/jama.2020.4031 10.1111/bioe.12836 10.1016/j.media.2020.101860 10.1016/S1473-3099(20)30134-1 10.1001/jama.2020.1585 10.2214/AJR.20.22976 10.1016/j.crad.2020.03.003 10.1016/j.ijid.2020.05.021 10.1007/s11547-020-01200-3 10.1038/s41597-019-0322-0 10.1038/s42256-021-00307-0 10.1038/s41598-022-09356-w 10.1371/journal.pone.0236621 10.1373/clinchem.2015.246280 10.1038/s41586-020-2521-4 10.1186/s12931-019-1261-1 10.1038/s41746-021-00453-0
Detection of COVID-19 from CT and Chest X-ray Images Using Deep Learning Models.
Coronavirus disease 2019 (COVID-19) is a highly transmissible and pathogenic disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which first appeared in Wuhan, China, and has since spread around the world. This pathology has caused a major health crisis in the world. Early detection of this anomaly is a key task to minimize its spread. Artificial intelligence is one of the approaches commonly used by researchers to discover the problems it causes and provide solutions. These estimates would help enable health systems to take the necessary steps to diagnose and track cases of COVID-19. In this study, we offer a novel method for the automatic detection of COVID-19 using tomographic images (CT) and radiographic images (chest X-ray). In order to improve the performance of the detection system for this outbreak, we used two deep learning models: VGG and ResNet. The results of the experiments show that our proposed models achieved accuracies of 99.35% and 96.77%, respectively, for VGG19 and ResNet50 using all the chest X-ray images.
Annals of biomedical engineering
"2022-04-14T00:00:00"
[ "WassimZouch", "DhouhaSagga", "AmiraEchtioui", "RafikKhemakhem", "MohamedGhorbel", "ChokriMhiri", "Ahmed BenHamida" ]
10.1007/s10439-022-02958-5 10.1007/s10439-020-02636-4 10.1007/s10439-020-02648-0 10.1007/s10439-020-02676-w 10.1007/s10439-02002580-3 10.1007/s13246-020-00865-4 10.1016/j.envres.2020.109819 10.1016/j.crad.2018.12.015 10.1007/s10439-020-02636-4 10.1148/radiol.2020200905 10.1007/s10439-020-02599-6 10.1016/j.compbiomed.2020.103792 10.1007/s10439-018-02190-0 10.3390/app10103641 10.1148/radiol.2020200642 10.1016/j.eng.2020.04.010
Automated detection of COVID-19 cases from chest X-ray images using deep neural network and XGBoost.
In late 2019 and after the COVID-19 pandemic spread across the world, many researchers and scholars tried to provide methods for detecting COVID-19 cases. Accordingly, this study focused on identifying patients with COVID-19 from chest X-ray images. In this paper, a method for diagnosing coronavirus disease from X-ray images was developed. In this method, the DenseNet169 Deep Neural Network (DNN) was used to extract features from X-ray images taken of patients' chests. The extracted features were then given as input to the Extreme Gradient Boosting (XGBoost) algorithm to perform the classification task. Evaluation of the proposed approach and its comparison with methods presented in recent years revealed that it was more accurate and faster than the existing ones and had an acceptable performance for detecting COVID-19 cases from X-ray images. The experiments showed 98.23% and 89.70% accuracy, 99.78% and 100% specificity, and 92.08% and 95.20% sensitivity in the two- and three-class problems, respectively. This study aimed to detect people with COVID-19, focusing on non-clinical approaches. The developed method could be employed as an initial detection tool to assist radiologists in diagnosing the disease more accurately and quickly. The proposed method's simple implementation, along with its acceptable accuracy, allows it to be used in COVID-19 diagnosis. Moreover, gradient-based class activation mapping (Grad-CAM) can be used to represent the deep neural network's decision area on a heatmap. Radiologists might use this heatmap to evaluate the chest area more accurately.
Radiography (London, England : 1995)
"2022-04-13T00:00:00"
[ "HNasiri", "SHasani" ]
10.1016/j.radi.2022.03.011
Reduced Chest Computed Tomography Scan Length for Patients Positive for Coronavirus Disease 2019: Dose Reduction and Impact on Diagnostic Utility.
This study used the Personalized Rapid Estimation of Dose in CT (PREDICT) tool to estimate patient-specific organ doses from CT image data. PREDICT is a research tool that combines a linear Boltzmann transport equation solver for radiation dose map generation with deep learning algorithms for organ contouring. Computed tomography images from 74 subjects in the Medical Imaging Data Resource Center-RSNA International COVID-19 Open Radiology Database data set (chest CT of adult patients positive for COVID-19), which included expert annotations including "infectious opacities," were analyzed. First, the full z-scan length of the CT image data set was evaluated. Next, the z-scan length was reduced from the left hemidiaphragm to the top of the aortic arch. Generic dose reduction based on dose-length product (DLP) and patient-specific organ dose reductions were calculated. The percentage of infectious opacities excluded from the reduced z-scan length was used to quantify the effect on diagnostic utility. Generic dose reduction, based on DLP, was 69%. The organ dose reduction ranged from approximately 18% (breasts) to approximately 64% (bone surface and bone marrow). On average, 12.4% of the infectious opacities per patient were not included in the reduced z-coverage, of which 5.1% were above the top of the arch and 7.5% below the left hemidiaphragm. Limiting the z-scan length of chest CTs reduced radiation dose without significantly compromising diagnostic utility in COVID-19 patients. PREDICT demonstrated that patient-specific organ dose reductions varied from the generic dose reduction based on DLP.
Journal of computer assisted tomography
"2022-04-12T00:00:00"
[ "SaraPrincipi", "StacyO'Connor", "LubaFrank", "Taly GilatSchmidt" ]
10.1097/RCT.0000000000001312 10.1016/j.chest.2020.04.003 10.1183/16000617.0076-2018 10.1007/s00330-020-07034-x 10.1148/radiol.2020203453 10.1542/peds.2007-1910 10.1001/jama.298.3.317 10.1111/j.1526-4610.2008.01071.x 10.1007/s10140-015-1340-7 10.7937/VTW4-X588 10.1148/radiol.2021203957 10.1007/s10278-013-9622-7 10.1118/1.4824918 10.1118/1.4933197 10.1002/mp.13305 10.1002/mp.14494 10.1002/mp.15485 10.1002/mp.15301 10.1097/RCT.0b013e318198cd18 10.1007/s11547-020-01237-4 10.1148/ryct.2020209004 10.17226/11340 10.7937/91ah-v663 10.1002/mp.13141 10.7937/K9/TCIA.2017.3r3fvz08 10.1002/acm2.12505 10.1118/1.3298015
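The generic dose reduction quoted above is a simple percentage change in dose-length product (DLP) between the full-length and reduced-length scans. The DLP values below are hypothetical, chosen only to reproduce a 69% reduction like the one reported:

```python
def generic_dose_reduction(dlp_full, dlp_reduced):
    """Percent dose reduction implied by the change in dose-length product (DLP)."""
    return 100.0 * (dlp_full - dlp_reduced) / dlp_full

# Hypothetical DLP values in mGy*cm for a full chest CT and the shortened z-scan.
print(round(generic_dose_reduction(400.0, 124.0)))  # 69
```

Because DLP scales with scan length at fixed technique, this generic figure tracks z-coverage alone; the paper's point is that per-organ reductions computed by PREDICT deviate from it, since organs sit at different positions along the z-axis.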
Deep Learning-Based Automatic CT Quantification of Coronavirus Disease 2019 Pneumonia: An International Collaborative Study.
We aimed to develop and validate the automatic quantification of coronavirus disease 2019 (COVID-19) pneumonia on computed tomography (CT) images. This retrospective study included 176 chest CT scans of 131 COVID-19 patients from 14 Korean and Chinese institutions from January 23 to March 15, 2020. Two experienced radiologists semiautomatically drew pneumonia masks on CT images to develop the 2D U-Net for segmenting pneumonia. External validation was performed using Japanese (n = 101), Italian (n = 99), Radiopaedia (n = 9), and Chinese data sets (n = 10). The primary measures of the system's performance were correlation coefficients for the extent (%) and weight (g) of pneumonia in comparison with visual CT scores or human-derived segmentation. Multivariable logistic regression analyses were performed to evaluate the association of the extent and weight with symptoms in the Japanese data set and with a composite outcome (respiratory failure and death) in the Spanish data set (n = 115). In the internal test data set, the intraclass correlation coefficients between U-Net outputs and references for the extent and weight were 0.990 and 0.993. In the Japanese data set, the Pearson correlation coefficients between U-Net outputs and visual CT scores were 0.908 and 0.899. In the other external data sets, intraclass correlation coefficients were 0.949-0.965 (extent) and 0.978-0.993 (weight). Extent and weight in the top quartile were independently associated with symptoms (odds ratio, 5.523 and 10.561; P = 0.041 and 0.016) and the composite outcome (odds ratio, 9.365 and 7.085; P = 0.021 and P = 0.035). Automatically quantified CT extent and weight of COVID-19 pneumonia were well correlated with human-derived references and independently associated with symptoms and prognosis in multinational external data sets.
Journal of computer assisted tomography
"2022-04-12T00:00:00"
[ "Seung-JinYoo", "XiaolongQi", "ShoheiInui", "HyungjinKim", "Yeon JooJeong", "Kyung HeeLee", "Young KyungLee", "Bae YoungLee", "Jin YongKim", "Kwang NamJin", "Jae-KwangLim", "Yun-HyeonKim", "Ki BeomKim", "ZichengJiang", "ChuxiaoShao", "JunqiangLei", "ShengqiangZou", "HongqiuPan", "YeGu", "GuoZhang", "Jin MoGoo", "Soon HoYoon" ]
10.1097/RCT.0000000000001303
COVID-CCD-Net: COVID-19 and colon cancer diagnosis system with optimized CNN hyperparameters using gradient-based optimizer.
Coronavirus disease 2019 (COVID-19) is a disease caused by a new type of coronavirus which turned into a pandemic within a short time. The reverse transcription-polymerase chain reaction (RT-PCR) test is used for the diagnosis of COVID-19 in national healthcare centers. Because the number of PCR test kits is often limited, it is sometimes difficult to diagnose the disease at an early stage. However, X-ray technology is accessible nearly all over the world and can detect symptoms of COVID-19 successfully. Another disease which affects people's lives to a great extent is colorectal cancer. Tissue microarray (TMA) is a technological method which is widely used for its high performance in the analysis of colorectal cancer. Computer-assisted approaches which can classify colorectal cancer in TMA images are also needed. In this respect, the present study proposes a convolutional neural network (CNN) classification approach whose parameters are optimized using the gradient-based optimizer (GBO) algorithm. Thanks to the proposed approach, COVID-19, normal, and viral pneumonia cases in various chest X-ray images can be classified accurately. Additionally, other tissue types, such as epithelial and stromal regions in epidermal growth factor receptor (EFGR) colon TMAs, can also be classified. The proposed approach is called COVID-CCD-Net. AlexNet, DarkNet-19, Inception-v3, MobileNet, ResNet-18, and ShuffleNet architectures were used in COVID-CCD-Net, and the hyperparameters of these architectures were optimized for the proposed approach. Two different medical image classification datasets, namely COVID-19 and Epistroma, were used in the present study. The experimental findings demonstrate that the proposed approach increased the classification performance of the non-optimized CNN architectures significantly and displayed a very high classification performance even with a very low number of epochs.
Medical & biological engineering & computing
"2022-04-10T00:00:00"
[ "SonerKiziloluk", "EserSert" ]
10.1007/s11517-022-02553-9 10.1038/s41564-020-0695-z 10.1080/07391102.2020.1767212 10.1016/j.bspc.2020.102365 10.1148/radiol.2020200527 10.1016/j.compbiomed.2020.104181 10.1016/j.chaos.2020.110245 10.1016/j.media.2020.101794 10.1016/j.chaos.2020.110495 10.1016/j.asoc.2020.106859 10.1038/nm0798-844 10.1093/annonc/mdi006 10.1016/j.ins.2020.06.037 10.1016/j.envpol.2020.115618 10.1016/j.swevo.2019.100643 10.1016/j.eswa.2020.113506 10.1016/j.irbm.2020.10.006 10.1016/j.ijleo.2018.07.044 10.1016/j.isatra.2020.10.052 10.1016/j.compag.2020.105456 10.1016/j.procs.2020.09.075 10.5555/2188385.2188395 10.1016/j.swevo.2019.06.002 10.1049/iet-its.2018.5127 10.1111/coin.12350 10.1016/j.eswa.2021.115525 10.1109/TEVC.2021.3060833 10.1109/JPROC.2015.2494218 10.1109/ACCESS.2021.3091729 10.1016/j.compbiomed.2019.03.017 10.1016/j.cmpb.2020.105608 10.1016/j.chaos.2020.109944 10.1007/s10096-020-03901-z 10.1007/s13246-020-00865-4 10.1109/RBME.2020.2987975 10.1186/s41747-020-00203-z 10.1016/j.cmpb.2020.105581 10.1016/j.neucom.2016.01.034 10.1186/1746-1596-7-22 10.1016/j.autcon.2018.07.008 10.1007/s13244-018-0639-9 10.1155/2020/2616510 10.1109/ACCESS.2020.3010287 10.1016/j.compbiomed.2020.103792 10.1007/s00330-021-07715-1 10.1007/s00330-021-08050-1 10.1007/s12539-020-00393-5 10.1007/s10489-020-01904-z 10.1109/JBHI.2017.2691738 10.1117/1.JEI.27.1.011002
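The hyperparameter optimization described in the abstract above can be illustrated with a deliberately simplified, gradient-free search. This random-search sketch is a hypothetical stand-in for the paper's gradient-based optimizer (GBO), and the search space and objective below are invented for illustration only.

```python
import random

def random_search(objective, space, n_trials=30, seed=0):
    """Minimal hyperparameter search: sample configurations from `space`
    (name -> list of candidate values) and keep the best by `objective`.
    A simplified stand-in for a metaheuristic search such as GBO."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Draw one candidate value per hyperparameter.
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy search space and objective (both invented for illustration).
space = {"lr": [0.1, 0.01, 0.001], "batch": [16, 32]}
objective = lambda c: (1.0 if c["batch"] == 32 else 0.0) - abs(c["lr"] - 0.01)
best_cfg, best_score = random_search(objective, space)
```

In the paper, the objective would be validation accuracy of the CNN trained under each configuration, and GBO would guide the sampling instead of drawing configurations uniformly.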
Generalizability assessment of COVID-19 3D CT data for deep learning-based disease detection.
Artificial intelligence technologies for classification/detection of COVID-19-positive cases suffer from limited generalizability. Moreover, accessing and preparing another large dataset is not always feasible and is time-consuming. Several studies have combined smaller COVID-19 CT datasets into "supersets" to maximize the number of training samples. This study aims to assess generalizability by splitting datasets into different portions based on 3D CT images using deep learning. Two large datasets, including 1110 3D CT images, were split into five segments of 20% each. The first 20% segment of each dataset was separated as a holdout test set. 3D-CNN training was performed with the remaining 80% from each dataset. Two small external datasets were also used to independently evaluate the trained models. The combination of 80% of each dataset achieved an accuracy of 91% on the Iranmehr and 83% on the Moscow holdout test datasets. Results indicated that 80% of the primary datasets are adequate for fully training a model. Additional fine-tuning using 40% of a secondary dataset helps the model generalize to a third, unseen dataset. The highest accuracy achieved through transfer learning was 85% on the LDCT dataset and 83% on the Iranmehr holdout test set when retrained on 80% of the Iranmehr dataset. While the combination of both datasets produced the best results, different combinations and transfer learning still produced generalizable results. Adopting the proposed methodology may help to obtain satisfactory results in the case of limited external datasets.
Computers in biology and medicine
"2022-04-08T00:00:00"
[ "MaryamFallahpoor", "SubrataChakraborty", "Mohammad TavakoliHeshejin", "HosseinChegeni", "Michael JamesHorry", "BiswajeetPradhan" ]
10.1016/j.compbiomed.2022.105464 10.1056/NEJMp2006141 10.1016/S0140-6736(20)30211-7 10.1056/NEJMoa2108891 10.1002/jmv.27515 10.1016/j.ijsu.2020.02.034 10.1016/j.jmii.2020.02.012 10.1016/S2213-2600(20)30076 10.1016/j.jpha.2020.02.010 10.1148/radiol.2020200432 10.1155/2021/5528144 10.1109/ACCESS.2020.3027685 10.22114/ajem.v4i2s.451 10.1007/s10489-020-01826-w 10.1109/ACCESS.2020.3005510 10.1016/S1473-3099(20)30241-3 10.1148/radiol.2020200343 10.1016/j.jacr.2019.05.036 10.1007/s12525-021-00475-2 10.1186/s40537-021-00444-8 10.1038/s41467-020-17971-2 10.1016/j.compbiomed.2020.104037 10.1016/j.compbiomed.2020.103795 10.1136/bmjopen-2020-042946 10.1016/j.cell.2020.04.045 10.1109/RBME.2020.2987975 10.1148/radiol.2020200905 10.1038/s41746-021-00399-3 10.2196/19569 10.1007/s10140-020-01886-y 10.1080/07391102.2020.1788642 10.1109/TMI.2020.2996256 10.1016/S0140-6736(20)32589-7 10.1109/ACCESS.2021.3079716 10.1016/j.imu.2020.100427 10.1016/j.patrec.2021.09.012 10.3390/s21238045 10.1109/TMI.2020.2995965 10.1016/j.inffus.2021.04.008 10.3934/mbe.2021456 10.1016/j.bspc.2021.102588 10.17816/DD46826 10.1016/j.patcog.2021.107848 10.1016/j.patcog.2021.108135 10.1109/JSEN.2021.3076767 10.1101/2020.04.24.20078584 10.21227/mxb3-7j48 10.1186/s41747-020-00173-2 10.3390/s22020506 10.1109/ICASSP39728.2021.9414007 10.1371/journal.pone.0258214 10.1148/radiol.2020200905 10.1017/S1481803500013336 10.1148/radiol.2020192224 10.1109/ACCESS.2020.3016780 10.1109/CVPR.2009.5206848
Automatic COVID-19 detection mechanisms and approaches from medical images: a systematic review.
Since early 2020, Coronavirus Disease 2019 (COVID-19) has spread widely around the world. COVID-19 infects the lungs, leading to breathing difficulties. Early detection of COVID-19 is important for the prevention and treatment of the pandemic. Numerous sources of medical images (e.g., Chest X-Rays (CXR), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI)) are regarded as desirable techniques for diagnosing COVID-19 cases. Medical images of coronavirus patients show that the lungs are filled with sticky mucus that prevents them from inhaling. Today, Artificial Intelligence (AI)-based algorithms have made a significant shift in computer-aided diagnosis due to their effective feature extraction capabilities. In this survey, a complete and systematic review of the application of Machine Learning (ML) methods for the detection of COVID-19 is presented, focused on works that used medical images. We aimed to evaluate various ML-based techniques for detecting COVID-19 using medical imaging. A total of 26 papers were extracted from ACM, ScienceDirect, SpringerLink, Tech Science Press, and IEEE Xplore. Five ML categories are considered to review these mechanisms: supervised learning-based, deep learning-based, active learning-based, transfer learning-based, and evolutionary learning-based mechanisms. A number of articles are investigated in each group, and some directions for further research to improve the detection of COVID-19 using ML techniques are also discussed. In most articles, deep learning is used as the ML method, and most of the researchers used CXR images to diagnose COVID-19. Most articles reported accuracy to evaluate model performance; the accuracy of the studied models ranged from 0.84 to 0.99. The studies demonstrate the current status of AI techniques in the fight against COVID-19.
Multimedia tools and applications
"2022-04-07T00:00:00"
[ "Amir MasoudRahmani", "ElhamAzhir", "MortezaNaserbakht", "MokhtarMohammadi", "Adil Hussein MohammedAldalwie", "Mohammed KamalMajeed", "Sarkhel HTaher Karim", "MehdiHosseinzadeh" ]
10.1007/s11042-022-12952-7 10.1007/s10489-020-01829-7 10.1109/ACCESS.2020.2990893 10.1109/JIOT.2021.3050775 10.2196/19104 10.32604/cmc.2021.014265 10.1016/j.jiph.2020.06.028 10.1016/j.radi.2020.09.010 10.32604/cmc.2021.012955 10.1007/s13246-020-00865-4 10.1186/s41824-020-00086-8 10.1016/j.scs.2020.102589 10.24086/cuesj.v6n1y2022.pp1-6 10.1016/j.cmpb.2020.105608 10.1016/j.eswa.2020.113909 10.1145/3465398 10.1109/RBME.2020.2990959 10.1109/ACCESS.2020.3028012 10.1016/j.asoc.2020.106859 10.1016/j.jiph.2020.03.019 10.1109/ACCESS.2020.3016780 10.1109/ACCESS.2020.3005510 10.1016/j.artmed.2020.101981 10.1109/ACCESS.2019.2945338 10.5812/archcid.103232 10.1016/j.chaos.2020.110059 10.1038/nature14539 10.1016/j.artmed.2020.101985 10.1016/j.asoc.2020.106691 10.1109/ACCESS.2020.2995597 10.32604/cmc.2021.012874 10.1155/2020/9756518 10.1016/j.radi.2020.10.018 10.1016/j.chaos.2020.109944 10.1016/j.imu.2020.100360 10.1007/s10916-020-01562-1 10.1109/RBME.2020.2987975 10.1007/s10489-020-01862-6 10.1016/j.imu.2020.100427 10.1007/s13198-019-00863-0 10.1016/j.dsx.2020.04.012 10.1016/j.ins.2020.09.041 10.1186/s40537-016-0043-6 10.1016/j.media.2020.101913
Pre-processing methods in chest X-ray image classification.
The SARS-CoV-2 pandemic began in early 2020, paralyzing human life all over the world and threatening our security. Thus, the need for an effective, novel approach to diagnosing, preventing, and treating COVID-19 infections became paramount. This article proposes a machine learning-based method for the classification of chest X-ray images. We also examined pre-processing methods such as thresholding, blurring, and histogram equalization. We found that the F1-scores rose to 97%, 96%, and 99% for the three analyzed classes: healthy, COVID-19, and pneumonia, respectively. Our research provides proof that machine learning can be used to support medics in chest X-ray classification, and that improved pre-processing leads to improvements in accuracy, precision, recall, and F1-scores.
PloS one
"2022-04-06T00:00:00"
[ "AgataGiełczyk", "AnnaMarciniak", "MartynaTarczewska", "ZbigniewLutowski" ]
10.1371/journal.pone.0265949 10.1148/ryct.2020200034 10.1016/j.future.2020.04.013 10.1109/ACCESS.2020.3007656 10.3390/e23010090 10.1186/s12880-020-00529-5 10.1016/j.media.2020.101693 10.32604/cmc.2021.012955 10.1016/j.ijmedinf.2020.104284 10.1016/j.mehy.2020.109761 10.1016/j.cmpb.2018.04.025 10.1016/j.measurement.2019.05.076 10.1016/j.imu.2020.100391 10.1016/j.compbiomed.2020.103795 10.1016/j.eswa.2020.114054 10.1007/s10489-020-01902-1 10.1016/j.chaos.2020.110495 10.1016/j.compbiomed.2021.104425 10.1007/s42979-021-00762-x 10.1016/j.asoc.2021.107238 10.1016/j.cmpb.2020.105608 10.1007/s00521-020-05636-6 10.1038/s41598-021-95680-6
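The histogram-equalization pre-processing step examined in the abstract above can be sketched in pure Python; real pipelines would typically use OpenCV or scikit-image. The tiny low-contrast image below is an illustrative assumption, not data from the paper.

```python
def equalize_histogram(image, levels=256):
    """Histogram equalization for a 2D grayscale image (list of lists).

    Maps each gray level through the normalized cumulative histogram,
    spreading clustered intensities across the full [0, levels-1] range."""
    flat = [p for row in image for p in row]
    n = len(flat)
    # Histogram of gray levels.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function (CDF).
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    denom = max(n - cdf_min, 1)  # guard against constant images
    # Standard equalization mapping.
    remap = lambda p: round((cdf[p] - cdf_min) / denom * (levels - 1))
    return [[remap(p) for p in row] for row in image]

# A low-contrast 2x2 example: values clustered in [100, 103].
print(equalize_histogram([[100, 101], [102, 103]]))  # → [[0, 85], [170, 255]]
```

After equalization the four clustered gray levels are spread over the full 0-255 range, which is the contrast boost the paper exploits before classification.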
Tracking and predicting COVID-19 radiological trajectory on chest X-rays using deep learning.
Radiological findings on chest X-ray (CXR) have shown to be essential for the proper management of COVID-19 patients as the maximum severity over the course of the disease is closely linked to the outcome. As such, evaluation of future severity from current CXR would be highly desirable. We trained a repurposed deep learning algorithm on the CheXnet open dataset (224,316 chest X-ray images of 65,240 unique patients) to extract features that mapped to radiological labels. We collected CXRs of COVID-19-positive patients from an open-source dataset (COVID-19 image data collection) and from a multi-institutional local ICU dataset. The data was grouped into pairs of sequential CXRs and were categorized into three categories: 'Worse', 'Stable', or 'Improved' on the basis of radiological evolution ascertained from images and reports. Classical machine-learning algorithms were trained on the deep learning extracted features to perform immediate severity evaluation and prediction of future radiological trajectory. Receiver operating characteristic analyses and Mann-Whitney tests were performed. Deep learning predictions between "Worse" and "Improved" outcome categories and for severity stratification were significantly different for three radiological signs and one diagnostic ('Consolidation', 'Lung Lesion', 'Pleural effusion' and 'Pneumonia'; all P < 0.05). Features from the first CXR of each pair could correctly predict the outcome category between 'Worse' and 'Improved' cases with a 0.81 (0.74-0.83 95% CI) AUC in the open-access dataset and with a 0.66 (0.64-0.67 95% CI) AUC in the ICU dataset. Features extracted from the CXR could predict disease severity with a 52.3% accuracy in a 4-way classification. 
Severity evaluation trained on the COVID-19 image data collection had good out-of-distribution generalization when testing on the local dataset, with 81.6% of intubated ICU patients being classified as critically ill, and the predicted severity was correlated with the clinical outcome with a 0.639 AUC. CXR deep learning features show promise for classifying disease severity and trajectory. Once validated in studies incorporating clinical data and with larger sample sizes, this information may be considered to inform triage decisions.
Scientific reports
"2022-04-06T00:00:00"
[ "DanielGourdeau", "OlivierPotvin", "PatrickArchambault", "CarlChartrand-Lefebvre", "LouisDieumegarde", "RezaForghani", "ChristianGagné", "AlexandreHains", "DavidHornstein", "HuyLe", "SimonLemieux", "Marie-HélèneLévesque", "DiegoMartin", "LorneRosenbloom", "AnTang", "FabrizioVecchio", "IssacYang", "NathalieDuchesne", "SimonDuchesne" ]
10.1038/s41598-022-09356-w 10.1503/cmaj.200465 10.1056/NEJMe2005477 10.1016/S1473-3099(20)30134-1 10.2214/AJR.20.23034 10.1001/jama.2020.1585 10.2214/AJR.20.22976 10.1148/radiol.2020201160 10.1148/ryct.2020200028 10.1016/j.crad.2020.03.003 10.1038/s41598-020-79139-8 10.1016/j.clinimag.2020.11.004 10.1007/s00330-020-07270-1 10.1016/j.ijid.2020.05.021 10.1056/NEJMp2005689 10.1016/j.cell.2020.04.045 10.1038/s41591-020-0931-3 10.1186/s43055-021-00524-y 10.3233/XST-200831 10.1016/j.cmpb.2020.105581 10.1007/s13246-020-00865-4 10.1371/journal.pone.0236621 10.5152/dir.2020.20205 10.1136/bmjinnov-2020-000593 10.1373/clinchem.2015.246280 10.1086/589754 10.1038/s42256-021-00307-0 10.1136/bmj.m1328 10.1038/s41586-020-2521-4
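The ROC analyses and Mann-Whitney tests used in the study above are closely related: the AUC equals the Mann-Whitney U statistic normalized by the number of positive-negative pairs. A minimal sketch with made-up scores (not the paper's data):

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored above a randomly chosen
    negative case, counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One misranked pair out of four → AUC 0.75.
print(auc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # → 0.75
```

An AUC of 0.5 corresponds to chance-level ranking, which is why the 0.81 and 0.66 values reported above indicate predictive signal of different strengths in the two datasets.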
Low-dose COVID-19 CT Image Denoising Using CNN and its Method Noise Thresholding.
Noise in computed tomography (CT) images may occur due to low radiation doses. Hence, the main aim of this paper is to reduce the noise in low-dose CT images so that the risk of a high radiation dose can be reduced. The novel coronavirus outbreak has ushered in new areas of research in medical instrumentation and technology. Medical diagnostics and imaging are one of the ways in which the area and level of infection can be detected. COVID-19 attacks people with less immunity, so infants, kids, and pregnant women are more vulnerable to the infection and need to undergo CT scanning to find the infection level. But the high-radiation diagnostic is also fatal for them, so the intensity of radiation needs to be reduced significantly, which may generate noise in the CT images. This paper introduces a new denoising technique for low-dose COVID-19 CT images using a convolutional neural network (CNN) and a noise-based thresholding method. The major concern of the methodology is reducing the risk associated with radiation during diagnosis. The results are evaluated visually and using standard performance metrics. Comparative analysis shows that the proposed work gives better outcomes. The proposed low-dose COVID-19 CT image denoising model is therefore concluded to have good potential to be effective in various pragmatic medical image processing applications, in noise suppression, and in clinical edge preservation.
Current medical imaging
"2022-04-06T00:00:00"
[ "ManojDiwakar", "Neeraj KumarPandey", "RavinderSingh", "DilipSisodia", "ChandrakalaArya", "PrabhishekSingh", "ChinmayChakraborty" ]
10.2174/1573405618666220404162241
COVID-19 prognostic modeling using CT radiomic features and machine learning algorithms: Analysis of a multi-institutional dataset of 14,339 patients.
We aimed to analyze the prognostic power of CT-based radiomics models using data from 14,339 COVID-19 patients. Whole-lung segmentations were performed automatically using a deep learning-based model, and 107 intensity and texture radiomics features were extracted. We used four feature selection algorithms and seven classifiers. We evaluated the models using ten different splitting and cross-validation strategies, including non-harmonized and ComBat-harmonized datasets. The sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were reported. In the test dataset (4,301 patients) consisting of CT- and/or RT-PCR-positive cases, an AUC, sensitivity, and specificity of 0.83 ± 0.01 (CI95%: 0.81-0.85), 0.81, and 0.72, respectively, were obtained by the ANOVA feature selector + Random Forest (RF) classifier. Similar results were achieved in RT-PCR-only positive test sets (3,644 patients). In the ComBat-harmonized dataset, the Relief feature selector + RF classifier resulted in the highest AUC, reaching 0.83 ± 0.01 (CI95%: 0.81-0.85), with a sensitivity and specificity of 0.77 and 0.74, respectively. ComBat harmonization did not yield a statistically significant improvement compared to the non-harmonized dataset. In leave-one-center-out validation, the combination of the ANOVA feature selector and RF classifier resulted in the highest performance. Lung CT radiomics features can be used for robust prognostic modeling of COVID-19. The predictive power of the proposed CT radiomics model is more reliable when using a large multicentric heterogeneous dataset, and it may be used prospectively in clinical settings to manage COVID-19 patients.
Computers in biology and medicine
"2022-04-05T00:00:00"
[ "IsaacShiri", "YazdanSalimi", "MasoumehPakbin", "GhasemHajianfar", "Atlas HaddadiAvval", "AmirhosseinSanaat", "ShayanMostafaei", "AzadehAkhavanallaf", "AbdollahSaberi", "ZahraMansouri", "DariushAskari", "MohammadrezaGhasemian", "EhsanSharifipour", "SalehSandoughdaran", "AhmadSohrabi", "ElhamSadati", "SomayehLivani", "PooyaIranpour", "ShahriarKolahi", "MaziarKhateri", "SalarBijari", "Mohammad RezaAtashzar", "Sajad PShayesteh", "BardiaKhosravi", "Mohammad RezaBabaei", "ElnazJenabi", "MohammadHasanian", "AlirezaShahhamzeh", "Seyaed YaserForoghi Ghomi", "AbolfazlMozafari", "ArashTeimouri", "FatemehMovaseghi", "AzinAhmari", "NedaGoharpey", "RamaBozorgmehr", "HesamaddinShirzad-Aski", "RoozbehMortazavi", "JalalKarimi", "NazaninMortazavi", "SimaBesharat", "MandanaAfsharpad", "HamidAbdollahi", "ParhamGeramifar", "Amir RezaRadmard", "HosseinArabi", "KiaraRezaei-Kalantari", "MehrdadOveisi", "ArmanRahmim", "HabibZaidi" ]
10.1016/j.compbiomed.2022.105467
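The "ANOVA feature selector" named in the abstract above ranks each radiomic feature by its one-way ANOVA F statistic across outcome classes and keeps the top-scoring ones (scikit-learn's `SelectKBest` with `f_classif` implements the same idea). A pure-Python sketch with toy data invented for illustration:

```python
def anova_f(feature_values, labels):
    """One-way ANOVA F statistic for a single feature across class labels:
    between-group variance divided by within-group variance."""
    groups = {}
    for x, y in zip(feature_values, labels):
        groups.setdefault(y, []).append(x)
    n, k = len(feature_values), len(groups)
    grand_mean = sum(feature_values) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups.values())
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups.values())
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def select_top_features(X, y, top_k):
    """Rank feature columns of X by F score and keep the top_k indices."""
    scores = [(anova_f([row[j] for row in X], y), j)
              for j in range(len(X[0]))]
    scores.sort(reverse=True)
    return sorted(j for _, j in scores[:top_k])

# Feature 0 separates the classes; feature 1 is pure noise.
X = [[0.0, 5.0], [0.1, 1.0], [10.0, 5.0], [10.1, 1.0]]
y = [0, 0, 1, 1]
print(select_top_features(X, y, 1))  # → [0]
```

The retained feature indices would then feed the downstream classifier (RF in the paper's best pipeline).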
COVID-WideNet-A capsule network for COVID-19 detection.
Ever since the outbreak of COVID-19, the entire world has been grappling with panic over its rapid spread. Consequently, it is of utmost importance to detect its presence. Timely diagnostic testing leads to the quick identification, treatment, and isolation of infected people. A number of deep learning classifiers have been proven to provide encouraging results with higher accuracy compared to the conventional RT-PCR test. Chest radiography, particularly using X-ray images, is a prime imaging modality for detecting suspected COVID-19 patients. However, the performance of these approaches still needs to be improved. In this paper, we propose a capsule network called COVID-WideNet for diagnosing COVID-19 cases using Chest X-ray (CXR) images. Experimental results have demonstrated that a discriminatively trained, multi-layer capsule network achieves state-of-the-art performance on the
Applied soft computing
"2022-04-05T00:00:00"
[ "P KGupta", "Mohammad KhubebSiddiqui", "XiaodiHuang", "RubenMorales-Menendez", "HarshPawar", "HugoTerashima-Marin", "Mohammad SaifWajid" ]
10.1016/j.asoc.2022.108780 10.1007/s00330-020-06748-2 10.1016/j.chaos.2020.110122 10.1590/1678-4324-2020190736 10.3923/jas.2013.416.422 10.1016/j.chaos.2020.109944
Automated System for Identifying COVID-19 Infections in Computed Tomography Images Using Deep Learning Models.
Coronavirus disease 2019 (COVID-19) is a novel disease that affects healthcare on a global scale and cannot be ignored because of its high fatality rate. Computed tomography (CT) images are presently being employed to assist doctors in detecting COVID-19 in its early stages. In several scenarios, a combination of epidemiological criteria (contact during the incubation period), the existence of clinical symptoms, laboratory tests (nucleic acid amplification tests), and clinical imaging-based tests is used to diagnose COVID-19. This method can miss patients and cause more complications. Deep learning is one of the techniques that has been proven to be prominent and reliable in several diagnostic domains involving medical imaging. This study utilizes a convolutional neural network (CNN), a stacked autoencoder, and a deep neural network to develop a COVID-19 diagnostic system. In this system, classification undergoes some modification before the three techniques are applied to CT images to determine normal and COVID-19 cases. A large-scale and challenging CT image dataset was used to train the employed deep learning models and report their final performance. Experimental outcomes show that the highest accuracy rate was achieved using the CNN model, with an accuracy of 88.30%, a sensitivity of 87.65%, and a specificity of 87.97%. Furthermore, the proposed system has outperformed current state-of-the-art models in detecting the COVID-19 virus using CT images.
Journal of healthcare engineering
"2022-04-05T00:00:00"
[ "Karrar HameedAbdulkareem", "Salama AMostafa", "Zainab NAl-Qudsy", "Mazin AbedMohammed", "Alaa SAl-Waisy", "SeifedineKadry", "JinseokLee", "YunyoungNam" ]
10.1155/2022/5329014 10.1016/j.bspc.2021.103128 10.1016/j.bspc.2021.103182 10.1016/j.idm.2020.02.002 10.2196/21788 10.1259/bjr.20210759 10.32604/cmc.2021.012955 10.1007/978-3-030-79753-9_4 10.22059/jitm.2020.79187 10.1111/exsy.12759 10.1016/j.imu.2020.100427 10.3390/biology11010043 10.1016/j.rinp.2021.105045 10.1007/978-981-16-1342-5_23 10.1007/s00521-021-05820-2 10.1016/j.bspc.2021.103326 10.1007/s00330-021-07715-1 10.1007/978-3-030-55258-9_17 10.1002/mp.14609 10.1148/radiol.2020200905 10.1016/j.eng.2020.04.010 10.1109/TCBB.2021.3065361 10.3390/e22050517 10.1111/all.14238 10.1148/radiol.2020200432 10.1007/s11042-021-11153-y 10.1148/rg.246045065 10.1148/rg.2015140232 10.2147/OTT.S80733 10.1038/srep46479 10.1007/s13246-020-00865-4 10.1007/s10489-020-01829-7 10.3389/fpubh.2021.744100 10.3390/covid1010034 10.1109/ACCESS.2020.3010287 10.3390/healthcare9121614 10.1007/978-3-642-21735-7_7 10.1155/2017/5218247 10.1038/s41597-021-00900-3 10.24018/ejeng.2021.6.5.2485 10.1016/j.bbe.2021.05.013 10.1016/j.compbiomed.2020.104037 10.1016/j.imu.2020.100427
Lung Disease Classification in CXR Images Using Hybrid Inception-ResNet-v2 Model and Edge Computing.
Chest X-ray (CXR) imaging is one of the most widely used and economical tests to diagnose a wide range of diseases. However, even for expert radiologists, it is a challenge to accurately diagnose diseases from CXR samples. Furthermore, there remains an acute shortage of trained radiologists worldwide. In the present study, a range of machine learning (ML), deep learning (DL), and transfer learning (TL) approaches have been evaluated to classify diseases in an openly available CXR image dataset. A combination of the synthetic minority over-sampling technique (SMOTE) and weighted class balancing is used to alleviate the effects of class imbalance. A hybrid Inception-ResNet-v2 transfer learning model coupled with data augmentation and image enhancement gives the best accuracy. The model is deployed in an edge environment using Amazon IoT Core to automate the task of disease detection in CXR images with three categories, namely pneumonia, COVID-19, and normal. Comparative analysis has been given in various metrics such as precision, recall, accuracy, AUC-ROC score, etc. The proposed technique gives an average accuracy of 98.66%. The accuracies of other TL models, namely SqueezeNet, VGG19, ResNet50, and MobileNetV2 are 97.33%, 91.66%, 90.33%, and 76.00%, respectively. Further, a DL model, trained from scratch, gives an accuracy of 92.43%. Two feature-based ML classification techniques, namely support vector machine with local binary pattern (SVM + LBP) and decision tree with histogram of oriented gradients (DT + HOG) yield an accuracy of 87.98% and 86.87%, respectively.
Journal of healthcare engineering
"2022-04-05T00:00:00"
[ "Chandra ManiSharma", "LakshayGoyal", "Vijayaraghavan MChariar", "NavelSharma" ]
10.1155/2022/9036457 10.3390/sym12071146 10.3390/diagnostics11122208 10.1109/ssci.2018.8628869 10.3390/diagnostics11112025 10.1016/j.compbiomed.2021.104319 10.1109/access.2020.3010287 10.1016/j.sysarc.2020.101830 10.1007/s00500-021-06514-6 10.1016/j.compbiomed.2021.104401 10.1155/2021/8828404 10.1155/2021/9437538 10.1016/j.bspc.2019.04.031 10.1016/j.eswa.2020.114054 10.1016/j.chaos.2020.110495 10.1016/j.patrec.2019.11.013 10.2174/1573405616666200604163954 10.1016/j.cmpb.2019.06.005 10.1007/s10489-020-01829-7 10.1016/j.cmpb.2020.105532 10.1016/j.irbm.2019.10.006 10.1007/978-981-15-1624-5_9 10.3390/s21217116 10.1109/jbhi.2021.3110805 10.1109/cvpr.2018.00943 10.1016/j.media.2020.101839 10.7717/peerj-cs.495 10.1038/s41598-019-42294-8 10.1016/j.asoc.2021.107692 10.1007/s12553-021-00520-2 10.3892/etm.2020.8797 10.1002/ett.3710 10.1109/access.2020.3021983 10.1109/iotm.0001.2000138 10.1109/jiot.2021.3051844 10.1613/jair.953 10.1109/ACCESS.2019.2961511 10.1109/iccerec.2016.7814989 10.1007/978-3-319-23192-1_50 10.1016/j.imu.2021.100642
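SMOTE, used in the study above for class balancing, synthesizes new minority-class samples by interpolating between a minority sample and one of its nearest minority-class neighbours; in practice one would use `imbalanced-learn`'s `SMOTE` class. A minimal sketch on toy 2D points (the points and parameters are illustrative assumptions):

```python
import random

def smote_oversample(minority, n_new, k=2, seed=1):
    """Generate n_new synthetic minority samples: for each, pick a base
    sample, find its k nearest minority neighbours, and interpolate
    between the base and a random neighbour (the core SMOTE step)."""
    rng = random.Random(seed)
    dist2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours of `base` within the minority class itself.
        neighbours = sorted((s for s in minority if s is not base),
                            key=lambda s: dist2(base, s))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([b + gap * (n - b) for b, n in zip(base, nb)])
    return synthetic

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
new_samples = smote_oversample(minority, 5)
```

Because each synthetic point lies on a segment between two real minority samples, it stays inside the minority class's convex hull, which is why SMOTE complements (rather than replaces) the weighted class balancing also used in the paper.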
Machine Learning with Quantum Seagull Optimization Model for COVID-19 Chest X-Ray Image Classification.
Early and accurate detection of COVID-19 is an essential process to curb the spread of this deadly disease and its mortality rate. Chest radiology scanning is a significant tool for early management and diagnosis of COVID-19 since the virus targets the respiratory system. Chest X-ray (CXR) images are highly useful in the effective detection of COVID-19, thanks to their availability, cost-effectiveness, and rapid outcomes. In addition, Artificial Intelligence (AI) techniques such as deep learning (DL) models play a significant role in designing automated diagnostic processes using CXR images. With this motivation, the current study presents a new Quantum Seagull Optimization Algorithm with a DL-based COVID-19 diagnosis model, named the QSGOA-DL technique. The proposed QSGOA-DL technique intends to detect and classify COVID-19 with the help of CXR images. In this regard, the QSGOA-DL technique involves the design of EfficientNet-B4 as a feature extractor, whereas hyperparameter optimization is carried out with the help of the QSGOA technique. Moreover, the classification process is performed by a multilayer extreme learning machine (MELM) model. The novelty of the study lies in the design of QSGOA for hyperparameter optimization of the EfficientNet-B4 model. An extensive series of simulations was carried out on the benchmark test CXR dataset, and the results were assessed under different aspects. The simulation results demonstrate the promising performance of the proposed QSGOA-DL technique compared to recent approaches.
Journal of healthcare engineering
"2022-04-05T00:00:00"
[ "MahmoudRagab", "SamahAlshehri", "Nabil AAlhakamy", "WafaaAlsaggaf", "Hani AAlhadrami", "JaberAlyami" ]
10.1155/2022/6074538 10.1109/access.2021.3058537 10.1155/2021/6799202 10.1155/2021/3514821 10.1155/2021/5528441 10.1148/radiol.2020201160 10.1016/j.compbiomed.2020.103792 10.1109/tmi.2020.2995965 10.1038/s41591-019-0447-x 10.1109/tmi.2020.2994459 10.1109/access.2020.3040245 10.1109/access.2020.3025010 10.1109/jtehm.2021.3077142 10.26599/bdma.2020.9020012 10.1109/jbhi.2020.3018181 10.1109/tnnls.2021.3054306 10.1109/tip.2021.3058783 10.1109/tmi.2020.2996256 10.1016/b978-0-12-811318-9.00020-x 10.1016/j.compag.2020.105652 10.1155/2021/6639671 10.3390/app9081707 10.3390/rs10122036 10.1016/j.compbiomed.2021.104816
A privacy-aware method for COVID-19 detection in chest CT images using lightweight deep conventional neural network and blockchain.
With the global spread of the COVID-19 epidemic, a reliable method is required for identifying COVID-19 victims. The biggest issue in detecting the virus is a lack of testing kits that are both reliable and affordable, and due to the virus's rapid dissemination, medical professionals have trouble finding positive patients. The next real-life issue is sharing data with hospitals around the world while considering the organizations' privacy concerns: the primary challenges for training a global Deep Learning (DL) model are creating a collaborative platform and preserving confidentiality. This paper provides a model that receives a small quantity of data from various sources, like organizations or sections of hospitals, and trains a global DL model utilizing blockchain-based Convolutional Neural Networks (CNNs). In addition, we use the Transfer Learning (TL) technique to initialize layers rather than initializing them randomly, and discover which layers should be removed before selection. The blockchain system verifies the data, and the DL method trains the model globally while keeping the institutions' confidentiality. Furthermore, we gathered data from actual, newly diagnosed COVID-19 patients. Finally, we ran extensive experiments utilizing Python and its libraries, such as Scikit-Learn and TensorFlow, to assess the proposed method. We evaluated the work using five different datasets, including the Boukan Dr. Shahid Gholipour hospital, Tabriz Emam Reza hospital, Mahabad Emam Khomeini hospital, Maragheh Dr. Beheshti hospital, and Miandoab Abbasi hospital datasets, and our technique outperforms state-of-the-art methods on average in terms of precision (2.7%), recall (3.1%), F1 (2.9%), and accuracy (2.8%).
Computers in biology and medicine
"2022-04-03T00:00:00"
[ "ArashHeidari", "ShivaToumaj", "Nima JafariNavimipour", "MehmetUnal" ]
10.1016/j.compbiomed.2022.105461
Improved-Mask R-CNN: Towards an accurate generic MSK MRI instance segmentation platform (data from the Osteoarthritis Initiative).
Objective assessment of osteoarthritis (OA) Magnetic Resonance Imaging (MRI) scans can address the limitations of current OA assessment approaches. Detecting and extracting bone, cartilage, and joint fluid is a necessary component of objective OA assessment, which helps to quantify tissue characteristics such as volume and thickness. Many algorithms based on Artificial Intelligence (AI) have been proposed over recent years for segmenting bone and soft tissues. Most of these segmentation methods suffer from the class imbalance problem, cannot differentiate between instances of the same anatomic structure, or do not support segmenting a wide range of tissue sizes. Mask R-CNN is an instance segmentation framework, meaning it segments and distinguishes each object of interest, like different anatomical structures (e.g., bone and cartilage), using a single model. In this study, the Mask R-CNN architecture was deployed to address the need for a segmentation method that is applicable to different tissue scales, pathologies, and MRI sequences associated with OA, without suffering from imbalanced classes. In addition, we modified Mask R-CNN to improve segmentation accuracy around instance edges. A total of 500 adult knee MRI scans from the publicly available Osteoarthritis Initiative (OAI), and 97 hip MRI scans from adults with symptomatic hip OA, evaluated by two readers, were used for training and validating the network. Three specific modifications to Mask R-CNN yielded the improved Mask R-CNN (iMaskRCNN): an additional ROIAligned block, an extra decoder block in the segmentation header, and a skip connection joining them. The results were evaluated using the Hausdorff distance and Dice score for bone and cartilage segmentation, and differences in detected volume, Dice score, and coefficients of variation (CoV) for effusion segmentation. 
The iMaskRCNN led to improved bone and cartilage segmentation compared to Mask R-CNN, as indicated by the increase in dice score from 95% to 98% for the femur, 95% to 97% for the tibia, 71% to 80% for the femoral cartilage, and 81% to 82% for the tibial cartilage. For effusion detection, the dice score improved with iMaskRCNN (72%) versus Mask R-CNN (71%). The CoV values for effusion detection between Reader1 and Mask R-CNN (0.33), Reader1 and iMaskRCNN (0.34), Reader2 and Mask R-CNN (0.22), and Reader2 and iMaskRCNN (0.29) are close to the CoV between the two readers (0.21), indicating high agreement between the human readers and both Mask R-CNN and iMaskRCNN. Mask R-CNN and iMaskRCNN can reliably and simultaneously extract articular tissues of different scales involved in OA, forming the foundation for automated assessment of OA. The iMaskRCNN results show that the modification improved the network performance around the edges.
Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society
"2022-04-02T00:00:00"
[ "BanafsheFelfeliyan", "AbhilashHareendranathan", "GregorKuntze", "Jacob LJaremko", "Janet LRonsky" ]
10.1016/j.compmedimag.2022.102056
Facilitating standardized COVID-19 suspicion prediction based on computed tomography radiomics in a multi-demographic setting.
To develop an automatic COVID-19 Reporting and Data System (CO-RADS)-based classification in a multi-demographic setting. This multi-institutional review board-approved retrospective study included 2720 chest CT scans (mean age, 58 years [range 18-100 years]) from Italian and Russian patients. Three board-certified radiologists from three countries assessed randomly selected subcohorts from each population and provided CO-RADS-based annotations. CT radiomic features were extracted from the selected subcohorts after preprocessing steps such as lung lobe segmentation and automatic noise reduction. We compared three machine learning models, logistic regression (LR), multilayer perceptron (MLP), and random forest (RF), for the automated CO-RADS classification. Model evaluation was carried out in two scenarios: first, training on a mixed multi-demographic subcohort and testing on an independent hold-out dataset; second, training on a single demography and externally validating on the other demography. The overall inter-observer agreement for the CO-RADS scoring between the radiologists was substantial (k = 0.80). Irrespective of the type of validation test scenario, suspected COVID-19 CT scans were identified with an accuracy of 84%. SHapley Additive exPlanations (SHAP) interpretation showed that the "wavelet_(LH)_GLCM_Imc1" feature had a positive impact on COVID prediction both with and without noise reduction. The application of noise reduction improved the overall performance across all classifier types. Using an automated model based on the COVID-19 Reporting and Data System (CO-RADS), we achieved clinically acceptable performance in a multi-demographic setting. This approach can serve as a standardized tool for automated COVID-19 assessment. • Automatic CO-RADS scoring of large-scale multi-demographic chest CTs with mean AUC of 0.93 ± 0.04.
• Validation procedure resembles TRIPOD 2b and 3 categories, enhancing the quality of experimental design to test the cross-dataset domain shift between institutions aiding clinical integration. • Identification of COVID-19 pneumonia in the presence of community-acquired pneumonia and other comorbidities with an AUC of 0.92.
European radiology
"2022-04-02T00:00:00"
[ "YeshaswiniNagaraj", "Gondade Jonge", "AnnaAndreychenko", "GabrielePresti", "Matthias AFink", "NikolayPavlov", "Carlo CQuattrocchi", "SergeyMorozov", "RaymondVeldhuis", "MatthijsOudkerk", "Peter M Avan Ooijen" ]
10.1007/s00330-022-08730-6 10.1177/0846537120938328 10.1109/RBME.2020.2990959 10.1097/IM9.0000000000000022 10.1148/radiol.2020201473 10.1148/ryct.2020200152 10.5152/dir.2021.201032 10.1148/radiol.2020202439 10.1038/s42256-021-00307-0 10.1259/bjr.20160665 10.1007/s00330-020-06939-x 10.1038/nrclinonc.2017.141 10.17816/DD46826 10.21037/qims-20-782 10.1007/s11604-018-0798-0 10.1259/bjr.20181019 10.1259/bjr.20200677 10.1158/0008-5472.CAN-17-0339 10.1038/srep37241 10.1613/jair.953 10.1038/s41551-018-0304-0 10.15212/bioi-2020-0015 10.1007/s00330-020-07032-z 10.1148/ryct.2020200322 10.1007/s00259-020-05075-4 10.1038/s41598-021-83237-6 10.1186/s12916-014-0241-z 10.1186/s12967-020-02692-3 10.1038/s41746-020-00369-1 10.1148/radiol.2020200905 10.1007/s00330-020-06956-w 10.3390/diagnostics11010041 10.1186/s12879-021-06331-0 10.1016/j.media.2020.101844 10.1002/mp.15178
Applications of Artificial Intelligence in Myopia: Current and Future Directions.
With the continuous development of computer technology, big data acquisition and imaging methods, the application of artificial intelligence (AI) in medical fields is expanding. The use of machine learning and deep learning in the diagnosis and treatment of ophthalmic diseases is becoming more widespread. As one of the main causes of visual impairment, myopia has a high global prevalence. Early screening or diagnosis of myopia, combined with other effective therapeutic interventions, is very important to maintain a patient's visual function and quality of life. Through the training of fundus photography, optical coherence tomography, and slit lamp images and through platforms provided by telemedicine, AI shows great application potential in the detection, diagnosis, progression prediction and treatment of myopia. In addition, AI models and wearable devices based on other forms of data also perform well in the behavioral intervention of myopia patients. Admittedly, there are still some challenges in the practical application of AI in myopia, such as the standardization of datasets; acceptance attitudes of users; and ethical, legal and regulatory issues. This paper reviews the clinical application status, potential challenges and future directions of AI in myopia and proposes that the establishment of an AI-integrated telemedicine platform will be a new direction for myopia management in the post-COVID-19 period.
Frontiers in medicine
"2022-04-02T00:00:00"
[ "ChenchenZhang", "JingZhao", "ZheZhu", "YanxiaLi", "KeLi", "YuanpingWang", "YajuanZheng" ]
10.3389/fmed.2022.840498 10.1016/j.preteyeres.2019.04.003 10.1016/j.ejrad.2019.02.038 10.1016/j.jacc.2018.12.054 10.3322/caac.21552 10.1001/jama.2017.18152 10.1001/jama.2016.17216 10.1016/j.ophtha.2017.02.008 10.1167/iovs.16-19964 10.1001/jamaophthalmol.2017.3782 10.1007/s00417-017-3850-3 10.1016/j.ophtha.2017.10.031 10.1016/j.ajo.2018.10.007 10.1038/s41598-018-33013-w 10.1016/j.ophtha.2018.04.020 10.1016/j.ophtha.2016.05.029 10.1167/iovs.18-23887 10.21037/atm.2019.12.39 10.1097/apo.0000000000000394 10.1038/s41572-020-00231-4 10.1609/aimag.v27i4.1904 10.3390/s21134412 10.1109/tpami.2013.50 10.21037/atm-20-976 10.1038/nature14539 10.1016/j.ejca.2019.06.012 10.1016/j.ejca.2019.06.013 10.1016/j.ebiom.2019.04.055 10.1007/978-3-030-33128-3_4 10.1161/circulationaha.115.001593 10.1016/j.preteyeres.2017.09.004 10.1016/j.ophtha.2016.01.006 10.1016/j.ajo.2020.07.034 10.1001/jamaophthalmol.2020.6239 10.1038/s41598-017-14507-5 10.1016/j.preteyeres.2020.100900 10.1109/iembs.2009.5333517 10.1371/journal.pone.0065736 10.1016/j.cmpb.2020.105920 10.1016/j.oret.2021.02.006 10.1016/s2589-7500(21)00055-8 10.1136/bjophthalmol-2020-317825 10.1371/journal.pone.0227240 10.1136/bjophthalmol-2021-319129 10.1371/journal.pmed.1002674 10.3390/ijerph17020463 10.3928/1081-597X-19980501-15 10.1080/08820538.2019.1569075 10.1001/jamaophthalmol.2020.0507 10.1167/tvst.9.2.8 10.1016/j.ajo.2019.10.015 10.1159/000453528 10.3390/bios11060182 10.1097/apo.0000000000000293 10.1016/j.ophtha.2017.08.027 10.7717/peerj.7202 10.1016/j.ajo.2019.04.019 10.1136/bjophthalmol-2020-316193 10.1016/j.jcrs.2016.12.021 10.1016/j.jcrs.2019.08.014 10.1136/bmjophth-2018-000251 10.1016/j.ophtha.2007.12.019 10.1016/j.ophtha.2012.04.020 10.1167/tvst.6.3.20 10.1007/s00417-016-3440-9 10.1155/2018/9781987 10.1097/md.0000000000017992 10.1167/tvst.8.6.15 10.1038/ng.2554 10.1371/journal.pgen.1002753 10.1016/j.exer.2019.107778 10.1152/physiolgenomics.00119.2017 10.1007/s00439-019-01970-5 10.1186/s13073-019-0689-8 
10.1097/icu.0000000000000791 10.1007/s11882-018-0808-4 10.1056/NEJMp2003539 10.1097/icl.0000000000000051 10.1038/s41433-020-1085-8 10.1136/bjophthalmol-2019-314729 10.1001/jamanetworkopen.2018.5474 10.1117/1.Jmi.7.1.012703 10.1088/1361-6560/aada6d 10.3390/s19102361 10.1109/tmi.2018.2827462 10.1016/j.media.2019.101552 10.3348/kjr.2017.18.4.570 10.1007/s10384-019-00659-6 10.1001/jama.2018.11029 10.1097/icu.0000000000000694 10.1177/1120672120934405 10.1001/amajethics.2019.160
Deep-Precognitive Diagnosis: Preventing Future Pandemics by Novel Disease Detection With Biologically-Inspired Conv-Fuzzy Network.
Deep learning-based Computer-Aided Diagnosis has gained immense attention in recent years due to its capability to enhance diagnostic performance and elucidate complex clinical tasks. However, conventional supervised deep learning models are incapable of recognizing novel diseases that do not exist in the training dataset. Automated early-stage detection of novel infectious diseases can be vital in controlling their rapid spread. Moreover, the development of a conventional CAD model is only possible after disease outbreaks and datasets become available for training (viz. COVID-19 outbreak). Since novel diseases are unknown and cannot be included in training data, it is challenging to recognize them through existing supervised deep learning models. Even after data becomes available, recognizing new classes with conventional models requires a complete extensive re-training. The present study is the
IEEE access : practical innovations, open solutions
"2022-04-02T00:00:00"
[ "AviralChharia", "RahulUpadhyay", "VinayKumar", "ChaoCheng", "JingZhang", "TianyangWang", "MinXu" ]
10.1109/access.2022.3153059 10.1007/s00500-020-05275-y 10.17632/rscbjbr9sj.2 10.1101/2020.05.10.20097063 10.1109/TEM.2021.3059664
Comparison and ensemble of 2D and 3D approaches for COVID-19 detection in CT images.
Detecting COVID-19 in computed tomography (CT) or radiography images has been proposed as a supplement to the RT-PCR test. We compare slice-based (2D) and volume-based (3D) approaches to this problem and propose a deep learning ensemble, called IST-CovNet, combining the best 2D and 3D systems with novel preprocessing and attention modules and the use of a bidirectional Long Short-Term Memory model for combining slice-level decisions. The proposed ensemble obtains 90.80% accuracy and 0.95 AUC score overall on the newly collected IST-C dataset in detecting COVID-19 among normal controls and other types of lung pathologies; and 93.69% accuracy and 0.99 AUC score on the publicly available MosMedData dataset that consists of COVID-19 scans and normal controls only. The system also obtains state-of-the-art results (90.16% accuracy and 0.94 AUC) on the COVID-CT-MD dataset, which is only used for testing. The system is deployed at Istanbul University Cerrahpaşa School of Medicine, where it is used to automatically screen CT scans of patients while waiting for RT-PCR tests or radiologist evaluation.
Neurocomputing
"2022-03-30T00:00:00"
[ "Sara AtitoAli Ahmed", "Mehmet CanYavuz", "Mehmet UmutŞen", "FatihGülşen", "OnurTutar", "BoraKorkmazer", "CesurSamancı", "SabriŞirolu", "RaufHamid", "Ali ErgunEryürekli", "ToghrulMammadov", "BerrinYanikoglu" ]
10.1016/j.neucom.2022.02.018
Trends and hot topics in radiology, nuclear medicine and medical imaging from 2011-2021: a bibliometric analysis of highly cited papers.
To spotlight the trends and hot topics emerging from the highly cited papers in the subject category of Radiology, Nuclear Medicine & Medical Imaging with bibliometric analysis. Based on the Essential Science Indicators, this study employed a bibliometric method to examine the highly cited papers in the subject category of Radiology, Nuclear Medicine & Medical Imaging in Web of Science (WoS) Categories, both quantitatively and qualitatively. In total, 1325 highly cited papers were retrieved and assessed, spanning the years 2011 to 2021. In particular, the bibliometric information of the highly cited papers based on the WoS database, such as the main publication venues, the most productive countries, and the top cited publications, was presented. An abstract corpus was built to help identify the most frequently explored topics. VOSviewer was used to visualize the co-occurrence networks of author keywords. The top three active journals are Neuroimage, Radiology and IEEE T Med Imaging. The United States, Germany and England have the most influential publications. The top cited publications unrelated to COVID-19 can be grouped in three categories: recommendations or guidelines, processing software, and analysis methods. The top cited publications on COVID-19 are dominantly from China. The most frequently explored topics based on the abstract corpus and the author keywords with the greatest link strengths overlap to a great extent. Specifically, phrases such as magnetic resonance imaging, deep learning, prostate cancer, chest CT, computed tomography, CT images, coronavirus disease, and convolutional neural network(s) are among the most frequently mentioned. The bibliometric analysis of the highly cited papers provided the most updated trends and hot topics, which may provide insights and research directions for medical researchers and healthcare practitioners in the future.
Japanese journal of radiology
"2022-03-29T00:00:00"
[ "ShengYan", "HuitingZhang", "JunWang" ]
10.1007/s11604-022-01268-z 10.1023/B:SCIE.0000018529.58334.eb 10.1002/asi.21454 10.1007/s11192-007-1859-9 10.1007/s11192-011-0416-8 10.1007/s11192-007-1913-7 10.1007/s11192-015-1699-y 10.1093/reseval/rvu002 10.1016/j.joi.2018.09.006 10.1038/514561a 10.3152/147154403781776645 10.1209/0295-5075/105/28002 10.1209/0295-5075/86/68001 10.1177/0165551519877049 10.1007/s11192-007-2068-x 10.1016/j.joi.2015.05.007 10.1177/0266666912458515 10.1093/applin/amy003 10.1016/j.omega.2018.11.005 10.1177/0165551518761013 10.1186/s40537-017-0088-1 10.1186/s12911-018-0594-x 10.3390/su12156058 10.1371/journal.pbio.1002541 10.1016/j.ijpe.2015.01.003 10.1016/j.joi.2015.08.001 10.1093/ehjci/jev014 10.1186/s13000-021-01085-4 10.1001/jama.2016.17216 10.1001/jama.2017.14580 10.1148/radiol.2020200905 10.1371/journal.pone.0252573
Transfer learning with fine-tuned deep CNN ResNet50 model for classifying COVID-19 from chest X-ray images.
COVID-19 cases are putting pressure on healthcare systems all around the world. Due to the lack of available testing kits, it is impractical for screening every patient with a respiratory ailment using traditional methods (RT-PCR). In addition, the tests have a high turn-around time and low sensitivity. Detecting suspected COVID-19 infections from the chest X-ray might help isolate high-risk people before the RT-PCR test. Most healthcare systems already have X-ray equipment, and because most current X-ray systems have already been computerized, there is no need to transfer the samples. The use of a chest X-ray to prioritize the selection of patients for subsequent RT-PCR testing is the motivation of this work. Transfer learning (TL) with fine-tuning on deep convolutional neural network-based ResNet50 model has been proposed in this work to classify COVID-19 patients from the COVID-19 Radiography Database. Ten distinct pre-trained weights, trained on varieties of large-scale datasets using various approaches such as supervised learning, self-supervised learning, and others, have been utilized in this work. Our proposed
Informatics in medicine unlocked
"2022-03-29T00:00:00"
[ "Md BelalHossain", "S M Hasan SazzadIqbal", "Md MonirulIslam", "Md NasimAkhtar", "Iqbal HSarker" ]
10.1016/j.imu.2022.100916
A deep learning-based framework for detecting COVID-19 patients using chest X-rays.
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has caused outbreaks of the new coronavirus disease (COVID-19) around the world. Rapid and accurate detection of the COVID-19 coronavirus is an important step in limiting the spread of the COVID-19 epidemic. To solve this problem, radiography techniques (such as chest X-rays and computed tomography (CT)) can play an important role in the early prediction of COVID-19 patients, which will help to treat patients in a timely manner. We aimed to quickly develop a highly efficient lightweight CNN architecture for detecting COVID-19-infected patients. The purpose of this paper is to propose a robust deep learning-based system for reliably detecting COVID-19 from chest X-ray images. First, we evaluate the performance of various pre-trained deep learning models (InceptionV3, Xception, MobileNetV2, NasNet and DenseNet201) recently proposed for medical image classification. Second, a lightweight shallow convolutional neural network (CNN) architecture is proposed for classifying X-ray images of a patient with a low false-negative rate. The dataset used in this work contains 2,541 chest X-rays from two different public databases, comprising confirmed COVID-19-positive and healthy cases. The performance of the proposed model is compared with that of pre-trained deep learning models. The results show that the proposed shallow CNN provides a maximum accuracy of 99.68% and, more importantly, sensitivity, specificity and AUC of 99.66%, 99.70% and 99.98%, respectively. The proposed model has fewer parameters and lower complexity compared to other deep learning models. The experimental results of our proposed method show that it is superior to the existing state-of-the-art methods. We believe that this model can help healthcare professionals to treat COVID-19 patients through improved and faster patient screening.
Multimedia systems
"2022-03-29T00:00:00"
[ "SohaibAsif", "MingZhao", "FengxiaoTang", "YusenZhu" ]
10.1007/s00530-022-00917-7 10.1001/jama.2020.2648 10.1016/j.bj.2020.05.016 10.1016/S2213-2600(20)30167-3 10.1016/S0140-6736(20)30211-7 10.2807/1560-7917.ES.2020.25.3.2000045 10.1148/radiol.2020200343 10.1109/RBME.2020.2990959 10.1148/radiol.2020200527 10.1109/ACCESS.2017.2762703 10.1007/s11548-017-1696-0 10.3390/app10020559 10.1148/radiol.2017162326 10.1038/s41598-020-76550-z 10.1007/s13246-020-00865-4 10.1016/j.chaos.2020.109944 10.1016/j.patrec.2020.09.010 10.1109/TMI.2020.2993291 10.1016/j.compbiomed.2020.103792 10.1109/ACCESS.2020.2994762 10.1016/j.cmpb.2020.105581 10.1016/j.mehy.2020.109761 10.1080/07391102.2020.1767212 10.1007/s42979-021-00695-5 10.1007/s10044-021-00984-y 10.1016/j.irbm.2020.07.001 10.1007/s00330-021-07715-1 10.3390/v12070769 10.32604/cmc.2022.020140 10.1016/j.patrec.2021.06.021 10.1109/TKDE.2009.191 10.1109/ACCESS.2020.3010287 10.1016/j.fss.2007.12.023 10.1109/ACCESS.2020.3016780 10.1016/j.imu.2020.100360 10.1016/j.compbiomed.2020.103805 10.1016/j.chaos.2020.110122 10.1007/s10489-020-01943-6 10.1371/journal.pone.0242535 10.1016/j.bspc.2021.102490 10.1016/j.chaos.2021.110713
Comparison of CO-RADS Scores Based on Visual and Artificial Intelligence Assessments in a Non-Endemic Area.
In this study, we first developed an artificial intelligence (AI)-based algorithm for classifying chest computed tomography (CT) images using the coronavirus disease 2019 Reporting and Data System (CO-RADS). Subsequently, we evaluated its accuracy by comparing the calculated scores with those assigned by radiologists with varying levels of experience. This study included patients with suspected SARS-CoV-2 infection who underwent chest CT imaging between February and October 2020 in Japan, a non-endemic area. For each chest CT, the CO-RADS scores, determined by consensus among three experienced chest radiologists, were used as the gold standard. Images from 412 patients were used to train the model, whereas images from 83 patients were tested to obtain AI-based CO-RADS scores for each image. Six independent raters (one medical student, two residents, and three board-certified radiologists) evaluated the test images. Intraclass correlation coefficients (ICC) and weighted kappa values were calculated to determine the inter-rater agreement with the gold standard. The mean ICC and weighted kappa were 0.754 and 0.752 for the medical student and residents (taken together), 0.851 and 0.850 for the diagnostic radiologists, and 0.913 and 0.912 for AI, respectively. The CO-RADS scores calculated using our AI-based algorithm were comparable to those assigned by radiologists, indicating the accuracy and high reproducibility of our model. Our study findings would enable accurate reading, particularly in areas where radiologists are unavailable, and contribute to improvements in patient management and workflow.
Diagnostics (Basel, Switzerland)
"2022-03-26T00:00:00"
[ "YoshinobuIshiwata", "KentaroMiura", "MayukoKishimoto", "KoichiroNomura", "ShungoSawamura", "ShigeruMagami", "MizukiIkawa", "TsuneoYamashiro", "DaisukeUtsunomiya" ]
10.3390/diagnostics12030738 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.2214/AJR.20.23418 10.1007/s00330-020-06801-0 10.1148/radiol.2020200490 10.1007/s00330-020-06928-0 10.1148/radiol.2020201473 10.1148/radiol.2020202708 10.1016/j.chest.2020.11.026 10.1007/s00330-020-07273-y 10.3390/diagnostics10090608 10.1007/s11604-020-01009-0 10.1007/s11604-020-00986-6 10.1007/s11604-020-01070-9 10.1148/radiol.2020200905 10.1038/s41591-020-0931-3 10.1109/ACCESS.2021.3120717 10.1097/MD.0000000000026161 10.2196/19569 10.1007/s00330-020-07044-9 10.1148/radiol.2020202439 10.1038/bmt.2012.244 10.1259/bjro.20200053 10.3390/ijerph18189804 10.1038/s41467-020-18786-x 10.1002/rmv.2146 10.1038/s41598-020-80061-2
Four Types of Multiclass Frameworks for Pneumonia Classification and Its Validation in X-ray Scans Using Seven Types of Deep Learning Artificial Intelligence Models.
Background and Motivation: The novel coronavirus causing COVID-19 is exceptionally contagious, highly mutative, decimating human health and life, as well as the global economy, by consistent evolution of new pernicious variants and outbreaks. The reverse transcriptase polymerase chain reaction currently used for diagnosis has major limitations. Furthermore, the multiclass lung classification X-ray systems having viral, bacterial, and tubercular classes—including COVID-19—are not reliable. Thus, there is a need for a robust, fast, cost-effective, and easily available diagnostic method. Method: Artificial intelligence (AI) has been shown to revolutionize all walks of life, particularly medical imaging. This study proposes a deep learning AI-based automatic multiclass detection and classification of pneumonia from chest X-ray images that are readily available and highly cost-effective. The study has designed and applied seven highly efficient pre-trained convolutional neural networks—namely, VGG16, VGG19, DenseNet201, Xception, InceptionV3, NasnetMobile, and ResNet152—for classification of up to five classes of pneumonia. Results: The database consisted of 18,603 scans with two, three, and five classes. The best results were using DenseNet201, VGG16, and VGG16, respectively having accuracies of 99.84%, 96.7%, 92.67%; sensitivity of 99.84%, 96.63%, 92.70%; specificity of 99.84, 96.63%, 92.41%; and AUC of 1.0, 0.97, 0.92 (p < 0.0001 for all), respectively. Our system outperformed existing methods by 1.2% for the five-class model. The online system takes <1 s while demonstrating reliability and stability. Conclusions: Deep learning AI is a powerful paradigm for multiclass pneumonia classification.
Diagnostics (Basel, Switzerland)
"2022-03-26T00:00:00"
[ "NoneNillmani", "Pankaj KJain", "NeerajSharma", "Mannudeep KKalra", "KlaudijaViskovic", "LucaSaba", "Jasjit SSuri" ]
10.3390/diagnostics12030652 10.1111/cns.13372 10.1056/NEJMoa2001017 10.23750/abm.v91i1.9397 10.1038/s41579-020-00468-6 10.7759/cureus.7423 10.1007/s10554-020-02089-9 10.4239/wjd.v12.i3.215 10.1016/j.clinimag.2021.05.016 10.1001/jama.2020.25381 10.1001/jama.2020.11787 10.1001/jamainternmed.2020.2306 10.1136/bmj.n230 10.1093/pubmed/fdaa165 10.1001/jama.2020.3786 10.4081/jphr.2021.2270 10.2807/1560-7917.ES.2020.25.50.2000568 10.1002/jmv.25786 10.3390/diagnostics10030165 10.1148/ryct.2020200034 10.1016/j.jinf.2020.03.007 10.1148/radiol.2020200230 10.1097/RTI.0000000000000404 10.1016/S0140-6736(20)30211-7 10.1016/S0140-6736(20)30183-5 10.2807/1560-7917.ES.2020.25.3.2000045 10.1093/clinchem/hvaa029 10.1002/jmv.25674 10.2214/AJR.20.22969 10.2214/AJR.20.23034 10.2741/4725 10.1016/j.ejrad.2019.02.038 10.1016/j.compbiomed.2021.104803 10.1016/j.irbm.2018.08.002 10.1016/j.bbe.2020.07.001 10.1016/j.bbe.2016.10.006 10.1109/JBHI.2017.2715078 10.1007/978-3-319-97982-3_16 10.1109/TIP.2017.2725580 10.1109/TIP.2018.2809606 10.1109/TCYB.2020.2983860 10.1016/j.compbiomed.2022.105273 10.1016/j.ecoinf.2020.101093 10.1109/TPAMI.2015.2491929 10.1109/ACCESS.2019.2927169 10.1016/j.neucom.2016.09.010 10.1016/j.isprsjprs.2017.07.014 10.1109/TPAMI.2019.2950923 10.1016/j.compbiomed.2020.103804 10.3390/cancers11010111 10.1080/21681163.2020.1818628 10.1016/j.compmedimag.2018.09.004 10.1016/j.patrec.2019.03.022 10.1109/ACCESS.2019.2913847 10.1007/s00296-021-05062-4 10.1007/s11883-018-0736-8 10.3390/diagnostics11122257 10.1016/j.compbiomed.2021.104721 10.1007/s11517-019-02099-3 10.1007/s10916-017-0745-0 10.1016/j.compbiomed.2016.11.011 10.1016/j.measurement.2019.05.076 10.1007/s11548-021-02317-0 10.1016/j.eswa.2017.11.028 10.1007/s00138-020-01069-2 10.1145/3331453.3361658 10.1109/LGRS.2018.2876378 10.3390/rs11111374 10.1109/TGRS.2018.2868851 10.1109/ACCESS.2020.3010287 10.1016/j.chaos.2020.110495 10.1007/s10489-020-01902-1 10.1101/2020.03.30.20047787 10.1007/s13246-020-00865-4 
10.1016/j.compbiomed.2020.103792 10.1016/j.cmpb.2020.105581 10.1038/s41598-020-76550-z 10.1016/j.patrec.2020.09.010 10.1038/s41598-021-99015-3 10.1016/j.bspc.2020.102365 10.1016/j.bspc.2021.103182 10.1016/j.bspc.2021.103126 10.1007/s13755-021-00166-4 10.1109/TMI.2020.2993291 10.1016/j.compbiomed.2021.104319 10.1016/j.cell.2018.02.010 10.1109/ACCESS.2020.3031384 10.1016/j.compbiomed.2020.103958 10.1007/s10554-020-02124-9 10.1007/s11517-021-02322-0 10.1186/s40537-019-0197-0 10.1016/j.knosys.2021.107517 10.1080/17476348.2021.1826315 10.1109/ACCESS.2021.3085240 10.3390/biology11010125 10.1016/j.bspc.2016.03.001 10.3390/diagnostics11112109 10.1117/1.JMI.8.S1.014001 10.1038/s42003-020-01535-7 10.1007/s10916-017-0797-1 10.1109/ACCESS.2020.3003810 10.1016/j.ibmed.2021.100034 10.1038/s41598-021-90411-3
Automatic Deep-Learning Segmentation of Epicardial Adipose Tissue from Low-Dose Chest CT and Prognosis Impact on COVID-19.
Background: To develop a deep-learning (DL) pipeline that allows automated segmentation of epicardial adipose tissue (EAT) from low-dose computed tomography (LDCT) and investigate the link between EAT and COVID-19 clinical outcomes. Methods: This monocentric retrospective study included 353 patients: 95 for training, 20 for testing, and 238 for prognosis evaluation. EAT segmentation was obtained after thresholding on a manually segmented pericardial volume. The model was evaluated with the Dice coefficient (DSC), inter- and intraobserver reproducibility, and clinical measures. Uni- and multivariate analyses were conducted to assess the prognostic value of the EAT volume, EAT extent, and lung lesion extent on clinical outcomes, including hospitalization, oxygen therapy, intensive care unit admission and death. Results: The mean DSC for EAT volumes was 0.85 ± 0.05. For EAT volume, the mean absolute error was 11.7 ± 8.1 cm3 with a non-significant bias of −4.0 ± 13.9 cm3 and a correlation of 0.963 with the manual measures (p < 0.01). The multivariate model providing the highest AUC for predicting adverse outcomes included both EAT extent and lung lesion extent (AUC = 0.805). Conclusions: A DL algorithm was developed and evaluated to obtain reproducible and precise EAT segmentation on LDCT. EAT extent in association with lung lesion extent was associated with adverse clinical outcomes with an AUC of 0.805.
Cells
"2022-03-26T00:00:00"
[ "AxelBartoli", "JorisFournel", "LéaAit-Yahia", "FarahCadour", "FaroukTradi", "BadihGhattas", "SébastienCortaredona", "MatthieuMillion", "AdèleLasbleiz", "AnneDutour", "BénédicteGaborit", "AlexisJacquier" ]
10.3390/cells11061034 10.5935/abc.20130138 10.1002/cphy.c160034 10.14797/mdcj-13-1-20 10.1016/j.jacc.2012.11.062 10.1016/j.amjcard.2008.04.002 10.1016/j.numecd.2009.10.010 10.1016/j.atherosclerosis.2012.02.029 10.1016/j.atherosclerosis.2011.09.041 10.1093/eurheartj/ehaa471 10.1172/JCI137647 10.3389/fendo.2021.726967 10.18087/cardio.2021.8.n1638 10.1080/00015385.2021.2010009 10.1016/j.metabol.2020.154436 10.1016/j.intimp.2020.107174 10.1016/j.acra.2020.09.012 10.1016/j.numecd.2021.04.020 10.1016/j.media.2017.07.005 10.2174/1874431101004010126 10.1016/j.atherosclerosis.2009.08.032 10.3390/jcm10235650 10.1016/j.mri.2012.05.001 10.1016/j.diii.2021.10.001 10.1007/s13139-012-0175-3 10.1109/TMI.2018.2804799 10.1016/j.cmpb.2020.105395 10.1007/s10554-021-02276-2 10.1161/JAHA.117.006379 10.1118/1.4927375 10.1016/j.compbiomed.2019.103424 10.1148/ryai.2019190045 10.1016/j.metabol.2020.154319 10.1186/s12933-021-01327-1 10.1016/j.recesp.2021.07.005 10.2337/db15-0399 10.1111/obr.13225 10.1161/CIRCULATIONAHA.120.052009
Review of Machine Learning in Lung Ultrasound in COVID-19 Pandemic.
Ultrasound imaging of the lung has played an important role in managing patients with COVID-19-associated pneumonia and acute respiratory distress syndrome (ARDS). During the COVID-19 pandemic, lung ultrasound (LUS) or point-of-care ultrasound (POCUS) has been a popular diagnostic tool due to its unique imaging capability and logistical advantages over chest X-ray and CT. Pneumonia/ARDS is associated with the sonographic appearances of pleural line irregularities and B-line artefacts, which are caused by interstitial thickening and inflammation and increase in number with severity. Artificial intelligence (AI), particularly machine learning, is increasingly used as a critical tool that assists clinicians in LUS image reading and COVID-19 decision making. We conducted a systematic review of academic databases (PubMed and Google Scholar) and preprints on arXiv or TechRxiv of the state-of-the-art machine learning technologies for LUS images in COVID-19 diagnosis. Openly accessible LUS datasets are listed. Various machine learning architectures have been employed to evaluate LUS and showed high performance. This paper summarizes the current development of AI for COVID-19 management and the outlook for emerging trends of combining AI-based LUS with robotics, telehealth, and other techniques.
Journal of imaging
"2022-03-25T00:00:00"
[ "JingWang", "XiaofengYang", "BoranZhou", "James JSohn", "JunZhou", "Jesse TJacob", "Kristin AHiggins", "Jeffrey DBradley", "TianLiu" ]
10.3390/jimaging8030065 10.1056/NEJMoa2001316 10.1016/S0140-6736(20)30183-5 10.1002/jum.15417 10.15585/mmwr.mm6924e2 10.1109/RBME.2020.2990959 10.1515/dx-2020-0058 10.1186/s13089-020-00171-w 10.1136/postgradmedj-2020-138137 10.6061/clinics/2020/e2027 10.2196/19673 10.1109/TUFFC.2020.3020055 10.3389/fmed.2020.00375 10.1016/j.ultrasmedbio.2020.05.012 10.1186/2110-5820-4-1 10.5644/ama2006-124.162 10.1136/bmj.m4944 10.1136/bmj.n158 10.1136/bmj.m4857 10.1016/j.ejro.2020.100231 10.1016/j.ultrasmedbio.2020.04.026 10.2214/AJR.20.23513 10.21203/rs.2.24369/v1 10.1016/j.ultrasmedbio.2020.09.014 10.4269/ajtmh.20-0280 10.14366/usg.20084 10.1186/s13089-020-00198-z 10.1002/jum.15284 10.1016/j.ultrasmedbio.2020.05.006 10.1097/CCM.0b013e31824e68ae 10.1007/s11547-008-0247-8 10.1002/uog.22028 10.1002/jum.15285 10.1183/20734735.004717 10.1007/s13089-011-0066-3 10.1378/chest.07-2800 10.7861/clinmed.2020-0123 10.1080/17476348.2019.1565997 10.1109/JBHI.2019.2936151 10.1111/anae.15082 10.1016/j.advms.2020.06.005 10.15557/JoU.2020.0025 10.1213/ANE.0000000000004929 10.1093/ehjci/jeaa163 10.1007/s00134-020-05996-6 10.1002/jum.15508 10.1002/uog.22034 10.1007/s00134-020-06048-9 10.1016/j.acra.2020.07.002 10.1007/s00134-020-06212-1 10.1136/bmj.m1328 10.1038/s41598-020-76550-z 10.3390/app11020672 10.1109/TMI.2020.2994459 10.1177/15533506211018671 10.1109/TUFFC.2020.3005512 10.1109/TMI.2009.2024415 10.1109/TGRS.2016.2616949 10.1109/TUFFC.2021.3107598 10.1371/journal.pone.0255886 10.1016/0165-0114(95)00133-6 10.1016/j.ejmp.2021.02.023 10.1121/10.0004855 10.1002/jum.15902 10.1016/j.imu.2021.100687 10.1121/10.0007272 10.36227/techrxiv.17912387.v2 10.1109/TUFFC.2021.3068190 10.3390/s21165486 10.1016/j.compbiomed.2021.104296 10.1016/j.inffus.2021.02.013 10.1007/s13755-021-00154-8 10.1109/TUFFC.2020.3002249 10.1136/bmjopen-2020-045120 10.1016/j.media.2021.101975 10.1016/j.inffus.2021.05.015 10.1186/s12938-021-00863-x 10.1109/ACCESS.2020.3016780 10.1016/j.compbiomed.2021.104742 10.7150/ijbs.58855 
10.1111/exsy.12759 10.21037/atm-20-3043 10.1016/j.acra.2020.04.032 10.1016/j.jcrc.2015.08.021 10.1002/jum.15765 10.3389/frobt.2021.610677 10.3389/frobt.2021.645756 10.1002/jum.15406 10.3389/frobt.2021.645424 10.1007/s11227-021-04166-9 10.1007/s00521-021-06396-7
Deep Learning-Based Classification of Reduced Lung Ultrasound Data From COVID-19 Patients.
The application of lung ultrasound (LUS) imaging for the diagnosis of lung diseases has recently captured significant interest within the research community. With the ongoing COVID-19 pandemic, many efforts have been made to evaluate LUS data, and a four-level scoring system has been introduced to semiquantitatively assess the state of the lung and stratify patients. Various deep learning (DL) algorithms supported by clinical validations have been proposed to automate the stratification process. However, no work has evaluated the impact on the automated decision of varying pixel resolution and bit depth, which reduce the overall data size. This article evaluates the performance of a DL algorithm over LUS data with varying spatial and gray-level resolution. The algorithm is evaluated over a dataset of 448 LUS videos captured from 34 examinations of 20 patients. All videos are downsampled by a factor of 2, 3, and 4 of the original resolution and quantized to 128, 64, and 32 gray levels, followed by score prediction. The results indicate that the automated scoring shows negligible variation in accuracy when only the intensity levels are quantized. The combined effect of intensity quantization and spatial downsampling resulted in a prognostic agreement ranging from 73.5% to 82.3%. These results also suggest that this level of prognostic agreement can be achieved on data reduced to 1/32 of its original size, laying the foundation for efficient processing of data in resource-constrained environments.
IEEE transactions on ultrasonics, ferroelectrics, and frequency control
"2022-03-24T00:00:00"
[ "UmairKhan", "FedericoMento", "LucreziaNicolussi Giacomaz", "RiccardoTrevisan", "AndreaSmargiassi", "RiccardoInchingolo", "TizianoPerrone", "LibertarioDemi" ]
10.1109/TUFFC.2022.3161716
Deep Learning and Medical Image Analysis for COVID-19 Diagnosis and Prediction.
The coronavirus disease 2019 (COVID-19) pandemic has imposed dramatic challenges to health-care organizations worldwide. To combat the global crisis, the use of thoracic imaging has played a major role in the diagnosis, prediction, and management of COVID-19 patients with moderate to severe symptoms or with evidence of worsening respiratory status. In response, the medical image analysis community acted quickly to develop and disseminate deep learning models and tools to meet the urgent need of managing and interpreting large amounts of COVID-19 imaging data. This review aims to not only summarize existing deep learning and medical image analysis methods but also offer in-depth discussions and recommendations for future investigations. We believe that the wide availability of high-quality, curated, and benchmarked COVID-19 imaging data sets offers the great promise of a transformative test bed to develop, validate, and disseminate novel deep learning methods in the frontiers of data science and artificial intelligence.
Annual review of biomedical engineering
"2022-03-23T00:00:00"
[ "TianmingLiu", "EliotSiegel", "DinggangShen" ]
10.1146/annurev-bioeng-110220-012203
EDNC: Ensemble Deep Neural Network for COVID-19 Recognition.
The automatic recognition of COVID-19 diseases is critical in the present pandemic since it relieves healthcare staff of the burden of screening for infection with COVID-19. Previous studies have proven that deep learning algorithms can be utilized to aid in the diagnosis of patients with potential COVID-19 infection. However, the accuracy of current COVID-19 recognition models is relatively low. Motivated by this fact, we propose three deep learning architectures, F-EDNC, FC-EDNC, and O-EDNC, to quickly and accurately detect COVID-19 infections from chest computed tomography (CT) images. Sixteen deep learning neural networks have been modified and trained to recognize COVID-19 patients using transfer learning and 2458 CT chest images. The proposed EDNC was then developed from three of the sixteen modified pre-trained models to improve the performance of COVID-19 recognition. The results suggest that the F-EDNC method significantly enhanced the recognition of COVID-19 infections with 97.75% accuracy, followed by FC-EDNC and O-EDNC (97.55% and 96.12%, respectively), which is superior to most current COVID-19 recognition models. Furthermore, a localhost web application has been built that enables users to easily upload their chest CT scans and obtain their COVID-19 results automatically. This accurate, fast, and automatic COVID-19 recognition system will relieve the burden on medical professionals of screening for COVID-19 infections.
Tomography (Ann Arbor, Mich.)
"2022-03-23T00:00:00"
[ "LinYang", "Shui-HuaWang", "Yu-DongZhang" ]
10.3390/tomography8020071 10.1503/cmaj.211248 10.3390/ijms21083004 10.2214/AJR.20.23418 10.2147/JMDH.S293601 10.1007/s00330-020-06915-5 10.1038/s41746-020-00376-2 10.1016/S2589-7500(19)30123-2 10.7717/peerj.7702 10.4236/jbise.2020.137014 10.1007/s00521-020-05437-x 10.1016/j.patrec.2020.10.001 10.2196/19569 10.1016/j.bspc.2021.102588 10.1155/2021/6633755 10.1016/j.inffus.2020.11.005 10.1016/j.inffus.2020.10.004 10.1007/s12559-020-09776-8 10.1186/s40537-019-0197-0 10.1109/TKDE.2009.191 10.1109/JPROC.2020.3004555 10.1186/s40537-016-0043-6 10.1016/S0933-3657(01)00077-X 10.7150/thno.38065 10.1186/s40537-021-00444-8 10.1109/TNNLS.2021.3084827 10.1117/1.OE.58.4.040901 10.1109/ACCESS.2020.3003810 10.1016/j.chaos.2020.110190 10.1155/2020/8843664 10.1016/j.compbiomed.2021.104575 10.3390/s21020455 10.1186/s12938-020-00807-x 10.1080/07391102.2020.1788642 10.1007/s00259-020-04929-1 10.20944/preprints202005.0151.v3 10.1183/13993003.00775-2020 10.1007/s00330-021-07715-1 10.1148/radiol.2020200905
CHS-Net: A Deep Learning Approach for Hierarchical Segmentation of COVID-19 via CT Images.
The pandemic of the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), also known as COVID-19, has been spreading worldwide, causing widespread loss of life. Medical imaging such as computed tomography (CT), X-ray, etc., plays a significant role in diagnosing patients by presenting a visual representation of the functioning of the organs. However, for any radiologist, analyzing such scans is a tedious and time-consuming task. Emerging deep learning technologies have demonstrated their strength in analyzing such scans to aid in the faster diagnosis of diseases and viruses such as COVID-19. In the present article, an automated deep learning based model, the COVID-19 hierarchical segmentation network (CHS-Net), is proposed that functions as a semantic hierarchical segmenter to identify COVID-19 infected regions from the lung contour in CT medical imaging, using two cascaded residual attention inception U-Net (RAIU-Net) models. RAIU-Net comprises a residual inception U-Net model with a spectral spatial and depth attention network (SSD), developed with contraction and expansion phases of depthwise separable convolutions and hybrid pooling (max and spectral pooling) to efficiently encode and decode semantic and varying-resolution information. CHS-Net is trained with a segmentation loss function defined as the average of binary cross entropy loss and dice loss, to penalize false negative and false positive predictions. The approach is compared with recently proposed approaches and evaluated using standard metrics such as accuracy, precision, specificity, recall, dice coefficient and Jaccard similarity, along with visualized interpretation of the model predictions using GradCam++ and uncertainty maps. In extensive trials, the proposed approach outperformed the recently proposed approaches and effectively segments the COVID-19 infected regions in the lungs.
Neural processing letters
"2022-03-22T00:00:00"
[ "Narinder SinghPunn", "SonaliAgarwal" ]
10.1007/s11063-022-10785-x 10.1007/s11517-019-01965-4 10.1148/radiol.2019181960 10.1016/j.cell.2018.02.010 10.1007/s00330-020-06817-6 10.1007/s10096-020-03901-z 10.1109/TMI.2020.2996645 10.2214/AJR.20.22954 10.1016/j.ejrad.2020.109009 10.1016/j.jinf.2020.04.004 10.1109/RBME.2020.2987975 10.1109/ACCESS.2020.3005510 10.1016/j.patcog.2020.107747 10.1016/j.patcog.2021.108168 10.1016/j.asoc.2021.107947 10.1016/j.inffus.2021.05.008 10.1007/s00521-019-04296-5 10.1109/TPAMI.2016.2644615
COVID-19 Identification System Using Transfer Learning Technique With Mobile-NetV2 and Chest X-Ray Images.
Diagnosis is a crucial precautionary step in research studies of the coronavirus disease, which shows indications similar to those of various pneumonia types. The COVID-19 pandemic has caused a significant outbreak in more than 150 nations and has significantly affected the wellness and lives of many individuals globally. In particular, identifying patients infected with COVID-19 early and providing them with treatment is an important way of fighting the pandemic. Radiography and radiology could be the fastest techniques for recognizing infected individuals, and artificial intelligence strategies have the potential to overcome this difficulty. In particular, MobileNetV2 is a convolutional neural network architecture that can perform well on mobile devices. In this study, we used MobileNetV2 with transfer learning and data augmentation techniques as a classifier to recognize the coronavirus disease. Two datasets were used: the first consisted of 309 chest X-ray images (102 with COVID-19 and 207 normal), and the second consisted of 516 chest X-ray images (102 with COVID-19 and 414 normal). We assessed the model based on its sensitivity rate, specificity rate, confusion matrix, and F1-measure. Additionally, we present a receiver operating characteristic curve. The numerical simulation reveals that the model accuracy is 95.8% and 100% at dropouts of 0.3 and 0.4, respectively. The model was implemented using Keras and Python.
Frontiers in public health
"2022-03-22T00:00:00"
[ "MahmoudRagab", "SamahAlshehri", "Gamil AbdelAzim", "Hibah MAldawsari", "AdeebNoor", "JaberAlyami", "SAbdel-Khalek" ]
10.3389/fpubh.2022.819156 10.1016/j.ijantimicag.2020.105924 10.1016/S0140-6736(20)30185-9 10.1148/radiol.2020201160 10.1016/j.ejmp.2020.01.004 10.1148/radiol.2020200642 10.1148/radiol.2020200527 10.1038/s41598-020-76550-z 10.1186/s13244-019-0832-5 10.1109/ACCESS.2020.3007336 10.1109/MSP.2017.2749125 10.1109/ACCESS.2020.2982906 10.1109/ACCESS.2019.2962284 10.1007/s13244-018-0639-9 10.1109/5.726791 10.1109/TPAMI.2016.2644615 10.1007/978-3-319-10593-2_13 10.1016/j.media.2017.07.005 10.1109/CVPRW.2014.131 10.1109/ISCAS.2018.8351550 10.1109/CVPR.2016.90 10.1109/CVPR.2015.7298594 10.1016/j.scs.2020.102589 10.1109/GCWkshps50303.2020.9367469 10.1016/j.cell.2018.02.010 10.3390/app8101715 10.1007/s00330-021-07715-1 10.1007/s13246-020-00865-4 10.1007/s40747-020-00199-4 10.1016/j.cmpb.2020.105581 10.1186/s13007-020-00624-2 10.1145/3331453.3361658 10.1109/OJEMB.2021.3066097 10.1186/s13640-021-00554-6 10.3390/biology11010043 10.1109/CVPR.2017.195 10.1088/1742-6596/2071/1/012003 10.7717/peerj-cs.390
MultiR-Net: A Novel Joint Learning Network for COVID-19 segmentation and classification.
The outbreak of COVID-19 has caused a severe shortage of healthcare resources. Ground-glass opacity (GGO) and consolidation in chest CT scans have been an essential basis for imaging diagnosis since 2020. The similarity of imaging features between COVID-19 and other pneumonias makes it challenging to distinguish between them and affects radiologists' diagnoses. Recently, deep learning work on COVID-19 has been mainly divided into disease classification and lesion segmentation, yet little work has focused on the feature correlation between the two tasks. To address these issues, in this study we propose MultiR-Net, a 3D deep learning model for combined COVID-19 classification and lesion segmentation, to achieve real-time and interpretable COVID-19 chest CT diagnosis. Precisely, the proposed network consists of two subnets: a multi-scale feature fusion UNet-like subnet for lesion segmentation and a classification subnet for disease diagnosis. The features of the two subnets are fused by a reverse attention mechanism and an iterable training strategy, and we propose a loss function to enhance the interaction between the two subnets. Since individual metrics cannot wholly reflect network effectiveness, we quantify the segmentation results with various evaluation metrics, such as average surface distance and volume Dice, on the test set. We employ a dataset containing 275 3D CT scans for classifying COVID-19, community-acquired pneumonia (CAP), and healthy people, with segmented lesions in pneumonia patients. We split the dataset 70%/30% for training and testing. Extensive experiments showed that our multi-task model framework obtained an average recall of 93.323% and an average precision of 94.005% on the classification test set, and a 69.95% volume Dice score on the segmentation test set of our dataset.
Computers in biology and medicine
"2022-03-20T00:00:00"
[ "Cheng-FanLi", "Yi-DuoXu", "Xue-HaiDing", "Jun-JuanZhao", "Rui-QiDu", "Li-ZhongWu", "Wen-PingSun" ]
10.1016/j.compbiomed.2022.105340
COVID-19 image classification using deep learning: Advances, challenges and opportunities.
Corona Virus Disease-2019 (COVID-19), caused by Severe Acute Respiratory Syndrome-Corona Virus-2 (SARS-CoV-2), is a highly contagious disease that has affected the lives of millions around the world. Chest X-Ray (CXR) and Computed Tomography (CT) imaging modalities are widely used to obtain a fast and accurate diagnosis of COVID-19. However, manual identification of the infection through radio images is extremely challenging because it is time-consuming and highly prone to human errors. Artificial Intelligence (AI)-techniques have shown potential and are being exploited further in the development of automated and accurate solutions for COVID-19 detection. Among AI methodologies, Deep Learning (DL) algorithms, particularly Convolutional Neural Networks (CNN), have gained significant popularity for the classification of COVID-19. This paper summarizes and reviews a number of significant research publications on the DL-based classification of COVID-19 through CXR and CT images. We also present an outline of the current state-of-the-art advances and a critical discussion of open challenges. We conclude our study by enumerating some future directions of research in COVID-19 imaging classification.
Computers in biology and medicine
"2022-03-20T00:00:00"
[ "PriyaAggarwal", "Narendra KumarMishra", "BinishFatimah", "PushpendraSingh", "AnubhaGupta", "Shiv DuttJoshi" ]
10.1016/j.compbiomed.2022.105350 10.1007/s00500-021-06137-x 10.5281/zenodo.3757476
INASNET: Automatic identification of coronavirus disease (COVID-19) based on chest X-ray using deep neural network.
Testing is one of the important methodologies used by various countries to fight COVID-19 infection. The infection is considered one of the deadliest, although the mortality rate is not very high. COVID-19 infection is caused by SARS-CoV-2, the severe acute respiratory syndrome coronavirus 2. To prevent community transfer among the masses, testing plays an important role: efficient and quicker testing techniques help identify infected persons, making it easier to isolate patients. Deep learning methods have proved their presence and effectiveness in medical image analysis and in the identification of diseases such as pneumonia. The authors propose a deep learning mechanism and system to identify COVID-19-infected patients by analyzing X-ray images. Symptoms of COVID-19 infection are very similar to those of influenza and pneumonia. The proposed model, Inception Nasnet (INASNET), is able to separate out and classify X-ray images into the corresponding normal, COVID-19-infected, or pneumonia-infected classes. This testing method will be a boon for doctors and for the state, as it is a far cheaper method compared with the testing kits used by healthcare workers for diagnosis of the disease. Continuous analysis by the convolutional neural network and regular evaluation will result in better accuracy and help eliminate false-negative results. INASNET is based on the combined platform of InceptionNet and neural network architecture search, which results in higher and faster predictions. Regular testing, faster results, and economically viable testing using X-ray images will help frontline workers in the fight against COVID-19.
ISA transactions
"2022-03-19T00:00:00"
[ "MurukessanPerumal", "AkshayNayak", "R PraneethaSree", "MSrinivas" ]
10.1016/j.isatra.2022.02.033
Metaheuristics based COVID-19 detection using medical images: A review.
Many countries in the world have been facing the rapid spread of COVID-19 since February 2020. There is a dire need for efficient and cheap automated diagnosis systems that can reduce the pressure on healthcare systems. Extensive research is being done on the use of image classification for the detection of COVID-19 through X-ray and CT-scan images of patients. Deep learning has been the most popular technique for image classification during the last decade. However, the performance of deep learning-based methods heavily depends on the architecture of the deep neural network. Over the last few years, metaheuristics have gained popularity for optimizing the architecture of deep neural networks. Metaheuristics have been widely used to solve different complex non-linear optimization problems due to their flexibility, simplicity, and problem independence. This paper aims to study the different image classification techniques for chest images, including the applications of metaheuristics for optimization and feature selection of deep learning and machine learning models. The motivation of this study is to focus on applications of different types of metaheuristics for COVID-19 detection and to shed some light on future challenges in COVID-19 detection from medical images. The aim is to inspire researchers to focus their research on overlooked aspects of COVID-19 detection.
Computers in biology and medicine
"2022-03-17T00:00:00"
[ "MamoonaRiaz", "MaryamBashir", "IrfanYounas" ]
10.1016/j.compbiomed.2022.105344 10.1155/2021/8829829 10.1155/2015/232193
Efficacy of Transfer Learning-based ResNet models in Chest X-ray image classification for detecting COVID-19 Pneumonia.
Because of COVID-19's effect on pulmonary tissues, chest X-ray (CXR) and computed tomography (CT) images have become the preferred imaging modalities for detecting COVID-19 infections at the early diagnosis stages, particularly when the symptoms are not specific. A significant fraction of individuals with COVID-19 have negative polymerase chain reaction (PCR) test results; therefore, imaging studies coupled with epidemiological, clinical, and laboratory data assist in the decision making. With the newer variants of COVID-19 emerging, the burden on diagnostic laboratories has increased manifold. Therefore, it is important to employ beyond-laboratory measures to solve complex CXR image classification problems. One such tool is the Convolutional Neural Network (CNN), one of the most dominant Deep Learning (DL) architectures. DL entails training a CNN for a task such as classification using extensive datasets. However, labelled data for COVID-19 is scarce, proving to be a prime impediment to applying DL-assisted analysis. The available datasets are either scarce or too diversified to learn effective feature representations; therefore, a Transfer Learning (TL) approach is utilized. The TL-based ResNet architecture has a powerful representational ability, making it popular in computer vision. The aim of this study is two-fold: firstly, to assess the performance of ResNet models for classifying pneumonia cases from CXR images, and secondly, to build a customized ResNet model and evaluate its contribution to performance improvement. The global accuracies achieved by the five models, i.e., ResNet18_v1, ResNet34_v1, ResNet50_v1, ResNet101_v1, and ResNet152_v1, are 91.35%, 90.87%, 92.63%, 92.95%, and 92.95%, respectively. ResNet50_v1 displayed the highest sensitivity of 97.18%, ResNet101_v1 showed a specificity of 94.02%, and ResNet18_v1 had the highest precision of 93.53%.
The findings are encouraging, demonstrating the effectiveness of ResNet in the automatic detection of pneumonia for COVID-19 diagnosis. The customized ResNet model presented in this study achieved 95% global accuracy, 95.65% precision, 92.74% specificity, and 95.9% sensitivity, thereby allowing a reliable analysis of CXR images to facilitate the clinical decision-making process. All simulations were carried out in PyTorch utilizing a Quadro 4000 GPU with an Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60 GHz processor and 63.9 GB usable RAM.
Chemometrics and intelligent laboratory systems : an international journal sponsored by the Chemometrics Society
"2022-03-17T00:00:00"
[ "SadiaShowkat", "ShaimaQureshi" ]
10.1016/j.chemolab.2022.104534 10.3201/eid2701.201543 10.18051/UnivMed.2021.v40.77-78 10.22270/jddt.v11i2-S.4644 10.1016/S1473-3099(21)00146-8 10.1016/j.rxeng.2020.11.003 10.1097/RTI.0000000000000347 10.1177/0846537120924606 10.1016/j.ijtb.2020.09.023 10.1109/ICEngTechnol.2017.8308186 10.1002/9781119711278.ch11 10.1109/ACCESS.2021.3058537 10.1101/2020.05.04.20090803 10.1186/s40537-016-0043-6 10.1109/JPROC.2020.3004555 10.1109/TMI.2016.2535302 10.1021/acs.jmedchem.9b02147 10.1016/j.compag.2020.105393 10.1016/j.compag.2019.01.041 10.1109/TII.2018.2864759 10.18653/v1/N19-5004 10.1117/12.2243849 10.1109/WACV45572.2020.9093290 10.1109/CVPR.2015.7298594 10.1109/CVPR.2017.195 10.1016/j.irbm.2020.05.003 10.1007/s00330-021-07715-1 10.1007/s13246-020-00865-4 10.1101/2020.05.01.20088211 10.1109/ACCESS.2020.3033762 10.1109/72.279181 10.1142/S0218488598000094 10.1007/978-3-319-46493-0_38 10.1109/ICTAI.2018.00017 10.1007/978-3-319-54184-6_12 10.1007/978-3-319-61316-1_6 10.1016/j.media.2021.101985 10.1109/ACCESS.2021.3054484
An automated diagnosis and classification of COVID-19 from chest CT images using a transfer learning-based convolutional neural network.
Researchers have developed more intelligent, highly responsive, and efficient detection methods owing to the COVID-19 demand for more widespread diagnosis. This work deals with developing an AI-based framework that can help radiologists and other healthcare professionals diagnose COVID-19 cases with a high level of accuracy. However, in the absence of publicly available CT datasets, the development of such AI tools can prove challenging. Therefore, an algorithm was proposed for performing automatic and accurate COVID-19 classification using a Convolutional Neural Network (CNN), pre-trained models, and the Sparrow search algorithm (SSA) on CT lung images. The pre-trained CNN models used are SeresNext50, SeresNext101, SeNet154, MobileNet, MobileNetV2, MobileNetV3Small, and MobileNetV3Large. In addition, the SSA is used to optimize the different CNN and transfer learning (TL) hyperparameters to find the best configuration for the pre-trained model used and enhance its performance. Two datasets are used in the experiments: there are two classes in the first dataset and three in the second. The authors combined two publicly available COVID-19 datasets as the first dataset, namely the COVID-19 Lung CT Scans and the COVID-19 CT Scan Dataset; in total, 14,486 images were included in this study. For the second dataset, the authors analyzed the Large COVID-19 CT scan slice dataset, which comprises 17,104 images. On the two-class dataset, the MobileNetV3Large pre-trained model performed best; on the three-class dataset, the model trained on SeNet154 was the best. Results show that, when compared with other CNN models such as LeNet-5 CNN, COVID faster R-CNN, Light CNN, Fuzzy + CNN, Dynamic CNN, CNN and Optimized CNN, the proposed framework achieves the best accuracy of 99.74% (two classes) and 98% (three classes).
Computers in biology and medicine
"2022-03-16T00:00:00"
[ "Nadiah ABaghdadi", "AmerMalki", "Sally FAbdelaliem", "HossamMagdy Balaha", "MahmoudBadawy", "MostafaElhosseini" ]
10.1016/j.compbiomed.2022.105383
A hybrid machine learning/deep learning COVID-19 severity predictive model from CT images and clinical data.
COVID-19 clinical presentation and prognosis are highly variable, ranging from asymptomatic and paucisymptomatic cases to acute respiratory distress syndrome and multi-organ involvement. We developed a hybrid machine learning/deep learning model to classify patients into two outcome categories, non-ICU and ICU (intensive care admission or death), using 558 patients admitted to a northern Italian hospital between February and May 2020. A fully 3D patient-level CNN classifier on baseline CT images is used as a feature extractor. The extracted features, alongside laboratory and clinical data, are fed for selection into a Boruta algorithm with SHAP game-theoretical values. A classifier is built on the reduced feature space using the CatBoost gradient boosting algorithm, reaching a probabilistic AUC of 0.949 on the holdout test set. The model aims to provide clinical decision support to medical doctors, with a probability score of belonging to an outcome class and with case-based SHAP interpretation of feature importance.
Scientific reports
"2022-03-16T00:00:00"
[ "MatteoChieregato", "FabioFrangiamore", "MauroMorassi", "ClaudiaBaresi", "StefaniaNici", "ChiaraBassetti", "ClaudioBnà", "MarcoGalelli" ]
10.1038/s41598-022-07890-1
Supervised and weakly supervised deep learning models for COVID-19 CT diagnosis: A systematic review.
Artificial intelligence (AI) and computer vision (CV) methods have become reliable for extracting features from radiological images, aiding COVID-19 diagnosis ahead of the pathogenic tests and saving critical time for disease management and control. Thus, this review article surveys numerous deep learning-based COVID-19 computerized tomography (CT) imaging diagnosis studies, providing a baseline for future research. Compared to previous review articles on the topic, this study organizes the collected literature quite differently (i.e., in a multi-level arrangement). For this purpose, 71 relevant studies were found using a variety of trustworthy databases and search engines, including Google Scholar, IEEE Xplore, Web of Science, PubMed, Science Direct, and Scopus. We classify the selected literature into multi-level machine learning groups, such as supervised and weakly supervised learning. Our review reveals that weak supervision has been adopted extensively for COVID-19 CT diagnosis compared to supervised learning. Weakly supervised (conventional transfer learning) techniques can be utilized effectively for real-time clinical practice by reusing sophisticated features rather than over-parameterizing standard models. Few-shot and self-supervised learning are the recent trends for addressing data scarcity and model efficacy. Deep learning (artificial intelligence) based models are mainly utilized for disease management and control. This review should therefore help readers comprehend the relevant perspectives of deep learning approaches for ongoing COVID-19 CT diagnosis research.
Computer methods and programs in biomedicine
"2022-03-15T00:00:00"
[ "HaseebHassan", "ZhaoyuRen", "ChengminZhou", "Muazzam AKhan", "YiPan", "JianZhao", "BingdingHuang" ]
10.1016/j.cmpb.2022.106731
Multi-task semantic segmentation of CT images for COVID-19 infections using DeepLabV3+ based on dilated residual network.
COVID-19 is a deadly outbreak that has been declared a public health emergency of international concern. The massive damage of the disease to public health, social life, and the global economy increases the importance of alternative rapid diagnosis and follow-up methods. The RT-PCR assay, considered the gold standard in diagnosing the disease, is complicated, expensive, time-consuming, prone to contamination, and may give false-negative results. These drawbacks reinforce the trend toward medical imaging techniques such as computed tomography (CT). Typical visual signs such as ground-glass opacity (GGO) and consolidation in CT images allow for quantitative assessment of the disease. In this context, this study aims at the segmentation of infected lung CT images with the residual network-based DeepLabV3+, a redesigned convolutional neural network (CNN) model. In order to evaluate the robustness of the proposed model, three different segmentation tasks, Task-1, Task-2, and Task-3, were applied. Task-1 represents binary segmentation into lung (infected and non-infected tissues) and background. Task-2 represents multi-class segmentation into lung (non-infected tissue), COVID (GGO, consolidation, and pleural effusion irregularities gathered under a single roof), and background. Finally, the segmentation in which each lesion type is considered a separate class is defined as Task-3. The COVID-19 imaging data for each segmentation task consist of 100 single-slice CT scans from over 40 diagnosed patients. The performance of the model was evaluated using the Dice similarity coefficient (DSC), intersection over union (IoU), sensitivity, specificity, and accuracy with five-fold cross-validation. The average DSC for the three segmentation tasks was 0.98, 0.858, and 0.616, respectively. The experimental results demonstrate that the proposed method has robust performance and great potential in evaluating COVID-19 infection.
Physical and engineering sciences in medicine
"2022-03-15T00:00:00"
[ "HasanPolat" ]
10.1007/s13246-022-01110-w 10.1016/j.compbiomed.2020.103805 10.1016/j.iot.2021.100377 10.1016/j.measurement.2020.108288 10.1016/j.eng.2020.04.010 10.1016/j.bbe.2021.04.006 10.1016/j.ibmed.2020.100013 10.1016/j.jrid.2020.04.001 10.1016/j.compbiomed.2020.104037 10.1016/j.clinimag.2021.01.019 10.1186/s12880-020-00529-5 10.1016/j.patrec.2020.07.029 10.1007/978-3-030-01234-2_49 10.33889/IJMEMS.2020.5.4.052 10.1002/ima.22558 10.1016/j.bspc.2021.102987 10.1016/j.bbe.2021.05.013 10.1109/JAS.2020.1003393 10.1016/j.knosys.2021.106849 10.1007/978-3-319-24574-4_28 10.1109/TPAMI.2016.2644615 10.1109/TPAMI.2016.2572683 10.1016/j.procs.2021.01.025 10.1007/978-3-319-10578-9_23 10.1002/mp.14676 10.1109/ACCESS.2021.3067047 10.3390/s20113183
Automated assessment of hyoid movement during normal swallow using ultrasound.
The potential for using ultrasound by speech and language therapists (SLTs) as an adjunct clinical tool to assess swallowing function has received increased attention during the COVID-19 pandemic, with a recent review highlighting the need for further research on normative data, objective measurement, elicitation protocol and training. The dynamic movement of the hyoid, visible in ultrasound, is crucial in facilitating bolus transition and protection of the airway during a swallow and has shown promise as a biomarker of swallowing function. To examine the kinematics of the hyoid during a swallow using ultrasound imaging and to relate the patterns to the different stages of a normal swallow. To evaluate the accuracy and robustness of two different automatic hyoid tracking methods relative to manual hyoid position estimation. Ultrasound data recorded from 15 healthy participants swallowing a 10 ml water bolus delivered by cup or spoon were analysed. The movement of the hyoid was tracked using manually marked frame-to-frame positions, automated hyoid shadow tracking and deep neural net (DNN) tracking. Hyoid displacement along the horizontal image axis (HxD) was charted throughout a swallow, and the maximum horizontal displacement (HxD max) and maximum hyoid velocity (HxV max) along the same axis were automatically calculated. The HxD and HxV of 10 ml swallows are similar to values reported in the literature. The trajectory of the hyoid movement and its location at significant swallow event time points showed increased hyoid displacement towards the peak of the swallow. Using an intraclass correlation coefficient, HxD max and HxV max values derived from the DNN tracker and shadow tracker are shown to be in high agreement and moderate agreement, respectively, when compared with values derived from manual tracking.
The similarity of the hyoid tracking results using ultrasound to previous reports based on different instrumental tools supports the possibility of using hyoid movement as a measure of swallowing function in ultrasound. The use of machine learning to automatically track the hyoid movement potentially provides a reliable and efficient way to quantify swallowing function. These findings contribute towards improving the clinical utility of ultrasound as a swallowing assessment tool. Further research on both normative and clinical populations is needed to validate hyoid movement metrics as a means of differentiating normal and abnormal swallows and to verify the reliability of automatic tracking. What is already known on this subject: There is growing interest in the use of ultrasound as an adjunct tool for assessing swallowing function. However, there is currently insufficient knowledge about the patterning and timing of lingual and hyoid movement in a typical swallow. We know that movement of the hyoid plays an essential role in bolus transition and airway protection. However, manual tracking of hyoid movement is time-consuming and restricts the extent of large-scale normative studies. What this study adds: We show that hyoid movement can be tracked automatically, providing measurable continuous positional data. Measurements derived from these objective data are comparable with similar measures previously reported using videofluoroscopy, and of the two automatic trackers assessed, the DNN approach demonstrates better robustness and higher agreement with manually derived measures. Using this kinematic data, hyoid movement can be related to different stages of swallowing. Clinical implications of this study: This study contributes towards our understanding of the kinematics of a typical swallow by evaluating an automated hyoid tracking method, paving the way for future studies of typical and disordered swallows.
The challenges of image acquisition highlight issues to be considered when establishing clinical protocols. The application of machine learning enhances the utility of ultrasound swallowing assessment by reducing the labour required and permitting a wider range of hyoid measurements. Further research in normative and clinical populations is facilitated by automatic data extraction allowing the validity of prospective hyoid measures in differentiating different types of swallows to be rigorously assessed.
International journal of language & communication disorders
"2022-03-15T00:00:00"
[ "Joan K-Y Ma", "Alan A Wrench" ]
10.1111/1460-6984.12712 10.1155/2014/738971 10.1186/s12877-020-01832-0 10.1186/s12938-017-0412-1 10.1038/s41598-020-80871-4
Diagnosis of COVID-19 using chest X-ray images based on modified DarkCovidNet model.
Coronavirus disease, also known as COVID-19, is an infectious disease caused by SARS-CoV-2. It directly affects the upper and lower respiratory tract and has threatened the health of many people around the world. The latest statistics show that the number of people diagnosed with COVID-19 is growing exponentially. Diagnosing positive cases of COVID-19 is important for preventing further spread of the disease. Coronavirus currently poses a serious challenge to scientists, medical experts and researchers around the world, from its detection to its treatment. It is currently detected using reverse transcription polymerase chain reaction (RT-PCR) analysis at most test centers around the world. Yet, knowing the reliability of a deep learning-based medical diagnosis is important for doctors to build confidence in the technology and improve treatment. The goal of this study is to develop a model that automatically identifies COVID-19 from chest X-ray images. To achieve this, we modified the DarkCovidNet model, which is based on a convolutional neural network (CNN), and report experimental results for two scenarios: binary classification (COVID-19 versus No-findings) and multi-class classification (COVID-19 versus pneumonia versus No-findings). The model was trained on more than 10 thousand X-ray images and achieved an average accuracy of 99.53% and 94.18% for binary and multi-class classification, respectively. The proposed method therefore demonstrates the effectiveness of COVID-19 detection using X-ray images. Our model can be used to test patients via the cloud and in situations where RT-PCR tests and other options aren't available.
Evolutionary intelligence
"2022-03-15T00:00:00"
[ "Dawit Kiros Redie", "Abdulhakim Edao Sirko", "Tensaie Melkamu Demissie", "Semagn Sisay Teferi", "Vimal Kumar Shrivastava", "Om Prakash Verma", "Tarun Kumar Sharma" ]
10.1007/s12065-021-00679-7 10.1056/NEJMoa2002032 10.1016/S1473-3099(20)30237-1 10.1016/j.compbiomed.2020.103792 10.1016/j.heliyon.2018.e00938 10.1007/s12065-020-00493-7 10.1080/08839514.2020.1792034 10.1049/iet-ipr.2019.0561
Classification of COVID-19 and Influenza Patients Using Deep Learning.
Coronavirus disease (COVID-19) is a deadly disease that initially presents with flu-like symptoms. COVID-19 emerged in China and quickly spread around the globe, resulting in the coronavirus epidemic of 2019-22. As this virus is very similar to influenza in its early stages, its accurate detection is challenging. Several techniques for detecting the virus in its early stages are being developed. Deep learning techniques are a handy tool for detecting various diseases. For the classification of COVID-19 and influenza, we proposed tailored deep learning models. A publicly available dataset of X-ray images was used to develop the proposed models. According to the test results, deep learning models can accurately diagnose normal, influenza, and COVID-19 cases. Our proposed long short-term memory (LSTM) technique outperformed the CNN model in the evaluation phase on chest X-ray images, achieving 98% accuracy.
Contrast media & molecular imaging
"2022-03-15T00:00:00"
[ "Muhammad Aftab", "Rashid Amin", "Deepika Koundal", "Hamza Aldabbas", "Bader Alouffi", "Zeshan Iqbal" ]
10.1155/2022/8549707 10.1109/iccca49541.2020.9250907 10.2807/1560-7917.ES.2020.25.47.2001943 10.1016/j.jcmg.2020.10.023 10.1109/iscv49265.2020.9204043 10.1016/j.gene.2020.145145 10.1007/s00330-021-07715-1 10.1016/j.jcv.2020.104543 10.1016/s0169-5347(02)02502-8 10.1128/mr.56.1.152-179.1992 10.1016/s0140-6736(99)01241-6 10.1086/515616 10.1016/j.bbe.2021.05.013 10.1515/cclm-2020-1294 10.1016/j.eswa.2020.114054 10.36548/jismac.2021.2.006 10.1016/j.cmpb.2020.105608 10.1073/pnas.95.17.10224 10.32604/cmc.2020.012148 10.1142/s0219720020400028 10.18632/aging.104132 10.1007/s00592-020-01522-8 10.1177/1756286420917830 10.1002/jmv.26125 10.1016/j.vaccine.2020.07.058 10.1001/archinte.160.21.3243 10.1086/529211 10.1016/j.earlhumdev.2020.105116 10.1186/s12916-020-01816-2 10.1016/j.micinf.2020.05.016 10.1038/s41598-021-83967-7 10.1093/emph/eoaa050 10.1109/access.2019.2927169 10.1109/icarcv.2014.7064414 10.1007/s11042-020-10165-4 10.1016/j.isprsjprs.2019.01.015
Ensemble Deep Learning and Internet of Things-Based Automated COVID-19 Diagnosis Framework.
Coronavirus disease (COVID-19) is a viral infection caused by SARS-CoV-2. Modalities such as computed tomography (CT) have been successfully utilized for the early-stage diagnosis of COVID-19 infected patients. Recently, many researchers have utilized deep learning models for the automated screening of COVID-19 suspected cases. An ensemble deep learning and Internet of Things (IoT) based framework is proposed for screening of COVID-19 suspected cases. Three well-known pretrained deep learning models are ensembled. Medical IoT devices are utilized to collect the CT scans, and automated diagnoses are performed on IoT servers. The proposed framework is compared with thirteen competitive models over a four-class dataset. Experimental results reveal that the proposed ensembled deep learning model yielded 98.98% accuracy. Moreover, the model outperforms all competitive models on the other performance metrics, achieving 98.56% precision, 98.58% recall, 98.75% F-score, and 98.57% AUC. Therefore, the proposed framework can accelerate COVID-19 diagnosis.
Contrast media & molecular imaging
"2022-03-15T00:00:00"
[ "Anita S Kini", "A Nanda Gopal Reddy", "Manjit Kaur", "S Satheesh", "Jagendra Singh", "Thomas Martinetz", "Hammam Alshazly" ]
10.1155/2022/7377502 10.1109/jbhi.2020.3001216 10.3390/su132413642 10.1007/s12652-020-02669-6 10.1109/tmi.2020.2995965 10.1109/tmi.2020.2996256 10.1142/s0218001421510046 10.1109/JIOT.2020.3034074 10.1109/access.2020.3010287 10.7717/peerj-cs.564 10.1007/s10462-021-09985-z 10.1109/access.2020.3001973 10.1007/978-3-030-55258-9_17 10.1002/ima.22469 10.7717/peerj-cs.306 10.1109/tmi.2020.2993291 10.1109/jbhi.2020.3023246 10.1109/access.2020.3005510 10.1109/access.2020.3003810 10.1109/access.2020.3033762 10.1109/access.2020.3025010 10.1007/s10489-020-02149-6 10.1109/tmi.2020.2994908 10.1109/access.2020.2994762 10.1109/jbhi.2020.3019505 10.1109/access.2020.3025164 10.3390/s21020455 10.7717/peerj-cs.655 10.3390/app11157004 10.1155/2021/8829829 10.2196/23811 10.1109/access.2021.3120717 10.1049/trit.2018.1006 10.37965/jait.2021.0017 10.1049/trit.2019.0028 10.37965/jait.2020.0037 10.1049/trit.2019.0051 10.37965/jait.2020.0051 10.1109/access.2020.3024116 10.1109/access.2021.3109441 10.3233/xst-200715 10.1109/34.58871 10.3390/s19194139 10.1142/S0129065716500258 10.1016/j.bspc.2021.103009 10.1109/access.2021.3101142 10.1016/j.asoc.2020.106885 10.1038/s41467-020-17971-2 10.1016/j.cell.2020.04.045 10.1016/j.compbiomed.2020.104037
Evaluation of Pulmonary Edema Using Ultrasound Imaging in Patients With COVID-19 Pneumonia Based on a Non-local Channel Attention ResNet.
Recent research has revealed that COVID-19 pneumonia is often accompanied by pulmonary edema. Pulmonary edema is a manifestation of acute lung injury (ALI), and may progress to hypoxemia and potentially acute respiratory distress syndrome (ARDS), which have higher mortality. Precise classification of the degree of pulmonary edema in patients is of great significance in choosing a treatment plan and improving the chance of survival. Here we propose a deep learning neural network named Non-local Channel Attention ResNet to analyze the lung ultrasound images and automatically score the degree of pulmonary edema of patients with COVID-19 pneumonia. The proposed method was designed by combining the ResNet with the non-local module and the channel attention mechanism. The non-local module was used to extract the information on characteristics of A-lines and B-lines, on the basis of which the degree of pulmonary edema could be defined. The channel attention mechanism was used to assign weights to decisive channels. The data set contains 2220 lung ultrasound images provided by Huoshenshan Hospital, Wuhan, China, of which 2062 effective images with accurate scores assigned by two experienced clinicians were used in the experiment. The experimental results indicated that our method achieved high accuracy in classifying the degree of pulmonary edema in patients with COVID-19 pneumonia by comparison with previous deep learning methods, indicating its potential to monitor patients with COVID-19 pneumonia.
Ultrasound in medicine & biology
"2022-03-13T00:00:00"
[ "Qinghua Huang", "Ye Lei", "Wenyu Xing", "Chao He", "Gaofeng Wei", "Zhaoji Miao", "Yifan Hao", "Guannan Li", "Yan Wang", "Qingli Li", "Xuelong Li", "Wenfang Li", "Jiangang Chen" ]
10.1016/j.ultrasmedbio.2022.01.023
Study of Different Deep Learning Methods for Coronavirus (COVID-19) Pandemic: Taxonomy, Survey and Insights.
COVID-19 has evolved into one of the most severe and acute illnesses. The number of deaths continues to climb despite the development of vaccines, and new strains of the virus have appeared. The early and precise recognition of COVID-19 is key to viably treating patients and containing the pandemic as a whole. Deep learning technology has been shown to be a significant tool in diagnosing COVID-19 and in assisting radiologists to detect anomalies and numerous diseases during this epidemic. This research seeks to provide an overview of novel deep learning-based applications for the medical imaging modalities of computed tomography (CT) and chest X-rays (CXR) for the detection and classification of COVID-19. First, we give an overview of the taxonomy of medical imaging and present a summary of the types of deep learning (DL) methods. Then, we present an overview of systems created for COVID-19 detection and classification using deep learning techniques. We also give a rundown of the most well-known databases used to train these networks. Finally, we explore the challenges of using deep learning algorithms to detect COVID-19, as well as future research prospects in this field.
Sensors (Basel, Switzerland)
"2022-03-11T00:00:00"
[ "Lamia Awassa", "Imen Jdey", "Habib Dhahri", "Ghazala Hcini", "Awais Mahmood", "Esam Othman", "Muhammad Haneef" ]
10.3390/s22051890 10.7717/peerj.9725 10.1016/j.diii.2020.03.014 10.1109/ACCESS.2020.3010287 10.47419/bjbabs.v2i01.25 10.15585/mmwr.mm7003e2 10.1101/2021.02.10.21251247 10.1016/j.cmi.2021.05.022 10.1101/2021.01.03.21249169 10.1101/2020.12.30.20249034 10.32604/cmc.2020.013232 10.1097/RTI.0000000000000533 10.1038/npre.2009.3267.2 10.3174/ajnr.A4967 10.1016/j.ejrad.2020.109151 10.1148/radiol.2020202568 10.1088/1742-6596/1228/1/012045 10.1016/j.media.2017.07.005 10.1007/s00259-020-04953-1 10.1038/s41746-021-00399-3 10.1007/s42600-021-00151-6 10.1016/j.irbm.2020.05.003 10.1007/s11548-020-02286-w 10.1155/2021/5527923 10.3390/app10165683 10.1109/ACCESS.2020.3025010 10.32604/cmc.2021.018449 10.4249/scholarpedia.32832 10.1016/j.imu.2020.100412 10.1016/j.jbi.2020.103627 10.4249/scholarpedia.5947 10.1016/j.cie.2022.107960 10.1186/s12880-020-00529-5 10.1109/TNNLS.2021.3054746 10.1002/mp.14609 10.1186/s12967-021-02992-2 10.3390/diagnostics11081405 10.1016/j.patcog.2021.108452 10.11591/ijece.v11i1.pp844-850 10.11591/ijece.v11i1.pp365-374 10.1007/s00138-020-01119-9 10.3390/computation9010003 10.1007/s12539-020-00408-1 10.32604/cmc.2020.012585 10.1007/s00330-021-07715-1 10.1016/j.bspc.2021.102588 10.1080/07391102.2021.1875049 10.1016/j.ipm.2020.102411 10.1148/radiol.2020203511 10.1007/s00521-020-05410-8 10.1016/j.imu.2020.100505 10.1016/j.bbe.2020.08.008 10.1101/2020.03.12.20027185 10.1007/s10140-020-01886-y 10.1007/s10278-013-9622-7 10.1038/s41597-020-00741-6 10.3390/sym13010113 10.1007/s40747-020-00199-4 10.1007/s10489-020-01867-1 10.1007/s12559-020-09787-5 10.1007/s10489-020-01902-1 10.1080/07391102.2020.1767212 10.1038/s42256-021-00338-7 10.1038/s41598-020-76550-z 10.1016/j.asoc.2020.106885 10.1097/RLI.0000000000000748 10.1016/j.patcog.2020.107613 10.1145/3431804 10.1016/j.asoc.2020.106912 10.1016/j.asoc.2020.106859 10.1016/j.asoc.2020.106744 10.1016/j.compbiomed.2020.104181 10.1016/j.bspc.2020.102257 10.1016/j.chaos.2020.110495 10.1088/1757-899X/1051/1/012007 
10.1007/s10489-020-01978-9 10.1016/j.asoc.2021.107160 10.1038/s42003-020-01535-7 10.1016/j.neucom.2021.03.034 10.1016/j.compbiomed.2020.103805 10.1109/TMI.2020.3040950 10.1016/j.chaos.2020.110245 10.1016/j.inffus.2021.04.008 10.1117/1.JMI.3.4.044506 10.1016/j.media.2017.06.015 10.1109/ACCESS.2021.3058537 10.1186/s40537-019-0197-0
A radiomics-boosted deep-learning model for COVID-19 and non-COVID-19 pneumonia classification using chest x-ray images.
To develop a deep learning model design that integrates radiomics analysis for enhanced performance of COVID-19 and non-COVID-19 pneumonia detection using chest x-ray images. As a novel radiomics approach, a 2D sliding kernel was implemented to map the impulse response of radiomic features throughout the entire chest x-ray image; thus, each feature is rendered as a 2D map in the same dimension as the x-ray image. Based on each of the three investigated deep neural network architectures, including VGG-16, VGG-19, and DenseNet-121, a pilot model was trained using x-ray images only. Subsequently, two radiomic feature maps (RFMs) were selected based on cross-correlation analysis in reference to the pilot model saliency map results. The radiomics-boosted model was then trained based on the same deep neural network architecture using x-ray images plus the selected RFMs as input. The proposed radiomics-boosted design was developed using 812 chest x-ray images with 262/288/262 COVID-19/non-COVID-19 pneumonia/healthy cases, and 649/163 cases were assigned as training-validation/independent test sets. For each model, 50 runs were trained with random assignments of training/validation cases following the 7:1 ratio in the training-validation set. Sensitivity, specificity, accuracy, and ROC curves together with area-under-the-curve (AUC) from all three deep neural network architectures were evaluated. After radiomics-boosted implementation, all three investigated deep neural network architectures demonstrated improved sensitivity, specificity, accuracy, and ROC AUC results in COVID-19 and healthy individual classifications. VGG-16 showed the largest improvement in COVID-19 classification ROC (AUC from 0.963 to 0.993), and DenseNet-121 showed the largest improvement in healthy individual classification ROC (AUC from 0.962 to 0.989). The reduced variations suggested improved robustness of the model to data partition. 
For the challenging non-COVID-19 pneumonia classification task, radiomics-boosted implementation of VGG-16 (AUC from 0.918 to 0.969) and VGG-19 (AUC from 0.964 to 0.970) improved ROC results, while DenseNet-121 showed a slight yet insignificant ROC performance reduction (AUC from 0.963 to 0.949). The highest accuracies achieved for COVID-19/non-COVID-19 pneumonia/healthy individual classification were 0.973 (VGG-19)/0.936 (VGG-19)/0.933 (VGG-16), respectively. The inclusion of radiomic analysis in deep learning model design improved the performance and robustness of COVID-19/non-COVID-19 pneumonia/healthy individual classification, which holds great potential for clinical applications in the COVID-19 pandemic.
Medical physics
"2022-03-10T00:00:00"
[ "Zongsheng Hu", "Zhenyu Yang", "Kyle J Lafata", "Fang-Fang Yin", "Chunhao Wang" ]
10.1002/mp.15582 10.1007/s00259-020-05075-4 10.1007/s00261-021-03254-x 10.48550/arXiv.2003.11597 10.1109/cvpr.2009.5206848 10.1088/2057-1976/ab779c
Truncating fine-tuned vision-based models to lightweight deployable diagnostic tools for SARS-CoV-2 infected chest X-rays and CT-scans.
In a brief period, the recent coronavirus (COVID-19) has already infected large populations worldwide. Diagnosing an infected individual requires a Real-Time Polymerase Chain Reaction (RT-PCR) test, which can be expensive and limited in most developing countries, making them rely on alternatives like Chest X-Rays (CXR) or Computerized Tomography (CT) scans. However, results from these imaging approaches caused confusion among medical experts due to their similarities with other diseases like pneumonia. Other solutions based on Deep Convolutional Neural Networks (DCNN) recently improved and automated the diagnosis of COVID-19 from CXRs and CT scans. However, upon examination, most proposed studies focused primarily on accuracy rather than deployment and reproduction, which may make them difficult to reproduce and implement in locations with inadequate computing resources. Therefore, instead of focusing only on accuracy, this work investigated parameter reduction through a proposed truncation method and analyzed its effects. Various DCNNs had their architectures truncated to retain only their initial core block, reducing their parameter sizes to <1 M. Once trained and validated, findings showed that a DCNN with robust layer aggregations like the InceptionResNetV2 was less vulnerable to the adverse effects of the proposed truncation. The results also showed that from its full-length size of 55 M parameters with 98.67% accuracy, the proposed truncation reduced its parameters to only 441 K while still attaining an accuracy of 97.41%, outperforming other studies based on its size-to-performance ratio.
Multimedia tools and applications
"2022-03-10T00:00:00"
[ "Francis Jesmar Montalbo" ]
10.1007/s11042-022-12484-0 10.1016/j.chaos.2020.110120 10.2196/19673 10.1016/j.radi.2020.09.010 10.1148/radiol.2020201491 10.1109/ACCESS.2014.2325029 10.1007/s13246-020-00888-x 10.1134/s1054661816010065 10.1016/j.cmpb.2018.01.025 10.1016/j.bsheal.2020.05.002 10.1016/j.ibmed.2021.100027 10.5121/ijdkp.2015.5201 10.1016/j.compbiomed.2021.104348 10.3390/app10103359 10.1155/2018/2061516 10.1148/radiol.2020200905 10.1016/j.bspc.2021.102583 10.1016/j.patrec.2020.10.001 10.1016/j.ijnss.2020.03.012 10.5555/2627435.2670313 10.1016/j.immuni.2020.05.004 10.1089/omi.2019.0142 10.1021/acsnano.0c02624 10.1016/j.eng.2020.04.010 10.1016/j.inffus.2019.06.024
An Improved COVID-19 Detection using GAN-Based Data Augmentation and Novel QuNet-Based Classification.
COVID-19 is a fatal disease caused by the SARS-CoV-2 virus that had caused around 5.3 million deaths globally as of December 2021. The detection of this disease is a time-consuming process, which has worsened the situation around the globe, and the disease has been identified as a world pandemic by the WHO. Deep learning-based approaches are being widely used to diagnose COVID-19 cases, but the limited size of the publicly available dataset causes the problem of model over-fitting. Modern artificial intelligence-based techniques can be used to enlarge the dataset and avoid the over-fitting problem. This research work presents the use of various deep learning models along with state-of-the-art augmentation methods, namely, classical and generative adversarial network- (GAN-) based data augmentation. Furthermore, four existing deep convolutional networks, namely, DenseNet-121, InceptionV3, Xception, and ResNet101, have been used for the detection of the virus in X-ray images after training on the augmented dataset. Additionally, we have also proposed a novel convolutional neural network (QuNet) to improve COVID-19 detection. The comparative analysis of the achieved results reflects that both QuNet and Xception achieved high accuracy with the classically augmented dataset, whereas QuNet also outperformed the others and delivered 90% detection accuracy with the GAN-based augmented dataset.
BioMed research international
"2022-03-09T00:00:00"
[ "Usman Asghar", "Muhammad Arif", "Khurram Ejaz", "Dragos Vicoveanu", "Diana Izdrui", "Oana Geman" ]
10.1155/2022/8925930 10.1109/ACCESS.2020.2994762 10.1007/s00521-020-05437-x 10.1016/j.compbiomed.2021.104930 10.1007/s10462-021-10066-4 10.32604/cmc.2021.014265 10.1007/s00500-021-06075-8 10.1016/j.compbiomed.2020.104130 10.1016/j.bspc.2020.102365 10.1007/s10489-020-01826-w 10.1016/j.compbiomed.2020.104181 10.1007/978-3-030-86340-1_47 10.1007/s40009-020-01009-8
Radiological Analysis of COVID-19 Using Computational Intelligence: A Broad Gauge Study.
Pulmonary medical image analysis using image processing and deep learning approaches has made remarkable achievements in the diagnosis, prognosis, and severity assessment of lung diseases. The COVID-19 epidemic brought about by the novel coronavirus has triggered a critical need for artificial intelligence assistance in diagnosing and controlling the disease to reduce its effects on people and global economies. This study aimed at identifying the various COVID-19 medical imaging analysis models proposed by different researchers and featured their merits and demerits. It gives a detailed discussion of the existing COVID-19 detection methodologies (diagnosis, prognosis, and severity/risk detection) and the challenges encountered. It also highlights the various preprocessing and post-processing methods involved in enhancing the detection mechanism. This work also tries to bring out the unexplored research areas available for medical image analysis and how the vast research done for COVID-19 can advance the field. Although deep learning methods present high levels of efficiency, some limitations are briefly described in the study. Hence, this review can help in understanding the utilization and the pros and cons of deep learning in analyzing medical images.
Journal of healthcare engineering
"2022-03-08T00:00:00"
[ "S Vineth Ligi", "Soumya Snigdha Kundu", "R Kumar", "R Narayanamoorthi", "Khin Wee Lai", "Samiappan Dhanalakshmi" ]
10.1155/2022/5998042 10.1001/jama.2020.3786 10.1148/rg.2020200159 10.1016/j.mayocp.2020.04.004 10.1148/ryct.2020200034 10.1016/s0140-6736(20)30183-5 10.12669/pjms.36.COVID19-S4.2778 10.1007/s12065-020-00540-3 10.1148/radiol.2018180547 10.1016/j.procs.2018.05.198 10.1109/tmi.2016.2528162 10.1109/tpami.2021.3059968 10.1007/978-3-030-01261-8_1 10.1007/s11042-020-09894-3 10.2174/1573405614666180402124438 10.1109/icectech.2011.5941891 10.1186/2193-8636-1-6 10.1109/icosec49089.2020.9215356 10.1016/j.compeleceng.2021.107225 10.1109/cvpr.2018.00474 10.1113/jphysiol.1959.sp006308 10.1007/bf00344251 10.1109/cvpr.2015.7298594 10.1109/acpr.2015.7486599 10.1109/cvpr.2016.90 10.1007/978-3-319-46493-0_38 10.1109/cvpr.2017.195 10.1109/cvpr.2017.634 10.1109/cvpr.2018.00745 10.1109/cvpr.2018.00907 10.1109/iccv.2017.74 10.3390/s21020455 10.1007/s00521-020-05636-6 10.1186/s40537-020-00392-9 10.1038/s41467-020-17971-2 10.1109/tmi.2020.2995508 10.1016/j.ejrad.2020.109041 10.1016/j.compbiomed.2020.103795 10.1109/jbhi.2020.3019505 10.1007/s10044-021-00984-y 10.1109/cvpr.2017.369 10.1016/j.bbe.2020.08.005 10.1016/j.cell.2018.02.010 10.1007/s13246-020-00952-6 10.1038/s41598-020-76550-z 10.1109/access.2020.3010287 10.1109/tmi.2020.2994908 10.3390/e22050517 10.1109/tcbb.2021.3065361 10.1016/j.compbiomed.2020.103805 10.1038/s41598-020-74539-2 10.1016/j.bspc.2020.102257 10.3390/healthcare9050522 10.1016/j.compbiomed.2020.103792 10.1016/j.compbiomed.2020.103869 10.1016/j.chaos.2020.110182 10.1007/s10489-020-01831-z 10.1007/s10096-020-03901-z 10.1109/cbms.2015.49 10.3390/ijerph18063056 10.1038/s41598-020-76282-0 10.1093/jamia/ocv080 10.1038/s41598-020-74164-z 10.1016/j.chaos.2020.109944 10.1016/j.bspc.2021.102588 10.7150/ijms.46684 10.1148/ryct.2020200082 10.1016/j.asoc.2020.106897 10.1148/ryct.2020200075 10.1183/13993003.00775-2020 10.1038/s41467-020-18786-x 10.1001/jamainternmed.2020.2033 10.7150/thno.46428 10.7759/cureus.9448 10.1148/ryai.2019180041 10.1609/aaai.v33i01.3301590 
10.1148/radiol.2019191293 10.1016/j.media.2020.101797 10.1371/journal.pone.0236621 10.1109/jbhi.2020.3037127 10.1016/j.patcog.2020.107700 10.1155/2020/9756518 10.1186/s12864-019-6413-7 10.26355/eurrev_202011_23640 10.1109/access.2021.3058537 10.1016/j.cmpb.2018.06.006 10.1038/s41598-019-39071-y 10.1155/2021/9208138 10.1001/jamainternmed.2013.3023 10.1093/intqhc/mzaa144 10.1002/jmv.27281 10.1109/wacv48630.2021.00362 10.1109/cvpr.2016.319 10.1186/s12880-015-0068-x 10.1016/j.media.2021.101978 10.1109/icacci.2014.6968381 10.1016/s0734-189x(87)80186-x 10.1109/42.816070 10.1007/978-3-642-34303-2_11 10.1109/icaca.2016.7887983 10.1016/j.ijleo.2021.166652 10.1109/access.2020.2994762 10.1142/s021800142051009x 10.1016/j.patcog.2020.107747 10.1016/j.inffus.2021.04.008 10.1007/s00500-020-05424-3
A CNN based coronavirus disease prediction system for chest X-rays.
Coronavirus disease (COVID-19) proliferated globally in early 2020, causing existential dread across the whole world. Radiography is crucial in the clinical staging and diagnosis of COVID-19 and offers high potential to improve healthcare plans for tackling the pandemic. However, high variation in infection characteristics and low contrast between normal and infected regions pose great challenges in preparing radiological reports. To address these challenges, this study presents CODISC-CNN (a CNN-based Coronavirus DIsease Prediction System for Chest X-rays) that can automatically extract features from chest X-ray images for disease prediction. To localize the infected region of the X-ray, image edges are detected through preprocessing. Furthermore, to mitigate the shortage of labeled datasets, data augmentation has been adopted. Extensive experiments have been performed to classify X-ray images into two classes (Normal and COVID), three classes (Normal, COVID, and Virus Bacteria), and four classes (Normal, COVID, Virus Bacteria, and Virus Pneumonia) with accuracies of 97%, 89%, and 84%, respectively. The proposed CNN-based model outperforms many cutting-edge classification models and boosts state-of-the-art performance.
Journal of ambient intelligence and humanized computing
"2022-03-08T00:00:00"
[ "Umair Hafeez", "Muhammad Umer", "Ahmad Hameed", "Hassan Mustafa", "Ahmed Sohaib", "Michele Nappi", "Hamza Ahmad Madni" ]
10.1007/s12652-022-03775-3 10.1007/s10489-020-01829-7 10.1007/s1324 10.1016/j.compbiomed.2020.103795 10.1093/aje/kws259 10.1109/TII.2021.3057524 10.1016/j.chaos.2020.109864 10.3390/ijerph17082690 10.1073/pnas.2004168117 10.1111/vox.12939 10.1016/S0140-6736(20)30183-5 10.1145/3065386 10.1371/journal.ppat.0030151 10.1016/j.chaos.2020.109853 10.1001/jama.2020.1585 10.1038/s41598-019-56847-4 10.1016/S2213-2600(20)30076-X 10.1016/j.compbiomed.2020.103671 10.1016/j.compbiolchem.2009.07.005