Dataset fields: title (string, 2 to 287 chars), abstract (string, 0 to 5.14k chars), journal (string, 4 to 184 chars), date (unknown), authors (sequence, 1 to 57 entries), doi (string, 16 to 6.63k chars).
COVID-19 diagnosis on CT images with Bayes optimization-based deep neural networks and machine learning algorithms.
Early diagnosis of COVID-19, the new coronavirus disease, is considered important for the treatment and control of this disease. The diagnosis of COVID-19 is based on two basic approaches, laboratory testing and chest radiography, and there has been a significant increase in studies performed in recent months using chest computed tomography (CT) scans and artificial intelligence techniques. Manual classification of patient CT scans consumes a serious amount of radiology professionals' valuable time. Considering the rapid increase in COVID-19 infections, in order to automate the analysis of CT scans and minimize this loss of time, this paper proposes a new method using Bayes optimization (BO)-based MobileNetV2 and ResNet-50 models together with SVM and kNN machine learning algorithms. With this method, an accuracy of 99.37% was achieved with an average precision of 99.38%, recall of 99.36%, and F-score of 99.37% on datasets containing COVID and non-COVID classes. Examining the performance results, the proposed method is expected to serve as a decision support mechanism with high classification success for the diagnosis of COVID-19 from CT scans.
Neural computing & applications
"2022-03-08T00:00:00"
[ "MuratCanayaz", "SanemŞehribanoğlu", "RecepÖzdağ", "MuratDemir" ]
10.1007/s00521-022-07052-4 10.1155/2020/9756518 10.1148/radiol.2020200642 10.1007/s10096-020-03901-z 10.1148/radiol.2020200823 10.1016/j.crad.2020.06.005 10.1049/trit.2019.0017 10.1148/radiol.2020200432 10.1016/j.imu.2020.100427 10.1101/2020.04.16.20064709 10.1109/TMI.2020.2995965 10.1080/07391102.2020.1788642 10.3390/math9222921 10.3389/fpubh.2020.00441 10.1016/j.mehy.2020.109761 10.1007/s42979-021-00980-3 10.1016/j.asoc.2020.106580 10.1371/journal.pcbi.1009472 10.3390/app10186448 10.1016/j.media.2020.101836 10.1016/j.chaos.2020.110190 10.1101/2020.04.13.20063941 10.1109/TIP.2021.3058783 10.1016/j.jnlest.2020.100007 10.1007/s00138-020-01087-0 10.1016/j.catena.2019.104249 10.1109/JPROC.2015.2494218 10.4304/jcp.8.10.2632-2639 10.1088/1742-6596/1442/1/012027 10.1155/2014/795624 10.1023/A:1012487302797 10.1007/s10489-017-0992-2 10.1007/BF00994018 10.21037/atm.2016.03.37 10.4249/scholarpedia.1883 10.1101/2020.04.24.20078584 10.1016/j.bspc.2020.102257
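The pipeline described in the abstract above (a pretrained CNN used as a frozen feature extractor, with a classical classifier tuned by Bayes optimization) can be illustrated with a minimal sketch. The code below is not the authors' implementation: the use of torchvision's MobileNetV2, scikit-optimize's BayesSearchCV, the search ranges, and the load_ct_images helper are all assumptions for illustration.

```python
# Sketch: CNN features + Bayes-optimized SVM for COVID / non-COVID CT classification.
# Assumes images are already loaded as 224x224 RGB arrays; not the authors' original code.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC
from skopt import BayesSearchCV
from skopt.space import Real

def extract_features(images):
    """Run a frozen, ImageNet-pretrained MobileNetV2 and return pooled feature vectors."""
    backbone = models.mobilenet_v2(weights="DEFAULT").features.eval()
    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    feats = []
    with torch.no_grad():
        for img in images:                      # img: HxWx3 uint8 array
            x = preprocess(img).unsqueeze(0)    # 1x3x224x224
            fmap = backbone(x)                  # 1x1280x7x7
            feats.append(fmap.mean(dim=[2, 3]).squeeze(0).numpy())  # global average pooling
    return np.stack(feats)

# X_img, y = load_ct_images(...)   # hypothetical loader returning images and 0/1 labels
# X = extract_features(X_img)
# Bayesian optimization over the SVM's C and gamma (RBF kernel), 5-fold CV.
search = BayesSearchCV(
    SVC(kernel="rbf"),
    {"C": Real(1e-2, 1e3, prior="log-uniform"),
     "gamma": Real(1e-4, 1e0, prior="log-uniform")},
    n_iter=30, cv=5, scoring="accuracy",
)
# search.fit(X, y)
# print(search.best_params_, search.best_score_)
```

In this sketch only the classifier hyperparameters are searched while the CNN stays frozen; the study's actual optimization targets may differ.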
Distribution Atlas of COVID-19 Pneumonia on Computed Tomography: A Deep Learning Based Description.
To construct a distribution atlas of coronavirus disease 2019 (COVID-19) pneumonia on computed tomography (CT) and further explore the difference in distribution by location and disease severity through a retrospective study of 484 cases in Jiangsu, China. All patients diagnosed with COVID-19 from January 10 to February 18 in Jiangsu Province, China, were enrolled in our study. The patients were further divided into asymptomatic/mild, moderate, and severe/critically ill groups. A deep learning algorithm was applied to the anatomic pulmonary segmentation and pneumonia lesion extraction. The frequency of opacity on CT was calculated, and a color-coded distribution atlas was built. A further comparison was made between the upper and lower lungs, between bilateral lungs, and between various severity groups. Additional lesion-based radiomics analysis was performed to ascertain the features associated with the disease severity. A total of 484 laboratory-confirmed patients with 945 repeated CT scans were included. Pulmonary opacity was mainly distributed in the subpleural and peripheral areas. The distances from the opacity to the nearest parietal/visceral pleura were shortest in the asymptomatic/mild group. More diffused lesions were found in the severe/critically ill group. The frequency of opacity increased with increased severity and peaked at about 3-4 or 7-8 o'clock direction in the upper lungs, as opposed to the 5 or 6 o'clock direction in the lower lungs. Lesions with greater energy, more circle-like, and greater surface area were more likely found in severe/critically ill cases than the others. This study constructed a detailed distribution atlas of COVID-19 pneumonia and compared specific patterns in different parts of the lungs at various severities. The radiomics features most associated with the severity were also found. These results may be valuable in determining the COVID-19 sub-phenotype. The online version contains supplementary material available at 10.1007/s43657-021-00011-4.
Phenomics (Cham, Switzerland)
"2022-03-03T00:00:00"
[ "ShanHuang", "YuanchengWang", "ZhenZhou", "QianYu", "YizhouYu", "YiYang", "ShenghongJu" ]
10.1007/s43657-021-00011-4 10.1148/radiol.2020200642 10.1016/j.compbiomed.2020.104037 10.1148/radiol.2020201237 10.1148/radiol.2020200230 10.1038/s41591-018-0177-5 10.1038/s41467-020-18786-x 10.1056/NEJMoa2002032 10.1148/ryct.2020200075 10.1016/S0140-6736(20)30183-5 10.1148/radiol.2020200241 10.1007/s10278-018-0079-6 10.1016/j.ejca.2011.11.036 10.1148/radiol.2020200236 10.3389/fmed.2020.00190 10.1001/jama.2020.4683 10.1007/s00330-020-06731-x 10.1002/mp.13264 10.2214/AJR.20.23034 10.1007/s00330-018-5509-9 10.1148/radiol.2020200269 10.1016/S1473-3099(20)30086-4 10.1016/S1470-2045(18)30413-3 10.1016/j.jinf.2020.02.017 10.1007/s00134-020-05976-w
Deep Residual Neural Network for COVID-19 Detection from Chest X-ray Images.
COVID-19 spread quickly throughout the world and became a pandemic. It has had a destructive effect on daily life, public health, and global business. It is crucial to identify positive patients as early as possible to limit the epidemic's further spread and to manage affected cases immediately. The demand for rapid assistive diagnostic tools has therefore grown. Recent findings obtained with radiology imaging systems suggest that such images contain salient information about COVID-19. The use of advanced artificial intelligence (AI) methods coupled with radiological imaging can support the reliable diagnosis of COVID-19. As radiographic images can reveal pneumonia infections, this research presents an accurate and automatic technique based on a deep residual network that analyzes chest X-ray images to screen for COVID-19 and diagnose confirmed patients. Physicians note that it is significantly challenging to separate COVID-19 from common viral and bacterial pneumonia, since COVID-19 is itself a viral pneumonia. The proposed network is extended to perform detailed diagnostics for two multi-class classification tasks (COVID-19, Normal, Viral Pneumonia) and (COVID-19, Normal, Viral Pneumonia, Bacterial Pneumonia) as well as binary classification. Comparing the proposed network with popular methods on public databases, the results show that the proposed algorithm provides an accuracy of 92.1% in classifying the multi-class case of COVID-19, normal, viral pneumonia, and bacterial pneumonia. It can be applied to support radiologists in verifying their initial assessment.
SN computer science
"2022-03-01T00:00:00"
[ "AmirhosseinPanahi", "RezaAskari Moghadam", "MohammadrezaAkrami", "KuroshMadani" ]
10.1007/s42979-022-01067-3 10.1016/j.scitotenv.2020.138817 10.1016/S0140-6736(20)30211-7 10.1016/j.jaut.2020.102433 10.1016/j.jbef.2020.100326 10.1148/radiol.11092149 10.1016/j.crad.2018.12.015 10.3390/app10093233 10.3390/s20040957 10.1148/radiol.2019181960 10.1016/j.cell.2018.02.010 10.1007/s00345-019-03059-0 10.1016/j.mehy.2020.109761 10.1016/j.cmpb.2020.105581 10.1007/s13246-020-00865-4 10.1016/j.patrec.2020.09.010 10.1007/s10489-020-01829-7 10.1038/s41598-019-56847-4 10.1007/s10044-020-00887-4 10.1016/j.compbiomed.2020.103869 10.1007/s42979-021-00881-5 10.1016/j.bspc.2021.103272 10.1016/j.bbe.2021.09.004 10.1109/ACCESS.2020.3010287 10.1007/s10916-021-01747-2 10.1109/TGRS.2017.2755542 10.1109/TCSVT.2018.2869680 10.1016/j.compbiomed.2020.103792
Machine learning-based automatic detection of novel coronavirus (COVID-19) disease.
The World Health Organization declared the coronavirus (COVID-19) outbreak a pandemic and a universal health crisis. Any scientific tool that enables rapid detection of coronavirus with a high recognition rate could be extremely valuable to doctors. In this context, modern technologies such as deep learning, machine learning, and image processing applied to medical images like chest radiography (CXR) and computed tomography (CT) have emerged as promising solutions against COVID-19. Currently, the reverse transcription-polymerase chain reaction (RT-PCR) test is used to detect the coronavirus. Because the waiting period for test results is long and false-negative estimates are high, alternative solutions are desired. Thus, an automated machine learning-based algorithm is proposed for the detection of COVID-19 and grading across nine different datasets. This research applies image processing and machine learning to achieve rapid and accurate coronavirus detection from CXR and CT medical imaging, supporting early detection, diagnosis, and treatment of COVID-19 as early as possible. Firstly, images are preprocessed by normalization to enhance image quality and remove noise. Secondly, images are segmented by fuzzy c-means clustering. Then various features, namely statistical, textural, histogram of gradients, and discrete wavelet transform features (92 in total), are extracted, and the feature vector is reduced by principal component analysis. Lastly, k-NN, SRC, ANN, and SVM classifiers are used to decide among normal, pneumonia, and COVID-19-positive patients. The performance of the system has been validated by k-fold (k = 5) cross-validation. The proposed algorithm achieves 91.70% (k-nearest neighbor), 94.40% (sparse representation classifier), 96.16% (artificial neural network), and 99.14% (support vector machine) accuracy for COVID detection. The results show that feature combination and selection improve performance, with a processing time of 14.34 s, using machine learning and image processing techniques. Among the k-NN, SRC, ANN, and SVM classifiers, SVM shows the most efficient results, which are promising and comparable with the literature. The proposed approach yields an improved recognition rate compared to the literature and therefore shows immense potential to assist radiologists in their findings. It is also useful for early virus diagnosis and for discriminating pneumonia due to COVID-19 from other pneumonias.
Multimedia tools and applications
"2022-03-01T00:00:00"
[ "AnujaBhargava", "AtulBansal", "VishalGoyal" ]
10.1007/s11042-022-12508-9 10.1016/j.bbe.2020.08.005 10.1016/j.chaos.2020.110071 10.1016/j.asoc.2020.106912 10.1016/j.compag.2017.05.019 10.3389/fnins.2019.01346 10.1109/MC.2013.42 10.1109/ACCESS.2020.3005510 10.32604/cmc.2020.010691 10.1016/j.compbiomed.2020.104181 10.1056/NEJMoa2001316 10.1109/JBHI.2020.3018181 10.1093/bib/bbx044 10.1016/j.ijpharm.2013.10.024 10.1016/j.compbiomed.2020.103792 10.1016/j.compag.2012.11.009 10.1007/s10096-019-03782-x 10.1109/JBHI.2020.3019505 10.1007/s10661-012-2874-8 10.1016/j.eng.2020.04.010 10.1016/j.asoc.2020.106885
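A hedged sketch of the kind of classical pipeline summarized above: handcrafted features, principal component analysis, and an SVM evaluated with 5-fold cross-validation. The specific features below (simple first-order statistics plus HOG) and all parameter values are illustrative assumptions, not the paper's exact 92-feature set.

```python
# Sketch: handcrafted features -> PCA -> SVM, evaluated with 5-fold cross-validation.
# Feature choices below only illustrate the style of pipeline described in the abstract.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def image_features(img):
    """Concatenate first-order statistics with a histogram-of-gradients descriptor."""
    stats = np.array([img.mean(), img.std(), np.median(img), img.min(), img.max()])
    hog_vec = hog(img, orientations=8, pixels_per_cell=(32, 32), cells_per_block=(1, 1))
    return np.concatenate([stats, hog_vec])

# X = np.stack([image_features(img) for img in images])   # images: grayscale CXR/CT arrays
# y = labels                                               # 0 = normal, 1 = pneumonia, 2 = COVID-19
clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),   # keep components explaining 95% of the variance
    SVC(kernel="rbf", C=10.0),
)
# scores = cross_val_score(clf, X, y, cv=5)
# print("5-fold accuracy: %.2f%%" % (100 * scores.mean()))
```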
Detecting COVID-19 from chest computed tomography scans using AI-driven android application.
The COVID-19 (coronavirus disease 2019) pandemic affected more than 186 million people with over 4 million deaths worldwide by June 2021, a magnitude that has strained global healthcare systems. Chest Computed Tomography (CT) scans have a potential role in the diagnosis and prognostication of COVID-19. Designing a diagnostic system that is cost-efficient and convenient to operate on resource-constrained devices like mobile phones would enhance the clinical usage of chest CT scans and provide swift, mobile, and accessible diagnostic capabilities. This work proposes a novel Android application that detects COVID-19 infection from chest CT scans using a highly efficient and accurate deep learning algorithm. It further creates an attention heatmap, augmented on the segmented lung parenchyma region in the chest CT scans, which shows the regions of infection in the lungs through an algorithm developed as part of this work and verified by radiologists. We propose a novel selection approach combined with multi-threading for faster generation of heatmaps on a mobile device, which reduces the processing time by about 93%. The neural network trained to detect COVID-19 in this work achieves an F1 score and accuracy of 99.58% and a sensitivity of 99.69%, which is better than most results in the domain of COVID diagnosis from CT scans. This work will be beneficial in high-volume practices and help doctors triage patients for the early diagnosis of COVID-19 quickly and efficiently.
Computers in biology and medicine
"2022-02-28T00:00:00"
[ "AryanVerma", "Sagar BAmin", "MuhammadNaeem", "MonjoySaha" ]
10.1016/j.compbiomed.2022.105298 10.1016/j.ijantimicag.2020.105924 10.7326/M20-1382 10.1101/2021.07.06.21260109 10.1148/radiol.2020200230 10.1109/IPRIA53572.2021.9483563 10.1109/INMIC50486.2020.9318212 10.1016/j.compbiomed.2020.103792 10.1109/TIM.2020.3033072 10.1007/s00530-020-00728-8 10.1016/S0531-5131(03)00388-1 10.1118/1.4793409 10.1007/s10278-009-9229-1 10.1109/CVPR.2009.5206848 10.1007/s00521-020-05410-8 10.3390/v12070769
Optimized chest X-ray image semantic segmentation networks for COVID-19 early detection.
Although detection of COVID-19 from chest X-ray radiography (CXR) images is faster than PCR sputum testing, the accuracy of detecting COVID-19 from CXR images is lacking in existing deep learning models. This study aims to classify COVID-19 and normal patients from CXR images using semantic segmentation networks for detecting and labeling COVID-19 infected lung lobes in CXR images. For semantically segmenting infected lung lobes in CXR images for COVID-19 early detection, three structurally different deep learning (DL) networks, namely SegNet, U-Net, and a hybrid CNN combining SegNet and U-Net, are proposed and investigated. Further, optimized CXR image semantic segmentation networks, namely GWO SegNet, GWO U-Net, and GWO hybrid CNN, are developed with the grey wolf optimization (GWO) algorithm. The proposed DL networks are trained, tested, and validated, without and with optimization, on an openly available dataset that contains 2,572 COVID-19 CXR images, including 2,174 training images and 398 testing images. The DL networks and their GWO-optimized counterparts are also compared with other state-of-the-art models used to detect COVID-19 in CXR images. All optimized CXR image semantic segmentation networks for COVID-19 image detection developed in this study achieved detection accuracy higher than 92%. The results show the superiority of the optimized SegNet in segmenting COVID-19 infected lung lobes and classifying them with an accuracy of 98.08%, compared to the optimized U-Net and hybrid CNN. The optimized DL networks have the potential to be utilised to more objectively and accurately identify COVID-19 disease using semantic segmentation of COVID-19 CXR images of the lungs.
Journal of X-ray science and technology
"2022-02-26T00:00:00"
[ "AnandbabuGopatoti", "PVijayalakshmi" ]
10.3233/XST-211113
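Grey wolf optimization itself is a small population-based search loop, sketched below as it might be used to tune training hyperparameters of a segmentation network. The objective function here is a toy stand-in; in the study it would wrap training and validating SegNet/U-Net at the candidate settings, and the bounds and dimensions are illustrative, not the paper's values.

```python
# Sketch of a grey wolf optimization (GWO) loop for hyperparameter tuning.
import numpy as np

def gwo_minimize(objective, lower, upper, n_wolves=8, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lower)
    wolves = rng.uniform(lower, upper, size=(n_wolves, dim))
    fitness = np.array([objective(w) for w in wolves])
    for t in range(n_iters):
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]          # three best wolves lead the pack
        a = 2.0 - 2.0 * t / n_iters                     # linearly decreasing exploration factor
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0       # average of moves toward the leaders
            wolves[i] = np.clip(new_pos, lower, upper)
            fitness[i] = objective(wolves[i])
    best = np.argmin(fitness)
    return wolves[best], fitness[best]

def validation_loss(params):
    lr, dropout = params
    # Hypothetical stand-in: would train the SegNet/U-Net briefly and return 1 - val accuracy.
    return (np.log10(lr) + 3.0) ** 2 + (dropout - 0.3) ** 2   # toy surrogate for demonstration

best_params, best_loss = gwo_minimize(validation_loss, lower=[1e-5, 0.0], upper=[1e-1, 0.5])
```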
Proposing a novel deep network for detecting COVID-19 based on chest images.
The rapid outbreak of coronavirus threatens human life all around the world. Due to insufficient diagnostic infrastructure, developing an accurate, efficient, inexpensive, and quick diagnostic tool is of great importance. To date, researchers have proposed several detection models based on chest imaging analysis, primarily based on deep neural networks; however, none of them has yet achieved reliable and highly sensitive performance. Therefore, this study is primarily epidemiological research that aims to overcome the limitations mentioned above by proposing a large-scale, publicly available dataset of chest computed tomography scan (CT-scan) images consisting of more than 13k samples. Secondly, we propose a more sensitive deep neural network model for CT-scan images of the lungs, providing a pixel-wise attention layer on top of the high-level features extracted from the network. Moreover, the proposed model is extended through a transfer learning approach so that it is applicable to chest X-ray (CXR) images. The proposed model and its extension have been trained and evaluated through several experiments. The inclusion criteria were patients with suspected PE and positive real-time reverse-transcription polymerase chain reaction (RT-PCR) for SARS-CoV-2. The exclusion criteria were negative or inconclusive RT-PCR and other chest CT indications. Our model achieves an AUC score of 0.886, significantly better than its closest competitor, whose AUC is 0.843. Moreover, the obtained results on another commonly used benchmark show an AUC of 0.899, outperforming related models. Additionally, the sensitivity of our model is 0.858, while that of its closest competitor is 0.81, demonstrating the efficiency of the pixel-wise attention strategy in detecting coronavirus. Our promising results and the efficiency of the models imply that the proposed models can be considered reliable tools for assisting doctors in detecting coronavirus.
Scientific reports
"2022-02-26T00:00:00"
[ "MaryamDialameh", "AliHamzeh", "HosseinRahmani", "Amir RezaRadmard", "SafouraDialameh" ]
10.1038/s41598-022-06802-7 10.1371/journal.pone.0230548 10.1016/j.clindermatol.2020.12.009 10.1021/acs.molpharmaceut.5b00982 10.26599/BDMA.2018.9020001 10.1109/JBHI.2016.2636665 10.1016/j.drudis.2018.01.039 10.1038/s41467-020-17971-2 10.1109/ACCESS.2020.3007939 10.1016/j.sysarc.2020.101830 10.1038/srep10241 10.1016/S2213-2600(13)70164-4 10.1016/j.neunet.2020.07.010 10.1016/j.ins.2009.12.010
COVID-19 Detection in CT/X-ray Imagery Using Vision Transformers.
The steady spread of the 2019 Coronavirus disease has brought about human and economic losses, imposing a new lifestyle across the world. On this point, medical imaging tests such as computed tomography (CT) and X-ray have demonstrated a sound screening potential. Deep learning methodologies have evidenced superior image analysis capabilities with respect to prior handcrafted counterparts. In this paper, we propose a novel deep learning framework for Coronavirus detection using CT and X-ray images. In particular, a Vision Transformer architecture is adopted as a backbone in the proposed network, in which a Siamese encoder is utilized. The latter is composed of two branches: one for processing the original image and another for processing an augmented view of the original image. The input images are divided into patches and fed through the encoder. The proposed framework is evaluated on public CT and X-ray datasets. The proposed system confirms its superiority over state-of-the-art methods on CT and X-ray data in terms of accuracy, precision, recall, specificity, and F1 score. Furthermore, the proposed system also exhibits good robustness when a small portion of training data is allocated.
Journal of personalized medicine
"2022-02-26T00:00:00"
[ "Mohamad MahmoudAl Rahhal", "YakoubBazi", "Rami MJomaa", "AhmadAlShibli", "NaifAlajlan", "Mohamed LamineMekhalfi", "FaridMelgani" ]
10.3390/jpm12020310 10.1016/j.arth.2020.04.055 10.1080/14737159.2020.1757437 10.1001/jama.2020.2783 10.1080/22221751.2020.1745095 10.1109/JAS.2020.1003450 10.1016/j.ins.2016.01.082 10.1109/JBHI.2016.2635663 10.1109/JBHI.2018.2793534 10.1109/JBHI.2018.2866873 10.1016/j.ijid.2020.05.021 10.1007/s11547-020-01232-9 10.1007/s42399-020-00553-0 10.1111/acem.14004 10.1016/j.acra.2020.04.016 10.1183/13993003.04188-2020 10.1136/bmjopen-2020-042946 10.4329/wjr.v12.i9.195 10.1007/s11517-006-0044-2 10.1109/TMI.2002.806290 10.1016/j.cmpb.2015.10.010 10.1049/ic:19981039 10.2147/BCTT.S175311 10.1016/j.compbiomed.2011.06.010 10.1002/scj.4690250207 10.1016/j.nima.2006.08.134 10.1007/s10278-019-00227-x 10.1007/s00330-019-06170-3 10.1109/TMI.2018.2833385 10.1038/s41591-019-0447-x 10.1002/mp.13300 10.1148/radiol.2017162326 10.1038/nature21056 10.1148/radiol.2020200527 10.1148/radiol.2021204522 10.1016/j.pulmoe.2020.04.011 10.1038/s42256-021-00307-0 10.1101/2020.03.30.20047787v1 10.1016/j.asoc.2020.106691 10.1016/j.imu.2020.100412 10.1109/JBHI.2021.3058293 10.26599/BDMA.2020.9020012 10.1109/ACCESS.2021.3085418 10.1109/TII.2021.3057683 10.1007/s10489-020-01829-7 10.1148/ryct.2020200337 10.1016/j.bea.2021.100003 10.1016/j.compbiomed.2020.104037 10.1016/j.eng.2020.04.010 10.1109/JBHI.2020.3019505 10.2196/19569 10.1109/TIP.2021.3058783 10.1109/JBHI.2020.3042523 10.1002/ppul.25313 10.21037/atm.2020.02.71 10.1007/s12553-021-00520-2 10.3390/sym12040651 10.32604/cmc.2021.014956 10.1038/s41598-020-76550-z 10.1101/2020.04.24.20078584 10.1148/radiol.2020200463 10.1016/j.imu.2020.100427 10.1109/TCBB.2020.3009859
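A minimal sketch of a Siamese Vision Transformer encoder of the kind described above: one shared ViT backbone processes the original image and an augmented view, and a small head classifies from the concatenated embeddings. The timm backbone name, head size, and augmentation are assumptions, not the authors' released architecture.

```python
# Sketch: Siamese ViT encoder with shared weights over two views of the same image.
import torch
import torch.nn as nn
import timm

class SiameseViT(nn.Module):
    def __init__(self, backbone="vit_base_patch16_224", n_classes=2):
        super().__init__()
        # num_classes=0 makes timm return the pooled embedding instead of logits.
        self.encoder = timm.create_model(backbone, pretrained=True, num_classes=0)
        emb = self.encoder.num_features
        self.head = nn.Linear(2 * emb, n_classes)

    def forward(self, x_orig, x_aug):
        z1 = self.encoder(x_orig)          # shared weights: same encoder for both views
        z2 = self.encoder(x_aug)
        return self.head(torch.cat([z1, z2], dim=1))

# model = SiameseViT()
# logits = model(batch_images, augment(batch_images))   # augment() is an assumed transform
```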
COVID-19 Identification from Low-Quality Computed Tomography Using a Modified Enhanced Super-Resolution Generative Adversarial Network Plus and Siamese Capsule Network.
Computed tomography has become a vital screening method for the detection of coronavirus disease 2019 (COVID-19). With the high mortality rate and the overload on domain experts, radiologists, and clinicians, there is a need for a computerized diagnostic technique. To this end, we consider improving the performance of COVID-19 identification by tackling the issue of low quality and low resolution of computed tomography images. We report a technique named the modified enhanced super-resolution generative adversarial network for obtaining higher-resolution computed tomography images. Furthermore, in contrast to the common approach of increasing network depth and complexity to boost imaging performance, we incorporate a Siamese capsule network that extracts distinct features for COVID-19 identification. The qualitative and quantitative results establish that the proposed model is effective, accurate, and robust for COVID-19 screening. We demonstrate the proposed model for COVID-19 identification on the publicly available COVID-CT dataset, which contains 349 COVID-19 and 463 non-COVID-19 computed tomography images. The proposed method achieves an accuracy of 97.92%, sensitivity of 98.85%, specificity of 97.21%, AUC of 98.03%, precision of 98.44%, and F1 score of 97.52%. According to the experimental results, our approach obtains state-of-the-art performance, which is helpful for COVID-19 screening. This new conceptual framework is proposed to play an influential role in addressing COVID-19 and related ailments when only a few datasets are available.
Healthcare (Basel, Switzerland)
"2022-02-26T00:00:00"
[ "Grace UgochiNneji", "JianhuaDeng", "Happy NkantaMonday", "Md AltabHossin", "SandraObiora", "SaifunNahar", "JingyeCai" ]
10.3390/healthcare10020403 10.1148/radiol.2020200343 10.1002/cpe.5130 10.3389/fnins.2019.00422 10.1109/TMI.2020.3040950 10.1109/ACCESS.2020.3010287 10.1038/s41598-020-76550-z 10.1016/j.patrec.2020.10.001 10.1007/s10096-020-03901-z 10.1109/TMI.2020.2995508 10.1101/2020.03.12.20027185 10.1148/radiol.2020200905 10.3390/diagnostics12020325 10.1088/1361-6560/abe838 10.1038/s41467-020-18685-1 10.1109/TCBB.2021.3065361 10.1016/j.eng.2020.04.010 10.1007/s00330-021-07715-1 10.1007/s10489-020-02149-6 10.1016/j.eswa.2021.116366 10.3390/s21217286 10.3390/app11199023 10.1109/TMI.2019.2922960 10.1109/ACCESS.2020.2994762 10.2196/19673 10.1109/TGRS.2018.2871782
An Efficient Deep Learning Model to Detect COVID-19 Using Chest X-ray Images.
The tragic pandemic of COVID-19, due to the Severe Acute Respiratory Syndrome coronavirus-2 or SARS-CoV-2, has shaken the entire world and has significantly disrupted healthcare systems in many countries. Because of the existing challenges and controversies in testing for COVID-19, improved and cost-effective methods are needed to detect the disease. For this purpose, machine learning (ML) has emerged as a strong forecasting method for detecting COVID-19 from chest X-ray images. In this paper, we used a Deep Learning Method (DLM) to detect COVID-19 using chest X-ray (CXR) images. Radiographic images are readily available and can be used effectively for COVID-19 detection compared to other expensive and time-consuming pathological tests. We used a dataset of 10,040 samples, of which 2143 had COVID-19, 3674 had pneumonia (but not COVID-19), and 4223 were normal (neither COVID-19 nor pneumonia). Our model had a detection accuracy of 96.43% and a sensitivity of 93.68%. The area under the ROC curve was 99% for COVID-19, 97% for pneumonia (but not COVID-19 positive), and 98% for normal cases. In conclusion, ML approaches may be used for rapid analysis of CXR images and thus enable radiologists to filter potential candidates in a time-effective manner to detect COVID-19.
International journal of environmental research and public health
"2022-02-26T00:00:00"
[ "SomenathChakraborty", "BeddhuMurali", "Amal KMitra" ]
10.3390/ijerph19042013 10.1016/S0140-6736(20)30183-5 10.1056/NEJMe2002387 10.1038/s41586-020-2008-3 10.1007/s10238-020-00648-x 10.1001/jama.2020.3786 10.15585/mmwr.mm7019a3 10.1148/radiol.2020200642 10.1007/s42058-020-00031-5 10.1148/radiol.2020200823 10.1101/2020.04.13.20063941v1.full.pdf 10.3389/fmed.2020.608525/full 10.1038/s41467-020-17971-2 10.1007/s13246-020-00888-x 10.1016/j.measurement.2019.05.076 10.1148/radiol.2019194005 10.1038/nature14539 10.2214/ajr.174.1.1740071 10.3390/s20041068 10.1016/j.media.2005.02.002 10.1109/CVPRW.2017.156 10.1109/CVPR.2016.90 10.1145/3065386 10.1109/CVPR.2017.243 10.1109/TPAMI.2015.2502579 10.1016/j.irbm.2020.07.001 10.1038/s41598-020-76550-z 10.1007/s10044-021-00984-y 10.1016/j.chaos.2020.110071 10.1016/j.compbiomed.2020.103792 10.1016/j.media.2020.101794 10.1016/j.cmpb.2020.105581 10.3390/app10134640 10.1038/s41598-021-99015-3 10.1016/j.imu.2020.100412 10.1016/j.cmpb.2020.105532 10.1016/j.chaos.2020.110190 10.1109/ISCAS.2010.5537907 10.1016/j.cell.2020.04.045 10.3233/XST-200715 10.1117/1.JMI.8.S1.017503 10.1016/j.mlwa.2021.100138 10.1109/ACCESS.2019.2899578 10.1109/ACCESS.2021.3102399 10.1109/ACCESS.2020.3007801 10.1109/ACCESS.2019.2939755 10.1109/ACCESS.2020.2991800
A deep adversarial model for segmentation-assisted COVID-19 diagnosis using CT images.
The outbreak of coronavirus disease 2019 (COVID-19) is spreading rapidly around the world, resulting in a global pandemic. Imaging techniques such as computed tomography (CT) play an essential role in the diagnosis and treatment of the disease since lung infection or pneumonia is a common complication. However, training a deep network to learn how to diagnose COVID-19 rapidly and accurately in CT images and segment the infected regions like a radiologist is challenging. Since the infectious areas are difficult to distinguish, manually annotating segmentation results is time-consuming. To tackle these problems, we propose an efficient method based on a deep adversarial network to segment the infection regions automatically. Then, the predicted segmentation results can assist the diagnostic network in identifying the COVID-19 samples from the CT images. On the other hand, a radiologist-like segmentation network provides detailed information about the infectious regions by separating areas of ground-glass, consolidation, and pleural effusion, respectively. Our method can accurately predict the COVID-19 infection probability and provide lesion regions in CT images with limited training data. Additionally, we have established a public dataset for multitask learning. Extensive experiments on diagnosis and segmentation show superior performance over state-of-the-art methods.
EURASIP journal on advances in signal processing
"2022-02-24T00:00:00"
[ "Hai-YanYao", "Wang-GenWan", "XiangLi" ]
10.1186/s13634-022-00842-x 10.1016/S0140-6736(20)30183-5 10.1002/mp.14609 10.1109/TMI.2020.2992546 10.1038/s41591-020-0931-3 10.1109/JBHI.2020.3023246 10.1109/TMI.2020.2996645 10.1109/TMI.2020.2995965 10.1148/ryct.2020200082 10.1148/ryct.2020200044 10.1145/3422622 10.1038/s41551-020-00633-5 10.1007/s10489-020-02149-6
WEENet: An Intelligent System for Diagnosing COVID-19 and Lung Cancer in IoMT Environments.
The coronavirus disease 2019 (COVID-19) pandemic has caused a major outbreak around the world with severe impact on health, human lives, and the global economy. One of the crucial steps in fighting COVID-19 is the ability to detect infected patients at early stages and put them under special care. Detecting COVID-19 from radiography images using computational medical imaging methods is one of the fastest ways to diagnose patients. However, early detection with significant results is a major challenge, given the limited available medical imaging data and conflicting performance metrics. Therefore, this work aims to develop a novel deep learning-based, computationally efficient medical imaging framework for effective modeling and early diagnosis of COVID-19 from chest x-ray and computed tomography images. The proposed work presents "WEENet", which exploits an efficient convolutional neural network to extract high-level features, followed by classification mechanisms for COVID-19 diagnosis in medical image data. The performance of our method is evaluated on three benchmark medical chest x-ray and computed tomography image datasets using eight evaluation metrics, including a novel strategy of cross-corpus evaluation as well as robustness evaluation, and the results surpass state-of-the-art methods. The outcome of this work can assist epidemiologists and healthcare authorities in analyzing infected chest x-ray and computed tomography images, managing the COVID-19 pandemic, and bridging the early diagnosis and treatment gap in Internet of Medical Things environments.
Frontiers in oncology
"2022-02-22T00:00:00"
[ "KhanMuhammad", "HayatUllah", "Zulfiqar AhmadKhan", "Abdul Khader JilaniSaudagar", "AbdullahAlTameem", "MohammedAlKhathami", "Muhammad BadruddinKhan", "Mozaherul HoqueAbul Hasanat", "KhalidMahmood Malik", "MohammadHijji", "MuhammadSajjad" ]
10.3389/fonc.2021.811355 10.1148/radiol.2020200274 10.1148/radiol.2020200343 10.1016/S0140-6736(20)30183-5 10.3389/fmed.2020.612962 10.3389/frai.2021.598932 10.1093/jtm/taaa008 10.3389/fmed.2020.00427 10.1016/S1473-3099(20)30086-4 10.3389/fcvm.2021.638011 10.1109/ACCESS.2020.3005510 10.1109/CBMS52027.2021.00103 10.1007/s10044-021-00984-y 10.1016/j.eswa.2020.114054 10.1109/TII.2021.3057683 10.1007/s10489-020-01902-1 10.3389/fmed.2021.704256 10.1016/j.media.2020.101794 10.1109/TII.2021.3057524 10.1109/TMI.2020.2993291 10.1109/TIP.2021.3058783 10.1109/TNNLS.2021.3054306 10.1109/JIOT.2020.3032544 10.1109/JIOT.2020.2981557 10.1109/MCOM.2018.1701148 10.1109/TNSE.2018.2843326 10.1109/JIOT.2020.3038009 10.1109/JSAC.2020.3020598 10.3389/frsc.2021.638743 10.1109/JIOT.2021.3056185 10.1016/j.asoc.2021.107330 10.1016/j.compbiomed.2021.104319 10.1002/aic.690370209 10.1109/TCE.2020.3043683 10.1109/TII.2021.3089462 10.1109/TII.2021.3070544 10.1016/j.compbiomed.2021.104348 10.3389/fmed.2021.707602
COVID-19 mortality prediction in the intensive care unit with deep learning based on longitudinal chest X-rays and clinical data.
We aimed to develop deep learning models using longitudinal chest X-rays (CXRs) and clinical data to predict in-hospital mortality of COVID-19 patients in the intensive care unit (ICU). Six hundred fifty-four patients (212 deceased, 442 alive, 5645 total CXRs) were identified across two institutions. Imaging and clinical data from one institution were used to train five longitudinal transformer-based networks applying five-fold cross-validation. The models were tested on data from the other institution, and pairwise comparisons were used to determine the best-performing models. A higher proportion of deceased patients had elevated white blood cell count, decreased absolute lymphocyte count, elevated creatinine concentration, and incidence of cardiovascular and chronic kidney disease. A model based on pre-ICU CXRs achieved an AUC of 0.632 and an accuracy of 0.593, and a model based on ICU CXRs achieved an AUC of 0.697 and an accuracy of 0.657. A model based on all longitudinal CXRs (both pre-ICU and ICU) achieved an AUC of 0.702 and an accuracy of 0.694. A model based on clinical data alone achieved an AUC of 0.653 and an accuracy of 0.657. The addition of longitudinal imaging to clinical data in a combined model significantly improved performance, reaching an AUC of 0.727 (p = 0.039) and an accuracy of 0.732. The addition of longitudinal CXRs to clinical data significantly improves mortality prediction with deep learning for COVID-19 patients in the ICU. • Deep learning was used to predict mortality in COVID-19 ICU patients. • Serial radiographs and clinical data were used. • The models could inform clinical decision-making and resource allocation.
European radiology
"2022-02-21T00:00:00"
[ "JianhongCheng", "JohnSollee", "CelinaHsieh", "HailinYue", "NicholasVandal", "JustinShanahan", "Ji WhaeChoi", "Thi My LinhTran", "KaseyHalsey", "FranklinIheanacho", "JamesWarren", "AbdullahAhmed", "CarstenEickhoff", "MichaelFeldman", "EduardoMortani Barbosa", "IhabKamel", "Cheng TingLin", "ThomasYi", "TerranceHealey", "PaulZhang", "JingWu", "MichaelAtalay", "Harrison XBai", "ZhichengJiao", "JianxinWang" ]
10.1007/s00330-022-08588-8 10.1056/NEJMoa2001017 10.1016/S0140-6736(20)30183-5 10.1056/NEJMoa2108891 10.1007/s11547-020-01200-3 10.1016/j.ejro.2020.100231 10.1148/radiol.2020201160 10.1007/s00330-020-06827-4 10.1148/radiol.2020201491 10.1016/S2589-7500(21)00039-X 10.1148/radiol.2020200823 10.1148/radiol.2020201491 10.17849/insm-47-01-31-39.1 10.1007/s11548-020-02299-5 10.1007/s00330-020-07504-2 10.1002/emp2.12205 10.1093/ije/dyaa171 10.2196/25442 10.1038/s41467-019-13993-7 10.2196/24018 10.3390/ijerph17228386 10.2196/20259 10.1080/07853890.2020.1868564 10.2196/23458 10.1017/S0950268820001727 10.1016/j.smhl.2020.100178 10.1038/s41379-020-00700-x 10.1007/s00330-020-07269-8
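One plausible way to realize the fusion described above is sketched below: per-image CXR embeddings are pooled over the longitudinal sequence with a small Transformer encoder and concatenated with tabular clinical features before a mortality head. The encoder choice, embedding sizes, and number of clinical variables are assumptions for illustration, not the study's networks.

```python
# Sketch: late fusion of a longitudinal CXR sequence with clinical data for mortality prediction.
import torch
import torch.nn as nn
from torchvision import models

class LongitudinalFusion(nn.Module):
    def __init__(self, n_clinical=20, d_model=256):
        super().__init__()
        cnn = models.resnet18(weights="DEFAULT")
        cnn.fc = nn.Linear(cnn.fc.in_features, d_model)      # per-CXR embedding
        self.cnn = cnn
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Sequential(nn.Linear(d_model + n_clinical, 64), nn.ReLU(),
                                  nn.Linear(64, 1))           # in-hospital mortality logit

    def forward(self, cxr_seq, clinical):
        # cxr_seq: (batch, time, 3, H, W); clinical: (batch, n_clinical)
        b, t = cxr_seq.shape[:2]
        emb = self.cnn(cxr_seq.flatten(0, 1)).view(b, t, -1)
        pooled = self.temporal(emb).mean(dim=1)               # average over the CXR sequence
        return self.head(torch.cat([pooled, clinical], dim=1))

# model = LongitudinalFusion()
# logit = model(torch.randn(2, 5, 3, 224, 224), torch.randn(2, 20))
```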
Non-invasive coronary imaging in patients with COVID-19: A narrative review.
SARS-CoV-2 infection, responsible for COVID-19 outbreak, can cause cardiac complications, worsening outcome and prognosis. In particular, it can exacerbate any underlying cardiovascular condition, leading to atherosclerosis and increased plaque vulnerability, which may cause acute coronary syndrome. We review current knowledge on the mechanisms by which SARS-CoV-2 can trigger endothelial/myocardial damage and cause plaque formation, instability and deterioration. The aim of this review is to evaluate current non-invasive diagnostic techniques for coronary arteries evaluation in COVID-19 patients, such as coronary CT angiography and atherosclerotic plaque imaging, and their clinical implications. We also discuss the role of artificial intelligence, deep learning and radiomics in the context of coronary imaging in COVID-19 patients.
European journal of radiology
"2022-02-19T00:00:00"
[ "CarlottaOnnis", "GiuseppeMuscogiuri", "PierPaolo Bassareo", "RiccardoCau", "LorenzoMannelli", "ChristianCadeddu", "Jasjit SSuri", "GiuliaCerrone", "ClaraGerosa", "SandroSironi", "GavinoFaa", "AlessandroCarriero", "GianlucaPontone", "LucaSaba" ]
10.1016/j.ejrad.2022.110188 10.1001/jamacardio.2020.0950 10.1093/cvr/cvz062 10.1016/j.lfs.2020.117723 10.1016/j.atherosclerosis.2010.05.034 10.1161/ATVBAHA.120.312470 10.1111/micc.v28.710.1111/micc.12718 10.1016/j.jcct.2021.02.004 10.1161/CIRCULATIONAHA.120.046941 10.2478/jce-2020-0008 10.1016/j.jacc.2019.12.012 10.1016/j.jcct.2019.06.008 10.1148/ryct.2019180003 10.1148/radiol.2018172523 10.1016/j.jcmg.2020.04.012 10.1016/j.compbiomed.2020.103960 10.1016/j.jcmg.2019.06.009 10.1016/j.ejrad.2019.02.038 10.1183/13993003.00775-2020 10.1109/RBME.2020.2990959 10.1016/j.cmpb.2020.105651 10.1016/j.atherosclerosis.2019.12.001 10.1126/scitranslmed.aal2658 10.1016/j.metabol.2020.154436 10.1097/RTI.0000000000000268 10.1136/heartjnl-2021-BCS.238 10.1161/circ.142.suppl_3.16467 10.1016/j.jcct.2019.07.010 10.1093/eurheartj/ehaa381
A review of deep learning-based detection methods for COVID-19.
COVID-19 is a fast-spreading pandemic, and early detection is crucial for stopping the spread of infection. Lung images are used in the detection of coronavirus infection. Chest X-ray (CXR) and computed tomography (CT) images are available for the detection of COVID-19. Deep learning methods have proven efficient and better performing in many computer vision and medical imaging applications. With the rise of the COVID pandemic, researchers are using deep learning methods to detect coronavirus infection in lung images. In this paper, the currently available deep learning methods that are used to detect coronavirus infection in lung images are surveyed. The available methodologies, public datasets, the datasets used by each method, and evaluation metrics are summarized in this paper to help future researchers. The evaluation metrics used by the methods are comprehensively compared.
Computers in biology and medicine
"2022-02-19T00:00:00"
[ "NandhiniSubramanian", "OmarElharrouss", "SomayaAl-Maadeed", "MuhammedChowdhury" ]
10.1016/j.compbiomed.2022.105233 10.1109/ICIoT48696.2020.9089566 10.1109/ACCESS.2021.3113953 10.1007/s11263-015-0816-y 10.1109/cvpr.2017.369 10.1016/j.cell.2018.02.010
Optimal Deep-Learning-Enabled Intelligent Decision Support System for SARS-CoV-2 Classification.
Intelligent decision support systems (IDSS) for complex healthcare applications aim to examine a large quantity of complex healthcare data to assist doctors, researchers, pathologists, and other healthcare professionals. A decision support system (DSS) is an intelligent system that provides improved assistance in various stages of health-related disease diagnosis. At the same time, the SARS-CoV-2 infection that causes COVID-19 disease has spread globally since the beginning of 2020. Several research works have reported that imaging patterns based on computed tomography (CT) can be utilized to detect SARS-CoV-2. Early identification and detection of the disease is essential to offer adequate treatment and avoid disease severity. With this motivation, this study develops an efficient deep-learning-based fusion model with swarm intelligence (EDLFM-SI) for SARS-CoV-2 identification. The proposed EDLFM-SI technique aims to detect and classify whether SARS-CoV-2 infection is present. The EDLFM-SI technique comprises several processes, namely data augmentation, preprocessing, feature extraction, and classification. A fusion of capsule network (CapsNet) and MobileNet based feature extractors is employed. Besides, a water strider algorithm (WSA) is applied to fine-tune the hyperparameters involved in the DL models. Finally, a cascaded neural network (CNN) classifier is applied for detecting the existence of SARS-CoV-2. In order to showcase the improved performance of the EDLFM-SI technique, a wide range of simulations take place on the COVID-19 CT data set and the SARS-CoV-2 CT scan data set. The simulation outcomes highlight the superiority of the EDLFM-SI technique over recent approaches.
Journal of healthcare engineering
"2022-02-19T00:00:00"
[ "Ashit KumarDutta", "Nasser AliAljarallah", "TAbirami", "MSundarrajan", "SeifedineKadry", "YunyoungNam", "Chang-WonJeong" ]
10.1155/2022/4130674 10.3390/s21020455 10.1155/2021/8864522 10.1155/2021/8869372 10.1155/2021/8829829 10.1016/s0140-6736(20)30607-3 10.1016/s0140-6736(20)30211-7 10.1148/radiol.2020200642 10.3390/app11157004 10.1007/s12652-021-03282-x 10.1007/s00500-020-05275-y 10.1038/s41591-020-0931-3 10.22266/ijies2020.1031.07 10.1016/b978-0-12-824536-1.00039-3 10.1007/978-981-16-2594-7_30 10.1016/j.procs.2019.08.147 10.1109/jstars.2020.2968930 10.3311/ppci.16872 10.1088/1742-6596/1025/1/012097 10.1101/2020.04.13.20063941
The application research of AI image recognition and processing technology in the early diagnosis of the COVID-19.
This study intends to establish a combined prediction model that integrates the clinical symptoms, the lung lesion volume, and the radiomics features of patients with COVID-19, resulting in a new model to predict the severity of COVID-19. The clinical data of 386 patients with COVID-19 at several hospitals, as well as images of certain patients during their hospitalization, were collected retrospectively to create a database of patients with COVID-19 pneumonia. The contours of the lungs and lesion locations can be retrieved from CT scans using a CT-image-based quantitative discrimination and trend analysis method for COVID-19 and the Mask R-CNN deep neural network model to create 3D data of lung lesions. The quantitative COVID-19 factors were then determined, on which the diagnosis of the development of the patients' symptoms could be established. Then, using an artificial neural network (ANN), a prediction model of the severity of COVID-19 was constructed by combining characteristic imaging features on CT slices with clinical factors. The ANN was used for training, and tenfold cross-validation was used to verify the prediction model. The diagnostic performance of this model was verified by the receiver operating characteristic (ROC) curve. CT radiomics feature extraction and analysis based on a deep neural network can detect COVID-19 patients with 86% sensitivity and 85% specificity. According to the ROC curve, the constructed severity prediction model indicates that the AUC for patients with severe COVID-19 is 0.761, with sensitivity and specificity of 79.1% and 73.1%, respectively. The combined prediction model for severe COVID-19 pneumonia, which is based on deep learning and integrates clinical aspects, pulmonary lesion volume, and radiomics features of patients, has a remarkable ability to predict the course of disease in COVID-19 patients. This may assist in the early prevention of severe COVID-19 symptoms.
BMC medical imaging
"2022-02-19T00:00:00"
[ "WenyuChen", "MingYao", "ZhenyuZhu", "YanbaoSun", "XiupingHan" ]
10.1186/s12880-022-00753-1 10.1631/jzus.B2000083 10.1002/jmv.25689 10.1080/14787210.2020.1797487 10.2174/1568009620666200414151419 10.1080/14737159.2020.1757437 10.1371/journal.pone.0242958 10.1016/j.diii.2020.03.014 10.1148/radiol.2020201160 10.1097/RLI.0000000000000672 10.1007/s11604-020-00967-9 10.1148/radiol.2020200343 10.1016/S2213-2600(18)30286-8 10.1016/S2213-2600(20)30003-5 10.2196/20756 10.1001/jamainternmed.2020.3539 10.2116/analsci.19R006 10.1007/s00330-020-06801-0 10.1007/s00330-019-06163-2 10.1016/j.media.2017.07.005 10.1097/RLI.0000000000000341 10.1016/j.cell.2018.02.010 10.7326/M20-6817 10.1002/jmv.26250 10.1148/radiol.2020200905 10.1016/j.radi.2020.09.010 10.1016/j.ijid.2020.10.036
Deep learning approach based on superpixel segmentation assisted labeling for automatic pressure ulcer diagnosis.
A pressure ulcer is an injury of the skin and underlying tissues adjacent to a bony eminence. Patients who suffer from this disease may have difficulty accessing medical care. Recently, the COVID-19 pandemic has exacerbated this situation. Automatic diagnosis based on machine learning (ML) brings promising solutions. Traditional ML requires complicated preprocessing steps for feature extraction. Its clinical applications are thus limited to particular datasets. Deep learning (DL), which extracts features from convolution layers, can embrace larger datasets that might be deliberately excluded in traditional algorithms. However, DL requires large sets of domain specific labeled data for training. Labeling various tissues of pressure ulcers is a challenge even for experienced plastic surgeons. We propose a superpixel-assisted, region-based method of labeling images for tissue classification. The boundary-based method is applied to create a dataset for wound and re-epithelialization (re-ep) segmentation. Five popular DL models (U-Net, DeeplabV3, PsPNet, FPN, and Mask R-CNN) with encoder (ResNet-101) were trained on the two datasets. A total of 2836 images of pressure ulcers were labeled for tissue classification, while 2893 images were labeled for wound and re-ep segmentation. All five models had satisfactory results. DeeplabV3 had the best performance on both tasks with a precision of 0.9915, recall of 0.9915 and accuracy of 0.9957 on the tissue classification; and a precision of 0.9888, recall of 0.9887 and accuracy of 0.9925 on the wound and re-ep segmentation task. Combining segmentation results with clinical data, our algorithm can detect the signs of wound healing, monitor the progress of healing, estimate the wound size, and suggest the need for surgical debridement.
PloS one
"2022-02-18T00:00:00"
[ "Che WeiChang", "MesakhChristian", "Dun HaoChang", "FeipeiLai", "Tom JLiu", "Yo ShenChen", "Wei JenChen" ]
10.1371/journal.pone.0264139 10.1111/j.1532-5415.2004.52106.x 10.1016/j.jaad.2018.12.069 10.1111/j.1365-2753.2006.00684.x 10.1097/00129334-200311000-00012 10.1038/s41698-020-0122-1 10.1038/s41598-019-56847-4 10.1016/j.artmed.2021.102020 10.1371/journal.pone.0204155 10.1371/journal.pone.0218808 10.1016/j.artmed.2019.101742 10.1016/j.cmpb.2018.02.018 10.1109/JBHI.2017.2743526 10.1111/jocn.13726 10.1007/s11517-018-1835-y 10.1016/j.addr.2018.06.019 10.1093/neuonc/noab071 10.1016/j.media.2016.03.002 10.1109/TIP.2017.2778569 10.1109/TMI.2009.2033595
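A minimal sketch of superpixel-assisted labeling in the spirit of the abstract above: SLIC superpixels are computed with scikit-image and a few annotator seed points are propagated to whole regions, so every pixel in a clicked superpixel receives that tissue label. Seed coordinates, class codes, and SLIC parameters are illustrative assumptions, not the authors' labeling tool.

```python
# Sketch: propagate sparse annotator clicks to SLIC superpixel regions.
import numpy as np
from skimage.segmentation import slic

def label_from_seeds(image, seeds, n_segments=400):
    """image: HxWx3 array; seeds: list of ((row, col), class_id) annotator clicks."""
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    labels = np.zeros(segments.shape, dtype=np.int32)          # 0 = unlabeled background
    for (r, c), class_id in seeds:
        labels[segments == segments[r, c]] = class_id           # fill the clicked superpixel
    return labels

# seeds = [((120, 200), 1), ((240, 310), 2)]   # hypothetical: 1 = granulation, 2 = necrotic tissue
# mask = label_from_seeds(wound_image, seeds)
```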
COVID Detection From Chest X-Ray Images Using Multi-Scale Attention.
Deep learning based methods have shown great promise in achieving accurate automatic detection of Coronavirus Disease 2019 (COVID-19) from Chest X-Ray (CXR) images. However, incorporating explainability in these solutions remains relatively less explored. We present a hierarchical classification approach for separating normal, non-COVID pneumonia (NCP) and COVID cases using CXR images. We demonstrate that the proposed method achieves clinically consistent explanations. We achieve this using a novel multi-scale attention architecture called Multi-scale Attention Residual Learning (MARL) and a new loss function based on conicity for training the proposed architecture. The proposed classification strategy has two stages. The first stage uses a model derived from DenseNet to separate pneumonia cases from normal cases, while the second stage uses the MARL architecture to discriminate between COVID and NCP cases. With five-fold cross validation the proposed method achieves 93%, 96.28%, and 84.51% accuracy respectively over three large, public datasets for normal vs. NCP vs. COVID classification. This is competitive with state-of-the-art methods. We also provide explanations in the form of GradCAM attributions, which are well aligned with expert annotations. The attributions also clearly indicate that MARL deems the peripheral regions of the lungs to be more important in COVID cases, while central regions are seen as more important in NCP cases. This observation matches the criteria described by radiologists in the clinical literature, thereby attesting to the utility of the derived explanations.
IEEE journal of biomedical and health informatics
"2022-02-15T00:00:00"
[ "AbhinavDhere", "JayanthiSivaswamy" ]
10.1109/JBHI.2022.3151171
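The GradCAM attributions mentioned above can be reproduced in outline with standard hooks, as sketched below: the last convolutional feature map is captured on the forward pass, the class score is backpropagated, and the activations are weighted by the pooled gradients. The DenseNet backbone and target layer here are assumptions for illustration, not the paper's MARL network.

```python
# Sketch: Grad-CAM attribution via forward/backward hooks on a convolutional layer.
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, target_layer, image, class_idx):
    activations, gradients = {}, {}
    def fwd_hook(_, __, output): activations["a"] = output
    def bwd_hook(_, grad_in, grad_out): gradients["g"] = grad_out[0]
    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    logits = model(image)                       # image: 1x3xHxW tensor
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)     # global-average-pooled grads
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()        # normalized HxW heatmap

# model = models.densenet121(weights="DEFAULT").eval()
# heatmap = grad_cam(model, model.features.denseblock4, input_tensor, class_idx=1)
```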
ADA-COVID: Adversarial Deep Domain Adaptation-Based Diagnosis of COVID-19 from Lung CT Scans Using Triplet Embeddings.
Rapid diagnosis of COVID-19 with high reliability is essential in the early stages. To this end, recent research often uses medical imaging combined with machine vision methods to diagnose COVID-19. However, the scarcity of medical images and the inherent differences between existing datasets, which arise from different medical imaging tools, methods, and specialists, may affect the generalization of machine learning-based methods. Also, most of these methods are trained and tested on the same dataset, reducing generalizability and causing low reliability of the obtained model in real-world applications. This paper introduces an adversarial deep domain adaptation-based approach for diagnosing COVID-19 from lung CT scan images, termed ADA-COVID. The domain adaptation-based training process receives multiple datasets with different input domains to generate domain-invariant representations for medical images. Also, due to the excessive structural similarity of medical images compared to other image data in machine vision tasks, we use the triplet loss function to generate similar representations for samples of the same class (infected cases). The performance of ADA-COVID is evaluated and compared with other state-of-the-art COVID-19 diagnosis algorithms. The obtained results indicate that ADA-COVID achieves classification improvements of at least 3%, 20%, 20%, and 11% in accuracy, precision, recall, and F1 score, respectively.
Computational intelligence and neuroscience
"2022-02-15T00:00:00"
[ "MehradAria", "EsmaeilNourani", "AminGolzari Oskouei" ]
10.1155/2022/2564022 10.1148/radiol.2020200642 10.3390/s21020455 10.1109/TNNLS.2021.3054306 10.1016/j.ejrad.2020.108961 10.1145/3472813.3472820 10.1148/ryct.2020200034 10.1007/s42600-021-00151-6 10.1038/s41598-020-76550-z 10.1016/j.patrec.2020.10.001 10.1016/j.compbiomed.2020.104037 10.1007/s00330-021-07715-1 10.2196/27468 10.1038/s41597-021-00900-3 10.1016/j.bspc.2021.102588 10.1016/j.imu.2020.100427 10.3390/ijerph17186933 10.1109/jbhi.2020.3037127 10.1016/j.patrec.2020.09.010 10.1038/s42256-020-00257-z 10.1148/radiol.2020200905 10.1016/j.eng.2020.04.010 10.1038/s41598-020-76282-0 10.1016/j.cmpb.2020.105581 10.1016/j.chaos.2020.110122 10.1109/jbhi.2020.3023246 10.1038/s41746-021-00399-3 10.1007/s10096-020-03901-z 10.1007/s13246-020-00865-4 10.1109/cvpr.2017.195 10.1016/j.media.2020.101794 10.1109/cvpr.2016.90 10.1109/cvpr.2017.243 10.1016/j.cmpb.2020.105608 10.1080/07391102.2020.1788642 10.1109/tcbb.2021.3065361 10.1016/j.chaos.2020.110190 10.1007/s10489-020-02055-x 10.1007/s13755-021-00152-w 10.1016/j.cmpb.2020.105532 10.1109/CVPR.2016.308 10.3390/e22050517 10.1109/cvpr.2009.5206848 10.3390/s19194139 10.1109/cvpr.2015.7298682 10.1016/j.asoc.2020.106897 10.1038/s41598-021-83424-5 10.1038/s41467-020-18685-1 10.1371/journal.pone.0250952 10.14299/ijser.2020.03.02 10.1016/j.asoc.2019.02.038 10.1016/j.asoc.2021.108005 10.1016/j.chaos.2021.111494 10.1016/j.compbiomed.2020.103795 10.1117/1.JMI.3.4.044506 10.1118/1.3528204 10.1016/j.compmedimag.2011.07.003 10.1148/ryct.2020200026
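The triplet-embedding idea used by ADA-COVID can be sketched with PyTorch's built-in triplet margin loss: an encoder maps CT images to vectors, and the loss pulls same-class scans together while pushing different-class scans apart. The ResNet-18 encoder, embedding size, margin, and triplet sampling below are assumptions for illustration, not the authors' configuration.

```python
# Sketch: training an embedding network with a triplet margin loss.
import torch
import torch.nn as nn
from torchvision import models

encoder = models.resnet18(weights="DEFAULT")
encoder.fc = nn.Linear(encoder.fc.in_features, 128)      # 128-dim embedding head
triplet_loss = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def training_step(anchor, positive, negative):
    """anchor/positive share a class label; negative comes from the other class."""
    optimizer.zero_grad()
    loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()
    optimizer.step()
    return loss.item()

# for anchor, positive, negative in triplet_loader:   # assumed DataLoader yielding triplets
#     training_step(anchor, positive, negative)
```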
Diagnosis of hypercritical chronic pulmonary disorders using dense convolutional network through chest radiography.
Lung-related ailments are prevalent all over the world; they notably include asthma, chronic obstructive pulmonary disease (COPD), tuberculosis, pneumonia, and fibrosis, and now COVID-19 has been added to this list. Infection with COVID-19 poses respiratory complications along with other indications like cough, high fever, and pneumonia. The WHO has identified lung cancer as a fatal cancer type among others, and thus the timely detection of such cancer is pivotal for an individual's health. Since elementary convolutional neural networks have not performed particularly well in identifying atypical image types, we recommend a novel and completely automated framework with a deep learning approach for the recognition and classification of chronic pulmonary disorders (CPD) and COVID-pneumonia using thoracic or chest X-ray (CXR) images. A novel three-step, completely automated approach is presented that first extracts the region of interest from CXR images for preprocessing, then detects infected lung X-rays among the normal ones. Thereafter, the infected lung images are further classified into COVID-pneumonia, pneumonia, and other chronic pulmonary disorders (OCPD), which might be utilized in the current scenario to help the radiologist in substantiating their diagnosis and in starting timely treatment of these deadly lung diseases. Finally, the regions in the CXR that are indicative of severe chronic pulmonary disorders like COVID-19 and pneumonia are highlighted. A detailed investigation of various pivotal parameters based on several experimental outcomes is made here. This paper presents an approach that detects normal lung X-rays from infected ones, and the infected lung images are further classified into COVID-pneumonia, pneumonia, and other chronic pulmonary disorders with an utmost accuracy of 96.8%. Several other collective performance measurements validate the superiority of the presented model. The proposed framework shows effective results in classifying lung images into Normal, COVID-pneumonia, pneumonia, and other chronic pulmonary disorders (OCPD). This framework can be effectively utilized in the current pandemic scenario to help the radiologist in substantiating their diagnosis and in starting timely treatment of these deadly lung diseases.
Multimedia tools and applications
"2022-02-08T00:00:00"
[ "RajatMehrotra", "RajeevAgrawal", "M AAnsari" ]
10.1007/s11042-021-11748-5 10.2214/ajr.181.4.1811083 10.1016/j.cmpb.2019.105162 10.3233/HIS-190263 10.3390/app10020559 10.1016/j.compmedimag.2007.02.002 10.1016/j.compbiomed.2018.10.011 10.1016/j.crad.2018.12.015 10.1016/j.cmpb.2020.105581 10.1145/3065386 10.1016/j.crad.2019.08.005 10.26599/BDMA.2018.9020001 10.1016/j.zemedi.2018.11.002 10.1016/S0140-6736(96)07492-2 10.1164/ajrccm.162.4.2002019 10.1109/TKDE.2009.191 10.1016/j.media.2017.06.015 10.1109/TPAMI.2016.2572683 10.1016/j.compbiomed.2017.04.006 10.1007/s13244-018-0639-9
A complete framework for accurate recognition and prognosis of COVID-19 patients based on deep transfer learning and feature classification approach.
The sudden appearance of COVID-19 has put the world in a serious situation. Due to the rapid spread of the virus and the increase in the number of infected patients and deaths, COVID-19 was declared a pandemic. This pandemic has had a destructive effect not only on humans but also on the economy. Despite the development and availability of different vaccines for COVID-19, scientists still warn citizens of new severe waves of the virus, and as a result, fast diagnosis of COVID-19 is a critical issue. Chest imaging has proved to be a powerful tool in the early detection of COVID-19. This study introduces an entire framework for the early detection and early prognosis of COVID-19 severity in diagnosed patients using laboratory test results. It consists of two phases: (1) the Early Diagnostic Phase (EDP) and (2) the Early Prognostic Phase (EPP). In the EDP, COVID-19 patients are diagnosed using CT chest images. In the current study, 5,159 COVID-19 and 10,376 normal computed tomography (CT) images of Egyptians were used as a dataset to train 7 different convolutional neural networks using transfer learning. Normal data augmentation techniques and generative adversarial networks (GANs), CycleGAN and CCGAN, were used to increase the number of images in the dataset to avoid overfitting issues. 28 experiments were applied and multiple performance metrics were captured. Classification with no augmentation yielded
Artificial intelligence review
"2022-02-08T00:00:00"
[ "Hossam MagdyBalaha", "Eman MEl-Gendy", "Mahmoud MSaafan" ]
10.1007/s10462-021-10127-8 10.1007/s13246-020-00865-4 10.1016/j.compbiomed.2020.103795 10.1109/ACCESS.2021.3060940 10.7717/peerj-cs.555 10.1007/s00521-020-05137-6 10.1007/s00521-020-05397-2 10.1016/j.bspc.2019.101734 10.1016/j.cmpb.2020.105608 10.1109/ACCESS.2018.2837621 10.1109/ACCESS.2020.3010287 10.1109/ACCESS.2019.2946622 10.1007/s10898-007-9162-0 10.1016/j.compbiomed.2019.103345 10.1016/j.neucom.2015.08.112 10.1007/s00603-015-0733-y 10.1007/s11042-018-5714-1 10.1016/j.procs.2016.05.512 10.1111/j.1469-1809.1936.tb02137.x 10.1080/01621459.1989.10478752 10.1016/j.eswa.2017.11.028 10.1109/ACCESS.2020.3016780 10.1109/ACCESS.2020.3005510 10.1109/TPAMI.2013.178 10.1109/ACCESS.2020.3001973 10.1109/ACCESS.2017.2672677 10.1007/s40747-020-00199-4 10.1016/j.eswa.2019.05.041 10.1016/j.cmpb.2020.105581 10.1109/ACCESS.2019.2901568 10.1109/72.554195 10.1109/ACCESS.2018.2833888 10.1016/j.neucom.2016.12.038 10.1002/mrm.26841 10.1093/bioinformatics/btq302 10.1016/j.asoc.2020.106580 10.1016/j.compbiomed.2020.103792 10.1016/j.chaos.2020.110190 10.1148/radiol.2020202504 10.1109/ACCESS.2020.3003810 10.1038/s42256-021-00307-0 10.1109/ACCESS.2017.2779794 10.1007/s42399-020-00655-9 10.1109/ACCESS.2020.3025010 10.1109/LSP.2017.2657381 10.1016/j.bspc.2021.102717 10.1093/bioinformatics/btl170 10.1186/s40537-019-0197-0 10.1007/s12098-020-03263-6 10.1016/j.ijsu.2020.02.034 10.1016/j.catena.2016.06.004 10.1109/ACCESS.2018.2796018 10.1109/ACCESS.2019.2959033 10.1109/ACCESS.2020.2994762 10.1109/ACCESS.2019.2892795 10.1016/S0140-6736(20)30185-9 10.1016/j.inffus.2020.11.005 10.1148/radiol.2020201160 10.1109/ACCESS.2019.2918221 10.1038/s41586-020-2008-3 10.1016/S0140-6736(20)30845-X 10.1109/ACCESS.2019.2930958 10.1007/s13244-018-0639-9 10.3390/s16071148 10.1109/ACCESS.2018.2868813 10.1109/TNNLS.2017.2673241 10.1016/j.isprsjprs.2017.07.014 10.1016/j.cell.2020.04.045 10.1016/j.patrec.2021.06.021
Quantitative CT comparison between COVID-19 and mycoplasma pneumonia suspected as COVID-19: a longitudinal study.
The purpose of this study was to compare imaging features between COVID-19 and mycoplasma pneumonia (MP). The data of patients with mild COVID-19 and MP who underwent chest computed tomography (CT) examination from February 1, 2020 to April 17, 2020 were retrospectively analyzed. The Pneumonia-CT-LKM-PP model based on a deep learning algorithm was used to automatically quantify the number, volume, and involved lobes of pulmonary lesions, and longitudinal changes in quantitative parameters were assessed in three CT follow-ups. A total of 10 patients with mild COVID-19 and 13 patients with MP were included in this study. There was no difference in lymphocyte counts at baseline between the two groups (1.43 ± 0.45 vs. 1.44 ± 0.50, p = 0.279). C-reactive protein levels were significantly higher in the MP group than in the COVID-19 group (p < 0.05). The number, volume, and involved lobes of pulmonary lesions reached a peak at 7-14 days in the COVID-19 group, but there was no peak or declining trend over time in the MP group (p < 0.05). Based on the longitudinal changes of quantitative CT, pulmonary lesions peaked at 7-14 days in patients with COVID-19, and this may be useful to distinguish COVID-19 from MP and evaluate curative effects and prognosis.
BMC medical imaging
"2022-02-08T00:00:00"
[ "JunzhongLiu", "YuzhenWang", "GuanghuiHe", "XinhuaWang", "MinfengSun" ]
10.1186/s12880-022-00750-4 10.1007/s00330-020-06934-2 10.1259/bjr.20200243 10.1007/s00330-005-0026-z 10.1186/1471-2342-9-7 10.2214/ajr.174.1.1740037 10.1148/radiol.2381040088 10.1148/radiol.2020201178 10.1007/s11547-020-01195-x 10.1007/s11547-020-01197-9 10.1148/radiol.2020202708 10.1016/S0140-6736(20)30183-5 10.1148/ryct.2020200075 10.1148/radiol.2020200370 10.1148/radiol.2020200843 10.1016/j.ejrad.2020.108972 10.3348/kjr.2020.0181 10.2214/AJR.20.22975 10.1016/j.ejrad.2020.109009 10.2214/AJR.20.22959
COVID-19 Detection Based on Lung Ct Scan Using Deep Learning Techniques.
SARS-CoV-2 is a novel virus responsible for causing the COVID-19 pandemic that has emerged in recent years. Humans continue to become infected with the virus. In 2019, the city of Wuhan reported the first-ever incidence of COVID-19. COVID-19 infected people have symptoms that are related to pneumonia, and the virus affects the body's respiratory organs, making breathing difficult. A real-time reverse transcriptase-polymerase chain reaction (RT-PCR) kit is used to diagnose the disease. Due to a shortage of kits, suspected patients cannot be treated promptly, resulting in disease spread. To develop an alternative, radiologists looked at the changes in radiological imaging, like CT scans, that produce comprehensive pictures of the body of excellent quality. The suspected patient's computed tomography (CT) scan is used to distinguish between a healthy individual and a COVID-19 patient using deep learning algorithms. Many deep learning methods have been proposed for COVID-19. The proposed work utilizes CNN architectures such as VGG16, DenseNet121, MobileNet, NASNet, Xception, and EfficientNet. The dataset contains 3873 total CT scan images labeled "COVID" and "Non-COVID." The dataset is divided into train, test, and validation sets. The accuracies obtained are 97.68% for VGG16, 97.53% for DenseNet121, 96.38% for MobileNet, 89.51% for NASNet, 92.47% for Xception, and 80.19% for EfficientNet. From this analysis, the results show that the VGG16 architecture gives better accuracy compared to the other architectures.
Computational and mathematical methods in medicine
"2022-02-05T00:00:00"
[ "S VKogilavani", "JPrabhu", "RSandhiya", "M SandeepKumar", "UmaShankarSubramaniam", "AlagarKarthick", "MMuhibbullah", "Sharmila Banu SheikImam" ]
10.1155/2022/7672196 10.1007/s12652-020-02641-4 10.1108/IJPCC-06-2020-0054 10.1007/s13198-021-01072-4 10.1016/j.measurement.2020.108432 10.1016/j.patcog.2020.107747 10.1016/j.eswa.2021.114883 10.1016/j.chaos.2020.110170 10.1016/j.bspc.2021.102750 10.1016/j.irbm.2021.01.004 10.1016/j.bbe.2021.05.013 10.1016/j.bspc.2020.102365 10.1016/j.compbiomed.2021.104306 10.1016/j.compbiomed.2020.103795 10.1155/2021/1896762 10.1016/j.compbiomed.2020.103792 10.1016/j.bspc.2021.102920 10.1016/j.bbe.2021.04.006 10.1166/jmihi.2019.2654 10.1166/jmihi.2020.3169 10.1155/2021/5990999 10.1155/2021/5582418 10.1155/2021/5584004 10.1016/j.comcom.2021.06.011 10.1155/2021/2921737 10.1016/j.jbi.2021.103751 10.1016/j.bbe.2020.08.005 10.1109/ACCESS.2021.3121791 10.1155/2021/7894849 10.1007/s12559-021-09836-7 10.1007/s11356-021-16398-6
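The transfer-learning pattern described in the abstract above (an ImageNet-pretrained backbone with a small binary head for "COVID" vs. "Non-COVID" CT slices) can be illustrated with a minimal Keras sketch. The image size, head width, optimizer settings, and the `train_ds`/`val_ds` dataset objects are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of transfer learning with a VGG16 backbone for binary CT classification.
# Hyperparameters and data pipeline are assumptions, not the paper's exact setup.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional features; optionally fine-tune later

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # COVID vs. Non-COVID
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds are assumed tf.data.Dataset objects of (image, label) pairs
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```

Swapping `VGG16` for `DenseNet121`, `MobileNet`, or the other backbones listed in the abstract only changes the first line of the model definition.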
Effective deep learning approaches for predicting COVID-19 outcomes from chest computed tomography volumes.
The rapid evolution of the novel coronavirus disease (COVID-19) pandemic has resulted in an urgent need for effective clinical tools to reduce transmission and manage severe illness. Numerous teams are quickly developing artificial intelligence approaches to these problems, including using deep learning to predict COVID-19 diagnosis and prognosis from chest computed tomography (CT) imaging data. In this work, we assess the value of aggregated chest CT data for COVID-19 prognosis compared to clinical metadata alone. We develop a novel patient-level algorithm to aggregate the chest CT volume into a 2D representation that can be easily integrated with clinical metadata to distinguish COVID-19 pneumonia chest CT volumes from those of healthy participants and participants with other viral pneumonia. Furthermore, we present a multitask model for joint segmentation of different classes of pulmonary lesions present in COVID-19 infected lungs that can outperform individual segmentation models for each task. We directly compare this multitask segmentation approach to combining feature-agnostic volumetric CT classification feature maps with clinical metadata for predicting mortality. We show that the combination of features derived from the chest CT volumes improves the AUC performance to 0.80 from the 0.52 obtained by using patients' clinical data alone. These approaches enable the automated extraction of clinically relevant features from chest CT volumes for risk stratification of COVID-19 patients.
Scientific reports
"2022-02-04T00:00:00"
[ "AnthonyOrtiz", "AnusuaTrivedi", "JocelynDesbiens", "MarianBlazes", "CalebRobinson", "SunilGupta", "RahulDodhia", "Pavan KBhatraju", "W ConradLiles", "AaronLee", "Juan M LavistaFerres" ]
10.1038/s41598-022-05532-0 10.1016/S1473-3099(20)30134-1 10.1016/S1473-3099(20)30086-4 10.3389/fbioe.2020.00898 10.1016/j.cell.2020.04.045 10.1148/ryct.2020200034 10.1371/journal.pone.0230548 10.1148/ryai.2020200048 10.1016/j.acha.2012.07.005 10.1016/j.mri.2012.06.010
COVID-19 detection from chest x-ray using MobileNet and residual separable convolution block.
A newly emerged coronavirus disease has affected the social and economic life of the world. This virus mainly infects the respiratory system and spreads through airborne transmission. Several countries have witnessed the serious consequences of the COVID-19 pandemic. Early detection of COVID-19 infection is a critical step in saving a patient from death. Chest radiography examination is a fast and cost-effective way to detect COVID-19. Several researchers have been motivated to automate the COVID-19 detection and diagnosis process using chest X-ray images. However, existing models employ deep networks and suffer from long training times. This work presents transfer learning and a residual separable convolution block for COVID-19 detection. The proposed model utilizes a pre-trained MobileNet for binary image classification. The proposed residual separable convolution block improves the performance of the basic MobileNet. Two publicly available datasets, COVID5K and COVIDRD, have been considered for the evaluation of the proposed model. Our proposed model exhibits superior performance compared to existing state-of-the-art and pre-trained models, with 99% accuracy on both datasets. We have achieved similar performance on noisy datasets. Moreover, the proposed model outperforms existing pre-trained models with less training time and offers competitive performance compared to the basic MobileNet. Further, our model is suitable for mobile applications as it uses fewer parameters and less training time.
Soft computing
"2022-02-03T00:00:00"
[ "V Santhosh KumarTangudu", "JagadeeshKakarla", "Isunuri BalaVenkateswarlu" ]
10.1007/s00500-021-06579-3 10.1007/s10489-020-01829-7 10.1016/j.patrec.2020.09.010 10.1109/JBHI.2020.2982103 10.1109/TETCI.2018.2866254 10.1016/j.ijmedinf.2020.104284 10.1109/ACCESS.2020.3016780 10.1109/TNSRE.2018.2834554 10.1016/j.jinf.2020.03.007 10.1016/j.media.2020.101794 10.1109/JBHI.2020.2991043 10.1186/s12890-020-01286-5 10.1109/ACCESS.2020.3025010
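A hedged sketch of how a residual separable convolution block of the kind described above might be appended to a pretrained MobileNet is shown below; the filter count, kernel size, and exact placement are assumptions, since the paper defines its own block.

```python
# Sketch of a depthwise-separable convolution block with a residual (skip) connection,
# appended to a frozen MobileNet feature extractor. Filter counts are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

def residual_separable_block(x, filters=256):
    """Separable convolution followed by batch norm, ReLU, and a residual addition."""
    shortcut = x
    y = layers.SeparableConv2D(filters, 3, padding="same", use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    # project the shortcut with a 1x1 convolution if the channel counts differ
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same", use_bias=False)(shortcut)
    return layers.ReLU()(layers.Add()([y, shortcut]))

base = tf.keras.applications.MobileNet(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False
features = residual_separable_block(base.output, filters=256)
out = layers.Dense(1, activation="sigmoid")(layers.GlobalAveragePooling2D()(features))
model = tf.keras.Model(base.input, out)
```

Because the block adds relatively few parameters on top of the frozen backbone, this style of model keeps training time and model size small, which is in line with the mobile-deployment argument made in the abstract.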
COVID-19 CT image recognition algorithm based on transformer and CNN.
Novel coronavirus pneumonia (COVID-19) broke out in 2019 and has had a great impact on the development of the world economy and people's lives. As a new mainstream image processing method, deep learning networks have been constructed to extract medical features from chest CT images and have been used as a new detection method in clinical practice. However, due to the medical characteristics of COVID-19 CT images, the lesions are widely distributed and have many local features. Therefore, it is difficult to diagnose directly by using existing deep learning models. According to the medical features of CT images in COVID-19, a parallel bi-branch model (Trans-CNN Net) based on a Transformer module and a Convolutional Neural Network module is proposed, making full use of the local feature extraction capability of the Convolutional Neural Network and the global feature extraction advantage of the Transformer. According to the principle of cross-fusion, a bi-directional feature fusion structure is designed, in which features extracted from the two branches are fused bi-directionally, and the parallel branches are combined by a feature fusion module, forming a model that can extract features at different scales. To verify the classification performance of the network, the classification accuracy on the COVIDx-CT dataset is 96.7%, which is obviously higher than that of a typical CNN (ResNet-152, 95.2%) and a Transformer network (Deit-B, 75.8%). These results demonstrate that classification accuracy is improved. This model also provides a new method for the diagnosis of COVID-19, and through the combination of deep learning and medical imaging, it promotes the development of real-time diagnosis of lung diseases caused by COVID-19 infection, which is helpful for reliable and rapid diagnosis, thus saving precious lives.
Displays
"2022-02-01T00:00:00"
[ "XiaoleFan", "XiufangFeng", "YunyunDong", "HuichaoHou" ]
10.1016/j.displa.2022.102150 10.1016/j.tmaid.2020.101623
Feasibility study of multi-site split learning for privacy-preserving medical systems under data imbalance constraints in COVID-19, X-ray, and cholesterol dataset.
It seems as though progressively more people are in the race to upload content, data, and information online, and hospitals have not neglected this trend either. Hospitals are now at the forefront of multi-site medical data sharing to provide ground-breaking advancements in the way health records are shared and patients are diagnosed. Sharing of medical data is essential in modern medical research. Yet, as with all data sharing technology, the challenge is to balance improved treatment with protecting patients' personal information. This paper provides a novel split learning algorithm, coined "multi-site split learning", which enables a secure transfer of medical data between multiple hospitals without fear of exposing personal data contained in patient records. It also explores the effects of varying the number of end-systems and the ratio of data imbalance on the deep learning performance. A guideline for the optimal configuration of split learning that ensures privacy of patient data whilst achieving good performance is given empirically. We argue the benefits of our multi-site split learning algorithm, especially regarding the privacy-preserving factor, using CT scans of COVID-19 patients, X-ray bone scans, and cholesterol level medical data.
Scientific reports
"2022-01-29T00:00:00"
[ "Yoo JeongHa", "GusangLee", "MinjaeYoo", "SoyiJung", "SeehwanYoo", "JoongheonKim" ]
10.1038/s41598-022-05615-y 10.1016/j.knosys.2020.106647 10.1080/07391102.2021.1875049 10.1016/j.eswa.2020.114054 10.3233/XST-200784 10.1016/j.compbiomed.2021.104319 10.1002/jemt.23713 10.1109/JIOT.2021.3055804 10.1109/ACCESS.2021.3108455 10.2471/BLT.17.204891 10.1016/j.asoc.2020.106885 10.1038/s41597-021-00900-3 10.1016/j.bspc.2021.102588 10.1016/j.cca.2012.09.010
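The basic split-learning exchange that the abstract builds on can be sketched with a toy two-part network in PyTorch: only intermediate ("smashed") activations and their gradients cross the client/server boundary, never the raw images. The layer sizes, optimizer settings, and toy batch below are illustrative assumptions and do not reproduce the paper's multi-site protocol.

```python
# Conceptual sketch of one split-learning training step (assumed toy MLP split).
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())   # runs at the hospital
server_net = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))    # runs at the server

opt_c = torch.optim.Adam(client_net.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(server_net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 64, 64)      # toy batch of images held by the hospital
y = torch.randint(0, 2, (8,))      # labels, shared according to the chosen protocol

# 1) client forward pass: only the smashed activations leave the hospital
smashed = client_net(x)
smashed_sent = smashed.detach().requires_grad_()   # what the server receives

# 2) server forward/backward pass on the smashed data
logits = server_net(smashed_sent)
loss = loss_fn(logits, y)
opt_s.zero_grad()
loss.backward()
opt_s.step()

# 3) the gradient of the smashed activations is returned to the client,
#    which completes its own backward pass without ever sharing raw images
opt_c.zero_grad()
smashed.backward(smashed_sent.grad)
opt_c.step()
```

Extending this to multiple hospitals, as in the paper, amounts to repeating steps 1-3 with several clients sharing (or synchronizing) the server-side portion of the model.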
A fuzzy-enhanced deep learning approach for early detection of Covid-19 pneumonia from portable chest X-ray images.
The Covid-19 pandemic is the defining global health crisis of our time. Chest X-Rays (CXR) have been an important imaging modality for assisting in the diagnosis and management of hospitalised Covid-19 patients. However, their interpretation is time intensive for radiologists. Accurate computer aided systems can facilitate early diagnosis of Covid-19 and effective triaging. In this paper, we propose a fuzzy logic based deep learning (DL) approach to differentiate between CXR images of patients with Covid-19 pneumonia and with interstitial pneumonias not related to Covid-19. The developed model here, referred to as
Neurocomputing
"2022-01-27T00:00:00"
[ "CosimoIeracitano", "NadiaMammone", "MarioVersaci", "GiuseppeVarone", "Abder-RahmanAli", "AntonioArmentano", "GraziaCalabrese", "AnnaFerrarelli", "LorenaTurano", "CarmelaTebala", "ZainHussain", "ZakariyaSheikh", "AzizSheikh", "GiuseppeSceni", "AmirHussain", "Francesco CarloMorabito" ]
10.1016/j.neucom.2022.01.055
COVID-19 diagnosis using state-of-the-art CNN architecture features and Bayesian Optimization.
The coronavirus outbreak of 2019, called COVID-19, which originated in Wuhan, negatively affected the lives of millions of people, and many people died from this infection. To prevent the spread of the disease, which is still in effect, various restriction decisions have been taken all over the world. In addition, the number of COVID-19 tests has been increased to quarantine infected people. However, due to the problems encountered in the supply of RT-PCR tests and the ease of obtaining Computed Tomography and X-ray images, imaging-based methods have become very popular in the diagnosis of COVID-19. Therefore, studies using these images to classify COVID-19 have increased. This paper presents a classification method for computed tomography chest images in the COVID-19 Radiography Database using features extracted by popular Convolutional Neural Network (CNN) models (AlexNet, ResNet18, ResNet50, Inceptionv3, Densenet201, Inceptionresnetv2, MobileNetv2, GoogleNet). The determination of the hyperparameters of Machine Learning (ML) algorithms by Bayesian optimization and ANN-based image segmentation are the two main contributions of this study. First of all, lung segmentation is performed automatically from the raw image with Artificial Neural Networks (ANNs). To ensure data diversity, data augmentation is applied to the COVID-19 class, which has fewer images than the other two classes. Then these images are applied as input to five different CNN models. The features extracted from each CNN model are given as input to four different ML algorithms, namely Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), Naive Bayes (NB), and Decision Tree (DT), for classification. To achieve the most successful classification accuracy, the hyperparameters of each ML algorithm are determined using Bayesian optimization. With the classification made using these hyperparameters, the highest accuracy, 96.29%, is obtained with the DenseNet201 model and the SVM algorithm. The Sensitivity, Precision, Specificity, MCC, and F1-Score metric values for this structure are 0.9642, 0.9642, 0.9812, 0.9641 and 0.9453, respectively. These results showed that ML methods with the most optimal hyperparameters can produce successful results.
Computers in biology and medicine
"2022-01-26T00:00:00"
[ "Muhammet FatihAslan", "KadirSabanci", "AkifDurdu", "Muhammed FahriUnlersen" ]
10.1016/j.compbiomed.2022.105244 10.1016/j.clinimag.2020.04.001 10.1109/RBME.2020.2987975 10.1101/2020.05.01.20088211 10.1016/j.cell.2020.04.045
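The feature-extraction-plus-classical-classifier pipeline summarized above can be sketched as follows, using DenseNet201 as a frozen feature extractor and an SVM on top. The hyperparameter values shown are placeholders; in the paper they are selected by Bayesian optimization rather than fixed by hand.

```python
# Sketch: pretrained CNN as a fixed feature extractor, classical ML classifier on top.
# Image preprocessing, split ratios, and SVM hyperparameters are assumptions.
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

extractor = tf.keras.applications.DenseNet201(weights="imagenet", include_top=False,
                                              pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (N, 224, 224, 3), already preprocessed."""
    return extractor.predict(images, verbose=0)   # (N, 1920) feature vectors

# X: preprocessed lung images, y: class labels (assumed to exist)
# feats = extract_features(X)
# Xtr, Xte, ytr, yte = train_test_split(feats, y, test_size=0.2, stratify=y, random_state=0)
# clf = SVC(C=10.0, kernel="rbf", gamma="scale")   # placeholder hyperparameters
# clf.fit(Xtr, ytr)
# print("test accuracy:", clf.score(Xte, yte))
```

Replacing `SVC` with `KNeighborsClassifier`, `GaussianNB`, or `DecisionTreeClassifier` from scikit-learn reproduces the other three classifier branches mentioned in the abstract.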
Efficient and visualizable convolutional neural networks for COVID-19 classification using Chest CT.
With coronavirus disease 2019 (COVID-19) cases rising rapidly, deep learning has emerged as a promising diagnosis technique. However, identifying the most accurate models to characterize COVID-19 patients is challenging because comparing results obtained with different types of data and acquisition processes is non-trivial. In this paper we designed, evaluated, and compared the performance of 20 convolutional neural networks in classifying patients as COVID-19 positive, healthy, or suffering from other pulmonary lung infections based on chest computed tomography (CT) scans, serving as the first to consider the EfficientNet family for COVID-19 diagnosis and employ intermediate activation maps for visualizing model performance. All models are trained and evaluated in Python using 4173 chest CT images from the dataset entitled "A COVID multiclass dataset of CT scans," with 2168, 758, and 1247 images of patients that are COVID-19 positive, healthy, or suffering from other pulmonary infections, respectively. EfficientNet-B5 was identified as the best model with an F1 score of 0.9769 ± 0.0046, accuracy of 0.9759 ± 0.0048, sensitivity of 0.9788 ± 0.0055, specificity of 0.9730 ± 0.0057, and precision of 0.9751 ± 0.0051. On an alternate 2-class dataset, EfficientNetB5 obtained an accuracy of 0.9845 ± 0.0109, F1 score of 0.9599 ± 0.0251, sensitivity of 0.9682 ± 0.0099, specificity of 0.9883 ± 0.0150, and precision of 0.9526 ± 0.0523. Intermediate activation maps and Gradient-weighted Class Activation Mappings offered human-interpretable evidence of the model's perception of ground-glass opacities and consolidations, hinting towards a promising use-case of artificial intelligence-assisted radiology tools. With a prediction speed of under 0.1 s on GPUs and 0.5 s on CPUs, our proposed model offers a rapid, scalable, and accurate diagnostic for COVID-19.
Expert systems with applications
"2022-01-26T00:00:00"
[ "AkshGarg", "SanaSalehi", "Marianna LaRocca", "RachaelGarner", "DominiqueDuncan" ]
10.1016/j.eswa.2022.116540 10.1093/COMJNL/BXAB051 10.1016/j.compbiomed.2020.103795 10.1007/s10489-020-01714-3 10.1016/j.compbiomed.2021.104454 10.1109/CVPR.2017.195 10.1007/s00521-021-05910-1 10.1109/CVPR.2016.90 10.1007/S12652-021-03282-X 10.1109/ACCESS.2021.3058537 10.1038/s41467-020-18685-1 10.1109/ICCCIS51004.2021.9397189 10.1109/SAMI50585.2021.9378646 10.1148/radiol.2020201343 10.1016/j.chaos.2020.110059 10.1155/2021/5528441 10.1016/j.asoc.2020.106691 10.1109/INISTA49547.2020.9194651 10.1109/ACCESS.2021.3083516 10.1016/j.compbiomed.2020.103792 10.1007/s11263-019-01228-7 10.1007/S11042-021-11158-7 10.1109/CVPR.2015.7298594 10.1109/CVPR.2016.308 10.1016/j.matpr.2020.06.245 10.1016/j.asoc.2020.106897 10.1016/j.ejrad.2020.109041 10.1109/TIP.8310.1109/TIP.2021.3058783 10.1148/radiol.2020201491 10.1007/s00330-020-06801-0
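The Gradient-weighted Class Activation Mapping visualizations mentioned above follow a standard recipe: the gradients of the class score with respect to the last convolutional feature maps are global-average-pooled and used to weight those maps. A minimal sketch is given below; the layer name "top_conv" and the use of an off-the-shelf EfficientNetB5 are assumptions, not necessarily what the authors used.

```python
# Minimal Grad-CAM sketch for a Keras functional model (assumed layer name and model).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=None):
    """image: array of shape (1, H, W, 3), preprocessed for `model`."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)                  # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))            # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)  # weighted sum of feature maps
    cam = tf.nn.relu(cam)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()      # normalised heatmap

# model = tf.keras.applications.EfficientNetB5(weights="imagenet")
# heatmap = grad_cam(model, img_batch, conv_layer_name="top_conv")
```

The returned heatmap is typically resized to the input resolution and overlaid on the CT slice to show which regions drove the prediction.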
SAM: Self-augmentation mechanism for COVID-19 detection using chest X-ray images.
COVID-19 is a rapidly spreading viral disease and has affected over 100 countries worldwide. The numbers of casualties and cases of infection have escalated particularly in countries with weakened healthcare systems. Currently, reverse transcription-polymerase chain reaction (RT-PCR) is the test of choice for diagnosing COVID-19. However, current evidence suggests that COVID-19 infected patients mostly develop a lung infection after coming in contact with this virus. Therefore, chest X-ray (i.e., radiography) and chest CT can be a surrogate in some countries where PCR is not readily available. This has forced the scientific community to detect COVID-19 infection from X-ray images, and recently proposed machine learning methods offer great promise for fast and accurate detection. Deep learning with convolutional neural networks (CNNs) has been successfully applied to radiological imaging for improving the accuracy of diagnosis. However, the performance remains limited due to the lack of representative X-ray images available in public benchmark datasets. To alleviate this issue, we propose a self-augmentation mechanism for data augmentation in the feature space rather than in the data space using reconstruction independent component analysis (RICA). Specifically, a unified architecture is proposed which contains a deep convolutional neural network (CNN), a feature augmentation mechanism, and a bidirectional LSTM (BiLSTM). The CNN provides the high-level features extracted at the pooling layer, where the augmentation mechanism chooses the most relevant features and generates low-dimensional augmented features. Finally, BiLSTM is used to classify the processed sequential information. We conducted experiments on three publicly available databases to show that the proposed approach achieves state-of-the-art results with accuracies of 97%, 84%, and 98%. Explainability analysis has been carried out using feature visualization through PCA projection and t-SNE plots.
Knowledge-based systems
"2022-01-25T00:00:00"
[ "UsmanMuhammad", "Md ZiaulHoque", "MouradOussalah", "AnjaKeskinarkaus", "TapioSeppänen", "PinakiSarder" ]
10.1016/j.knosys.2022.108207
Contour-enhanced attention CNN for CT-based COVID-19 segmentation.
Accurate detection of COVID-19 is one of the challenging research topics in today's healthcare sector to control the coronavirus pandemic. Automatic data-powered insights for COVID-19 localization from medical imaging modality like chest CT scan tremendously augment clinical care assistance. In this research, a Contour-aware Attention Decoder CNN has been proposed to precisely segment COVID-19 infected tissues in a very effective way. It introduces a novel attention scheme to extract boundary, shape cues from CT contours and leverage these features in refining the infected areas. For every decoded pixel, the attention module harvests contextual information in its spatial neighborhood from the contour feature maps. As a result of incorporating such rich structural details into decoding via dense attention, the CNN is able to capture even intricate morphological details. The decoder is also augmented with a Cross Context Attention Fusion Upsampling to robustly reconstruct deep semantic features back to high-resolution segmentation map. It employs a novel pixel-precise attention model that draws relevant encoder features to aid in effective upsampling. The proposed CNN was evaluated on 3D scans from MosMedData and Jun Ma benchmarked datasets. It achieved state-of-the-art performance with a high dice similarity coefficient of 85.43% and a recall of 88.10%.
Pattern recognition
"2022-01-25T00:00:00"
[ "RKarthik", "RMenaka", "HariharanM", "DaehanWon" ]
10.1016/j.patcog.2022.108538 10.1109/JBHI.2020.2986926 10.1109/TPAMI.2020.3007032
COVID-19 detection in CT and CXR images using deep learning models.
Infectious diseases pose a threat to human life and could affect the whole world in a very short time. Coronavirus disease 2019 (COVID-19) is an example of such a harmful disease. COVID-19 is a pandemic caused by the coronavirus SARS-CoV-2, an emerging infectious disease that first appeared in December 2019 in Wuhan, China, before spreading around the world on a very large scale. The continued rise in the number of positive COVID-19 cases has disrupted the health care system in many countries, creating a lot of stress for governing bodies around the world, hence the need for a rapid way to identify cases of this disease. Medical imaging is a widely accepted technique for early detection and diagnosis of the disease and includes different techniques such as Chest X-ray (CXR), Computed Tomography (CT) scan, etc. In this paper, we propose a methodology to investigate the potential of deep transfer learning in building a classifier to detect COVID-19 positive patients using CT scan and CXR images. A data augmentation technique is used to increase the size of the training dataset in order to address overfitting and enhance the generalization ability of the model. Our contribution consists of a comprehensive evaluation of a series of pre-trained deep neural networks: ResNet50, InceptionV3, VGGNet-19, and Xception, using data augmentation. The findings show that deep learning is effective at detecting COVID-19 cases. From the results of the experiments it was found that, when considering each modality separately, the VGGNet-19 model outperforms the other three models on the CT image dataset, where it achieved 88.5% precision, 86% recall, 86.5% F1-score, and 87% accuracy, while the refined Xception version gave the highest precision, recall, F1-score, and accuracy values, all equal to 98%, using the CXR image dataset. On the other hand, by averaging over the two modalities, X-ray and CT, VGG-19 presents the best scores: 90.5% for accuracy and F1-score, 90.3% for recall, and 91.5% for precision. These results make it possible to automate the analysis of chest CT scans and X-ray images with high accuracy, which can be useful in cases where RT-PCR testing kits and materials are limited.
Biogerontology
"2022-01-23T00:00:00"
[ "InesChouat", "AmiraEchtioui", "RafikKhemakhem", "WassimZouch", "MohamedGhorbel", "Ahmed BenHamida" ]
10.1007/s10522-021-09946-7 10.1148/radiol.2020200642 10.1007/s13246-020-00865-4 10.21203/rs.3.rs-32247/v1 10.1016/S0140-6736(20)30211-7 10.1007/s11042-021-10783-6 10.1007/s11042-021-11192-5 10.1007/S00521-020-05437-X 10.1101/2020.11.08.20227819 10.1007/s00330-021-07715-1 10.1016/j.eng.2020.04.010 10.1001/jama.2020.2648
Using a Deep Learning Model to Explore the Impact of Clinical Data on COVID-19 Diagnosis Using Chest X-ray.
The coronavirus pandemic (COVID-19) is disrupting the entire world; its rapid global spread threatens to affect millions of people. Accurate and timely diagnosis of COVID-19 is essential to control the spread and alleviate risk. Motivated by the promising results achieved by integrating machine learning (ML), particularly deep learning (DL), in automating the diagnosis of multiple diseases, the current study proposes a model based on deep learning for the automated diagnosis of COVID-19 using chest X-ray images (CXR) and clinical data of the patient. The aim of this study is to investigate the effects of integrating clinical patient data with the CXR for automated COVID-19 diagnosis. The proposed model used data collected from King Fahad University Hospital, Dammam, KSA, which consists of 270 patient records. The experiments were carried out first with clinical data, second with the CXR, and finally with clinical data and CXR. The fusion technique was used to combine the clinical features and the features extracted from images. The study found that integrating clinical data with the CXR improves diagnostic accuracy. Using the clinical data and the CXR, the model achieved an accuracy of 0.970, a recall of 0.986, a precision of 0.978, and an F-score of 0.982. Further validation was performed by comparing the performance of the proposed system with the diagnosis of an expert. Additionally, the results have shown that the proposed system can be used as a tool that can help doctors in COVID-19 diagnosis.
Sensors (Basel, Switzerland)
"2022-01-23T00:00:00"
[ "Irfan UllahKhan", "NidaAslam", "TalhaAnwar", "Hind SAlsaif", "Sara Mhd BacharChrouf", "Norah AAlzahrani", "Fatimah AhmedAlamoudi", "Mariam Moataz AlyKamaleldin", "Khaled BassamAwary" ]
10.3390/s22020669 10.1016/j.cca.2020.03.009 10.7326/M20-1495 10.1016/j.chaos.2020.110338 10.1093/bib/bbw068 10.1016/j.ibmed.2020.100013 10.1155/2021/5587188 10.1109/ACCESS.2021.3097559 10.3390/ijerph18126429 10.3390/jcm9092990 10.1017/dmp.2020.346 10.1007/s00330-020-07269-8 10.1148/radiol.2020203511 10.3390/info11090419 10.1177/2472630320958376 10.1148/radiol.2020202944 10.9781/ijimai.2021.04.001 10.1016/j.media.2020.101794 10.3390/app10165683 10.21227/w3aw-rv39 10.1016/j.media.2020.101797 10.1016/j.mlwa.2021.100138 10.1007/s13755-021-00166-4 10.1371/journal.pone.0257884 10.1007/s00330-021-08050-1 10.1038/s41598-020-74539-2
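The feature-level fusion of clinical variables with CXR features described above can be sketched with the Keras functional API; the backbone, the number of clinical variables (10 is assumed here), and the layer sizes are illustrative assumptions rather than the study's exact architecture.

```python
# Sketch: concatenating CNN image features with tabular clinical features before a classifier.
import tensorflow as tf
from tensorflow.keras import layers

img_in = layers.Input(shape=(224, 224, 3), name="cxr")
clin_in = layers.Input(shape=(10,), name="clinical")       # assumed 10 clinical variables

backbone = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")
img_feat = backbone(img_in)                                 # (None, 2048) image features

clin_feat = layers.Dense(32, activation="relu")(clin_in)    # embed the clinical data
fused = layers.Concatenate()([img_feat, clin_feat])         # feature-level fusion
fused = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid")(fused)          # COVID-19 probability

model = tf.keras.Model([img_in, clin_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit({"cxr": images, "clinical": tabular}, labels, epochs=30)
```

Training the image-only and clinical-only baselines mentioned in the abstract corresponds to dropping one of the two input branches from this graph.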
Automated detection of COVID-19 through convolutional neural network using chest x-ray images.
The COVID-19 epidemic has a catastrophic impact on global well-being and public health. More than 27 million confirmed cases have been reported worldwide until now. Due to the growing number of confirmed cases, and challenges to the variations of the COVID-19, timely and accurate classification of healthy and infected patients is essential to control and treat COVID-19. We aim to develop a deep learning-based system for the persuasive classification and reliable detection of COVID-19 using chest radiography. Firstly, we evaluate the performance of various state-of-the-art convolutional neural networks (CNNs) proposed over recent years for medical image classification. Secondly, we develop and train CNN from scratch. In both cases, we use a public X-Ray dataset for training and validation purposes. For transfer learning, we obtain 100% accuracy for binary classification (i.e., Normal/COVID-19) and 87.50% accuracy for tertiary classification (Normal/COVID-19/Pneumonia). With the CNN trained from scratch, we achieve 93.75% accuracy for tertiary classification. In the case of transfer learning, the classification accuracy drops with the increased number of classes. The results are demonstrated by comprehensive receiver operating characteristics (ROC) and confusion metric analysis with 10-fold cross-validation.
PloS one
"2022-01-22T00:00:00"
[ "RubinaSarki", "KhandakarAhmed", "HuaWang", "YanchunZhang", "KateWang" ]
10.1371/journal.pone.0262052 10.1001/jama.2020.3786 10.1148/radiol.2020200642 10.1007/s13755-021-00152-w 10.1007/s13755-021-00158-4 10.1056/NEJMoa2002032 10.1183/09031936.01.00213501 10.1007/s10044-021-00984-y 10.1038/s41598-020-76550-z 10.1016/j.compbiomed.2020.103792 10.1007/s13755-020-00129-1 10.1038/nature14539 10.1007/s13755-020-00125-5 10.1007/s13755-018-0046-0 10.1007/s13755-019-0084-2 10.1101/763136 10.1109/ACCESS.2020.3015258 10.1016/j.eng.2020.04.010 10.1007/s10489-020-02051-1 10.1148/ryct.2020200034 10.1016/j.media.2020.101794 10.1016/j.patrec.2020.09.010 10.1016/j.mehy.2020.109761 10.1016/j.chaos.2020.110495 10.1016/j.eswa.2020.114054 10.1016/j.cell.2018.02.010 10.1016/j.media.2017.07.005 10.1016/j.procs.2016.07.014 10.1049/el:20083469 10.1016/j.ejrnm.2015.01.004 10.1016/j.ipm.2009.03.002 10.1007/s13246-020-00865-4 10.1136/bjo.80.11.940
COVID-19 detection using chest X-ray images based on a developed deep neural network.
Currently, a new coronavirus called COVID-19 is the biggest challenge facing humanity in the 21st century. The spread of this virus is such that mortality has risen strongly in cities across many countries. Therefore, it is necessary to think of a solution to handle the disease through fast and timely diagnosis. The aim of this study is to propose a method that uses chest X-ray imagery to classify between 2 and 4 classes in 7 different scenarios, involving Bacterial, Viral, Healthy, and COVID-19 classes. Six different databases of chest X-ray imagery that have been widely used in recent studies were gathered for this aim. A Convolutional Neural Network-Long Short Time Memory model is designed and developed to extract features from raw data hierarchically. In order to make more realistic assumptions and use the proposed method in the practical field, white Gaussian noise is added to the raw chest X-ray imagery. Additionally, the proposed network is tested and investigated not only on the 6 expressed databases but also on two additional databases. On the test set, the proposed network achieved an accuracy of more than 90% for all scenarios excluding Scenario V, i.e. Healthy against COVID-19 against Viral, and also achieved 99% accuracy for separating COVID-19 from the Healthy group. The results showed that the proposed network is robust to noise up to 1 dB. It is worth noting that the proposed network also achieved more than 90% accuracy on the two additional databases, which were only used as test databases. In addition, in comparison to state-of-the-art pneumonia detection approaches, the final results obtained from the proposed network are very promising. The proposed network is effective in detecting COVID-19 and other lung infectious diseases using chest X-ray imagery and can thus assist radiologists in making rapid and accurate detections.
SLAS technology
"2022-01-22T00:00:00"
[ "ZohrehMousavi", "NahalShahini", "SobhanSheykhivand", "SinaMojtahedi", "AfroozArshadi" ]
10.1016/j.slast.2021.10.011
Deep Learning-Based Four-Region Lung Segmentation in Chest Radiography for COVID-19 Diagnosis.
Imaging plays an important role in assessing the severity of COVID-19 pneumonia. Recent COVID-19 research indicates that disease progression propagates from the bottom of the lungs to the top. However, chest radiography (CXR) cannot directly provide a quantitative metric of radiographic opacities, and existing AI-assisted CXR analysis methods do not quantify the regional severity. In this paper, to assist the regional analysis, we developed a fully automated framework using deep learning-based four-region segmentation and detection models to assist the quantification of COVID-19 pneumonia. Specifically, a segmentation model is first applied to separate the left and right lungs, and then a detection network for the carina and left hilum is used to separate the upper and lower lungs. To improve the segmentation performance, an ensemble strategy with five models is exploited. We evaluated the clinical relevance of the proposed method compared with the Radiographic Assessment of Lung Edema (RALE) annotated by physicians. Mean intensities of the segmented four regions indicate a positive correlation to the regional extent and density scores of pulmonary opacities based on the RALE. Therefore, the proposed method can accurately assist the quantification of regional pulmonary opacities in COVID-19 pneumonia patients.
Diagnostics (Basel, Switzerland)
"2022-01-22T00:00:00"
[ "Young-GonKim", "KyungsangKim", "DufanWu", "HuiRen", "Won YoungTak", "Soo YoungPark", "Yu RimLee", "Min KyuKang", "Jung GilPark", "Byung SeokKim", "Woo JinChung", "Mannudeep KKalra", "QuanzhengLi" ]
10.3390/diagnostics12010101 10.5694/mja2.50674 10.1016/S0140-6736(20)30183-5 10.1016/S1473-3099(20)30120-1 10.1148/radiol.2020200642 10.2214/AJR.20.22975 10.1007/s10044-021-00984-y 10.1038/s41598-020-76550-z 10.1007/s40846-020-00529-4 10.2214/AJR.19.21512 10.3389/fphys.2021.672823 10.1016/j.cmpb.2019.06.005 10.1136/thoraxjnl-2017-211280 10.1148/radiol.2020200843 10.1148/radiol.2020201754 10.1007/s11547-020-01200-3 10.1007/978-3-319-24574-4_28 10.3390/info11020125 10.1148/rg.2016150115 10.2214/ajr.174.1.1740071
COVID-Net CXR-S: Deep Convolutional Neural Network for Severity Assessment of COVID-19 Cases from Chest X-ray Images.
The world is still struggling in controlling and containing the spread of the COVID-19 pandemic caused by the SARS-CoV-2 virus. The medical conditions associated with SARS-CoV-2 infections have resulted in a surge in the number of patients at clinics and hospitals, leading to a significantly increased strain on healthcare resources. As such, an important part of managing and handling patients with SARS-CoV-2 infections within the clinical workflow is severity assessment, which is often conducted with the use of chest X-ray (CXR) images. In this work, we introduce COVID-Net CXR-S, a convolutional neural network for predicting the airspace severity of a SARS-CoV-2 positive patient based on a CXR image of the patient's chest. More specifically, we leveraged transfer learning to transfer representational knowledge gained from over 16,000 CXR images from a multinational cohort of over 15,000 SARS-CoV-2 positive and negative patient cases into a custom network architecture for severity assessment. Experimental results using the RSNA RICORD dataset showed that the proposed COVID-Net CXR-S has potential to be a powerful tool for computer-aided severity assessment of CXR images of COVID-19 positive patients. Furthermore, radiologist validation on select cases by two board-certified radiologists with over 10 and 19 years of experience, respectively, showed consistency between radiologist interpretation and critical factors leveraged by COVID-Net CXR-S for severity assessment. While not a production-ready solution, the ultimate goal for the open source release of COVID-Net CXR-S is to act as a catalyst for clinical scientists, machine learning researchers, as well as citizen scientists to develop innovative new clinical decision support solutions for helping clinicians around the world manage the continuing pandemic.
Diagnostics (Basel, Switzerland)
"2022-01-22T00:00:00"
[ "HosseinAboutalebi", "MayaPavlova", "Mohammad JavadShafiee", "AliSabri", "AmerAlaref", "AlexanderWong" ]
10.3390/diagnostics12010025 10.3389/fpubh.2020.00241 10.1186/s13054-020-03021-2 10.1136/thoraxjnl-2020-215518 10.1038/s41598-020-76550-z 10.1007/s10489-020-01902-1 10.3389/fmed.2020.608525 10.1101/2020.04.13.20063941 10.1016/j.cell.2020.04.045 10.7759/cureus.9448 10.1148/radiol.2021203957
Objective evaluation of deep uncertainty predictions for COVID-19 detection.
Deep neural networks (DNNs) have been widely applied for detecting COVID-19 in medical images. Existing studies mainly apply transfer learning and other data representation strategies to generate accurate point estimates. The generalization power of these networks is always questionable due to being developed using small datasets and failing to report their predictive confidence. Quantifying uncertainties associated with DNN predictions is a prerequisite for their trusted deployment in medical settings. Here we apply and evaluate three uncertainty quantification techniques for COVID-19 detection using chest X-Ray (CXR) images. The novel concept of an uncertainty confusion matrix is proposed and new performance metrics for the objective evaluation of uncertainty estimates are introduced. Through comprehensive experiments, it is shown that networks pretrained on CXR images outperform networks pretrained on natural image datasets such as ImageNet. Qualitative and quantitative evaluations also reveal that the predictive uncertainty estimates are statistically higher for erroneous predictions than for correct predictions. Accordingly, uncertainty quantification methods are capable of flagging risky predictions with high uncertainty estimates. We also observe that ensemble methods more reliably capture uncertainties during inference. DNN-based solutions for COVID-19 detection have been mainly proposed without any principled mechanism for risk mitigation. Previous studies have mainly focused on generating single-valued predictions using pretrained DNNs. In this paper, we comprehensively apply and comparatively evaluate three uncertainty quantification techniques for COVID-19 detection using chest X-Ray images. The novel concept of an uncertainty confusion matrix is proposed and new performance metrics for the objective evaluation of uncertainty estimates are introduced for the first time. Using these new uncertainty performance metrics, we quantitatively demonstrate when we can trust DNN predictions for COVID-19 detection from chest X-rays. It is important to note that the proposed novel uncertainty evaluation metrics are generic and could be applied for the evaluation of probabilistic forecasts in all classification problems.
Scientific reports
"2022-01-19T00:00:00"
[ "HamzehAsgharnezhad", "AfsharShamsi", "RoohallahAlizadehsani", "AbbasKhosravi", "SaeidNahavandi", "Zahra AlizadehSani", "DiptiSrinivasan", "Sheikh Mohammed SharifulIslam" ]
10.1038/s41598-022-05052-x 10.1038/nature21056 10.1038/s41591-018-0316-z 10.26599/BDMA.2020.9020012 10.26599/TST.2021.9010026 10.26599/BDMA.2020.9020013 10.1016/j.cnsns.2020.105372 10.26599/TST.2019.9010007 10.1148/ryai.2019180041 10.1148/radiol.2019191293 10.1093/jamia/ocv080
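One uncertainty quantification technique commonly evaluated in this setting, Monte Carlo dropout, can be sketched as follows: the same batch is passed through the network repeatedly with dropout kept active, and the spread of the softmax outputs is summarized as predictive entropy. The sample count and model are assumptions, and this sketch is not claimed to be the exact method compared in the paper.

```python
# Sketch: Monte Carlo dropout inference with predictive entropy as an uncertainty score.
import numpy as np
import tensorflow as tf

def mc_dropout_predict(model, x, n_samples=30):
    """x: batch of preprocessed images; returns mean softmax probabilities and entropy."""
    # training=True keeps dropout layers active at inference time
    probs = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])  # (T, N, C)
    mean_probs = probs.mean(axis=0)                                                # (N, C)
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)             # (N,)
    return mean_probs, entropy

# mean_probs, entropy = mc_dropout_predict(cxr_model, cxr_batch)
# High-entropy cases can be flagged for radiologist review rather than trusted automatically.
```

Thresholding the entropy and cross-tabulating it against correct and erroneous predictions gives exactly the kind of uncertainty confusion matrix the abstract introduces as an evaluation tool.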
Survey on Diagnosing CORONA VIRUS from Radiography Chest X-ray Images Using Convolutional Neural Networks.
The coronavirus continues to have harmful effects on the lives of people across the globe. Screening of infected persons is a vital step because it is a fast and low-cost way to identify cases. Such screening can be supported by chest X-ray images, which play a significant role and are widely used in the detection of the coronavirus disease (COVID-19). Radiological chest X-rays are easily available at low cost. This survey paper presents a Convolutional Neural Network (CNN)-based solution that helps in the detection of COVID-19 positive patients using radiography chest X-ray images. To test the efficiency of the solution, publicly available datasets of X-ray images of coronavirus positive and negative cases are used. Images of positive coronavirus patients and images of healthy persons are divided into test images and training images. The solution provides good results, with high classification accuracy within the test set-up. A GUI-based application then supports medical examination settings. This GUI application can be used on any computer and operated by any medical examiner or technician to identify coronavirus positive patients using radiography X-ray images. The result precisely provides the COVID-19 patient analysis through the chest X-ray images, and results can be retrieved within a few seconds.
Wireless personal communications
"2022-01-18T00:00:00"
[ "J TThirukrishna", "Sanda Reddy SaiKrishna", "PolicherlaShashank", "SSrikanth", "VRaghu" ]
10.1007/s11277-022-09463-x 10.1109/ACCESS.2020.3025010 10.1109/ACCESS.2020.3033762 10.1007/s11277-021-08466-4 10.1109/JAS.2020.1003393 10.1007/s10489-020-01829-7 10.1016/j.future.2017.11.042 10.1504/IJCC.2020.109379
Segmentation and classification on chest radiography: a systematic survey.
Chest radiography (X-ray) is the most common diagnostic method for pulmonary disorders. A trained radiologist is required for interpreting the radiographs. But sometimes, even experienced radiologists can misinterpret the findings. This leads to the need for computer-aided detection diagnosis. For decades, researchers were automatically detecting pulmonary disorders using the traditional computer vision (CV) methods. Now the availability of large annotated datasets and computing hardware has made it possible for deep learning to dominate the area. It is now the modus operandi for feature extraction, segmentation, detection, and classification tasks in medical imaging analysis. This paper focuses on the research conducted using chest X-rays for the lung segmentation and detection/classification of pulmonary disorders on publicly available datasets. The studies performed using the Generative Adversarial Network (GAN) models for segmentation and classification on chest X-rays are also included in this study. GAN has gained the interest of the CV community as it can help with medical data scarcity. In this study, we have also included the research conducted before the popularity of deep learning models to have a clear picture of the field. Many surveys have been published, but none of them is dedicated to chest X-rays. This study will help the readers to know about the existing techniques, approaches, and their significance.
The Visual computer
"2022-01-18T00:00:00"
[ "TarunAgrawal", "PrakashChoudhary" ]
10.1007/s00371-021-02352-7 10.1007/s00371-019-01630-9 10.1109/42.996338 10.1007/s10140-008-0763-9 10.1016/S1076-6332(98)80223-7 10.1148/radiology.182.1.1727272 10.1007/s13246-020-00966-0 10.1109/TPAMI.2016.2644615 10.1038/s41598-019-42294-8 10.1109/ACCESS.2018.2877890 10.1016/S0895-6111(98)00051-2 10.1109/TMI.2013.2290491 10.1007/BF01385685 10.1016/j.artmed.2020.101881 10.1109/TBME.2012.2226583 10.1118/1.3561504 10.1109/TSMCB.2004.831165 10.1016/j.acra.2005.08.035 10.1006/cviu.1995.1004 10.1109/TITB.2003.821313 10.1093/jamia/ocv080 10.1016/j.patrec.2020.12.010 10.1016/j.eswa.2021.115519 10.1118/1.597539 10.1118/1.3013555 10.1016/j.patcog.2017.10.013 10.1016/j.patrec.2019.11.040 10.1016/S0140-6736(99)06093-6 10.1007/s10489-020-02010-w 10.1007/s00371-020-01799-4 10.1109/TMI.2013.2284099 10.1007/s10489-020-01902-1 10.1016/j.measurement.2019.05.076 10.1007/s00371-019-01628-3 10.1007/BF00133570 10.1016/j.cell.2018.02.010 10.3390/s21020369 10.1109/34.387512 10.1148/radiol.2017162326 10.1038/nature14539 10.1109/5.726791 10.5588/ijtld.11.0425 10.1109/ACCESS.2018.2817023 10.1016/S1076-6332(03)80688-8 10.1016/j.artmed.2019.101744 10.1016/j.media.2017.07.005 10.1016/j.compmedimag.2019.05.005 10.1109/42.476112 10.1007/s11684-019-0726-4 10.1016/j.asoc.2020.106691 10.1016/S1361-8415(96)80007-7 10.1007/s11277-018-5702-9 10.1109/ACCESS.2020.3041867 10.1109/ACCESS.2020.3017915 10.1109/TMI.2018.2806086 10.1016/j.acra.2019.10.006 10.1001/jama.2011.1591 10.1016/j.chaos.2020.109944 10.1038/s41598-019-42557-4 10.1016/j.ajem.2009.07.011 10.1118/1.596209 10.3390/app10093233 10.1109/ACCESS.2020.3031384 10.1007/s00371-019-01649-y 10.1109/TIT.1956.1056810 10.1002/mp.14507 10.1037/h0042519 10.1016/j.media.2005.09.003 10.1148/radiol.2261011924 10.1109/TMI.2014.2305691 10.1109/TMI.2007.908130 10.3390/app11062751 10.1016/j.cmpb.2019.06.005 10.1016/j.media.2020.101693 10.1109/TMI.2002.803121 10.1109/42.993132 10.1109/42.974918 10.1016/j.media.2005.02.002 10.3390/s21051742 10.1118/1.598405 10.1109/ACCESS.2020.2994762 10.1118/1.597549 10.1118/1.597738 10.1007/s12021-018-9377-x 10.1016/j.media.2019.101552 10.1007/s10489-020-01867-1 10.1016/j.bspc.2018.01.011
Detecting Racial/Ethnic Health Disparities Using Deep Learning From Frontal Chest Radiography.
The aim of this study was to assess racial/ethnic and socioeconomic disparities in the difference between atherosclerotic vascular disease prevalence measured by a multitask convolutional neural network (CNN) deep learning model using frontal chest radiographs (CXRs) and the prevalence reflected by administrative hierarchical condition category codes in two cohorts of patients with coronavirus disease 2019 (COVID-19). A CNN model, previously published, was trained to predict atherosclerotic disease from ambulatory frontal CXRs. The model was then validated on two cohorts of patients with COVID-19: 814 ambulatory patients from a suburban location (presenting from March 14, 2020, to October 24, 2020, the internal ambulatory cohort) and 485 hospitalized patients from an inner-city location (hospitalized from March 14, 2020, to August 12, 2020, the external hospitalized cohort). The CNN model predictions were validated against electronic health record administrative codes in both cohorts and assessed using the area under the receiver operating characteristic curve (AUC). The CXRs from the ambulatory cohort were also reviewed by two board-certified radiologists and compared with the CNN-predicted values for the same cohort to produce a receiver operating characteristic curve and the AUC. The atherosclerosis diagnosis discrepancy, Δ The CNN prediction for vascular disease from frontal CXRs in the ambulatory cohort had an AUC of 0.85 (95% confidence interval, 0.82-0.89) and in the hospitalized cohort had an AUC of 0.69 (95% confidence interval, 0.64-0.75) against the electronic health record data. In the ambulatory cohort, the consensus radiologists' reading had an AUC of 0.89 (95% confidence interval, 0.86-0.92) relative to the CNN. Multivariate linear regression of Δ A CNN model was predictive of aortic atherosclerosis in two cohorts (one ambulatory and one hospitalized) with COVID-19. The discrepancy between the CNN model and the administrative code, Δ
Journal of the American College of Radiology : JACR
"2022-01-17T00:00:00"
[ "AyisPyrros", "Jorge MarioRodríguez-Fernández", "Stephen MBorstelmann", "Judy WawiraGichoya", "Jeanne MHorowitz", "BrianFornelli", "NasirSiddiqui", "YuryVelichko", "OluwasanmiKoyejo Sanmi", "WilliamGalanter" ]
10.1016/j.jacr.2021.09.010 10.1001/jama.2020.8598 10.5334/ijic.2500 10.1016/j.acra.2021.05.002
Bayesian-based optimized deep learning model to detect COVID-19 patients using chest X-ray image data.
Coronavirus Disease 2019 (COVID-19) is extremely infectious and rapidly spreading around the globe. As a result, rapid and precise identification of COVID-19 patients is critical. Deep Learning has shown promising performance in a variety of domains and emerged as a key technology in Artificial Intelligence. Recent advances in visual recognition are based on image classification and artefact detection within these images. The purpose of this study is to classify chest X-ray images of COVID-19 artefacts in changed real-world situations. A novel Bayesian optimization-based convolutional neural network (CNN) model is proposed for the recognition of chest X-ray images. The proposed model has two main components. The first one utilizes CNN to extract and learn deep features. The second component is a Bayesian-based optimizer that is used to tune the CNN hyperparameters according to an objective function. The large-scale and balanced dataset used comprises 10,848 images (i.e., 3616 COVID-19, 3616 normal, and 3616 Pneumonia cases). In the first ablation investigation, we compared Bayesian optimization to three distinct ablation scenarios. We used convergence charts and accuracy to compare the three scenarios. We noticed that the Bayesian search-derived optimal architecture achieved 96% accuracy. To assist qualitative researchers in addressing their research questions in a methodologically sound manner, a comparison of research methods and theme analysis methods was provided. The suggested model is shown to be more trustworthy and accurate in the real world.
Computers in biology and medicine
"2022-01-14T00:00:00"
[ "MohamedLoey", "ShakerEl-Sappagh", "SeyedaliMirjalili" ]
10.1016/j.compbiomed.2022.105213 10.1038/s41586-020-2008-3 10.1016/S0140-6736(20)30211-7 10.1056/NEJMoa2001017 10.1056/NEJMoa2001316 10.1007/s10238-020-00648-x 10.1109/ACCESS.2020.2992341 10.1016/j.measurement.2020.108288 10.1109/ISIA51297.2020.9416545 10.1109/ACCESS.2020.3030090 10.1016/j.scs.2020.102600 10.1038/s41562-020-01009-0 10.1109/ICELTICs50595.2020.9315493 10.1109/EIConCIT50028.2021.9431852 10.1038/d41586-021-00785-7 10.1038/d41586-021-00094-z 10.1016/j.rbmo.2020.06.001 10.1007/s11548-020-02305-w 10.1109/ACCESS.2021.3050852 10.1007/s12559-020-09787-5 10.1007/s00521-020-05437-x 10.1109/ACCESS.2020.3025164 10.1007/s12652-021-03075-2 10.1155/2020/8876798 10.1109/ACPR.2015.7486599 10.1109/CVPR.2015.7298594 10.1109/CVPR.2016.90 10.1371/journal.pone.0242535 10.1007/s13755-020-00119-3 10.3390/sym12040651 10.1007/s10489-020-01829-7 10.3390/electronics9091439 10.1016/j.media.2020.101794 10.1109/ICEIEC49280.2020.9152329 10.3390/info11090419 10.1177/2472630320958376 10.1155/2020/8828855 10.1186/s40537-019-0192-5 10.1007/s13748-016-0094-0 10.1016/j.compbiomed.2021.104319 10.1109/ACCESS.2020.3010287 10.1016/j.cell.2018.02.010 10.11989/JEST.1674-862X.80904120 10.1007/s12530-020-09345-2 10.1007/s41664-018-0068-2
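The Bayesian tuning of CNN hyperparameters described above can be sketched with the KerasTuner library as one concrete option; the search space, trial budget, small architecture, and three-class head are illustrative assumptions rather than the authors' configuration or objective function.

```python
# Sketch: Bayesian optimization over a small CNN's hyperparameters using KerasTuner.
import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import layers

def build_model(hp):
    model = tf.keras.Sequential([
        layers.Conv2D(hp.Int("filters", 16, 64, step=16), 3, activation="relu",
                      input_shape=(224, 224, 3)),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(hp.Int("units", 32, 256, step=32), activation="relu"),
        layers.Dropout(hp.Float("dropout", 0.2, 0.6, step=0.1)),
        layers.Dense(3, activation="softmax"),   # COVID-19 / normal / pneumonia
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(hp.Float("lr", 1e-4, 1e-2, sampling="log")),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

tuner = kt.BayesianOptimization(build_model, objective="val_accuracy", max_trials=20,
                                overwrite=True, directory="bo_search")
# tuner.search(train_ds, validation_data=val_ds, epochs=10)
# best_model = tuner.get_best_models(1)[0]
```

The Gaussian-process surrogate behind the search trades off exploring untried hyperparameter settings against exploiting settings close to the best trials seen so far, which is what allows good configurations to be found in relatively few trials.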
COVID-19 disease diagnosis from paper-based ECG trace image data using a novel convolutional neural network model.
Clinical reports show that COVID-19 disease has impacts on the cardiovascular system in addition to the respiratory system. Available COVID-19 diagnostic methods have been shown to have limitations. In addition to current diagnostic methods such as low-sensitivity standard RT-PCR tests and expensive medical imaging devices, the development of alternative methods for the diagnosis of COVID-19 disease would be beneficial for control of the COVID-19 pandemic. Further, it is important to quickly and accurately detect abnormalities caused by COVID-19 on the cardiovascular system via ECG. In this study, the diagnosis of COVID-19 disease is proposed using a novel deep Convolutional Neural Network model by using only ECG trace images created from ECG signals of COVID-19 infected patients based on the abnormalities caused by the COVID-19 virus on the cardiovascular system. An overall classification accuracy of 98.57%, 93.20%, 96.74% and AUC value of 0.9966, 0.9771, 0.9905 is achieved for COVID-19 vs. Normal, COVID-19 vs. Abnormal Heartbeats, COVID-19 vs. Myocardial Infarction binary classification tasks, respectively. In addition, an overall classification accuracy of 86.55% and 83.05% is achieved for COVID-19 vs. Abnormal Heartbeats vs. Myocardial Infarction and Normal vs. COVID-19 vs. Abnormal Heartbeats vs. Myocardial Infarction multi-classification tasks. This study is believed to have great potential to speed up the diagnosis and treatment of COVID-19 patients, saving clinicians time and facilitating the control of the pandemic.
Physical and engineering sciences in medicine
"2022-01-13T00:00:00"
[ "EmrahIrmak" ]
10.1007/s13246-022-01102-w 10.3390/ijerph17082690 10.1148/radiol.2020200905 10.1148/radiol.2020201160 10.1152/physiolgenomics.00084.2020 10.1049/ipr2.12153 10.7759/cureus.9540 10.1016/j.acra.2021.01.022 10.1111/anec.12815 10.22037/aaem.v9i1.957 10.1016/j.ihj.2020.11.007 10.1016/j.ejim.2020.06.015 10.1016/j.compbiomed.2020.103805 10.1007/s13246-020-00865-4 10.3390/SYM12040651 10.1007/s10096-020-03901-z 10.1101/2020.02.25.20021568 10.1148/ryai.2020200048 10.1016/j.media.2020.101860 10.1007/s00330-020-07042-x 10.1148/ryct.2020200075 10.1371/journal.pone.0236621 10.1186/s12911-021-01521-x 10.1093/eurheartj/ehaa408 10.1016/j.cjca.2020.03.028 10.1016/j.echo.2020.05.028 10.1016/j.dib.2021.106762 10.3390/electronics8030292 10.5152/electrica.2020.21004 10.1007/s40998-021-00426-9 10.1515/itms-2017-0003 10.1016/j.icte.2020.04.010 10.1016/j.compbiomed.2020.103792 10.1016/j.eswa.2020.114054 10.1016/j.irbm.2020.05.003 10.1016/j.compbiomed.2020.104037
Inverted bell-curve-based ensemble of deep learning models for detection of COVID-19 from chest X-rays.
Novel Coronavirus 2019 disease or COVID-19 is a viral disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The use of chest X-rays (CXRs) has become an important practice to assist in the diagnosis of COVID-19 as they can be used to detect the abnormalities developed in the infected patients' lungs. With the fast spread of the disease, many researchers across the world are striving to use several deep learning-based systems to identify the COVID-19 from such CXR images. To this end, we propose an inverted bell-curve-based ensemble of deep learning models for the detection of COVID-19 from CXR images. We first use a selection of models pretrained on ImageNet dataset and use the concept of transfer learning to retrain them with CXR datasets. Then the trained models are combined with the proposed inverted bell curve weighted ensemble method, where the output of each classifier is assigned a weight, and the final prediction is done by performing a weighted average of those outputs. We evaluate the proposed method on two publicly available datasets: the COVID-19 Radiography Database and the IEEE COVID Chest X-ray Dataset. The accuracy, F1 score and the AUC ROC achieved by the proposed method are 99.66%, 99.75% and 99.99%, respectively, in the first dataset, and, 99.84%, 99.81% and 99.99%, respectively, in the other dataset. Experimental results ensure that the use of transfer learning-based models and their combination using the proposed ensemble method result in improved predictions of COVID-19 in CXRs.
Neural computing & applications
"2022-01-12T00:00:00"
[ "AshisPaul", "ArpanBasu", "MuftiMahmud", "M ShamimKaiser", "RamSarkar" ]
10.1007/s00521-021-06737-6 10.1016/S0140-6736(20)30154-9 10.1007/s12559-020-09751-3 10.1016/j.cell.2018.02.010 10.1016/j.chest.2020.04.003 10.1109/ACCESS.2021.3050193 10.1109/TNNLS.2018.2790388 10.1109/34.273716 10.1109/ACCESS.2020.3010287 10.1016/j.inffus.2019.02.003 10.1016/j.cose.2020.101748 10.1016/j.compmedimag.2019.101660 10.1016/j.artmed.2019.101749 10.1109/TMI.2020.2995508 10.1109/ACCESS.2020.3003810 10.1007/s12539-020-00393-5 10.1007/s10489-021-02292-8 10.1007/s10489-020-01904-z 10.1016/j.asoc.2020.106742 10.1109/ACCESS.2020.2994762 10.3390/jimaging4020039 10.1109/TSMCB.2008.2009071 10.1038/s41598-019-56847-4 10.1109/JAS.2020.1003387 10.1002/jcph.1644 10.1093/jamia/ocaa280 10.1007/s10489-020-01888-w 10.3390/jimaging6060037 10.1007/s00521-020-05636-6
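The weighted-average combination of classifier outputs described above reduces to a soft-voting step once the per-model weights are known; the sketch below uses placeholder weights, since the inverted bell-curve weighting function itself is defined in the paper and is not reproduced here.

```python
# Sketch: weighted soft voting over the softmax outputs of several models.
# The weights are placeholders; in the paper they come from an inverted bell-curve function.
import numpy as np

def weighted_soft_vote(prob_list, weights):
    """prob_list: list of (N, C) softmax arrays, one per model; weights: list of floats."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                       # normalise the weights
    stacked = np.stack(prob_list)                           # (M, N, C)
    fused = np.tensordot(weights, stacked, axes=1)          # weighted average, shape (N, C)
    return fused.argmax(axis=1), fused

# Example with three hypothetical models on a batch of 2 images and 2 classes.
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.8, 0.2], [0.3, 0.7]])
p3 = np.array([[0.6, 0.4], [0.55, 0.45]])
labels, fused = weighted_soft_vote([p1, p2, p3], weights=[0.5, 0.3, 0.2])
```

Plugging in the transfer-learned networks mentioned in the abstract as the sources of `prob_list` and the paper's own weighting function for `weights` recovers the described ensemble.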
Classifier Fusion for Detection of COVID-19 from CT Scans.
The coronavirus disease (COVID-19) is an infectious disease caused by the SARS-CoV-2 virus. COVID-19 has proven to be the most infectious disease of the last few decades. This disease has infected millions of people worldwide. The inadequate availability and the limited sensitivity of the testing kits have motivated clinicians and scientists to use Computed Tomography (CT) scans to screen for COVID-19. Recent advances in technology and the availability of deep learning approaches have proved to be very promising in detecting COVID-19 with increased accuracy. However, deep learning approaches require a huge labeled training dataset, and the current availability of benchmark COVID-19 data is still small. In the limited training data scenario, a CNN usually overfits after several iterations. Hence, in this work, we have investigated different pre-trained network architectures with transfer learning for COVID-19 detection that can work even on a small medical imaging dataset. Various variants of the pre-trained ResNet model, namely ResNet18, ResNet50, and ResNet101, are investigated in the current paper for the detection of COVID-19. The experimental results reveal that the transfer-learned ResNet50 model outperformed the other models by achieving a recall of 98.80% and an F1-score of 98.41%. To further improve the results, the activations from different layers of the best-performing model are also explored for detection using support vector machine, logistic regression and K-nearest neighbor classifiers. Moreover, a classifier fusion strategy is also proposed that fuses the predictions from the different classifiers via majority voting. Experimental results reveal that by using the learned image features and the classifier fusion strategy, the recall and F1-score improved to 99.20% and 99.40%, respectively.
Circuits, systems, and signal processing
"2022-01-11T00:00:00"
[ "TaranjitKaur", "Tapan KumarGandhi" ]
10.1007/s00034-021-01939-8
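As a companion to the entry above, the sketch below illustrates its second stage: deep activations (for example, from a layer of a transfer-learned ResNet50) classified by an SVM, logistic regression, and kNN, with the predictions fused by majority voting. The random feature matrix stands in for real CNN activations, and scikit-learn's hard-voting ensemble is used as a stand-in for the authors' fusion code.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# X: deep activations extracted from a chosen layer of a transfer-learned
# ResNet50 (one row per CT scan); y: labels. Random data stands in here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))
y = rng.integers(0, 2, size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Hard (majority) voting over the three classifiers named in the abstract.
fusion = VotingClassifier(
    estimators=[("svm", SVC()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="hard")
fusion.fit(X_tr, y_tr)
print("fused accuracy:", fusion.score(X_te, y_te))
```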
BEMD-3DCNN-based method for COVID-19 detection.
The coronavirus outbreak continues to spread around the world and no one knows when it will stop. Therefore, from the first day of the identification of the virus in Wuhan, China, scientists have launched numerous research projects to understand the nature of the virus, how to detect it, and how to find the most effective medicine to help and protect patients. Importantly, a rapid diagnostic and detection system is a priority and should be developed to stop COVID-19 from spreading. Medical imaging techniques have been used for this purpose. Current research is focused on exploiting different backbones like VGG, ResNet, and DenseNet, or on combining them to detect COVID-19. When using these backbones alone, many aspects cannot be analyzed, such as the spatial and contextual information in the images, although this information can be useful for more robust detection performance. In this paper, we used a 3D representation of the data as input for the proposed 3DCNN-based deep learning model. The process includes using the Bi-dimensional Empirical Mode Decomposition (BEMD) technique to decompose the original image into intrinsic mode functions (IMFs), and then building a video of these IMF images. The formed video is used as input for the 3DCNN model to classify and detect the COVID-19 virus. The 3DCNN model consists of a 3D VGG-16 backbone followed by a Context-Aware Attention (CAA) module, and then fully connected layers for classification. Each CAA module takes the feature maps of different blocks of the backbone, which allows learning from different feature maps. In our experiments, we used 6484 X-ray images, of which 1802 were COVID-19 positive cases, 1910 normal cases, and 2772 pneumonia cases. The experimental results showed that our proposed technique achieved the desired results on the selected dataset. Additionally, the use of the 3DCNN model with contextual information processing through the CAA modules achieved better performance.
Computers in biology and medicine
"2022-01-09T00:00:00"
[ "AliRiahi", "OmarElharrouss", "SomayaAl-Maadeed" ]
10.1016/j.compbiomed.2021.105188 10.1038/s41598-021-96601-3 10.20944/preprints202003.0300.v1 10.1109/RBME.2020.2987975 10.1109/ACCESS.2020.3010287 10.1155/2015/769478 10.1109/ICASSP.2011.5946778 10.1007/s42600-021-00151-6 10.1109/TII.2021.3057683
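The entry above stacks BEMD intrinsic mode function images into a "video" that a 3D CNN consumes. The PyTorch sketch below only illustrates how such a depth-stacked input is handled by 3D convolutions; the BEMD decomposition, the 3D VGG-16 backbone, and the context-aware attention (CAA) modules are omitted, so the tiny network here is an illustrative assumption, not the authors' model.

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Toy stand-in for the paper's 3D VGG-16 + CAA model: it only shows how a
    stack of IMF images (the depth dimension) is consumed by 3D convolutions."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):              # x: (batch, 1, n_imfs, H, W)
        h = self.features(x).flatten(1)
        return self.classifier(h)

# A batch of 2 "videos", each built from 4 IMF images of a 64x64 X-ray.
volumes = torch.randn(2, 1, 4, 64, 64)
logits = Tiny3DCNN()(volumes)
print(logits.shape)                    # torch.Size([2, 3])
```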
Artificial intelligence for stepwise diagnosis and monitoring of COVID-19.
Main challenges for COVID-19 include the lack of a rapid diagnostic test, a suitable tool to monitor and predict a patient's clinical course and an efficient way for data sharing among multiple centers. We thus developed a novel artificial intelligence system based on deep learning (DL) and federated learning (FL) for the diagnosis, monitoring, and prediction of a patient's clinical course. CT imaging derived from 6 different multicenter cohorts was used for a stepwise diagnostic algorithm to diagnose COVID-19, with or without clinical data. Patients with more than 3 consecutive CT images were used to train the monitoring algorithm. FL has been applied for decentralized refinement of independently built DL models. A total of 1,552,988 CT slices from 4804 patients were used. The model can diagnose COVID-19 based on CT alone with the AUC being 0.98 (95% CI 0.97-0.99), and outperforms the radiologist's assessment. We have also successfully tested the incorporation of the DL diagnostic model with the FL framework. Its auto-segmentation analyses correlated well with those by radiologists and achieved a high Dice's coefficient of 0.77. It can produce a predictive curve of a patient's clinical course if serial CT assessments are available. The system has high consistency in diagnosing COVID-19 based on CT, with or without clinical data. Alternatively, it can be implemented on an FL platform, which would potentially encourage data sharing in the future. It can also produce an objective predictive curve of a patient's clinical course for visualization. • CoviDet could diagnose COVID-19 based on chest CT with high consistency; this outperformed the radiologist's assessment. Its auto-segmentation analyses correlated well with those by radiologists and could potentially monitor and predict a patient's clinical course if serial CT assessments are available. It can be integrated into the federated learning framework. • CoviDet can be used as an adjunct to aid clinicians with the CT diagnosis of COVID-19 and can potentially be used for disease monitoring; federated learning can potentially open opportunities for global collaboration.
European radiology
"2022-01-07T00:00:00"
[ "HengruiLiang", "YuchenGuo", "XiangruChen", "Keng-LeongAng", "YuweiHe", "NaJiang", "QiangDu", "QingsiZeng", "LigongLu", "ZebinGao", "LinduoLi", "QuanzhengLi", "FangxingNie", "GuiguangDing", "GaoHuang", "AilanChen", "YiminLi", "WeijieGuan", "LingSang", "YuandaXu", "HuaiChen", "ZishengChen", "ShiyueLi", "NuofuZhang", "YingChen", "DanxiaHuang", "RunLi", "JianfuLi", "BoCheng", "YiZhao", "CaichenLi", "ShanXiong", "RunchenWang", "JunLiu", "WeiWang", "JunHuang", "FeiCui", "TaoXu", "Fleming Y MLure", "MeixiaoZhan", "YuanyiHuang", "QiangYang", "QionghaiDai", "WenhuaLiang", "JianxingHe", "NanshanZhong" ]
10.1007/s00330-021-08334-6 10.1016/j.crad.2017.11.015 10.21037/atm.2020.03.132 10.1016/j.cell.2020.04.045 10.1016/j.compbiomed.2020.103795 10.1016/j.ejrad.2020.109041 10.2196/19569 10.1038/s41467-020-17971-2 10.1109/TMI.2020.2994908 10.1109/TMI.2020.2995508 10.1109/TMI.2020.2996256 10.1109/TIP.2018.2857219
Fully automatic pipeline of convolutional neural networks and capsule networks to distinguish COVID-19 from community-acquired pneumonia via CT images.
Chest computed tomography (CT) is crucial in the diagnosis of coronavirus disease 2019 (COVID-19). However, the persistent pandemic and similar CT manifestations between COVID-19 and community-acquired pneumonia (CAP) raise methodological requirements. A fully automatic pipeline of deep learning is proposed for distinguishing COVID-19 from CAP using CT images. Inspired by the diagnostic process of radiologists, the pipeline comprises four connected modules for lung segmentation, selection of slices with lesions, slice-level prediction, and patient-level prediction. The roles of the first and second modules and the effectiveness of the capsule network for slice-level prediction were investigated. A dataset of 326 CT scans was collected to train and test the pipeline. Another public dataset of 110 patients was used to evaluate the generalization capability. LinkNet exhibited the largest intersection over union (0.967) and Dice coefficient (0.983) for lung segmentation. For the selection of slices with lesions, the capsule network with the ResNet50 block achieved an accuracy of 92.5% and an area under the curve (AUC) of 0.933. The capsule network using the DenseNet121 block demonstrated better performance for slice-level prediction, with an accuracy of 97.1% and AUC of 0.992. For both datasets, the prediction accuracy of our pipeline was 100% at the patient level. The proposed fully automatic deep learning pipeline of deep learning can distinguish COVID-19 from CAP via CT images rapidly and accurately, thereby accelerating diagnosis and augmenting the performance of radiologists. This pipeline is convenient for use by radiologists and provides explainable predictions.
Computers in biology and medicine
"2022-01-04T00:00:00"
[ "QianqianQi", "ShouliangQi", "YananWu", "ChenLi", "BinTian", "ShuyueXia", "JigangRen", "LimingYang", "HanlinWang", "HuiYu" ]
10.1016/j.compbiomed.2021.105182 10.1148/radiol.2020200905 10.1016/S0140-6736(20)30628-0 10.1016/S2213-2600(20)30079-5 10.1148/radiol.2020200642 10.1093/cid/ciaa461 10.1016/j.jinf.2020.03.051 10.1016/j.ajem.2020.04.016 10.1109/ACCESS.2020.3005510 10.1016/j.patcog.2021.107848 10.1016/j.jinf.2020.04.004 10.1016/j.patrec.2021.09.012 10.1007/s00330-020-07628-5 10.1148/radiol.2020200823 10.1109/TMI.2020.2993291 10.1101/2020.12.19.20248530 10.1109/ICCCS49678.2020.9277077 10.1109/ACCESS.2020.2994762 10.3233/XST-200715 10.1101/2020.04.24.20078584 10.3390/s21020455 10.1109/tmi.2020.2995508 10.1016/j.cmpb.2021.106406 10.1109/VCIP.2017.8305148 10.1016/j.cell.2020.04.045 10.1007/978-3-319-24574-4_28 10.1117/1.JMI.6.1.014006 10.1007/978-3-030-00889-5_1 10.1109/TMI.2019.2959609 10.1109/CVPR.2016.90 10.1038/s41746-021-00399-3 10.1007/s00259-020-04929-1 10.1016/j.patrec.2021.10.027 10.1056/NEJMoa2001316 10.1002/jmv.27335 10.1016/j.brainresbull.2021.08.012 10.15446/ing.investig.v42n1.90289 10.1101/2020.11.07.20227504 10.2807/1560-7917.ES.2020.25.3.2000045 10.1056/NEJMoa2001017 10.1056/NEJMoa2001017 10.1056/NEJMra072149 10.1177/1063293X211021435 10.1111/exsy.12776 10.5152/dir.2015.15221 10.1148/radiol.2020200230 10.1016/j.patcog.2020.107613 10.1016/j.patcog.2020.107747 10.1016/j.patcog.2021.108071 10.3390/s21175878 10.3390/diagnostics11050893 10.1016/j.bspc.2020.102296 10.1186/s12880-021-00640-1 10.1016/j.neucom.2020.07.144 10.3390/rs11050494 10.1109/TGRS.2018.2871782 10.3390/s21165575 10.1016/j.mehy.2020.109761 10.1109/ACCESS.2019.2920980 10.1109/ACCESS.2019.2933670 10.7150/thno.46428 10.1016/j.patcog.2021.108168
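The final module of the pipeline above turns slice-level predictions into a patient-level decision. The abstract does not spell out the aggregation rule, so the sketch below assumes simple averaging of the per-slice COVID-19 probabilities over the selected lesion slices, followed by thresholding.

```python
import numpy as np

def patient_level_prediction(slice_probs, threshold=0.5):
    """slice_probs: per-slice COVID-19 probabilities for the slices that the
    lesion-selection module kept. Averaging them (an assumed rule) yields one
    patient-level score that is thresholded into a final label."""
    score = float(np.mean(slice_probs))
    return ("COVID-19" if score >= threshold else "CAP"), score

# Hypothetical capsule-network outputs for five lesion slices of one patient.
print(patient_level_prediction([0.91, 0.88, 0.74, 0.95, 0.82]))
```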
COV-ADSX: An Automated Detection System using X-ray Images, Deep Learning, and XGBoost for COVID-19.
Following the COVID-19 pandemic, scientists have been looking for different ways to diagnose COVID-19, and these efforts have led to a variety of solutions. One of the common methods of detecting infected people is chest radiography. In this paper, an Automated Detection System using X-ray images (COV-ADSX) is proposed, which employs a deep neural network and XGBoost to detect COVID-19. COV-ADSX was implemented using the Django web framework, which allows the user to upload an X-ray image and view the results of the COVID-19 detection and image's heatmap, which helps the expert to evaluate the chest area more accurately.
Software impacts
"2022-01-04T00:00:00"
[ "SharifHasani", "HamidNasiri" ]
10.1016/j.simpa.2021.100210 10.1038/s41586-020-2008-3 10.1016/j.ringps.2021.100034 10.1016/j.ijmst.2021.10.006 10.1016/j.apt.2021.09.020
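COV-ADSX, described above, shows the user a heatmap of the chest area alongside the detection result. The abstract does not state which heatmap technique is used, so the sketch below assumes a Grad-CAM-style map computed from a Keras CNN; the model and layer names in the usage line are hypothetical placeholders.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer, class_index):
    """Grad-CAM-style heatmap (an assumed choice; the paper only mentions a
    heatmap): weight the last conv feature maps by the gradient of the chosen
    class score and combine them into a single saliency map."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(last_conv_layer).output,
                                 model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))           # one weight per map
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)                   # normalise to [0, 1]
    return cam.numpy()

# Usage (hypothetical names): heatmap = grad_cam(cnn, xray, "conv5_block3_out", 1)
```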
A Review on Deep Learning Techniques for the Diagnosis of Novel Coronavirus (COVID-19).
Novel coronavirus (COVID-19) outbreak, has raised a calamitous situation all over the world and has become one of the most acute and severe ailments in the past hundred years. The prevalence rate of COVID-19 is rapidly rising every day throughout the globe. Although no vaccines for this pandemic have been discovered yet, deep learning techniques proved themselves to be a powerful tool in the arsenal used by clinicians for the automatic diagnosis of COVID-19. This paper aims to overview the recently developed systems based on deep learning techniques using different medical imaging modalities like Computer Tomography (CT) and X-ray. This review specifically discusses the systems developed for COVID-19 diagnosis using deep learning techniques and provides insights on well-known data sets used to train these networks. It also highlights the data partitioning techniques and various performance measures developed by researchers in this field. A taxonomy is drawn to categorize the recent works for proper insight. Finally, we conclude by addressing the challenges associated with the use of deep learning methods for COVID-19 detection and probable future trends in this research area. The aim of this paper is to facilitate experts (medical or otherwise) and technicians in understanding the ways deep learning techniques are used in this regard and how they can be potentially further utilized to combat the outbreak of COVID-19.
IEEE access : practical innovations, open solutions
"2022-01-04T00:00:00"
[ "Md MilonIslam", "FakhriKarray", "RedaAlhajj", "JiaZeng" ]
10.1109/ACCESS.2021.3058537 10.1080/07391102.2020.1767212 10.1109/ICCSRE.2019.8807741 10.1007/s10489-020-01900-3
Development of computer-aided model to differentiate COVID-19 from pulmonary edema in lung CT scan: EDECOVID-net.
The efforts made to prevent the spread of COVID-19 face specific challenges in diagnosing COVID-19 patients and differentiating them from patients with pulmonary edema. Although systemically administered pulmonary vasodilators and acetazolamide are of great benefit for treating pulmonary edema, they should not be used to treat COVID-19 as they carry the risk of several adverse consequences, including worsening the matching of ventilation and perfusion, impaired carbon dioxide transport, systemic hypotension, and increased work of breathing. This study proposes a machine learning-based method (EDECOVID-net) that automatically differentiates the COVID-19 symptoms from pulmonary edema in lung CT scans using radiomic features. To the best of our knowledge, EDECOVID-net is the first method to differentiate COVID-19 from pulmonary edema and a helpful tool for diagnosing COVID-19 at early stages. The EDECOVID-net has been proposed as a new machine learning-based method with some advantages, such as having a simple structure and requiring few mathematical calculations. In total, 13,717 imaging patches, including 5759 COVID-19 and 7958 edema images, were extracted from CT sections by a specialist radiologist. The EDECOVID-net can distinguish the patients with COVID-19 from those with pulmonary edema with an accuracy of 0.98. In addition, the accuracy of the EDECOVID-net algorithm is compared with other machine learning methods, such as VGG-16 (Acc = 0.94), VGG-19 (Acc = 0.96), Xception (Acc = 0.95), ResNet101 (Acc = 0.97), and DenseNet201 (Acc = 0.97).
Computers in biology and medicine
"2022-01-02T00:00:00"
[ "ElenaVelichko", "FaridoddinShariaty", "MahdiOrooji", "VitaliiPavlov", "TatianaPervunina", "SergeyZavjalov", "RaziehKhazaei", "Amir RezaRadmard" ]
10.1016/j.compbiomed.2021.105172
Tiled Sparse Coding in Eigenspaces for Image Classification.
Automating the diagnosis of medical images is currently a challenging task. Computer Aided Diagnosis (CAD) systems can be a powerful tool for clinicians, especially in situations when hospitals are overwhelmed. These tools are usually based on artificial intelligence (AI), a field that has recently been revolutionized by deep learning approaches. These alternatives usually achieve high performance through complex solutions, leading to a high computational cost and the need for large databases. In this work, we propose a classification framework based on sparse coding. Images are first partitioned into different tiles, and a dictionary is built after applying PCA to these tiles. The original signals are then expressed as a linear combination of the elements of the dictionary. Then, they are reconstructed by iteratively deactivating the elements associated with each component. Classification is finally performed employing as features the subsequent reconstruction errors. Performance is evaluated in a real context where the task is to distinguish between four different classes: control versus bacterial pneumonia versus viral pneumonia versus COVID-19. Our system differentiates between pneumonia patients and controls with an accuracy of 97.74%, whereas in the 4-class context the accuracy is 86.73%. The excellent results and the pioneering use of sparse coding in this scenario show that our proposal can assist clinicians when their workload is high.
International journal of neural systems
"2021-12-31T00:00:00"
[ "Juan EArco", "AndrésOrtiz", "JavierRamírez", "Yu-DongZhang", "Juan MGórriz" ]
10.1142/S0129065722500071
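The entry above builds a PCA dictionary from image tiles and uses reconstruction errors, obtained by deactivating dictionary elements, as classification features. The sketch below is a simplified approximation of that idea with scikit-learn: it zeroes one PCA component at a time and records the mean tile reconstruction error. The tile size, component count, and random training data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def tile(image, size=8):
    """Split a square grayscale image into non-overlapping size x size tiles."""
    h, w = image.shape
    tiles = [image[i:i + size, j:j + size].ravel()
             for i in range(0, h - size + 1, size)
             for j in range(0, w - size + 1, size)]
    return np.stack(tiles)

def reconstruction_error_features(image, pca):
    """Simplified version of the reconstruction-error idea: for every PCA
    component, reconstruct the tiles with that component zeroed out and record
    the resulting error. The vector of errors is the image descriptor."""
    tiles = tile(image)
    codes = pca.transform(tiles)
    feats = []
    for k in range(pca.n_components_):
        deactivated = codes.copy()
        deactivated[:, k] = 0.0                      # "deactivate" one element
        recon = pca.inverse_transform(deactivated)
        feats.append(np.mean((tiles - recon) ** 2))
    return np.array(feats)

rng = np.random.default_rng(0)
train_tiles = np.vstack([tile(rng.normal(size=(64, 64))) for _ in range(10)])
pca = PCA(n_components=16).fit(train_tiles)          # dictionary built from tiles
print(reconstruction_error_features(rng.normal(size=(64, 64)), pca).shape)
```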
Radiology During the COVID-19 Pandemic: Mapping Radiology Literature in 2020.
Our aim was to assess articles published in the field of radiology, nuclear medicine, and medical imaging in 2020 and analyze the linkage of radiology-related topics with coronavirus disease 2019 (COVID-19) through literature mapping along with a bibliometric analysis for publications. We performed a search on the Web of Science Core Collection database for articles in the field of radiology, nuclear medicine, and medical imaging published in 2020. We analyzed the included articles using VOSviewer software, where we analyzed the co-occurrence of keywords, representing major topics discussed. Of the resulting topics, a literature map was created and linkage analysis was done. A total of 24,748 articles were published in the field of radiology, nuclear medicine, and medical imaging in 2020. We found a total of 61,267 keywords; only 78 keywords occurred more than 250 times. COVID-19 had 449 occurrences and 29 links, with a total link strength of 271. MRI was the topic most commonly appearing in 2020 radiology publications, while "computed tomography" had the highest linkage strength with COVID-19, with a linkage strength of 149, representing 54.98% of the total COVID-19 linkage strength, followed by "radiotherapy" and "deep and machine learning". The top cited paper had a total of 1,687 citations. Nine out of the 10 most cited articles discussed COVID-19 and included "COVID-19" or "coronavirus" in their title, including the top cited paper. While MRI was the topic that dominated, CT had the highest linkage strength with COVID-19 and represented the topic of the top cited articles in 2020 radiology publications.
Current medical imaging
"2021-12-31T00:00:00"
[ "NosaibaAl-Ryalat", "LnaMalkawi", "Ala'aAbu Salhiyeh", "FaisalAbualteen", "GhaidaAbdallah", "BayanAl Omari", "Saif AldeenAlRyalat" ]
10.2174/1573405618666211230105631
A Rapid Artificial Intelligence-Based Computer-Aided Diagnosis System for COVID-19 Classification from CT Images.
The excessive number of COVID-19 cases reported worldwide so far, supplemented by a high rate of false alarms in its diagnosis using the conventional polymerase chain reaction method, has led to an increased number of high-resolution computed tomography (CT) examinations conducted. The manual inspection of the latter, besides being slow, is susceptible to human errors, especially because of an uncanny resemblance between the CT scans of COVID-19 and those of pneumonia, and therefore demands a proportional increase in the number of expert radiologists. Artificial intelligence-based computer-aided diagnosis of COVID-19 using CT scans has recently been proposed and has proven its effectiveness in terms of accuracy and computation time. In this work, a similar framework for classification of COVID-19 using CT scans is proposed. The proposed method includes four core steps: (i) preparing a database of three different classes, namely COVID-19, pneumonia, and normal; (ii) modifying three pretrained deep learning models, namely VGG16, ResNet50, and ResNet101, for the classification of COVID-19-positive scans; (iii) proposing an activation function and improving the firefly algorithm for feature selection; and (iv) fusing the optimal selected features using a descending-order serial approach and classifying them using multiclass supervised learning algorithms. We demonstrate that when this method is applied to a publicly available dataset, the system attains an improved accuracy of 97.9% with a computational time of approximately 34 seconds.
Behavioural neurology
"2021-12-31T00:00:00"
[ "Hassaan HaiderSyed", "Muhammad AttiqueKhan", "UsmanTariq", "AmmarArmghan", "FayadhAlenezi", "Junaid AliKhan", "SeungminRho", "SeifedineKadry", "VenkatesanRajinikanth" ]
10.1155/2021/2560388 10.1590/1806-9282.66.7.880 10.1016/j.chaos.2020.110190 10.1016/j.apacoust.2020.107256 10.1007/s10489-020-01888-w 10.32604/cmc.2021.017337 10.1007/s00521-020-05410-8 10.3390/s21217286 10.11591/ijece.v11i1.pp365-374 10.1007/s10044-020-00950-0 10.1007/s00779-020-01494-0 10.32604/cmc.2021.016816 10.1016/j.ins.2021.05.035 10.1109/TIM.2020.3033072 10.32604/cmc.2021.018040 10.32604/cmc.2021.017101 10.1111/exsy.12497 10.1155/2021/5524637 10.4018/978-1-7998-1230-2 10.32604/cmc.2021.013191 10.32604/cmc.2022.020140 10.3390/v12070769 10.1002/jemt.23447 10.1007/s00521-021-06490-w 10.1002/int.22691 10.1016/j.patrec.2019.11.034 10.1109/ACCESS.2020.3010448 10.32604/cmes.2020.011380 10.1007/s12559-020-09787-5 10.1016/j.patrec.2020.12.015 10.1007/s12652-021-02967-7 10.1007/s11760-020-01820-2 10.1016/j.compeleceng.2020.106960 10.1109/JBHI.2020.3019505 10.1016/j.compbiomed.2020.103792 10.1007/s13246-020-00865-4 10.1016/j.imu.2020.100412 10.1007/s12652-020-02669-6 10.1016/j.asoc.2020.106906 10.1166/jmihi.2020.3222 10.1109/ACCESS.2020.3034217 10.1016/j.eswa.2019.112957 10.1007/s11042-021-10567-y 10.1016/j.patrec.2020.09.010
Contribution of artificial intelligence applications developed with the deep learning method to the diagnosis of COVID-19 pneumonia on computed tomography.
Computed tomography (CT) is an auxiliary modality in the diagnosis of the novel Coronavirus (COVID-19) disease and can guide physicians in the presence of lung involvement. In this study, we aimed to investigate the contribution of deep learning to diagnosis in patients with typical COVID-19 pneumonia findings on CT. This study retrospectively evaluated 690 lesions obtained from 35 patients diagnosed with COVID-19 pneumonia based on typical findings on non-contrast high-resolution CT (HRCT) in our hospital. The diagnoses of the patients were also confirmed by other necessary tests. HRCT images were assessed in the parenchymal window, and COVID-19 lesions were detected in the images obtained. For the deep Convolutional Neural Network (CNN) algorithm, a confusion matrix was used, based on the TensorFlow framework in Python. A total of 596 labeled lesions obtained from 224 sections of the images were used for training the algorithm, 89 labeled lesions from 27 sections were used in validation, and 67 labeled lesions from 25 images in testing. Fifty-six of the 67 lesions used in the testing stage were accurately detected by the algorithm while the remaining 11 were not recognized. There were no false positives. The recall, precision, and F1 score values in the test group were 83.58%, 100%, and 91.06%, respectively. We successfully detected the COVID-19 pneumonia lesions on CT images using the algorithms created with artificial intelligence. The integration of deep learning into the diagnostic stage in medicine is an important step for the diagnosis of diseases that can cause lung involvement in possible future pandemics.
Tuberkuloz ve toraks
"2021-12-28T00:00:00"
[ "NevinAydın", "ÖzerÇelik" ]
10.5578/tt.20219606
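The test-set figures reported above (56 of 67 lesions detected, no false positives) can be reproduced directly from the confusion counts, as the short check below shows.

```python
# Reported test counts: 67 labeled lesions, 56 detected, 11 missed, 0 false positives.
tp, fn, fp = 56, 11, 0

recall = tp / (tp + fn)                               # 56 / 67 = 0.8358 -> 83.58%
precision = tp / (tp + fp)                            # 56 / 56 = 1.00   -> 100%
f1 = 2 * precision * recall / (precision + recall)    # 0.9106 -> 91.06

print(f"recall={recall:.4f}, precision={precision:.4f}, F1={f1:.4f}")
```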
xViTCOS: Explainable Vision Transformer Based COVID-19 Screening Using Radiography.
IEEE journal of translational engineering in health and medicine
"2021-12-28T00:00:00"
[ "Arnab KumarMondal", "ArnabBhattacharjee", "ParagSingla", "A PPrathosh" ]
10.1109/JTEHM.2021.3134096
Automatic coronavirus disease 2019 diagnosis based on chest radiography and deep learning - Success story or dataset bias?
Over the last 2 years, the artificial intelligence (AI) community has presented several automatic screening tools for coronavirus disease 2019 (COVID-19) based on chest radiography (CXR), with reported accuracies often well over 90%. However, it has been noted that many of these studies have likely suffered from dataset bias, leading to overly optimistic results. The purpose of this study was to thoroughly investigate to what extent biases have influenced the performance of a range of previously proposed and promising convolutional neural networks (CNNs), and to determine what performance can be expected with current CNNs on a realistic and unbiased dataset. Five CNNs for COVID-19 positive/negative classification were implemented for evaluation, namely VGG19, ResNet50, InceptionV3, DenseNet201, and COVID-Net. To perform both internal and cross-dataset evaluations, four datasets were created. The first dataset Valencian Region Medical Image Bank (BIMCV) followed strict reverse transcriptase-polymerase chain reaction (RT-PCR) test criteria and was created from a single reliable open access databank, while the second dataset (COVIDxB8) was created through a combination of six online CXR repositories. The third and fourth datasets were created by combining the opposing classes from the BIMCV and COVIDxB8 datasets. To decrease inter-dataset variability, a pre-processing workflow of resizing, normalization, and histogram equalization were applied to all datasets. Classification performance was evaluated on unseen test sets using precision and recall. A qualitative sanity check was performed by evaluating saliency maps displaying the top 5%, 10%, and 20% most salient segments in the input CXRs, to evaluate whether the CNNs were using relevant information for decision making. In an additional experiment and to further investigate the origin of potential dataset bias, all pixel values outside the lungs were set to zero through automatic lung segmentation before training and testing. When trained and evaluated on the single online source dataset (BIMCV), the performance of all CNNs is relatively low (precision: 0.65-0.72, recall: 0.59-0.71), but remains relatively consistent during external evaluation (precision: 0.58-0.82, recall: 0.57-0.72). On the contrary, when trained and internally evaluated on the combinatory datasets, all CNNs performed well across all metrics (precision: 0.94-1.00, recall: 0.77-1.00). However, when subsequently evaluated cross-dataset, results dropped substantially (precision: 0.10-0.61, recall: 0.04-0.80). For all datasets, saliency maps revealed the CNNs rarely focus on areas inside the lungs for their decision-making. However, even when setting all pixel values outside the lungs to zero, classification performance does not change and dataset bias remains. Results in this study confirm that when trained on a combinatory dataset, CNNs tend to learn the origin of the CXRs rather than the presence or absence of disease, a behavior known as short-cut learning. The bias is shown to originate from differences in overall pixel values rather than embedded text or symbols, despite consistent image pre-processing. When trained on a reliable, and realistic single-source dataset in which non-lung pixels have been masked, CNNs currently show limited sensitivity (<70%) for COVID-19 infection in CXR, questioning their use as a reliable automatic screening tool.
Medical physics
"2021-12-25T00:00:00"
[ "JenniferDhont", "CecileWolfs", "FrankVerhaegen" ]
10.1002/mp.15419 10.1101/2020.04.21.20063263 10.1101/2020.05.24.20111922 10.1109/access.2020.3010287 10.1109/BIBM49941.2020.9313304 10.1109/CVPR.2011.5995347 10.1109/CVPR.2016.90 10.1109/ICCC51575.2020.9344870 10.1109/ICCV.2019.00505 10.1101/2021.02.11.20196766 10.1101/2020.04.24.20078949 10.1109/ICIIP47207.2019.8985892
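One experiment in the study above sets every pixel outside the lungs to zero before training and testing. Given a binary lung mask (assumed here to come from any automatic lung segmentation model), that masking step reduces to a single NumPy operation, sketched below with toy data.

```python
import numpy as np

def mask_non_lung(cxr, lung_mask):
    """Zero every pixel outside the lungs so a classifier cannot exploit
    non-pulmonary cues (text markers, borders, overall intensity outside the
    thorax). `lung_mask` is a boolean array from any lung segmentation model."""
    return np.where(lung_mask, cxr, 0)

# Toy example: a random "CXR" with a rectangular stand-in for the lung fields.
cxr = np.random.rand(256, 256).astype(np.float32)
lung_mask = np.zeros_like(cxr, dtype=bool)
lung_mask[60:200, 40:220] = True
masked = mask_non_lung(cxr, lung_mask)
print(masked[lung_mask].size, (masked[~lung_mask] == 0).all())
```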
A Deep Learning Ensemble Approach for Automated COVID-19 Detection from Chest CT Images.
The aim of this study was to evaluate the performance of an automated COVID-19 detection method based on a transfer learning technique that makes use of chest computed tomography (CT) images. In this study, we used a publicly available multiclass CT scan dataset containing 4171 CT scans of 210 different patients. In particular, we extracted features from the CT images using a set of convolutional neural networks (CNNs) that had been pretrained on the ImageNet dataset as feature extractors, and we then selected a subset of these features using the Information Gain filter. The resulting feature vectors were then used to train a set of k Nearest Neighbors classifiers with 10-fold cross validation to assess the classification performance of the features that had been extracted by each CNN. Finally, a majority voting approach was used to classify each image into two different classes: COVID-19 and NO COVID-19. A total of 414 images of the test set (10% of the complete dataset) were correctly classified, and only 4 were misclassified, yielding a final classification accuracy of 99.04%. The high performance achieved by the method could make it a feasible option for assisting radiologists in COVID-19 diagnosis through the use of CT images.
Journal of clinical medicine
"2021-12-25T00:00:00"
[ "GaetanoZazzaro", "FrancescoMartone", "GianpaoloRomano", "LuigiPavone" ]
10.3390/jcm10245982 10.1016/S2468-2667(20)30074-8 10.1007/s10044-021-00984-y 10.1148/radiol.2020200330 10.1128/JCM.00297-20 10.3348/kjr.2020.0195 10.1001/jama.2020.8259 10.1016/j.jinf.2020.03.007 10.1148/radiol.2020200230 10.1097/JCMA.0000000000000336 10.1136/bmjopen-2020-042946 10.1148/radiol.2020200823 10.1016/j.artmed.2020.101935 10.1007/s13246-020-00865-4 10.1109/JAS.2020.1003393 10.1007/s10489-020-01902-1 10.1016/j.bbe.2021.05.013 10.1117/12.2588672 10.3390/s21020455 10.1016/j.eng.2020.04.010 10.1007/s00330-021-07715-1 10.1007/s42979-021-00782-7 10.1016/j.iot.2021.100377 10.1101/2020.04.24.20078584 10.3390/app11178227 10.1109/TKDE.2009.191 10.1109/JPROC.2020.3004555 10.1109/cvpr.2017.243 10.1109/cvpr.2015.7298594 10.1109/CVPR.2016.90 10.1109/CVPR.2017.195 10.1109/INISTA.2017.8001122 10.1007/BF00153759 10.11591/eei.v9i1.1464
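The ensemble described above extracts features with several pretrained CNNs, filters them with an Information Gain criterion, classifies each feature set with kNN, and fuses the per-CNN predictions by majority voting. The sketch below mirrors that pipeline with scikit-learn, using mutual information as the information-gain-style filter; the random feature matrices, the number of retained features, and k = 5 are assumptions.

```python
import numpy as np
from scipy.stats import mode
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier

def knn_on_selected_features(X_train, y_train, X_test, k_features=100):
    """Information-gain-style filter (mutual information) followed by kNN,
    mirroring the per-CNN branch of the ensemble described in the abstract."""
    selector = SelectKBest(mutual_info_classif, k=min(k_features, X_train.shape[1]))
    Xtr = selector.fit_transform(X_train, y_train)
    Xte = selector.transform(X_test)
    return KNeighborsClassifier(n_neighbors=5).fit(Xtr, y_train).predict(Xte)

# Hypothetical feature matrices from three different pretrained CNNs.
rng = np.random.default_rng(0)
y_train, y_test = rng.integers(0, 2, 150), rng.integers(0, 2, 50)
per_cnn_preds = [
    knn_on_selected_features(rng.normal(size=(150, 256)), y_train,
                             rng.normal(size=(50, 256)))
    for _ in range(3)]
final = mode(np.stack(per_cnn_preds), axis=0).mode.ravel()   # majority vote
print(final[:10])
```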
3D virtual histopathology of cardiac tissue from Covid-19 patients based on phase-contrast X-ray tomography.
For the first time, we have used phase-contrast X-ray tomography to characterize the three-dimensional (3d) structure of cardiac tissue from patients who succumbed to Covid-19. By extending conventional histopathological examination by a third dimension, the delicate pathological changes of the vascular system of severe Covid-19 progressions can be analyzed, fully quantified and compared to other types of viral myocarditis and controls. To this end, cardiac samples with a cross-section of 3.5 mm were scanned at a laboratory setup as well as at a synchrotron radiation facility in a parallel beam configuration. The vascular network was segmented by a deep learning architecture suitable for 3d datasets (V-net), trained by sparse manual annotations. Pathological alterations of vessels, concerning the variation of diameters and the amount of small holes, were observed, indicative of an elevated occurrence of intussusceptive angiogenesis, and were also confirmed by high-resolution cone beam X-ray tomography and scanning electron microscopy. Furthermore, we implemented a fully automated analysis of the tissue structure in the form of shape measures based on the structure tensor. The corresponding distributions show that the histopathology of Covid-19 differs from both influenza and typical coxsackie virus myocarditis.
eLife
"2021-12-22T00:00:00"
[ "MariusReichardt", "PatrickMoller Jensen", "VedranaAndersen Dahl", "AndersBjorholm Dahl", "MaximilianAckermann", "HarshitShah", "FlorianLänger", "ChristopherWerlein", "Mark PKuehnel", "DannyJonigk", "TimSalditt" ]
10.7554/eLife.71359 10.1007/s10456-012-9294-9 10.1007/978-1-4939-1462-3_5 10.1183/13993003.03147-2020 10.1056/NEJMoa2015432 10.1093/eurheartj/ehaa092 10.1161/CIRCULATIONAHA.120.050097 10.1063/1.4818737 10.1161/CIRCULATIONAHA.120.050754 10.1007/s00059-020-04909-z 10.1063/1.125225 10.1364/josaa.26.000890 10.1038/s41598-019-43407-z 10.1016/j.ijcard.2020.03.087 10.7554/eLife.60408 10.1097/SLA.0b013e31820563a8 10.1107/S1600577520011327 10.1080/09540091.2012.664122 10.1007/s00414-020-02500-z 10.1016/j.carpath.2020.107300 10.1016/j.cell.2020.02.052 10.1007/978-3-030-20205-7 10.1016/j.jacc.2020.11.031 10.1088/1367-2630/aa764b 10.1007/s10853-009-4016-4 10.1006/cgip.1994.1042 10.1107/S1600577520002398 10.1111/his.14134 10.1007/s10456-014-9428-3 10.1093/bioinformatics/btz423 10.1109/3DV.2016.79 10.1038/s41569-020-0413-9 10.1117/1.JMI.7.2.023501 10.1101/2021.09.16.460594 10.1002/ejhf.1828 10.1073/pnas.1801678115 10.1364/opex.12.002960 10.1016/j.ultramic.2015.05.002 10.1364/OE.24.025129 10.1101/2021.02.03.429481 10.1093/cvr/cvaa160 10.1016/s1361-8415(02)00053-1 10.7326/M20-2003 10.1007/s15010-020-01424-5 10.1038/s41569-020-0360-5
The COVID-19 epidemic analysis and diagnosis using deep learning: A systematic literature review and future directions.
Since December 2019, the COVID-19 outbreak has resulted in countless deaths and has harmed all facets of human existence. COVID-19 has been designated a pandemic by the World Health Organization (WHO), which has placed a tremendous burden on nearly all countries, especially those with weak health systems. However, Deep Learning (DL) has been applied to many types of detection applications in the medical field, including thyroid diagnosis, lung nodule recognition, fetal localization, and detection of diabetic retinopathy. Furthermore, various clinical imaging sources, like Magnetic Resonance Imaging (MRI), X-ray, and Computed Tomography (CT), make DL a well-suited technique to tackle the COVID-19 epidemic. Inspired by this fact, a considerable amount of research has been done. A Systematic Literature Review (SLR) has been used in this study to discover, assess, and integrate findings from relevant studies. DL techniques used for COVID-19 have also been categorized into seven main distinct categories: Long Short-Term Memory Networks (LSTMs), Self-Organizing Maps (SOMs), Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Recurrent Neural Networks (RNNs), Autoencoders, and hybrid approaches. Then, the state-of-the-art studies connected to DL techniques and applications for health problems related to COVID-19 have been highlighted. Moreover, many issues and problems associated with DL implementation for COVID-19 have been addressed, which are anticipated to stimulate further investigations into prevalence and disaster control in the future. According to the findings, most papers are assessed using characteristics such as accuracy, delay, robustness, and scalability. Meanwhile, other features are underutilized, such as security and convergence time. Python is also the most commonly used language, accounting for 75% of the papers. According to the investigation, 37.83% of the applications used chest CT/chest X-ray images of patients.
Computers in biology and medicine
"2021-12-21T00:00:00"
[ "ArashHeidari", "NimaJafari Navimipour", "MehmetUnal", "ShivaToumaj" ]
10.1016/j.compbiomed.2021.105141
Clinical Applicable AI System Based on Deep Learning Algorithm for Differentiation of Pulmonary Infectious Disease.
Frontiers in medicine
"2021-12-21T00:00:00"
[ "Yu-HanZhang", "Xiao-FeiHu", "Jie-ChaoMa", "Xian-QiWang", "Hao-RanLuo", "Zi-FengWu", "ShuZhang", "De-JunShi", "Yi-ZhouYu", "Xiao-MingQiu", "Wen-BingZeng", "WeiChen", "JianWang" ]
10.3389/fmed.2021.753055 10.1016/S0140-6736(18)32203-7 10.1086/431588 10.1038/s41467-019-12898-9 10.3390/jcm8040514 10.1148/radiol.2020200905 10.1148/ryct.2020200075 10.1109/TMI.2020.2994908 10.1148/radiol.2021204522 10.1016/j.cell.2018.02.010 10.3390/app8101715 10.1183/13993003.00398-2020 10.1016/j.cell.2020.04.045 10.7150/thno.46465 10.1038/s41551-018-0304-0 10.1145/2939672.2939785 10.1080/10629360500107527 10.11613/BM.2013.018 10.1111/j.1558-5646.1995.tb04456.x 10.1101/2020.05.16.20103408 10.3390/jcm9010248 10.1148/radiol.2020200230 10.1148/radiol.2020200463 10.2214/AJR.17.17857 10.1148/radiol.2020200432 10.1038/s41598-020-80061-2 10.1016/j.ejrad.2020.108961 10.1002/sim.5328
Automatic detection of COVID-19 in chest radiographs using serially concatenated deep and handcrafted features.
Since the occurrence rate of infectious diseases in the human community is gradually rising for various reasons, appropriate diagnosis and treatment are essential to control their spread. The recently discovered COVID-19 is one such contagious disease, which has infected numerous people globally. The spread of this contagious disease is contained through several diagnostic and management actions. Medical image-supported diagnosis of COVID-19 infection is an approved clinical practice. This research aims to develop a new Deep Learning Method (DLM) to detect COVID-19 infection using chest X-rays. The proposed work implemented two methods, namely detection of COVID-19 infection using (i) Firefly Algorithm (FA)-optimized deep features and (ii) combined deep and machine features optimized with FA. In this work, a 5-fold cross-validation method is employed to train and test the detection methods. The performance of this system is analyzed individually, confirming that the deep feature-based technique achieves a detection accuracy of > 92% with the SVM-RBF classifier and that combining deep and machine features achieves > 96% accuracy with the Fine KNN classifier. In the future, this technique may have the potential to play a vital role in testing and validating X-ray images collected from patients suffering from infectious diseases.
Journal of X-ray science and technology
"2021-12-21T00:00:00"
[ "SRajesh Kannan", "JSivakumar", "PEzhilarasi" ]
10.3233/XST-211050
Automated COVID-19 diagnosis and prognosis with medical imaging and who is publishing: a systematic review.
 To conduct a systematic survey of published techniques for automated diagnosis and prognosis of COVID-19 diseases using medical imaging, assessing the validity of reported performance and investigating the proposed clinical use-case. To conduct a scoping review into the authors publishing such work.  The Scopus database was queried and studies were screened for article type, and minimum source normalized impact per paper and citations, before manual relevance assessment and a bias assessment derived from a subset of the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). The number of failures of the full CLAIM was adopted as a surrogate for risk-of-bias. Methodological and performance measurements were collected from each technique. Each study was assessed by one author. Comparisons were evaluated for significance with a two-sided independent t-test.  Of 1002 studies identified, 390 remained after screening and 81 after relevance and bias exclusion. The ratio of exclusion for bias was 71%, indicative of a high level of bias in the field. The mean number of CLAIM failures per study was 8.3 ± 3.9 [1,17] (mean ± standard deviation [min,max]). 58% of methods performed diagnosis versus 31% prognosis. Of the diagnostic methods, 38% differentiated COVID-19 from healthy controls. For diagnostic techniques, area under the receiver operating curve (AUC) = 0.924 ± 0.074 [0.810,0.991] and accuracy = 91.7% ± 6.4 [79.0,99.0]. For prognostic techniques, AUC = 0.836 ± 0.126 [0.605,0.980] and accuracy = 78.4% ± 9.4 [62.5,98.0]. CLAIM failures did not correlate with performance, providing confidence that the highest results were not driven by biased papers. Deep learning techniques reported higher AUC (p < 0.05) and accuracy (p < 0.05), but no difference in CLAIM failures was identified.  A majority of papers focus on the less clinically impactful diagnosis task, contrasted with prognosis, with a significant portion performing a clinically unnecessary task of differentiating COVID-19 from healthy. Authors should consider the clinical scenario in which their work would be deployed when developing techniques. Nevertheless, studies report superb performance in a potentially impactful application. Future work is warranted in translating techniques into clinical tools.
Physical and engineering sciences in medicine
"2021-12-18T00:00:00"
[ "Ashley GGillman", "FebrioLunardo", "JosephPrinable", "GreggBelous", "AaronNicolson", "HangMin", "AndrewTerhorst", "Jason ADowling" ]
10.1007/s13246-021-01093-0 10.1016/S0140-6736(21)02758-6 10.1148/radiol.2020200343 10.1148/radiol.2020200527 10.1148/ryct.2020200152 10.1148/radiol.2020203173 10.1002/14651858.CD013639.pub4 10.1038/s42256-021-00338-7 10.1016/j.media.2021.102225 10.1109/RBME.2020.2987975 10.1016/j.jiph.2020.06.028 10.21203/rs.3.rs-30432/v1 10.1136/bmj.m1328 10.7326/M18-1377 10.1038/s42256-021-00307-0 10.1148/ryai.2020200029 10.1038/nrclinonc.2017.141 10.21105/joss.01686 10.1002/int.22449 10.1109/TMI.2020.2996256 10.1007/s13246-020-00888-x 10.1016/j.cell.2018.02.010 10.1109/TMI.2021.3079709 10.1186/s12967-020-02692-3 10.1016/j.compbiomed.2020.104181 10.1016/j.compbiomed.2021.104252 10.1088/1361-6560/abbf9e 10.7150/thno.46428 10.1007/s00330-020-07042-x 10.1038/s41598-021-91305-0 10.1109/JBHI.2020.3034296 10.1016/j.media.2020.101824 10.1148/radiol.2020203465 10.1148/radiol.2020201365 10.1002/jum.15406 10.1111/j.1445-5994.2011.02528.x 10.1016/j.patter.2021.100269 10.1038/s41746-020-00369-1 10.1109/TMI.2020.2995965 10.1109/JBHI.2020.3037127 10.1371/journal.pone.0242301 10.1038/s42003-020-01535-7 10.1016/j.compbiomed.2021.104835 10.1016/j.compbiomed.2021.104375 10.1016/j.compbiomed.2021.104575 10.1038/s41598-021-96755-0 10.1016/j.media.2021.102096 10.1007/s00330-020-07156-2 10.1016/j.compbiomed.2021.104399 10.1016/j.bbe.2021.04.006 10.1109/TNNLS.2021.3054746 10.1016/j.media.2021.102054 10.1016/j.media.2020.101860 10.1038/s41598-021-95537-y 10.1186/s12967-021-02992-2 10.1038/s41598-021-90991-0 10.1016/j.ejrad.2021.109602 10.1155/2021/6649591 10.1038/s41467-020-18685-1 10.1109/TMI.2021.3066161 10.1007/s11547-021-01370-8 10.1016/j.inffus.2021.02.013 10.1109/JBHI.2021.3076086 10.1038/s41598-020-80261-w 10.1016/j.compbiomed.2021.104526 10.1016/j.bspc.2021.102588 10.1259/bjr.20201007 10.1016/j.media.2020.101836 10.1016/j.ijmedinf.2020.104284 10.1016/j.cell.2020.04.045 10.1109/TUFFC.2021.3068190 10.1016/j.compbiomed.2021.104296 10.1148/radiol.2020200905 10.1371/journal.pone.0242535 10.1016/j.chaos.2020.110153 10.1109/JBHI.2020.3023246 10.1016/j.media.2021.101992 10.1016/j.media.2021.101975 10.1016/j.knosys.2020.106270 10.1148/radiol.2020201491 10.1136/bmjopen-2020-045120 10.1109/TMI.2020.2994908 10.1109/ACCESS.2020.3044858 10.1148/ryai.2020200048 10.1007/s00330-021-07715-1 10.1109/TBDATA.2021.3056564 10.1097/RLI.0000000000000763 10.1038/s41598-020-76141-y 10.1016/j.bspc.2021.102622 10.1016/j.eswa.2021.114677 10.3390/app11020672 10.3390/app10165683 10.1109/TCBB.2021.3065361 10.1109/JBHI.2020.3036722 10.1038/s41598-020-76550-z 10.1109/JBHI.2020.3030853 10.1109/JBHI.2020.3018181 10.1109/TMI.2020.2995508 10.1109/TMI.2020.3001810 10.1109/TMI.2020.2994459 10.1007/s10479-021-04154-5 10.1016/j.media.2021.102205 10.1016/j.compbiomed.2021.104887
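The review above compares groups of techniques (for example, deep learning versus other methods) with a two-sided independent t-test. The short snippet below shows that comparison with SciPy on hypothetical AUC values.

```python
from scipy import stats

# Hypothetical AUC values for deep-learning vs. non-deep-learning techniques;
# the review compares such groups with a two-sided independent t-test.
dl_auc = [0.95, 0.91, 0.97, 0.93, 0.96, 0.92]
other_auc = [0.88, 0.90, 0.85, 0.89, 0.87, 0.91]

t_stat, p_value = stats.ttest_ind(dl_auc, other_auc)   # two-sided by default
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```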
Transfer learning based novel ensemble classifier for COVID-19 detection from chest CT-scans.
Coronavirus Disease 2019 (COVID-19) is a deadly infection that affects the respiratory organs in humans as well as animals. By 2020, this disease had turned into a pandemic affecting millions of individuals across the globe. Conducting rapid tests for a large number of suspected cases while preventing the spread of the virus has become a challenge. In the recent past, several deep learning-based approaches have been developed for automating the process of detecting COVID-19 infection from lung Computerized Tomography (CT) scan images. However, most of them rely on a single model prediction for the final decision, which may or may not be accurate. In this paper, we propose a novel ensemble approach that aggregates the strength of multiple deep neural network architectures before arriving at the final decision. We use various pre-trained models such as VGG16, VGG19, InceptionV3, ResNet50, ResNet50V2, InceptionResNetV2, Xception, and MobileNet and fine-tune them using lung CT scan images. All these trained models are further used to create a strong ensemble classifier that makes the final prediction. Our experiments show that the proposed ensemble approach is superior to existing ensemble approaches and sets state-of-the-art results for detecting COVID-19 infection from lung CT scan images.
Computers in biology and medicine
"2021-12-17T00:00:00"
[ "Nagur ShareefShaik", "Teja KrishnaCherukuri" ]
10.1016/j.compbiomed.2021.105127 10.1007/s11760-021-02022-0
MRFGRO: a hybrid meta-heuristic feature selection method for screening COVID-19 using deep features.
COVID-19 is a respiratory disease that causes infection in both the lungs and the upper respiratory tract. The World Health Organization (WHO) has declared it a global pandemic because of its rapid spread across the globe. The most common way to diagnose COVID-19 is real-time reverse transcription-polymerase chain reaction (RT-PCR), which takes a significant amount of time to produce a result. Computer-based medical image analysis is more beneficial for the diagnosis of such a disease as it can give better results in less time. Computed Tomography (CT) scans are used to monitor lung diseases including COVID-19. In this work, a hybrid model for COVID-19 detection has been developed, which has two key stages. In the first stage, we have fine-tuned the parameters of pre-trained convolutional neural networks (CNNs) to extract features from the COVID-19-affected lungs. As pre-trained CNNs, we have used two standard CNNs, namely GoogleNet and ResNet18. Then, we have proposed a hybrid meta-heuristic feature selection (FS) algorithm, named the Manta Ray Foraging based Golden Ratio Optimizer (MRFGRO), to select the most significant feature subset. The proposed model is implemented over three publicly available datasets, namely the COVID-CT dataset, the SARS-COV-2 dataset, and the MOSMED dataset, and attains state-of-the-art classification accuracies of 99.15%, 99.42%, and 95.57%, respectively. The obtained results confirm that the proposed approach is quite efficient when compared to the local texture descriptors used for COVID-19 detection from chest CT-scan images.
Scientific reports
"2021-12-17T00:00:00"
[ "ArijitDey", "SohamChattopadhyay", "Pawan KumarSingh", "AliAhmadian", "MassimilianoFerrara", "NorazakSenu", "RamSarkar" ]
10.1038/s41598-021-02731-z 10.1148/radiol.2020200527 10.1007/s10096-020-03901-z 10.1080/07391102.2020.1758788 10.1093/femspd/ftaa042.OCLC823140442 10.1080/07391102.2020.1763199 10.1016/j.ejrad.2020.109017 10.1109/4235.585893 10.1109/ACCESS.2020.2994762 10.1109/ACCESS.2020.3016780 10.1016/j.asoc.2020.106912 10.1038/s41598-019-56847-4 10.1101/2020.02.23.20026930 10.1016/j.asoc.2021.107698 10.1101/2020.05.14.20101873 10.1016/j.eng.2020.04.010 10.1101/2020.04.24.20078584 10.1038/s41598-020-79139-8 10.3390/diagnostics11020315 10.1057/palgrave.jors.2600781 10.1007/s00521-020-05297-5 10.1109/ACCESS.2020.3028241 10.1109/ACCESS.2020.3031718 10.1007/s12065-019-00279-6 10.1109/ACCESS.2020.3005827 10.1016/j.engappai.2019.103300 10.1007/s00500-019-03949-w 10.1016/j.patcog.2006.12.019 10.1016/j.neucom.2005.12.126 10.1016/j.compstruc.2004.01.002 10.3390/en12101884 10.1016/j.knosys.2019.105190 10.1016/j.knosys.2020.106270 10.1016/j.chaos.2020.110190 10.1007/s11356-020-10133-3
Weakly-supervised lesion analysis with a CNN-based framework for COVID-19.
Physics in medicine and biology
"2021-12-15T00:00:00"
[ "KaichaoWu", "BethJelfs", "XiangyuanMa", "RuitianKe", "XueruiTan", "QiangFang" ]
10.1088/1361-6560/ac4316
Multi-COVID-Net: Multi-objective optimized network for COVID-19 diagnosis from chest X-ray images.
Coronavirus Disease 2019 (COVID-19) has already spread worldwide, and healthcare services have become limited in many countries. Efficient screening of hospitalized individuals through chest radiography, one of the important assessment strategies, is vital in the struggle against COVID-19. This allows researchers to understand medical information in terms of chest X-ray (CXR) images and evaluate relevant irregularities, which may result in a fully automated identification of the disease. Due to the rapid growth of cases every day, a relatively small number of COVID-19 testing kits are readily accessible in health care facilities. Thus, it is imperative to define a fully automated detection method as an instant alternative possibility to limit the occurrence of COVID-19 among individuals. In this paper, a two-step Deep Learning (DL) architecture has been proposed for COVID-19 diagnosis using CXR. The proposed DL architecture consists of two stages: feature extraction and classification. The Multi-Objective Grasshopper Optimization Algorithm (MOGOA) is presented to optimize the DL network layers; hence, these networks have been named "Multi-COVID-Net". This model automatically classifies Non-COVID-19, COVID-19, and pneumonia patient images. The Multi-COVID-Net has been tested on publicly available datasets, and this model provides better performance results than other state-of-the-art methods.
Applied soft computing
"2021-12-15T00:00:00"
[ "TriptiGoel", "RMurugan", "SeyedaliMirjalili", "Deba KumarChakrabartty" ]
10.1016/j.asoc.2021.108250
Fusion of multi-scale bag of deep visual words features of chest X-ray images to detect COVID-19 infection.
Chest X-ray (CXR) images have been one of the important tools used in COVID-19 disease diagnosis. Deep learning (DL)-based methods have been used heavily to analyze these images. Compared to other DL-based methods, the recently proposed bag of deep visual words-based method (BoDVW) is shown to be a prominent representation of CXR images owing to its better discriminability. However, single-scale BoDVW features are insufficient to capture the detailed semantic information of the infected regions in the lungs, as the resolution of such images varies in real applications. In this paper, we propose new multi-scale bag of deep visual words (MBoDVW) features, which exploit three different scales of the 4th pooling layer's output feature map obtained from the VGG-16 model. For the MBoDVW-based features, we perform a convolution with max pooling operation over the 4th pooling layer using three different kernels: [Formula: see text], [Formula: see text], and [Formula: see text]. We evaluate our proposed features with the Support Vector Machine (SVM) classification algorithm on four public CXR datasets (CD1, CD2, CD3, and CD4) with over 5000 CXR images. Experimental results show that our method produces stable and prominent classification accuracy (84.37%, 88.88%, 90.29%, and 83.65% on CD1, CD2, CD3, and CD4, respectively).
Scientific reports
"2021-12-15T00:00:00"
[ "ChiranjibiSitaula", "Tej BahadurShahi", "SunilAryal", "FaezehMarzbanrad" ]
10.1038/s41598-021-03287-8 10.1038/s41598-021-85875-2 10.1007/s42979-020-00401-x 10.1007/s42979-019-0007-y 10.1023/A:1011139631724 10.1007/s13755-020-00131-7 10.1109/ACCESS.2021.3058537 10.1007/s42979-020-00382-x 10.3390/sym12040651 10.1016/j.jvcir.2016.05.022 10.1109/LGRS.2018.2864116 10.1007/s10489-020-02055-x 10.1016/j.cell.2018.02.010 10.3390/math8091441 10.1109/ACCESS.2019.2925002 10.3390/app10020559 10.2299/jsp.16.343 10.1007/s42600-021-00151-6 10.1016/j.imu.2020.100412 10.1109/72.788646 10.1023/A:1010933404324 10.1007/s11227-020-03481-x 10.1016/j.cmpb.2020.105581 10.1016/j.imu.2020.100505 10.1109/5254.708428 10.4310/SII.2009.v2.n3.a8 10.1007/s10044-021-00970-4
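The entry above pools the output of VGG-16's 4th pooling layer at three scales before building the bag of deep visual words. The exact kernel sizes are not given in the abstract (they appear only as formula placeholders), so the sketch below assumes 1x1, 2x2, and 3x3 pooling windows purely for illustration; the subsequent visual-word quantization and SVM classification are omitted.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# Truncate VGG-16 at its 4th pooling layer, as in the abstract.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
block4 = tf.keras.Model(base.input, base.get_layer("block4_pool").output)

def multiscale_deep_features(image, pool_sizes=((1, 1), (2, 2), (3, 3))):
    """Pool the block4_pool feature map at several scales. The kernel sizes
    here are assumptions; the pooled maps would then feed the bag-of-deep-
    visual-words encoding described in the paper."""
    fmap = block4(preprocess_input(image[np.newaxis, ...].astype("float32")))
    scales = [tf.keras.layers.MaxPooling2D(pool_size=p)(fmap) for p in pool_sizes]
    return [s.numpy().reshape(-1, s.shape[-1]) for s in scales]   # local descriptors

descriptors = multiscale_deep_features(np.random.rand(224, 224, 3) * 255.0)
print([d.shape for d in descriptors])
```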
Performance of a computer aided diagnosis system for SARS-CoV-2 pneumonia based on ultrasound images.
In this study, we aimed to leverage deep learning to develop a computer aided diagnosis (CAD) system to help radiologists in the diagnosis of SARS-CoV-2 virus syndrome on lung ultrasonography (LUS). A CAD system is developed based on transfer learning of a residual network (ResNet) to extract features from LUS and help radiologists distinguish SARS-CoV-2 virus syndrome from healthy and non-SARS-CoV-2 pneumonia. A publicly available LUS dataset for SARS-CoV-2 virus syndrome consisting of 3909 images has been employed. Six radiologists with different levels of experience participated in the experiment. A comprehensive LUS data set was constructed and employed to train and verify the proposed method. Several metrics, such as accuracy, recall, precision, and F1-score, are used to evaluate the performance of the proposed CAD approach. The performances of the radiologists with and without the help of CAD are also evaluated quantitatively. The p-values of the t-test show that with the help of the CAD system, both junior and senior radiologists significantly improve their diagnostic performance on both balanced and unbalanced datasets. Experimental results indicate that the proposed CAD approach and the machine features from it can significantly improve the radiologists' performance in SARS-CoV-2 virus syndrome diagnosis. With the help of the proposed CAD system, the junior and senior radiologists achieved F1-score values of 91.33% and 95.79% on the balanced dataset and 94.20% and 96.43% on the unbalanced dataset. The proposed approach is verified on an independent test dataset and reports promising performance. The proposed CAD system reports promising performance in facilitating radiologists' diagnosis of SARS-CoV-2 virus syndrome and might assist the development of a fast, accessible screening method for pulmonary diseases.
European journal of radiology
"2021-12-14T00:00:00"
[ "ShiyaoShang", "ChunwangHuang", "WenxiaoYan", "RuminChen", "JinglinCao", "YukunZhang", "YanhuiGuo", "GuoqingDu" ]
10.1016/j.ejrad.2021.110066 10.7326/M20-1495 10.1148/radiol.2020200642 10.3346/jkms.2020.35.e142 10.5811/westjem.2020.5.47743 10.1186/1465-9921-15-50 10.1002/jum.15284 10.1002/jum.15683 10.1186/s13054-020-02876-9 10.1148/radiol.2020200847 10.1007/s10396-021-01081-7 10.1002/14651858.CD013639.pub4 10.1109/ICCV.2017.74 10.1023/A:1022627411411 10.1109/JBHI.2019.2936151 10.1109/TUFFC.2020.3005512 10.1007/s10278-020-00356-8 10.1109/TMI.2020.2994459 10.1016/S2213-2600(20)30120-X 10.1016/j.ejphar.2020.173375
Fully automatic deep convolutional approaches for the analysis of COVID-19 using chest X-ray images.
Covid-19 is a new infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Given the seriousness of the situation, the World Health Organization declared a global pandemic as Covid-19 spread rapidly around the world. Among its applications, chest X-ray images are frequently used for early diagnosis/screening of Covid-19 disease, given the frequent pulmonary impact in patients, a critical issue for preventing further complications caused by this highly infectious disease. In this work, we propose 4 fully automatic approaches for the classification of chest X-ray images into 3 different categories: Covid-19, pneumonia and healthy cases. Given the similarity of the pathological impact in the lungs between Covid-19 and pneumonia, mainly during the initial stages of both lung diseases, we performed an exhaustive study of differentiation considering different pathological scenarios. To address these classification tasks, we evaluated 6 representative state-of-the-art deep network architectures on 3 different public datasets: (I) the Chest X-ray dataset of the Radiological Society of North America (RSNA); (II) the Covid-19 Image Data Collection; (III) the SIRM dataset of the Italian Society of Medical Radiology. To validate the designed approaches, several representative experiments were performed using 6,070 chest X-ray radiographs. In general, satisfactory results were obtained from the designed approaches, reaching global accuracy values of 0.9706
Applied soft computing
"2021-12-14T00:00:00"
[ "Joaquimde Moura", "JorgeNovo", "MarcosOrtega" ]
10.1016/j.asoc.2021.108190 10.1101/2020.03.30.20047787 10.1007/s40846-020-00529-4
Randomly initialized convolutional neural network for the recognition of COVID-19 using X-ray images.
In early 2020, the novel coronavirus disease (COVID-19) was declared a worldwide pandemic, and because of its infectiousness and severity, several strands of research have focused on combatting its ongoing spread. One potential solution for detecting COVID-19 rapidly and effectively is analyzing chest X-ray images using Deep Learning (DL) models. Convolutional Neural Networks (CNNs) have been presented as particularly efficient techniques for early diagnosis, but most still have limitations. In this study, we propose a novel randomly initialized CNN (RND-CNN) architecture for the recognition of COVID-19. This network consists of a set of differently-sized hidden layers all created from scratch. The performance of this RND-CNN is evaluated using two public datasets: the COVIDx and the enhanced COVID-19 datasets. Each of these datasets consists of medical images (X-rays) in one of three different classes: chests with COVID-19, with pneumonia, or in a normal state. The proposed RND-CNN model yields encouraging results in detecting COVID-19, achieving 94% accuracy on the COVIDx dataset and 99% accuracy on the enhanced COVID-19 dataset.
International journal of imaging systems and technology
"2021-12-14T00:00:00"
[ "SafaBen Atitallah", "MahaDriss", "WadiiBoulila", "HendaBen Ghézala" ]
10.1002/ima.22654
The effect of deep feature concatenation in the classification problem: An approach on COVID-19 disease detection.
In image classification applications, the most important thing is to obtain useful features. Convolutional neural networks automatically learn the extracted features during training, and the classification process is carried out with the obtained features. Therefore, obtaining successful features is critical to achieving high classification success. This article focuses on providing effective features to enhance classification performance. For this purpose, it builds on the success of feature concatenation in classification. First, features acquired by the feature-transfer method are extracted from the AlexNet, Xception, NASNetLarge, and EfficientNet-B0 architectures, which are known to be successful in classification problems. Concatenating these features results in a new feature set. The method is completed by subjecting the features to various classification algorithms. The proposed pipeline is applied to three datasets for COVID-19 disease detection: the "COVID-19 Image Dataset," the "COVID-19 Pneumonia Normal Chest X-ray (PA) Dataset," and the "COVID-19 Radiography Database." All datasets contain three classes (normal, COVID, and pneumonia). The best classification accuracies for the three datasets are 98.8%, 95.9%, and 99.6%, respectively. Performance metrics such as sensitivity, precision, specificity, and F1-score values are given as well. The contribution of the paper is as follows: COVID-19 disease resembles other lung infections, which makes diagnosis difficult, and the virus's rapid spread necessitates detecting cases as soon as possible. This has increased interest in computer-aided deep learning models that can meet these requirements. The use of the proposed method will be beneficial, as it provides high accuracy.
International journal of imaging systems and technology
"2021-12-14T00:00:00"
[ "EmineCengil", "AhmetÇınar" ]
10.1002/ima.22659 10.21203/rs.3.rs-65967/v2
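Purely as an illustration of the deep-feature concatenation idea described above, the sketch below pools penultimate-layer features from two pretrained torchvision backbones (stand-ins for the four architectures named in the paper, since Xception and NASNetLarge are not available in torchvision) and concatenates them for a classical classifier; names such as X_train and y_train are hypothetical placeholders.

# Sketch of deep-feature concatenation (assumed pipeline, not the paper's exact code).
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbones = [
    nn.Sequential(*list(models.alexnet(weights="IMAGENET1K_V1").children())[:-1]),
    nn.Sequential(*list(models.efficientnet_b0(weights="IMAGENET1K_V1").children())[:-1]),
]

def extract(batch):  # batch: (N, 3, 224, 224) float tensor
    feats = []
    with torch.no_grad():
        for net in backbones:
            net.eval()
            f = net(batch)                                            # (N, C, H, W) feature maps
            f = torch.flatten(nn.functional.adaptive_avg_pool2d(f, 1), 1)
            feats.append(f)
    return torch.cat(feats, dim=1).numpy()                            # concatenated feature vector

# X_train: image tensor batch, y_train: labels (normal / COVID / pneumonia) -- hypothetical names
# clf = SVC(kernel="rbf").fit(extract(X_train), y_train)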
COLI-Net: Deep learning-assisted fully automated COVID-19 lung and infection pneumonia lesion detection and segmentation from chest computed tomography images.
We present a deep learning (DL)-based automated whole lung and COVID-19 pneumonia infectious lesions (COLI-Net) detection and segmentation from chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentation of lungs and lesions, respectively. All images were cropped, resized, and the intensity values clipped and normalized. A residual network with non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesions segmentation was evaluated on an external reverse transcription-polymerase chain reaction positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98-0.99) and 0.91 ± 0.038 (95% CI, 0.90-0.91) for lung and lesions segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, -0.12 to 0.18) and -0.18 ± 3.4% (95% CI, -0.8 to 0.44) for the lung and lesions, respectively. The relative volume differences for the lung and lesions were 0.38 ± 1.2% (95% CI, 0.16-0.59) and 0.81 ± 6.6% (95% CI, -0.39 to 2), respectively. Most radiomic features had a mean relative error less than 5% with the highest mean relative error achieved for the lung for the
International journal of imaging systems and technology
"2021-12-14T00:00:00"
[ "IsaacShiri", "HosseinArabi", "YazdanSalimi", "AmirhosseinSanaat", "AzadehAkhavanallaf", "GhasemHajianfar", "DariushAskari", "ShakibaMoradi", "ZahraMansouri", "MasoumehPakbin", "SalehSandoughdaran", "HamidAbdollahi", "Amir RezaRadmard", "KiaraRezaei-Kalantari", "MostafaGhelich Oghli", "HabibZaidi" ]
10.1002/ima.22672 10.1007/s12350-020-02119-y
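For readers who want to see what a Dice-type segmentation loss looks like in code, here is a generic soft Dice loss sketch with a plain-sum ("non-square") denominator; it is an assumption-based illustration and may differ from the exact loss used in COLI-Net.

# Generic soft Dice loss for binary segmentation masks (illustrative, not the paper's code).
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    # pred: sigmoid probabilities (N, 1, H, W); target: binary masks of the same shape
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1)
    intersection = (pred * target).sum(dim=1)
    # "non-square" form: plain sums in the denominator rather than squared terms
    dice = (2.0 * intersection + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
    return 1.0 - dice.mean()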
Local binary pattern and deep learning feature extraction fusion for COVID-19 detection on computed tomography images.
The deadly coronavirus disease (COVID-19), which emerged in December 2019, was subsequently declared a pandemic by the World Health Organization (WHO). It is important to identify suspected patients as early as possible in order to control the spread of the virus, improve the efficacy of medical treatment, and, as a result, lower the mortality rate. The widely adopted method of detecting COVID-19 is the reverse-transcription polymerase chain reaction (RT-PCR), but the process is affected by a scarcity of RT-PCR kits as well as its complexity. Medical imaging using machine learning and deep learning has proved to be one of the most efficient methods of detecting respiratory diseases, but to train machine learning models features need to be extracted manually, and in deep learning, efficiency is affected by the deep learning architecture and limited data. In this study, handcrafted local binary pattern (LBP) features and features automatically extracted by seven deep learning models were used to train support vector machine (SVM) and K-nearest neighbour (KNN) classifiers. To improve classifier performance, a concatenated LBP and deep learning feature was proposed for training the KNN and SVM; based on the performance criteria, the VGG-19 + LBP model achieved the highest accuracy of 99.4%. The SVM and KNN classifiers trained on the hybrid feature outperform state-of-the-art models. This shows that the proposed feature can improve the performance of classifiers in detecting COVID-19.
Expert systems
"2021-12-14T00:00:00"
[ "Auwalu SalehMubarak", "SertanSerte", "FadiAl-Turjman", "Zubaida Sa'idAmeen", "MehmetOzsoz" ]
10.1111/exsy.12842 10.3390/make2040027 10.1109/iotm.0001.2000123 10.2307/2685209 10.2214/ajr.20.23012 10.3390/rs9010067 10.3390/sym12040651 10.1007/s00779-020-01462-8 10.1155/2021/8828404
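The hybrid handcrafted-plus-deep feature idea can be illustrated with a small sketch: compute a uniform LBP histogram per grayscale image and concatenate it with a CNN feature vector before fitting a KNN or SVM. The variable names (gray_images, deep_feats, labels) are hypothetical placeholders, and the deep features could come from any pretrained backbone such as those in the earlier sketches.

# Illustrative fusion of handcrafted LBP histograms with CNN features (assumed, simplified).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(gray_img, p=8, r=1.0):
    codes = local_binary_pattern(gray_img, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(0, p + 3), density=True)
    return hist  # (p + 2,) normalized histogram of uniform LBP codes

def fuse(lbp_feat, deep_feat):
    return np.concatenate([lbp_feat, deep_feat])  # hybrid feature vector

# X = np.stack([fuse(lbp_histogram(img), df) for img, df in zip(gray_images, deep_feats)])
# knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)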
Cryo-shift: reducing domain shift in cryo-electron subtomograms with unsupervised domain adaptation and randomization.
Cryo-Electron Tomography (cryo-ET) is a 3D imaging technology that enables the visualization of subcellular structures in situ at near-atomic resolution. Cellular cryo-ET images help in resolving the structures of macromolecules and determining their spatial relationship in a single cell, which has broad significance in cell and structural biology. Subtomogram classification and recognition constitute a primary step in the systematic recovery of these macromolecular structures. Supervised deep learning methods have been proven to be highly accurate and efficient for subtomogram classification, but suffer from limited applicability due to scarcity of annotated data. While generating simulated data for training supervised models is a potential solution, a sizeable difference in the image intensity distribution in generated data as compared with real experimental data will cause the trained models to perform poorly in predicting classes on real subtomograms. In this work, we present Cryo-Shift, a fully unsupervised domain adaptation and randomization framework for deep learning-based cross-domain subtomogram classification. We use unsupervised multi-adversarial domain adaption to reduce the domain shift between features of simulated and experimental data. We develop a network-driven domain randomization procedure with 'warp' modules to alter the simulated data and help the classifier generalize better on experimental data. We do not use any labeled experimental data to train our model, whereas some of the existing alternative approaches require labeled experimental samples for cross-domain classification. Nevertheless, Cryo-Shift outperforms the existing alternative approaches in cross-domain subtomogram classification in extensive evaluation studies demonstrated herein using both simulated and experimental data. https://github.com/xulabs/aitom. Supplementary data are available at Bioinformatics online.
Bioinformatics (Oxford, England)
"2021-12-14T00:00:00"
[ "HmrishavBandyopadhyay", "ZihaoDeng", "LeitingDing", "SinuoLiu", "Mostofa RafidUddin", "XiangruiZeng", "SimaBehpour", "MinXu" ]
10.1093/bioinformatics/btab794
Lung detection and severity prediction of pneumonia patients based on COVID-19 DET-PRE network.
The sudden outbreak of COVID-19 pneumonia has brought great harm to people around the world. Facing this new virus, clinicians have had no automatic tools to assess the severity of pneumonia patients. In the current work, a COVID-19 DET-PRE network with two pipelines was proposed. First, the lungs in X-rays were detected and segmented using an improved YOLOv3 Dense network to remove redundant features. Then, a VGG16 classifier was pre-trained on the source domain, and the severity of the disease was predicted on the target domain by means of transfer learning. The experimental results demonstrate that the COVID-19 DET-PRE network can effectively detect the lungs in X-rays and accurately predict the severity of the disease. The mean average precisions (mAPs) of lung detection in patients with mild and severe illness were 0.976 and 0.983, respectively. Moreover, the accuracy of severity prediction of COVID-19 pneumonia reached 86.1%. The proposed neural network has high accuracy and is suitable for the clinical diagnosis of COVID-19 pneumonia.
Expert review of medical devices
"2021-12-14T00:00:00"
[ "JiaqiaoZhang", "YanYan", "HongjunNi", "ZhonghuaNi" ]
10.1080/17434440.2022.2014319
Assessing Lobe-wise Burden of COVID-19 Infection in Computed Tomography of Lungs using Knowledge Fusion from Multiple Datasets.
Segmentation of COVID-19 infection in the lung tissue and its quantification in individual lobes is pivotal to understanding the disease's effect. It helps to determine the disease progression and gauge the extent of medical support required. Automation of this process is challenging due to the lack of a standardized dataset with voxel-wise annotations of the lung field, lobes, and infections like ground-glass opacity (GGO) and consolidation. However, multiple datasets have been found to contain one or more classes of the required annotations. Typical deep learning-based solutions overcome such challenges by training neural networks under adversarial and multi-task constraints. We propose to train a convolutional neural network to solve the challenge while it learns from multiple data sources, each of which is annotated for only a few classes. We have experimentally verified our approach by training the model on three publicly available datasets and evaluating its ability to segment the lung field, lobes and COVID-19 infected regions. Additionally, eight scans that previously had annotations for infection and lung have been annotated for lobes. Our model quantifies infection per lobe in these scans with an average error of 4.5%.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
"2021-12-12T00:00:00"
[ "MahalakshumiVisvanathan", "VelmuruganBalasubramanian", "RachanaSathish", "SuhasiniBalasubramaniam", "DebdootSheet" ]
10.1109/EMBC46164.2021.9629591
A Denoising Self-supervised Approach for COVID-19 Pneumonia Lesion Segmentation with Limited Annotated CT Images.
The coronavirus disease 2019 (COVID-19) has become a global pandemic. The segmentation of COVID-19 pneumonia lesions from CT images is important in quantitative evaluation and assessment of the infection. Though many deep learning segmentation methods have been proposed, the performance is limited when pixel-level annotations are hard to obtain. In order to alleviate the performance limitation brought by the lack of pixel-level annotation in COVID-19 pneumonia lesion segmentation task, we construct a denoising self-supervised framework, which is composed of a pretext denoising task and a downstream segmentation task. Through the pretext denoising task, the semantic features from massive unlabelled data are learned in an unsupervised manner, so as to provide additional supervisory signal for the downstream segmentation task. Experimental results showed that our method can effectively leverage unlabelled images to improve the segmentation performance, and outperformed reconstruction-based self-supervised learning when only a small set of training images are annotated. Clinical relevance: The proposed method can effectively leverage unlabelled images to improve the performance for COVID-19 pneumonia lesion segmentation when only a small set of CT images are annotated.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
"2021-12-12T00:00:00"
[ "YiboGao", "HuanWang", "XinglongLiu", "NingHuang", "GuotaiWang", "ShaotingZhang" ]
10.1109/EMBC46164.2021.9630215
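A minimal sketch of a denoising pretext task of the kind described above: an encoder-decoder is trained to restore noise-corrupted, unlabelled CT slices, and the pretrained encoder then initializes the downstream segmentation network. The tiny architecture, noise level, and unlabelled_loader are illustrative assumptions rather than the authors' design.

# Minimal denoising pretext-task sketch for self-supervised pretraining (assumed design).
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.decoder(self.encoder(x))

net = TinyEncoderDecoder()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Pretext: corrupt unlabelled CT slices with noise and learn to restore them.
# unlabelled_loader is a hypothetical DataLoader yielding (N, 1, H, W) slice tensors.
for slices in unlabelled_loader:
    noisy = slices + 0.1 * torch.randn_like(slices)
    opt.zero_grad()
    loss = mse(net(noisy), slices)
    loss.backward()
    opt.step()

# The pretrained net.encoder weights would then initialize the downstream segmentation network.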
Automated Detection of COVID-19 Cases using Recent Deep Convolutional Neural Networks and CT images.
COVID-19 is an acute severe respiratory disease caused by a novel coronavirus SARS-CoV-2. After its first appearance in Wuhan (China), it spread rapidly across the world and became a pandemic. It had a devastating effect on everyday life, public health, and the world economy. The use of advanced artificial intelligence (AI) techniques combined with radiological imaging can be helpful in speeding-up the detection of this disease. In this study, we propose the development of recent deep learning models for automatic COVID-19 detection using computed tomography (CT) images. The proposed models are fine-tuned and optimized to provide accurate results for multiclass classification of COVID-19 vs. Community Acquired Pneumonia (CAP) vs. Normal cases. Tests were conducted both at the image and patient-level and show that the proposed algorithms achieve very high scores. In addition, an explainability algorithm was developed to help visualize the symptoms of the disease detected by the best performing deep model.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
"2021-12-12T00:00:00"
[ "MohamedChetoui", "Moulay AAkhloufi" ]
10.1109/EMBC46164.2021.9629689
CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images.
Early detection of COVID-19 is vital to control its spread. Deep learning methods have been presented to detect suggestive signs of COVID-19 from chest CT images. However, due to the novelty of the disease, annotated volumetric data are scarce. Here we propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN). For a few CT images, the user draws markers at representative normal and abnormal regions. The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones, and the decision layer of our CNN is a support vector machine. As we have no control over the CT image acquisition, we also propose an intensity standardization approach. Our method can achieve mean accuracy and kappa values of 0.97 and 0.93, respectively, on a dataset with 117 CT images extracted from different sites, surpassing its counterpart in all scenarios.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
"2021-12-12T00:00:00"
[ "Azael MSousa", "FabianoReis", "RachelZerbini", "Joao L DComba", "Alexandre XFalcao" ]
10.1109/EMBC46164.2021.9629806
Multi-feature Multi-Scale CNN-Derived COVID-19 Classification from Lung Ultrasound Data.
The global pandemic of the novel coronavirus disease 2019 (COVID-19) has put tremendous pressure on the medical system. Imaging plays a complementary role in the management of patients with COVID-19, and computed tomography (CT) and chest X-ray (CXR) are the two dominant screening tools. However, the difficulty of eliminating the risk of disease transmission, radiation exposure, and limited cost-effectiveness are among the challenges of CT and CXR imaging. This motivates the use of lung ultrasound (LUS) for evaluating COVID-19, owing to its practical advantages of noninvasiveness, repeatability, and sensitive bedside assessment. In this paper, we utilize a deep learning model to perform the classification of COVID-19 from LUS data, which could produce objective diagnostic information for clinicians. Specifically, all LUS images are processed to obtain their corresponding local phase filtered images and radial symmetry transformed images before being fed into a multi-scale residual convolutional neural network (CNN). Second, combinations of these images are used as the network input to explore rich and reliable features. A feature fusion strategy at different levels is adopted to investigate the relationship between the depth of feature aggregation and classification accuracy. Our proposed method is evaluated on the point-of-care US (POCUS) dataset together with the Italian COVID-19 Lung US database (ICLUS-DB) and shows promising performance for COVID-19 prediction.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
"2021-12-12T00:00:00"
[ "HuiChe", "JaredRadbel", "JagSunderram", "John LNosher", "Vishal MPatel", "IlkerHacihaliloglu" ]
10.1109/EMBC46164.2021.9631069
Precision Medicine: Using Artificial Intelligence to Improve Diagnostics and Healthcare.
The continued generation of large amounts of data within healthcare-from imaging to electronic medical health records to genomics and multi-omics -necessitates tools and methods to parse and interpret these data to improve healthcare outcomes. Artificial intelligence, and in particular deep learning, has enabled researchers to gain new insights from large scale and multimodal data. At the 2022 Pacific Symposium on Biocomputing (PSB) session entitled "Precision Medicine: Using Artificial Intelligence to Improve Diagnostics and Healthcare", we showcase the latest research, influenced and inspired by the idea of using technology to build a more fair, tailored, and cost-effective healthcare system after the COVID-19 pandemic.
Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing
"2021-12-11T00:00:00"
[ "RoxanaDaneshjou", "Steven EBrenner", "Jonathan HChen", "Dana CCrawford", "Samuel GFinlayson", "ŁukaszKidziński", "Martha LBulyk" ]
null
The diagnostic performance of deep-learning-based CT severity score to identify COVID-19 pneumonia.
To determine the diagnostic accuracy of a deep-learning (DL)-based algorithm using chest computed tomography (CT) scans for the rapid diagnosis of coronavirus disease 2019 (COVID-19), as compared to the reference standard reverse-transcription polymerase chain reaction (RT-PCR) test. In this retrospective analysis, data from patients with suspected COVID-19 who underwent RT-PCR and chest CT examination for the diagnosis of COVID-19 were assessed. By quantifying the affected area of the lung parenchyma, a severity score was evaluated for each lobe of the lung with the DL-based algorithm. The diagnosis was based on the total lung severity score, ranging from 0 to 25. The data were randomly split into a 40% training set and a 60% test set. The optimal cut-off value was determined using the Youden-index method on the training cohort. A total of 1259 patients were enrolled in this study. The prevalence of RT-PCR positivity in the overall investigated period was 51.5%. As compared to RT-PCR, the sensitivity, specificity, positive predictive value, negative predictive value and accuracy on the test cohort were 39.0%, 80.2%, 68.0%, 55.0% and 58.9%, respectively. Regarding the whole dataset, when adding those with a positive RT-PCR test at any time during the hospital stay or "COVID-19 without virus detection" as a final diagnosis to the true positive cases, specificity increased from 80.3% to 88.1% and the positive predictive value increased from 68.4% to 81.7%. The DL-based CT severity score was found to have good specificity and positive predictive value, as compared to RT-PCR. This standardized scoring system can aid rapid diagnosis and clinical decision making. The DL-based CT severity score can detect COVID-19-related lung alterations even at early stages, when RT-PCR is not yet positive.
The British journal of radiology
"2021-12-11T00:00:00"
[ "Anna SáraKardos", "JuditSimon", "ChiaraNardocci", "István ViktorSzabó", "NorbertNagy", "Renad HeyamAbdelrahman", "EmeseZsarnóczay", "BenceFejér", "BalázsFutácsi", "VeronikaMüller", "BélaMerkely", "PálMaurovich-Horvat" ]
10.1259/bjr.20210759 10.1007/s11357-020-00226-9 10.1001/jama.2020.12839 10.1016/S0140-6736(20)30183-5 10.1016/j.ijsu.2020.02.034 10.1001/jama.2020.1097 10.1002/jmv.25786 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1001/jama.2020.1585 10.1097/RTI.0000000000000524 10.1148/radiol.2020200230 10.1177/0846537120913033 10.1148/radiol.2020201365 10.1148/radiol.2020201473 10.1038/s41591-020-0931-3 10.1148/radiol.2020200905 10.1109/RBME.2020.2987975 10.26355/eurrev_202008_22510 10.1016/j.cell.2020.04.045 10.1136/bmj.h5527 10.2214/AJR.20.22976 10.1556/1647.2020.00002 10.1148/radiol.2020200330 10.1148/radiol.2020200343 10.1016/S2214-109X(20)30068-1 10.1016/j.asoc.2020.106897 10.1183/13993003.00775-2020 10.1148/radiol.2020200463
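To make the scoring idea concrete, the sketch below converts per-lobe involvement fractions into a 0-25 total severity score and selects a diagnostic cut-off with the Youden index; the 0-5 banding per lobe and all variable names are assumptions for illustration, not the study's exact definitions.

# Illustrative lobe-wise severity scoring and Youden-index cut-off selection (assumed banding).
import numpy as np
from sklearn.metrics import roc_curve

def lobe_score(frac):
    # assumed 0-5 banding of the involved fraction of one lobe
    if frac <= 0.0:
        return 0
    if frac < 0.05:
        return 1
    if frac < 0.25:
        return 2
    if frac < 0.50:
        return 3
    if frac < 0.75:
        return 4
    return 5

def total_severity(lobe_fractions):
    # five lobes, 0-5 points each -> total score between 0 and 25
    return sum(lobe_score(f) for f in lobe_fractions)

def youden_cutoff(scores, rtpcr_labels):
    # threshold maximizing sensitivity + specificity - 1 against the RT-PCR reference
    fpr, tpr, thresholds = roc_curve(rtpcr_labels, scores)
    return thresholds[np.argmax(tpr - fpr)]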
Role of Artificial Intelligence in COVID-19 Detection.
The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihood of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is also crucial in stopping the spread of the SARS-CoV-2 virus. Prior substantiation of artificial intelligence (AI) in various fields of science has encouraged researchers to further address this problem. Various medical imaging modalities including X-ray, computed tomography (CT) and ultrasound (US) using AI techniques have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review on state-of-the-art AI techniques applied with X-ray, CT, and US images to detect COVID-19. In this paper, we discuss approaches used by various authors and the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.
Sensors (Basel, Switzerland)
"2021-12-11T00:00:00"
[ "AnjanGudigar", "URaghavendra", "SnehaNayak", "Chui PingOoi", "Wai YeeChan", "Mokshagna RohitGangavarapu", "ChinmayDharmik", "JyothiSamanth", "Nahrizul AdibKadri", "KhairunnisaHasikin", "Prabal DattaBarua", "SubrataChakraborty", "Edward JCiaccio", "U RajendraAcharya" ]
10.3390/s21238045 10.1056/NEJMoa2001017 10.1056/NEJMoa2001316 10.1038/s41423-020-0402-2 10.1016/S1473-3099(20)30230-9 10.1128/JVI.00127-20 10.1016/j.cell.2020.02.052 10.1128/JVI.79.24.15511-15524.2005 10.1373/clinchem.2005.054460 10.1016/S2213-2600(20)30076-X 10.1016/j.clim.2020.108427 10.1016/S0140-6736(03)13410-1 10.1056/NEJMc2010419 10.1056/NEJMoa2002032 10.1001/jama.2020.3204 10.1053/j.gastro.2020.03.065 10.1016/j.cca.2020.03.009 10.1186/s42492-021-00078-w 10.1007/s10489-020-01862-6 10.3390/app11083414 10.1155/2021/5528144 10.1155/2021/6677314 10.1155/2020/9756518 10.1007/s42979-021-00605-9 10.1007/s42600-021-00135-6 10.1016/j.imu.2021.100564 10.1016/j.scs.2020.102589 10.1016/j.chaos.2020.110338 10.1016/j.ijsu.2010.02.007 10.1148/radiol.2020200330 10.1007/s10140-021-01905-6 10.1101/2020.04.24.20078584 10.5281/zenodo.3757476 10.3390/app11020672 10.1016/0010-4809(71)90034-6 10.1016/S0734-189X(85)90153-7 10.1016/0165-1684(94)90060-4 10.1007/s12559-020-09779-5 10.1016/j.eswa.2020.113909 10.1613/jair.953 10.1109/ACCESS.2020.2994762 10.1109/34.192463 10.1109/TSMC.1973.4309314 10.1111/1365-2478.12234 10.1016/j.chemolab.2020.104054 10.1109/cvpr.2005.177 10.1016/j.ijleo.2013.05.132 10.1155/2021/5544742 10.1371/journal.pone.0235187 10.1109/TPAMI.2002.1017623 10.1371/journal.pone.0250688 10.1109/CVPR.2016.90 10.1145/3065386 10.1167/17.10.296 10.1007/s11263-015-0816-y 10.1109/CVPR.2017.195 10.3390/sym12091526 10.1007/s42979-021-00690-w 10.1016/j.advengsoft.2017.07.002 10.1109/ACCESS.2021.3061058 10.1177/003754970107600201 10.1016/j.bspc.2020.102173 10.1016/j.advengsoft.2013.12.007 10.1007/s12559-021-09848-3 10.1038/s41598-020-71294-2 10.1109/TPAMI.2005.159 10.1016/j.csda.2004.07.026 10.1007/s00521-015-1920-1 10.1109/ACCESS.2020.3028012 10.3390/e22050517 10.1016/j.advengsoft.2016.01.008 10.1016/j.asoc.2021.107238 10.1109/TEVC.2008.919004 10.1007/s10096-020-03901-z 10.1109/34.709601 10.1080/00401706.1996.10484565 10.1007/BF00058655 10.1109/101.8118 10.1109/TSMCB.2011.2168604 10.1006/jcss.1997.1504 10.1016/j.jksuci.2020.12.010 10.1016/j.procs.2020.09.258 10.1016/j.imu.2020.100412 10.1016/j.imu.2020.100360 10.1016/j.cmpb.2020.105581 10.1016/j.ibmed.2020.100014 10.1016/j.compbiomed.2020.103792 10.1016/j.chaos.2020.110071 10.1016/j.cmpb.2020.105608 10.1016/j.bbe.2020.08.008 10.1016/j.ijmedinf.2020.104284 10.1016/j.media.2020.101794 10.1016/j.patrec.2020.09.010 10.1016/j.chaos.2020.109944 10.1155/2020/8828855 10.1155/2020/8889023 10.3390/ai1030027 10.3390/app10134640 10.3390/app10165683 10.3390/electronics9091388 10.3390/ijerph17186933 10.3390/info11090419 10.3390/jpm10040213 10.3390/proceedings2020054031 10.3390/sym12040651 10.3390/sym12091530 10.1007/s13246-020-00865-4 10.1007/s13246-020-00888-x 10.1088/1757-899X/982/1/012004 10.1371/journal.pone.0243963 10.1038/s41598-020-76550-z 10.1371/journal.pone.0242535 10.1177/2472630320958376 10.1109/TMI.2020.2993291 10.1109/ACCESS.2020.3010287 10.1109/ACCESS.2020.3025010 10.1109/ACCESS.2021.3077592 10.1155/2021/3277988 10.1155/2021/3604900 10.1155/2021/5513679 10.1155/2021/6621607 10.1155/2021/6658058 10.1155/2021/7804540 10.1155/2021/8828404 10.1155/2021/8829829 10.1155/2021/8890226 10.1155/2021/9929274 10.1155/2021/9989237 10.1016/j.radi.2020.10.018 10.1016/j.asoc.2021.107184 10.1016/j.imu.2020.100505 10.1016/j.imu.2020.100506 10.1016/j.neucom.2021.03.034 10.1016/j.compbiomed.2020.104181 10.3390/a14060183 10.3390/app11062884 10.3390/computation9010003 10.3390/diagnostics11050775 10.3390/diagnostics11050895 10.3390/math9091002 10.3390/s21041480 
10.3390/s21051742 10.3390/app11041424 10.3390/ijerph18158052 10.1007/s10489-020-01829-7 10.1007/s00530-021-00794-6 10.1007/s42600-021-00151-6 10.1007/s10044-021-00984-y 10.1134/S1054661821020140 10.1186/s41747-020-00203-z 10.1007/s42979-021-00496-w 10.1007/s42600-020-00120-5 10.1007/s10489-020-01888-w 10.1007/s12652-021-02917-3 10.1007/s00354-021-00121-7 10.1007/s10044-021-00970-4 10.1007/s12530-021-09385-2 10.1007/s40031-021-00589-3 10.1007/s12559-020-09774-w 10.1007/s10489-020-01902-1 10.1371/journal.pone.0247839 10.1371/journal.pone.0252573 10.1109/ACCESS.2021.3061621 10.1109/TCBB.2021.3066331 10.1109/JBHI.2021.3067333 10.1109/ACCESS.2021.3083516 10.1109/TNNLS.2021.3082015 10.1109/ACCESS.2021.3086229 10.1109/TNNLS.2021.3086570 10.1016/j.compbiomed.2020.103795 10.1016/j.imu.2020.100427 10.1155/2021/6649591 10.3390/diagnostics10110901 10.3390/e23020204 10.1007/s11548-020-02286-w 10.1007/s00521-020-05437-x 10.1007/s10489-020-02149-6 10.1109/TMI.2020.2996645 10.1109/TMI.2020.2995508 10.1109/JSEN.2020.3025855 10.1109/JBHI.2020.3030853 10.1016/j.compbiomed.2021.104356 10.1016/j.iot.2021.100377 10.1016/j.compbiomed.2021.104304 10.1016/j.ejrad.2021.109602 10.1016/j.neucom.2020.07.144 10.1016/j.irbm.2021.01.004 10.1016/j.patcog.2021.107828 10.1016/j.media.2020.101836 10.1016/j.compbiomed.2021.104306 10.1155/2021/5522729 10.1155/2021/5527271 10.1155/2021/5527923 10.1155/2021/5528441 10.1155/2021/5554408 10.1155/2021/6633755 10.1155/2021/8840835 10.1155/2021/9999368 10.1155/2021/6680455 10.3390/ai2020016 10.3390/diagnostics11020158 10.3390/diagnostics11050893 10.3390/ijerph18062842 10.3390/s21020455 10.3390/s21062215 10.1007/s10489-020-01826-w 10.1007/s00521-021-05910-1 10.1007/s10489-020-02002-w 10.1186/s43055-021-00524-y 10.1007/s10489-021-02292-8 10.1007/s10140-020-01886-y 10.1007/s13755-021-00140-0 10.1007/s00330-020-07087-y 10.1007/s11042-020-09894-3 10.3233/JIFS-201985 10.1371/journal.pone.0244416 10.1371/journal.pone.0249450 10.1371/journal.pone.0250952 10.1109/TBDATA.2021.3056564 10.1109/TNNLS.2021.3054746 10.1016/j.inffus.2021.02.013 10.1016/j.compbiomed.2021.104296 10.1016/j.chaos.2020.110190 10.1016/j.compbiomed.2021.104348 10.1007/s11042-021-10783-6 10.1016/j.bspc.2021.102490 10.1038/s41598-021-87523-1 10.1007/s10489-020-01831-z 10.1007/s40846-021-00630-2 10.1016/j.bbe.2021.05.013 10.1016/j.patcog.2021.107848 10.1016/j.bspc.2021.102602 10.1007/s10489-020-01943-6 10.3390/app11094233 10.1016/j.aej.2021.03.052 10.1007/s10489-020-02122-3 10.1109/ACCESS.2020.3016780 10.3390/ijerph18126499 10.1016/j.infrared.2019.103041 10.1016/j.cmpb.2019.105205 10.3389/fcvm.2021.638011 10.1016/j.ibmed.2020.100013 10.1016/j.media.2021.102046 10.1007/s00259-020-04953-1 10.1016/j.media.2021.102054 10.1016/j.patcog.2020.107747 10.3390/cancers13081960 10.1016/j.bspc.2021.102622 10.1002/ima.22552 10.1016/j.knosys.2021.107242 10.1002/jmv.26699 10.1016/j.imu.2020.100428 10.1016/j.patrec.2021.09.012 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.4065/mcp.2010.0260 10.1007/s00134-020-05996-6 10.1016/j.compbiomed.2021.104944 10.3390/diagnostics11111962
MHA-CoroCapsule: Multi-Head Attention Routing-Based Capsule Network for COVID-19 Chest X-Ray Image Classification.
The outbreak of COVID-19 threatens the lives and property safety of countless people and puts tremendous pressure on health care systems worldwide. The principal challenge in the fight against this disease is the lack of efficient detection methods. AI-assisted diagnosis based on deep learning can detect COVID-19 cases from chest X-ray images automatically and also improve the accuracy and efficiency of doctors' diagnoses. However, large-scale annotation of chest X-ray images is difficult because of limited resources and the heavy burden on the medical system. To meet this challenge, we propose a capsule network model with a multi-head attention routing algorithm, called MHA-CoroCapsule, to provide fast and accurate diagnosis of COVID-19 from chest X-ray images. MHA-CoroCapsule consists of convolutional layers and two capsule layers, and a non-iterative, parameterized multi-head attention routing algorithm is used to quantify the relationship between the two capsule layers. The experiments are performed on a combined dataset constituted from two publicly available datasets including normal, non-COVID pneumonia and COVID-19 images. The model achieves an accuracy of 97.28%, recall of 97.36%, and precision of 97.38% even with a limited number of samples. The experimental results demonstrate that, in contrast to transfer learning and deep feature extraction approaches, the proposed MHA-CoroCapsule achieves encouraging performance with fewer trainable parameters and does not require pretraining or plenty of training samples.
IEEE transactions on medical imaging
"2021-12-10T00:00:00"
[ "FudongLi", "XingyuLu", "JianjunYuan" ]
10.1109/TMI.2021.3134270
Application of CycleGAN and transfer learning techniques for automated detection of COVID-19 using X-ray images.
Coronavirus disease (COVID-19) is severely impacting the wellness and lives of many across the globe. Several methods are currently used to detect and monitor the progress of the disease, such as radiological imaging of patients' chests, assessment of symptoms, and the reverse-transcription polymerase chain reaction (RT-PCR) test. X-ray imaging is one of the popular techniques used to visualise the impact of the virus on the lungs. Although manual detection of this disease from radiology images is more common, it can be time-consuming and is prone to human error. Hence, automated detection of lung pathologies due to COVID-19 utilising deep learning (DL) techniques can assist with yielding accurate results for huge databases. Large volumes of data are needed to achieve generalizable DL models; however, there are very few public databases available for detecting COVID-19 disease pathologies automatically. Standard data augmentation methods can be used to enhance the models' generalizability. In this research, the Extensive COVID-19 X-ray and CT Chest Images Dataset has been used, and a generative adversarial network (GAN) coupled with a trained, semi-supervised CycleGAN (SSA-CycleGAN) has been applied to augment the training dataset. Then a newly designed and fine-tuned Inception V3 transfer learning model has been developed to train the algorithm for detecting COVID-19. The obtained results from the proposed Inception-CycleGAN model indicated Accuracy = 94.2%, Area under Curve = 92.2%, Mean Squared Error = 0.27, Mean Absolute Error = 0.16. The developed Inception-CycleGAN framework is ready to be tested with further COVID-19 X-ray images of the chest.
Pattern recognition letters
"2021-12-09T00:00:00"
[ "GhazalBargshady", "XujuanZhou", "Prabal DattaBarua", "RajGururajan", "YuefengLi", "U RajendraAcharya" ]
10.1016/j.patrec.2021.11.020 10.1148/radiol.2020200905 10.1016/j.compag.2019.01.041 10.1007/978-1-4842-2766-4_7
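As a rough illustration of the augment-then-fine-tune workflow described above: synthetic images from an already-trained generator (here a hypothetical CycleGAN generator G) are mixed into the training pool, and an ImageNet-pretrained Inception V3 receives a new two-class head. This is a sketch under stated assumptions, not the authors' code.

# GAN-based augmentation followed by Inception V3 head replacement (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

def augment_with_gan(G, real_images, n_fake):
    # G is assumed to be an already-trained CycleGAN generator (hypothetical object).
    G.eval()
    with torch.no_grad():
        idx = torch.randint(0, real_images.size(0), (n_fake,))
        fake = G(real_images[idx])            # synthetic COVID-style chest X-rays
    return torch.cat([real_images, fake], dim=0)

model = models.inception_v3(weights="IMAGENET1K_V1", aux_logits=True)  # expects 299x299 inputs
model.fc = nn.Linear(model.fc.in_features, 2)                          # COVID vs. non-COVID head
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)
# The mixed pool of real and GAN-generated images would then be used for standard
# fine-tuning with CrossEntropyLoss, as in the earlier transfer-learning sketch.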
Deep learning classification of COVID-19 in chest radiographs: performance and influence of supplemental training.
Journal of medical imaging (Bellingham, Wash.)
"2021-12-07T00:00:00"
[ "Rafael BFricks", "FrancescoRia", "HamidChalian", "PegahKhoshpouri", "EhsanAbadi", "LorenzoBianchi", "William PSegars", "EhsanSamei" ]
10.1117/1.JMI.8.6.064501 10.1016/S1473-3099(20)30120-1 10.1148/radiol.2020200432 10.1148/radiol.2020200823 10.1148/radiol.2020200642 10.1148/radiol.2020201365 10.1148/radiol.2020200905 10.1118/1.4859315 10.1148/radiol.2020200905 10.1148/radiol.2020201874 10.1148/radiol.2020200230 10.1038/s41598-019-42294-8 10.1109/CVPR.2017.243 10.1109/CVPR.2009.5206848 10.1109/ICCV.2015.123 10.2307/2531595 10.1109/LSP.2014.2337313 10.7717/peerj.10387 10.1109/ICCV.2017.74 10.1016/S0197-2456(00)00097-0 10.1371/journal.pmed.1002686 10.1016/j.inffus.2021.04.008 10.3390/ijerph17186933
A Novel and Robust Approach to Detect Tuberculosis Using Transfer Learning.
Deep learning has emerged as a promising technique for a variety of elements of infectious disease monitoring and detection, including
Journal of healthcare engineering
"2021-12-07T00:00:00"
[ "OmarFaruk", "EshanAhmed", "SakilAhmed", "AnikaTabassum", "TahiaTazin", "SamiBourouis", "MohammadMonirujjaman Khan" ]
10.1155/2021/1002799 10.1109/ECS.2015.7124909 10.1164/art.1949.60.4.466 10.1186/1471-2334-5-111 10.1109/icsipa.2017.8120663 10.1038/s41598-019-42557-4 10.1007/978-981-15-0339-9_13 10.1109/ACCESS.2020.3031384 10.11591/ijai.v8.i4.pp429-435 10.1016/j.bbe.2014.08.002 10.1109/coase.2009.5234173 10.37200/ijpr/v24i5/pr2020283 10.1016/j.neunet.2019.04.025 10.3902/jnns.24.3 10.1109/CVPR.2016.308 10.1109/CVPR.2016.90 10.4171/zaa/1156 10.1007/s11042-019-7233-0 10.18178/ijmlc.2021.11.2.1023 10.1038/s41698-017-0029-7 10.14738/tmlai.24.328 10.1109/CHASE.2016.18
Detection of COVID-19 With CT Images Using Hybrid Complex Shearlet Scattering Networks.
With the ongoing worldwide coronavirus disease 2019 (COVID-19) pandemic, it is desirable to develop effective algorithms to automatically detect COVID-19 with chest computed tomography (CT) images. Recently, a considerable number of methods based on deep learning have indeed been proposed. However, training an accurate deep learning model requires a large-scale chest CT dataset, which is hard to collect due to the high contagiousness of COVID-19. To achieve improved detection performance, this paper proposes a hybrid framework that fuses the complex shearlet scattering transform (CSST) and a suitable convolutional neural network into a single model. The introduced CSST cascades complex shearlet transforms with modulus nonlinearities and low-pass filter convolutions to compute a sparse and locally invariant image representation. The features computed from the input chest CT images are discriminative for COVID-19 detection. Furthermore, a wide residual network with a redesigned residual block (WR2N) is developed to learn more granular multiscale representations by applying it to scattering features. The combination of model-based CSST and data-driven WR2N leads to a more convenient neural network for image representation, where the idea is to learn only the image parts that the CSST cannot handle instead of all parts. Experiments on two public datasets demonstrate the superiority of our method. We can obtain more accurate results than several state-of-the-art COVID-19 classification methods in terms of measures such as accuracy, the F1-score, and the area under the receiver operating characteristic curve.
IEEE journal of biomedical and health informatics
"2021-12-03T00:00:00"
[ "QingyunRen", "BingyinZhou", "LiangTian", "WeiGuo" ]
10.1109/JBHI.2021.3132157
Validation of expert system enhanced deep learning algorithm for automated screening for COVID-Pneumonia on chest X-rays.
The SARS-CoV-2 pandemic exposed the limitations of artificial intelligence based medical imaging systems. Earlier in the pandemic, the absence of sufficient training data prevented effective deep learning (DL) solutions for the diagnosis of COVID-19 based on X-ray data. Here, addressing the lacunae in the existing literature and algorithms caused by the paucity of initial training data, we describe CovBaseAI, an explainable tool using an ensemble of three DL models and an expert decision system (EDS) for COVID-Pneumonia diagnosis, trained entirely on pre-COVID-19 datasets. The performance and explainability of CovBaseAI were primarily validated on two independent datasets. The first comprised 1401 randomly selected CxRs from an Indian quarantine center, used to assess effectiveness in excluding radiological COVID-Pneumonia requiring higher care. The second was a curated dataset of 434 RT-PCR-positive cases and 471 non-COVID/normal historical scans, used to assess performance in advanced medical settings. CovBaseAI had an accuracy of 87% with a negative predictive value of 98% on the quarantine-center data; however, sensitivity was 0.66-0.90 taking RT-PCR/radiologist opinion as ground truth. This work provides new insights into the usage of EDS with DL methods and the ability of algorithms to confidently predict COVID-Pneumonia, while reinforcing the established learning that benchmarking based on RT-PCR may not serve as reliable ground truth in radiological diagnosis. Such tools can pave the path for multi-modal high-throughput detection of COVID-Pneumonia in screening and referral.
Scientific reports
"2021-12-03T00:00:00"
[ "Prashant SadashivGidde", "Shyam SunderPrasad", "Ajay PratapSingh", "NitinBhatheja", "SatyarthaPrakash", "PrateekSingh", "AakashSaboo", "RohitTakhar", "SalilGupta", "SumeetSaurav", "RaghunandananM V", "AmritpalSingh", "VirenSardana", "HarshMahajan", "ArjunKalyanpur", "Atanendu ShekharMandal", "VidurMahajan", "AnuragAgrawal", "AnjaliAgrawal", "Vasantha KumarVenugopal", "SanjaySingh", "DebasisDash" ]
10.1038/s41598-021-02003-w 10.1001/jama.2020.3786 10.1101/2020.04.04.20052241 10.1148/radiol.2020201160 10.1016/j.clinimag.2020.04.001 10.1186/s43055-019-0116-6 10.1016/j.cmpb.2020.105608 10.1016/j.chaos.2020.110190 10.1016/j.compbiomed.2020.103792 10.1101/2020.04.12.20062661 10.1109/access.2020.3010287 10.1038/s42256-021-00307-0 10.1109/TPAMI.2016.2577031 10.1109/ACCESS.2020.3044858 10.1016/j.media.2021.102046 10.1016/j.compbiomed.2020.103869 10.1016/j.cmpb.2020.105581 10.1186/s12938-020-00831-x 10.1109/TMI.1983.4307610 10.1016/S0140-6736(20)30183-5 10.1289/ehp.8377
Trends in the application of deep learning networks in medical image analysis: Evolution between 2012 and 2020.
To evaluate the general rules and future trajectories of deep learning (DL) networks in medical image analysis through bibliometric and hot spot analysis of original articles published between 2012 and 2020. Original articles related to DL and medical imaging were retrieved from the PubMed database. For the analysis, data regarding radiological subspecialties; imaging techniques; DL networks; sample size; study purposes, setting, origins and design; statistical analysis; funding sources; authors; and first authors' affiliation was manually extracted from each article. The Bibliographic Item Co-Occurrence Matrix Builder and VOSviewer were used to identify the research topics of the included articles and illustrate the future trajectories of studies. The study included 2685 original articles. The number of publications on DL and medical imaging has increased substantially since 2017, accounting for 97.2% of all included articles. We evaluated the rules of the application of 47 DL networks to eight radiological tasks on 11 human organ sites. Neuroradiology, thorax, and abdomen were frequent research subjects, while thyroid was under-represented. Segmentation and classification tasks were the primary purposes. U-Net, ResNet, and VGG were the most frequently used Convolutional neural network-derived networks. GAN-derived networks were widely developed and applied in 2020, and transfer learning was highlighted in the COVID-19 studies. Brain, prostate, and diabetic retinopathy-related studies were mature research topics in the field. Breast- and lung-related studies were in a stage of rapid development. This study evaluates the general rules and future trajectories of DL network application in medical image analyses and provides guidance for future studies.
European journal of radiology
"2021-12-01T00:00:00"
[ "LuWang", "HairuiWang", "YingnaHuang", "BaihuiYan", "ZhihuiChang", "ZhaoyuLiu", "MingfangZhao", "LeiCui", "JiangdianSong", "FanLi" ]
10.1016/j.ejrad.2021.110069
Computer-aided COVID-19 diagnosis and a comparison of deep learners using augmented CXRs.
Coronavirus Disease 2019 (COVID-19) is a contagious respiratory tract infection caused by a newly discovered coronavirus. Its death toll is high, and early diagnosis is the main problem nowadays. Infected people show a variety of symptoms such as fatigue, fever, loss of taste, and dry cough, and some symptoms may also be manifested in radiographic images. Therefore, Chest X-Rays (CXR) play a key role in the diagnosis of COVID-19. In this study, we use chest X-ray images to develop a computer-aided diagnosis (CAD) system for the disease. These images are used to train two deep networks: the Convolutional Neural Network (CNN) and the Long Short-Term Memory Network (LSTM), which is an artificial Recurrent Neural Network (RNN). The proposed study involves three phases. First, the CNN model is trained on raw CXR images. Next, it is trained on pre-processed CXR images, and finally enhanced CXR images are used for deep network CNN training. Geometric transformations, color transformations, image enhancement, and noise injection techniques are used for augmentation. From augmentation, we obtain 3,220 augmented CXRs as the training dataset. In the final phase, the CNN is used to extract features from the CXR images, which are fed to the LSTM model. The performance of the four trained models is evaluated using accuracy, specificity, sensitivity, false-positive rate, and the receiver operating characteristic (ROC) curve, and we compare our results with other benchmark CNN models. Our proposed CNN-LSTM model gives superior accuracy (99.02%) compared with the other state-of-the-art models. Our method of improving the input helped the CNN model to produce a very high true-positive rate (TPR 1) and no false-negative results, whereas false negatives were a major problem when using raw CXR images. We conclude, after performing different experiments, that image pre-processing and augmentation remarkably improve the results of CNN-based models. This will support better early detection of the disease, which will eventually reduce the mortality rate of COVID-19.
Journal of X-ray science and technology
"2021-11-30T00:00:00"
[ "AsmaNaseer", "MariaTamoor", "ArifahAzhar" ]
10.3233/XST-211047 10.1155/2020/3263407
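One common way to wire a CNN front-end into an LSTM, shown only as an illustrative layout (the authors' exact network is not reproduced here), is to treat the CNN feature map's spatial grid as a sequence of feature vectors:

# Schematic CNN-LSTM classifier for chest X-rays (toy sizes; assumptions, not the paper's model).
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                                 # x: (N, 1, H, W)
        f = self.cnn(x)                                   # (N, 32, H/4, W/4)
        n, c, h, w = f.shape
        seq = f.permute(0, 2, 3, 1).reshape(n, h * w, c)  # spatial grid as a sequence
        _, (hidden, _) = self.lstm(seq)
        return self.head(hidden[-1])                      # class logits

# logits = CNNLSTM()(torch.randn(4, 1, 128, 128))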
Data augmentation using Generative Adversarial Networks (GANs) for GAN-based detection of Pneumonia and COVID-19 in chest X-ray images.
Successful training of convolutional neural networks (CNNs) requires a substantial amount of data. With small datasets, networks generalize poorly. Data Augmentation techniques improve the generalizability of neural networks by using existing training data more effectively. Standard data augmentation methods, however, produce limited plausible alternative data. Generative Adversarial Networks (GANs) have been utilized to generate new data and improve the performance of CNNs. Nevertheless, data augmentation techniques for training GANs are underexplored compared to CNNs. In this work, we propose a new GAN architecture for augmentation of chest X-rays for semi-supervised detection of pneumonia and COVID-19 using generative models. We show that the proposed GAN can be used to effectively augment data and improve classification accuracy of disease in chest X-rays for pneumonia and COVID-19. We compare our augmentation GAN model with Deep Convolutional GAN and traditional augmentation methods (rotate, zoom, etc.) on two different X-ray datasets and show our GAN-based augmentation method surpasses other augmentation methods for training a GAN in detecting anomalies in X-ray images.
Informatics in medicine unlocked
"2021-11-30T00:00:00"
[ "SamanMotamed", "PatrikRogalla", "FarzadKhalvati" ]
10.1016/j.imu.2021.100779
Medical image processing and COVID-19: A literature review and bibliometric analysis.
The COVID-19 crisis has placed medical systems around the world under unprecedented and growing pressure. Medical image processing can help in the diagnosis, treatment, and early detection of diseases and has been considered one of the modern technologies applied to fight the COVID-19 crisis. Although several artificial intelligence, machine learning, and deep learning techniques have been deployed for medical image processing in the context of COVID-19 disease, there is a lack of research providing a systematic literature review and categorization of the published studies in this field. A systematic review locates, assesses, and interprets research outcomes to address a predetermined research goal and to present evidence-based practical and theoretical insights. The main goal of this study is to present a literature review of the methods of medical image processing deployed in the context of the COVID-19 crisis. With this in mind, the studies available in reliable databases were retrieved, studied, evaluated, and synthesized. Based on an in-depth review of the literature, this study structured a conceptual map outlining three multi-layered folds: data gathering and description, the main steps of image processing, and evaluation metrics. The main research themes were elaborated in each fold, allowing the authors to recommend upcoming research paths for scholars. The outcomes of this review highlight that several methods have been adopted to classify images related to the diagnosis and detection of COVID-19, and the adopted methods have shown promising outcomes in terms of accuracy, cost, and detection speed.
Journal of infection and public health
"2021-11-28T00:00:00"
[ "Rabab AliAbumalloh", "MehrbakhshNilashi", "MuhammedYousoof Ismail", "AshwaqAlhargan", "AbdullahAlghamdi", "Ahmed OmarAlzahrani", "LinahSaraireh", "ReemOsman", "ShahlaAsadi" ]
10.1016/j.jiph.2021.11.013 10.1007/s10209-018-0618-4 10.1016/j.infsof.2008.09.009 10.1016/j.asoc.2016.04.020 10.1016/j.ijinfomgt.2016.06.005 10.1016/j.jksuci.2021.01.007
Can Deep Learning-Based Volumetric Analysis Predict Oxygen Demand Increase in Patients with COVID-19 Pneumonia?
Medicina (Kaunas, Lithuania)
"2021-11-28T00:00:00"
[ "MarieTakahashi", "TomoyukiFujioka", "ToshihiroHorii", "KoichiroKimura", "MizukiKimura", "YurikaHashimoto", "YoshioKitazume", "MitsuhiroKishino", "UkihideTateishi" ]
10.3390/medicina57111148 10.7326/M20-3012 10.1056/NEJMsb2005114 10.1016/j.rmed.2020.105941 10.1007/s00038-020-01390-7 10.1148/radiol.2020202504 10.1148/radiol.2020200843 10.1148/radiol.2020200527 10.1148/radiol.2020200370 10.1007/s00330-020-06978-4 10.1007/s11604-020-01012-5 10.1007/s11604-019-00831-5 10.3390/diagnostics10050330 10.1148/rg.2017170077 10.1016/j.mri.2020.10.003 10.1186/s43055-020-00309-9 10.1148/radiol.2020202439 10.1007/s00330-020-07044-9 10.5152/dir.2019.20294 10.1016/j.media.2020.101836 10.1038/bmt.2012.244 10.1371/journal.pone.0251946 10.7150/thno.45985 10.1016/j.acra.2020.09.004
COVID-19 Detection Using Deep Learning Algorithm on Chest X-ray Images.
COVID-19, regarded as the deadliest virus of the 21st century, has claimed the lives of millions of people around the globe in less than two years. Since the virus initially affects the lungs of patients, X-ray imaging of the chest is helpful for effective diagnosis. Any method for automatic, reliable, and accurate screening of COVID-19 infection would be beneficial for rapid detection and for reducing medical or healthcare professional exposure to the virus. In the past, Convolutional Neural Networks (CNNs) proved to be quite successful in the classification of medical images. In this study, an automatic deep learning classification method for detecting COVID-19 from chest X-ray images is suggested using a CNN. A dataset consisting of 3616 COVID-19 chest X-ray images and 10,192 healthy chest X-ray images was used. The original data were then augmented to increase the data sample to 26,000 COVID-19 and 26,000 healthy X-ray images. The dataset was enhanced using histogram equalization and spectrum, grays, and cyan transformations, and normalized with NCLAHE, before being applied to the CNN models. Initially, the symptoms of COVID-19 were detected by employing eleven existing CNN models: VGG16, VGG19, MobileNetV2, InceptionV3, NFNet, ResNet50, ResNet101, DenseNet, EfficientNetB7, AlexNet, and GoogLeNet. From these models, MobileNetV2 was selected for further modification to obtain a higher accuracy of COVID-19 detection. Performance evaluation of the models was demonstrated using a confusion matrix. It was observed that the modified MobileNetV2 model proposed in the study gave the highest accuracy of 98% in classifying COVID-19 and healthy chest X-rays among all the implemented CNN models. The second-best performance was achieved by the pre-trained MobileNetV2 with an accuracy of 97%, followed by VGG19 and ResNet101, both with 95% accuracy. The study also compares the compilation time of the models; the proposed model required the least compilation time, with 2 h, 50 min and 21 s. Finally, the Wilcoxon signed-rank test was performed to test the statistical significance. The results suggest that the proposed method can identify the symptoms of infection from chest X-ray images more efficiently than existing methods.
Biology
"2021-11-28T00:00:00"
[ "ShamimaAkter", "F M Javed MehediShamrat", "SovonChakraborty", "AsifKarim", "SamiAzam" ]
10.3390/biology10111174 10.1016/j.ijid.2020.01.050 10.1016/j.jaut.2020.102433 10.1016/S0140-6736(20)30211-7 10.1038/s41586-020-2008-3 10.1056/NEJMoa2002032 10.1001/jama.2020.1585 10.1016/S0140-6736(20)30185-9 10.1056/NEJMoa2001017 10.1056/NEJMoa2001316 10.1056/NEJMoa2001191 10.1056/NEJMra1312885 10.11591/ijece.v11i3.pp2631-2639 10.1109/ACCESS.2020.3017082 10.11591/ijeecs.v23.i1.pp463-470 10.3390/app11167174 10.1088/0031-9155/60/7/2715 10.1088/0031-9155/60/10/4015 10.1016/j.cmpb.2018.05.006 10.1016/j.cmpb.2018.04.011 10.1016/j.cmpb.2019.01.005 10.1049/iet-ipr.2016.0526 10.1016/j.jviromet.2020.113974 10.1148/radiol.2020200330 10.1136/bmj.m641 10.1148/radiol.2020200527 10.1148/radiol.2020200343 10.1136/bmjopen-2020-047110 10.1183/09031936.01.00213501 10.1148/ryct.2020200034 10.1007/s13246-020-00865-4 10.1007/s10044-021-00984-y 10.1038/s41746-020-0273-z 10.1016/j.media.2020.101794 10.1016/j.patrec.2020.09.010 10.3390/make2040027 10.1016/j.compbiomed.2020.103792 10.1016/j.cmpb.2020.105581 10.1016/j.eswa.2020.113909 10.1177/2472630320958376 10.1016/j.radi.2020.10.018 10.1109/ACCESS.2020.3044858 10.1109/TNNLS.2021.3070467 10.1038/s41598-020-76550-z 10.1007/s10489-020-01829-7 10.1016/j.chaos.2020.110122 10.1016/j.irbm.2020.07.001 10.1109/ACCESS.2020.2974242 10.1016/j.eswa.2020.114054 10.1016/j.asoc.2021.107878 10.1609/aaai.v33i01.3301801 10.14569/IJACSA.2021.0120880 10.1111/j.0006-341X.2003.00125.x
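A small sketch of CLAHE-style contrast enhancement followed by swapping MobileNetV2's classification head for fine-tuning; the clip limit, tile size, and two-class head are illustrative defaults, and the study's NCLAHE normalization details are not reproduced here.

# CLAHE-style enhancement plus MobileNetV2 head replacement (illustrative defaults).
import cv2
import numpy as np
import torch.nn as nn
from torchvision import models

def clahe_normalize(gray_u8, clip=2.0, tiles=(8, 8)):
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    enhanced = clahe.apply(gray_u8)                  # uint8 single-channel chest X-ray
    return enhanced.astype(np.float32) / 255.0       # scale to [0, 1]

model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.last_channel, 2)   # COVID-19 vs. healthy head
# Fine-tuning then proceeds as in the earlier transfer-learning sketch.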
Segmentation of infected region in CT images of COVID-19 patients based on QC-HC U-net.
Since the outbreak of COVID-19 in 2019, the rapid spread of the epidemic has brought huge challenges to medical institutions. If the pathological region in a COVID-19 CT image can be automatically segmented, it will help doctors quickly determine the patient's infection, thereby speeding up the diagnosis process. To automatically segment the infected area, we proposed a new network structure, which we named QC-HC U-Net. First, we combine residual connections and dense connections to form a new connection method and apply it to both the encoder and the decoder. Second, we add hypercolumns in the decoder section. Compared with the benchmark 3D U-Net, the improved network can effectively avoid vanishing gradients while extracting more features. To address the issue of insufficient data, resampling and data augmentation methods were selected in this paper to expand the datasets. We used 63 cases of MSD lung tumor data for training and testing, continuously verifying the training effect of this model, and then selected 20 cases of public COVID-19 data for training and testing. Experimental results showed that in the segmentation of COVID-19, the specificity and sensitivity were 85.3% and 83.6%, respectively, and in the segmentation of MSD lung tumors, the specificity and sensitivity were 81.45% and 80.93%, respectively, without any overfitting.
Scientific reports
"2021-11-26T00:00:00"
[ "QinZhang", "XiaoqiangRen", "BenzhengWei" ]
10.1038/s41598-021-01502-0 10.1001/jama.2020.1585 10.1002/mp.13865 10.1109/TBDATA.2021.3056564 10.1016/j.aej.2021.01.011 10.1007/s11036-020-01703-3 10.1002/mp.14676 10.1007/s10489-020-01826-w 10.1016/j.asoc.2020.106580 10.1109/TMI.2020.2996645 10.1109/TMI.2020.2995965 10.1109/TMI.2019.2894349 10.32604/cmc.2021.016698 10.1007/s00330-019-06441-z 10.1007/s10462-019-09716-5 10.1016/j.compmedimag.2018.01.006 10.32604/cmc.2021.017433
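To illustrate the connection scheme described above, the toy block below mixes a dense (concatenative) link with a residual (additive) shortcut; it is a 2D simplification under assumed channel sizes, not the QC-HC U-Net itself.

# Toy block mixing residual and dense connections (illustrative; the paper works on 3D volumes).
import torch
import torch.nn as nn

class ResDenseBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        f1 = self.act(self.conv1(x))
        dense_in = torch.cat([x, f1], dim=1)   # dense connection: reuse earlier features
        f2 = self.conv2(dense_in)
        return self.act(f2 + x)                 # residual connection: additive shortcut

# y = ResDenseBlock(32)(torch.randn(1, 32, 64, 64))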
Factors determining generalization in deep learning models for scoring COVID-CT images.
The COVID-19 pandemic has inspired unprecedented data collection and computer vision modelling efforts worldwide, focused on the diagnosis of COVID-19 from medical images. However, these models have found limited, if any, clinical application due in part to unproven generalization to data sets beyond their source training corpus. This study investigates the generalizability of deep learning models using publicly available COVID-19 Computed Tomography data through cross dataset validation. The predictive ability of these models for COVID-19 severity is assessed using an independent dataset that is stratified for COVID-19 lung involvement. Each inter-dataset study is performed using histogram equalization, and contrast limited adaptive histogram equalization with and without a learning Gabor filter. We show that under certain conditions, deep learning models can generalize well to an external dataset with F1 scores up to 86%. The best performing model shows predictive accuracy of between 75% and 96% for lung involvement scoring against an external expertly stratified dataset. From these results we identify key factors promoting deep learning generalization, being primarily the uniform acquisition of training images, and secondly diversity in CT slice position.
Mathematical biosciences and engineering : MBE
"2021-11-25T00:00:00"
[ "Michael JamesHorry", "SubrataChakraborty", "BiswajeetPradhan", "MaryamFallahpoor", "HosseinChegeni", "ManoranjanPaul" ]
10.3934/mbe.2021456