Columns: title (string, 2–287 chars) · abstract (string, 0–5.14k chars) · journal (string, 4–184 chars) · date (unknown) · authors (sequence, 1–57 entries) · doi (string, 16–6.63k chars)
Drr4covid: Learning Automated COVID-19 Infection Segmentation From Digitally Reconstructed Radiographs.
Automated infection measurement and COVID-19 diagnosis based on Chest X-ray (CXR) imaging is important for faster examination, where infection segmentation is an essential step for assessment and quantification. However, due to the heterogeneity of X-ray imaging and the difficulty of annotating infected regions precisely, learning automated infection segmentation on CXRs remains a challenging task. We propose a novel approach, called DRR4Covid, to learn COVID-19 infection segmentation on CXRs from digitally reconstructed radiographs (DRRs). DRR4Covid consists of an infection-aware DRR generator, a segmentation network, and a domain adaptation module. Given a labeled Computed Tomography scan, the infection-aware DRR generator can produce infection-aware DRRs with pixel-level annotations of infected regions for training the segmentation network. The domain adaptation module is designed to enable the segmentation network trained on DRRs to generalize to CXRs. Statistical analyses of the experimental results indicate that our infection-aware DRRs are significantly better than standard DRRs for learning COVID-19 infection segmentation (p < 0.05) and that the domain adaptation module significantly improves infection segmentation performance on CXRs (p < 0.05). Without using any annotations of CXRs, our network achieved a classification score of (Accuracy: 0.949, AUC: 0.987, F1-score: 0.947) and a segmentation score of (Accuracy: 0.956, AUC: 0.980, F1-score: 0.955) on a test set with 558 normal cases and 558 positive cases. In addition, by adjusting the strength of the radiological signs of COVID-19 infection in infection-aware DRRs, we estimate the detection limit of X-ray imaging in detecting COVID-19 infection. The estimated detection limit, measured by the percent volume of the lung that is infected by COVID-19, is 19.43% ± 16.29%, and the estimated lower bound of the infected voxel contribution rate for significant radiological signs of COVID-19 infection is 20.0%.
Our codes are made publicly available at https://github.com/PengyiZhang/DRR4Covid.
IEEE access : practical innovations, open solutions
"2021-11-24T00:00:00"
[ "PengyiZhang", "YunxinZhong", "YulinDeng", "XiaoyingTang", "XiaoqiongLi" ]
10.1109/ACCESS.2020.3038279 10.1016/S0140-6736(20)30260-9 10.1016/S2213-2600(20)30076-X 10.1016/S1473-3099(20)30086-4 10.1109/TMI.2020.2991954 10.1109/TMI.2020.2993291 10.1109/RBME.2020.2987975 10.1118/1.595715 10.1016/0360-3016(90)90074-T 10.1109/TMI.2005.856749 10.1148/radiol.2020201160 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1088/0031-9155/45/10/305 10.1007/s13246-014-0257-x 10.1118/1.3190156 10.1007/s11548-019-02011-2 10.1109/TPAMI.2019.2903401 10.1109/TNNLS.2020.2988928 10.1016/j.neunet.2019.07.010 10.1371/journal.pone.0130140
2019 Novel Coronavirus-Infected Pneumonia on CT: A Feasibility Study of Few-Shot Learning for Computerized Diagnosis of Emergency Diseases.
COVID-19 is an emerging disease with high transmissibility and severity. So far, there are no effective therapeutic drugs or vaccines for COVID-19. The most serious complication of COVID-19 is a type of pneumonia called 2019 novel coronavirus-infected pneumonia (NCIP), with a mortality rate of about 4.3%. Compared with chest Digital Radiography (DR), chest Computed Tomography (CT) has recently been reported to be more useful as an early screening and diagnosis tool for NCIP. In this study, aiming to help physicians make diagnostic decisions, we develop a machine learning (ML) approach for automated diagnosis of NCIP on chest CT. Unlike most ML approaches, which often require training on thousands or millions of samples, we design a few-shot learning approach that combines few-shot learning with weakly supervised model training for computerized NCIP diagnosis. A total of 824 patients were retrospectively collected from two hospitals with IRB approval. We first use 9 patients with clinically confirmed NCIP and 20 patients without known lung diseases to train a location detector, a multitask deep convolutional neural network (DCNN) designed to output a probability of NCIP and a segmentation of the targeted lesion area. An experienced radiologist manually localized the potential locations of NCIP on the chest CTs of the 9 COVID-19 patients and interactively segmented the NCIP lesion areas as the reference standard. The multitask DCNN is then further fine-tuned by a weakly supervised learning scheme with 291 case-level labeled samples without lesion labels. A test set of 293 patients was independently collected for evaluation. With our NCIP-Net, the test AUC is 0.91. Our system has the potential to serve as a screening and diagnosis tool for NCIP in the fight against the COVID-19 pandemic.
IEEE access : practical innovations, open solutions
"2021-11-24T00:00:00"
[ "YaomingLai", "GuangmingLi", "DongmeiWu", "WanminLian", "ChengLi", "JunzhangTian", "XiaofenMa", "HuiChen", "WenXu", "JunWei", "YaqinZhang", "GuihuaJiang" ]
10.1109/ACCESS.2020.3033069 10.1056/NEJMoa2001017 10.1136/bmj.m606 10.1093/cid/ciaa247
UMLF-COVID: an unsupervised meta-learning model specifically designed to identify X-ray images of COVID-19 patients.
With the rapid spread of COVID-19 worldwide, quick screening of possible COVID-19 patients has become the focus of international researchers. Recently, many deep learning-based Computed Tomography (CT) image/X-ray image fast screening models for potential COVID-19 patients have been proposed. However, the existing models still have two main problems. First, most of the existing supervised models are based on pre-trained model parameters. The pre-trained model needs to be constructed on a dataset with features similar to those in COVID-19 X-ray images, which limits the construction and use of the model. Second, the categories in X-ray datasets of COVID-19 and other pneumonia patients are usually imbalanced. In addition, image quality is difficult to assess, leading to non-ideal results from existing models on the multi-class COVID-19 recognition task. Moreover, no COVID-19 X-ray image learning model based on unsupervised meta-learning has previously been proposed. This paper constructs the first unsupervised meta-learning model for fast screening of COVID-19 patients (UMLF-COVID). This model does not require a pre-trained model, which removes that constraint on model construction, and the proposed unsupervised meta-learning framework addresses the problems of sample imbalance and sample quality. The UMLF-COVID model was tested on two real datasets, building a three-category and a four-category model on each. The experimental results show that the accuracy of the UMLF-COVID model is 3-10% higher than that of existing models. In summary, we believe that the UMLF-COVID model is a good complement to COVID-19 X-ray fast screening models.
BMC medical imaging
"2021-11-24T00:00:00"
[ "RuiMiao", "XinDong", "Sheng-LiXie", "YongLiang", "Sio-LongLo" ]
10.1186/s12880-021-00704-2
A multitask dual-stream attention network for the identification of KRAS mutation in colorectal cancer.
Accurately identifying KRAS gene mutation status is of great significance for tumor prognosis and personalized treatment. Although computer-aided diagnosis systems based on deep learning have developed rapidly, their performance still cannot meet current clinical requirements due to the inherent limitations of small-scale medical image datasets and inaccurate lesion feature extraction. Our aim is therefore to propose a deep learning model based on T2 MRI of colorectal cancer (CRC) patients to identify whether the KRAS gene is mutated. In this research, a multitask attention model is proposed to identify KRAS gene mutations in patients; it is mainly composed of a segmentation subnetwork and an identification subnetwork. Specifically, the features extracted by the encoder of the segmentation model are first used as guidance information for the two attention modules in the identification network, for precise activation of the lesion area. Then the original image of the lesion and the segmentation result are concatenated for feature extraction. Finally, the features extracted in the second step are combined with the features activated by the attention modules to identify the gene mutation status. In this process, we introduce an interlayer loss function to encourage similarity between the two subnetworks' parameters and to ensure that the key features are fully extracted, alleviating to some extent the overfitting caused by the small dataset. The proposed identification model is benchmarked primarily using 15-fold cross validation. Three hundred and eighty-two images from 36 clinical cases were used to test the model. For the identification of KRAS mutation status, the average accuracy is 89.95%. We developed a novel deep learning-based model to identify KRAS status in CRC and demonstrated its excellent properties through comparison with the ground-truth gene mutation status of 36 clinical cases.
All these results show that the novel method has great potential for clinical application.
Medical physics
"2021-11-23T00:00:00"
[ "KaiSong", "ZijuanZhao", "YulanMa", "JiaWenWang", "WeiWu", "YanQiang", "JuanjuanZhao", "SumanChaudhary" ]
10.1002/mp.15361
A review of explainable and interpretable AI with applications in COVID-19 imaging.
The development of medical imaging artificial intelligence (AI) systems for evaluating COVID-19 patients has demonstrated potential for improving clinical decision making and assessing patient outcomes during the recent COVID-19 pandemic. These systems have been applied to many medical imaging tasks, including disease diagnosis and patient prognosis, and have augmented other clinical measurements to better inform treatment decisions. Because these systems are used in life-or-death decisions, clinical implementation relies on user trust in the AI output. This has led many developers to utilize explainability techniques in an attempt to help users understand when an AI algorithm is likely to succeed and which cases may be problematic for automatic assessment, thus increasing the potential for rapid clinical translation. AI application to COVID-19 has recently been marred by controversy. This review discusses several aspects of explainable and interpretable AI as they pertain to the evaluation of COVID-19 disease and how they can help restore trust in AI applications to this disease. This includes the identification of common tasks that are relevant to explainable medical imaging AI, an overview of several modern approaches for producing explainable output as appropriate for a given imaging scenario, a discussion of how to evaluate explainable AI, and recommendations for best practices in explainable/interpretable AI implementation. This review will allow developers of AI systems for COVID-19 to quickly understand the basics of several explainable AI techniques and will assist in the selection of an approach that is both appropriate and effective for a given scenario.
Medical physics
"2021-11-20T00:00:00"
[ "Jordan DFuhrman", "NaveenaGorre", "QiyuanHu", "HuiLi", "IssamEl Naqa", "Maryellen LGiger" ]
10.1002/mp.15359
Deep Learning Algorithm for COVID-19 Classification Using Chest X-Ray Images.
Early diagnosis of the harmful severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), along with clinical expertise, allows governments to break the transmission chain and flatten the epidemic curve. Although reverse transcription-polymerase chain reaction (RT-PCR) offers quick results, chest X-ray (CXR) imaging is a more reliable method for disease classification and assessment. The rapid spread of the coronavirus disease 2019 (COVID-19) has triggered extensive research towards developing a COVID-19 detection toolkit. Recent studies have confirmed that deep learning-based approaches, such as convolutional neural networks (CNNs), provide an optimized solution for COVID-19 classification; however, they require substantial training data for learning features. Gathering this training data in a short period has been challenging during the pandemic. Therefore, this study proposes a new model combining a CNN and deep convolutional generative adversarial networks (DCGANs) that classifies CXR images into normal, pneumonia, and COVID-19 classes. The proposed model contains eight convolutional layers, four max-pooling layers, and two fully connected layers, which provide better results than the existing pretrained methods (AlexNet and GoogLeNet). The DCGAN performs two tasks: (1) generating synthetic/fake images to overcome the challenges of an imbalanced dataset and (2) extracting deep features of all images in the dataset. In addition, it enlarges the dataset and increases its diversity, providing a good generalization effect. In the experimental analysis, we used four distinct publicly accessible chest X-ray datasets (COVID-19 X-ray, COVID Chest X-ray, COVID-19 Radiography, and CoronaHack-Chest X-Ray) to train and test the proposed CNN and the existing pretrained methods.
Thereafter, the proposed CNN method was trained with the four datasets based on the DCGAN synthetic images, resulting in higher accuracy (94.8%, 96.6%, 98.5%, and 98.6%) than the existing pretrained models. The overall results suggest that the proposed DCGAN-CNN approach is a promising solution for efficient COVID-19 diagnosis.
Computational and mathematical methods in medicine
"2021-11-20T00:00:00"
[ "SharmilaV J", "Jemi FlorinabelD" ]
10.1155/2021/9269173 10.1038/s41598-020-76550-z 10.1016/j.compbiomed.2020.103792 10.2214/AJR.20.22976 10.1016/j.eswa.2017.11.028 10.1016/j.procs.2018.10.513 10.1016/j.ifacol.2019.12.406 10.1155/2021/8785636 10.1007/s40846-020-00529-4 10.1016/j.media.2017.07.005 10.1016/j.patrec.2020.09.010 10.1109/ACCESS.2020.2994762 10.1007/978-3-030-21074-8_24 10.1148/radiol.2020200343 10.1007/s15010-020-01427-2 10.1111/exsy.12759 10.32604/cmc.2021.012955 10.32604/cmc.2021.012874 10.1109/ACCESS.2020.2995597 10.4103/0970-2113.120610
Deep Learning Approaches for Detecting COVID-19 From Chest X-Ray Images: A Survey.
Chest X-ray (CXR) imaging is a standard and crucial examination method used for suspected cases of coronavirus disease (COVID-19). In profoundly affected or limited resource areas, CXR imaging is preferable owing to its availability, low cost, and rapid results. However, given the rapidly spreading nature of COVID-19, such tests could limit the efficiency of pandemic control and prevention. In response to this issue, artificial intelligence methods such as deep learning are promising options for automatic diagnosis because they have achieved state-of-the-art performance in the analysis of visual information and a wide range of medical images. This paper reviews and critically assesses the preprint and published reports between March and May 2020 for the diagnosis of COVID-19 via CXR images using convolutional neural networks and other deep learning architectures. Despite the encouraging results, there is an urgent need for public, comprehensive, and diverse datasets. Further investigations in terms of explainable and justifiable decisions are also required for more robust, transparent, and accurate predictions.
IEEE access : practical innovations, open solutions
"2021-11-18T00:00:00"
[ "Hanan SAlghamdi", "GhadaAmoudi", "SalmaElhag", "KawtherSaeedi", "JomanahNasser" ]
10.1109/ACCESS.2021.3054484 10.1109/TNNLS.2020.2995800 10.1007/s42600-020-00091-7 10.3233/XST-200715 10.17632/rscbjbr9sj.2 10.1109/ACCESS.2020.3033762
Deep Convolutional Approaches for the Analysis of COVID-19 Using Chest X-Ray Images From Portable Devices.
The recent human coronavirus disease (COVID-19) is a respiratory infection caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Given the effects of COVID-19 on pulmonary tissues, chest radiography plays an important role in the screening, early detection, and monitoring of suspected individuals. Hence, as the COVID-19 pandemic progresses, there will be greater reliance on portable equipment for the acquisition of chest X-ray images due to its accessibility, widespread availability, and benefits regarding infection control, minimizing the risk of cross-contamination. This work presents novel fully automatic approaches specifically tailored for classifying chest X-ray images acquired by portable equipment into 3 different clinical categories: normal, pathological, and COVID-19. For this purpose, 3 complementary deep learning approaches based on a densely connected convolutional network architecture are herein presented. The joint response of all the approaches enhances the differentiation between patients infected with COVID-19, patients with other diseases that manifest characteristics similar to COVID-19, and normal cases. The proposed approaches were validated on a dataset specifically retrieved for this research. Despite the poor quality of chest X-ray images inherent to portable equipment, the proposed approaches provided global accuracy values of 79.62%, 90.27% and 79.86%, respectively, allowing a reliable analysis of portable radiographs to facilitate the clinical decision-making process.
IEEE access : practical innovations, open solutions
"2021-11-18T00:00:00"
[ "JoaquimDe Moura", "Lucia RamosGarcia", "Placido Francisco LizancosVidal", "MilenaCruz", "Laura AbelairasLopez", "Eva CastroLopez", "JorgeNovo", "MarcosOrtega" ]
10.1109/ACCESS.2020.3033762 10.1056/NEJM199304293281706 10.1056/NEJMoa2002032 10.1101/2020.05.01.20087254 10.1101/2020.06.21.20136598 10.1007/s00500-020-05275-y 10.1101/2020.05.04.20090423 10.1198/10618600152418584
A deep learning approach using effective preprocessing techniques to detect COVID-19 from chest CT-scan and X-ray images.
Coronavirus disease-19 (COVID-19) is a severe respiratory viral disease first reported in late 2019 that has spread worldwide. Although some wealthy countries have made significant progress in detecting and containing this disease, most underdeveloped countries are still struggling to identify COVID-19 cases in large populations. With the rising number of COVID-19 cases, there are often insufficient COVID-19 diagnostic kits and related resources in such countries. However, other basic diagnostic resources often do exist, which motivated us to develop deep learning models to assist clinicians and radiologists in providing prompt diagnostic support to patients. In this study, we have developed a deep learning-based COVID-19 case detection model trained with a dataset consisting of chest CT scans and X-ray images. A modified ResNet50V2 architecture was employed as the deep learning architecture in the proposed model. The dataset used to train the model was collected from various publicly available sources and included four class labels: confirmed COVID-19, normal controls, and confirmed viral and bacterial pneumonia cases. The aggregated dataset was preprocessed with a sharpening filter before being fed into the proposed model. The model attained an accuracy of 96.452% for four-class cases (COVID-19/Normal/Bacterial pneumonia/Viral pneumonia), 97.242% for three-class cases (COVID-19/Normal/Bacterial pneumonia) and 98.954% for two-class cases (COVID-19/Viral pneumonia) using chest X-ray images. The model achieved an overall accuracy of 99.012% for three-class cases (COVID-19/Normal/Community-acquired pneumonia) and 99.99% for two-class cases (Normal/COVID-19) using chest CT-scan images. This high accuracy presents a new and potentially important resource to enable radiologists to identify and rapidly diagnose COVID-19 cases with only basic but widely available equipment.
Computers in biology and medicine
"2021-11-16T00:00:00"
[ "Khabir UddinAhamed", "ManowarulIslam", "AshrafUddin", "ArnishaAkhter", "Bikash KumarPaul", "Mohammad AbuYousuf", "ShahadatUddin", "Julian M WQuinn", "Mohammad AliMoni" ]
10.1016/j.compbiomed.2021.105014 10.1093/bib/bbab197 10.1093/bib/bbab115 10.1016/B978-0-12-813087-2.00003-8 10.1016/B978-0-12-381420-3.00007-2
Potential diagnosis of COVID-19 from chest X-ray and CT findings using semi-supervised learning.
COVID-19 is an infectious disease that has adversely affected public health and the economy across the world. On account of the highly infectious nature of the disease, rapid automated diagnosis of COVID-19 is urgently needed. A few recent findings suggest that chest X-rays and CT scans can be used by machine learning for the diagnosis of COVID-19. Herein, we employed semi-supervised learning (SSL) approaches to detect COVID-19 cases accurately by analyzing digital chest X-rays and CT scans. On a relatively small COVID-19 radiography dataset, which contains only 219 COVID-19 positive images, 1341 normal and 1345 viral pneumonia images, our algorithm, COVIDCon, which takes advantage of data augmentation, consistency regularization, and multicontrastive learning, attains 97.07% average class prediction accuracy with 1000 labeled images, which is 7.65% better than the next best SSL method, virtual adversarial training. COVIDCon performs even better on a larger COVID-19 CT scan dataset that contains 82,767 images. It achieved an excellent accuracy of 99.13% with 20,000 labels, which is 6.45% better than the next best pseudo-labeling approach. COVIDCon outperforms other state-of-the-art algorithms at every label count we investigated. These results establish COVIDCon as the benchmark SSL algorithm for potential diagnosis of COVID-19 from chest X-rays and CT scans. Furthermore, COVIDCon performs exceptionally well in identifying COVID-19 positive cases from a completely unseen repository with a confirmed COVID-19 case history. COVIDCon may provide a fast, accurate, and reliable method for screening COVID-19 patients.
Physical and engineering sciences in medicine
"2021-11-16T00:00:00"
[ "PrachetaSahoo", "IndranilRoy", "RandeepAhlawat", "SaquibIrtiza", "LatifurKhan" ]
10.1007/s13246-021-01075-2 10.1109/ACCESS.2021.3058537 10.1109/TCBB.2021.3065361
Improving motion-mask segmentation in thoracic CT with multiplanar U-nets.
Motion-mask segmentation from thoracic computed tomography (CT) images is the process of extracting the region that encompasses the lungs and viscera, where large displacements occur during breathing. It has been shown to help image registration between different respiratory phases. This registration step is, for example, useful for radiotherapy planning or calculating local lung ventilation. Knowing the location of motion discontinuity, that is, sliding motion near the pleura, allows better control of the registration by preventing unrealistic estimates. Nevertheless, existing methods for motion-mask segmentation are not robust enough to be used in clinical routine. This article shows that it is feasible to overcome this lack of robustness by using a lightweight deep-learning approach usable on a standard computer, even without data augmentation or advanced model design. A convolutional neural-network architecture with three 2D U-nets for the three main orientations (sagittal, coronal, axial) was proposed. Predictions generated by the three U-nets were combined by majority voting to provide a single 3D segmentation of the motion mask. The networks were trained on a database of 4D CT images of 43 non-small-cell lung cancer patients. Training and evaluation were done with a K-fold cross-validation strategy. Evaluation was based on visual grading by two experts according to the appropriateness of the segmented motion mask for the registration task, and on comparison with motion masks obtained by a baseline method using level sets. A second database (76 CT images of patients with early-stage COVID-19), unseen during training, was used to assess the generalizability of the trained neural network. The proposed approach outperformed the baseline method in terms of quality and robustness, increasing the success rate over the baseline while requiring only 5-s processing time on a mid-range GPU.
Medical physics
"2021-11-16T00:00:00"
[ "LudmillaPenarrubia", "NicolasPinon", "EmmanuelRoux", "Eduardo EnriqueDávila Serrano", "Jean-ChristopheRichard", "MaciejOrkisz", "DavidSarrut" ]
10.1002/mp.15347
COVID-19 Case Recognition from Chest CT Images by Deep Learning, Entropy-Controlled Firefly Optimization, and Parallel Feature Fusion.
In healthcare, a multitude of data is collected from medical sensors and devices, such as X-ray machines, magnetic resonance imaging, computed tomography (CT), and so on, that can be analyzed by artificial intelligence methods for early diagnosis of diseases. Recently, the outbreak of the COVID-19 disease caused many deaths. Computer vision researchers support medical doctors by employing deep learning techniques on medical images to diagnose COVID-19 patients. Various methods have been proposed for COVID-19 case classification. Here, a new automated technique is proposed using parallel fusion and optimization of deep learning models. The proposed technique starts with contrast enhancement using a combination of top-hat and Wiener filters. Two pre-trained deep learning models (AlexNet and VGG16) are employed and fine-tuned according to the target classes (COVID-19 and healthy). Features are extracted and fused using a parallel fusion approach (parallel positive correlation). Optimal features are selected using the entropy-controlled firefly optimization method. The selected features are classified using machine learning classifiers such as the multiclass support vector machine (MC-SVM). Experiments were carried out using the Radiopaedia database and achieved an accuracy of 98%. Moreover, a detailed analysis is conducted that shows the improved performance of the proposed scheme.
Sensors (Basel, Switzerland)
"2021-11-14T00:00:00"
[ "Muhammad AttiqueKhan", "MajedAlhaisoni", "UsmanTariq", "NazarHussain", "AbdulMajid", "RobertasDamaševičius", "RytisMaskeliūnas" ]
10.3390/s21217286 10.1016/S0140-6736(20)30185-9 10.1038/s41564-020-0695-z 10.2807/1560-7917.es.2020.25.6.2000094 10.1056/NEJMoa2001316 10.1007/s11869-020-00944-1 10.1016/j.aquaculture.2020.735881 10.7717/peerj-cs.564 10.1111/exsy.12759 10.1016/j.inffus.2020.11.005 10.1007/s10044-020-00950-0 10.3390/app11199023 10.3390/sym13010113 10.1007/s10489-020-01826-w 10.1016/j.compbiomed.2020.103795 10.1007/s00500-020-05275-y 10.1007/s12559-020-09751-3 10.3389/fmed.2020.608525 10.3390/sym12040651 10.1007/s00779-020-01494-0 10.1371/journal.pone.0243189 10.1016/j.mehy.2020.109761 10.32604/cmc.2021.013191 10.1007/s10489-020-01889-9 10.4108/eai.13-7-2018.163997 10.1016/j.eswa.2020.114054 10.1007/s10489-020-01902-1 10.1155/2021/8829829 10.1109/tcbb.2021.3065361 10.1016/j.compeleceng.2020.106960 10.1007/s10489-020-02149-6 10.1016/j.patrec.2020.12.010 10.1080/07391102.2020.1788642 10.2196/19569 10.1109/ACCESS.2020.3005510 10.1016/j.media.2020.101836 10.1007/s00330-020-07044-9 10.1109/ACCESS.2020.3016780 10.1016/j.compbiomed.2020.103792 10.1016/j.imu.2020.100412 10.3390/s21062215 10.3390/s21041480 10.3390/s21165482 10.3390/s21103322 10.1111/exsy.12497 10.3390/su12125037 10.3390/diagnostics10110904 10.1145/3065386 10.1007/s11263-015-0816-y 10.1109/ACCESS.2020.3034217 10.3390/sym12071146 10.1109/sai.2014.6918213 10.1016/S1672-6529(09)60240-7 10.1166/jmihi.2020.3222 10.3390/diagnostics11071212 10.7717/peerj-cs.456 10.1109/TITB.2006.879600 10.1007/s10489-020-02055-x 10.32604/cmc.2021.016816 10.1016/j.chaos.2020.110153 10.1155/2020/9756518 10.1007/s00521-021-06490-w 10.1002/int.22691 10.32604/cmc.2022.020140 10.1109/JBHI.2021.3067789 10.1016/j.compeleceng.2020.106956
Impact of Lung Segmentation on the Diagnosis and Explanation of COVID-19 in Chest X-ray Images.
COVID-19 frequently provokes pneumonia, which can be diagnosed using imaging exams. Chest X-ray (CXR) is often useful because it is cheap, fast, widespread, and uses less radiation. Here, we demonstrate the impact of lung segmentation on COVID-19 identification using CXR images and evaluate which contents of the image influenced the models the most. Semantic segmentation was performed using a U-Net CNN architecture, and classification using three CNN architectures (VGG, ResNet, and Inception). Explainable Artificial Intelligence techniques were employed to estimate the impact of segmentation. A three-class database was composed: lung opacity (pneumonia), COVID-19, and normal. We assessed the impact of creating a CXR image database from different sources, and COVID-19 generalization from one source to another. The segmentation achieved a Jaccard distance of 0.034 and a Dice coefficient of 0.982. The classification using segmented images achieved an F1-score of 0.88 for the multi-class setup and 0.83 for COVID-19 identification. In the cross-dataset scenario, we obtained an F1-score of 0.74 and an area under the ROC curve of 0.9 for COVID-19 identification using segmented images. The experiments support the conclusion that even after segmentation, there is a strong bias introduced by underlying factors from the different sources.
Sensors (Basel, Switzerland)
"2021-11-14T00:00:00"
[ "Lucas OTeixeira", "Rodolfo MPereira", "DiegoBertolini", "Luiz SOliveira", "LorisNanni", "George D CCavalcanti", "Yandre M GCosta" ]
10.3390/s21217116 10.1038/s41577-020-0311-8 10.7326/M20-0504 10.1152/physiolgenomics.00029.2020 10.1016/j.ajem.2012.08.041 10.1016/j.cmpb.2020.105532 10.1038/s41598-020-76550-z 10.1016/j.inffus.2021.04.008 10.3390/ijerph17186933 10.1109/RBME.2020.2987975 10.1016/j.scs.2020.102589 10.1109/ACCESS.2021.3058537 10.1038/s42256-021-00307-0 10.1016/j.media.2021.102225 10.1016/j.cell.2020.04.045 10.1016/j.cmpb.2020.105608 10.1109/TMI.2021.3079709 10.1038/s41551-021-00704-1 10.1038/s42256-021-00338-7 10.1148/radiol.2020200527 10.1007/s12553-021-00520-2 10.1097/MAJ.0b013e31818ad805 10.1016/j.ejrnm.2015.11.004 10.2214/AJR.09.3625 10.1186/s40537-019-0197-0 10.3390/info11020125 10.1037/met0000061 10.1038/s41563-019-0345-0
Decision and feature level fusion of deep features extracted from public COVID-19 data-sets.
The Coronavirus disease (COVID-19), which is an infectious pulmonary disorder, has affected millions of people and has been declared a global pandemic by the WHO. Due to the highly contagious nature of COVID-19 and its high possibility of causing severe conditions in patients, the development of rapid and accurate diagnostic tools has gained importance. Real-time reverse transcription-polymerase chain reaction (RT-PCR) is used to detect the presence of coronavirus RNA in mucus and saliva mixture samples taken by the nasopharyngeal swab technique. However, RT-PCR suffers from low sensitivity, especially in the early stage. Therefore, the usage of chest radiography has been increasing in the early diagnosis of COVID-19 due to its fast imaging speed, significantly lower cost, and low radiation dose. In our study, a computer-aided diagnosis system for X-ray images based on convolutional neural networks (CNNs) and ensemble learning, which can be used by radiologists as a supporting tool in COVID-19 detection, has been proposed. Deep feature sets extracted using seven CNN architectures were concatenated for feature-level fusion and fed to multiple classifiers for decision-level fusion, with the aim of discriminating the COVID-19, pneumonia, and no-finding classes. In the decision-level fusion, a majority voting scheme was applied to the resultant decisions of the classifiers. The obtained accuracy values and confusion-matrix-based evaluation criteria are presented for three progressively created datasets. The aspects of the proposed method that are superior to existing COVID-19 detection studies are discussed, and the fusion performance of the proposed approach was validated visually using the Class Activation Mapping technique.
The experimental results show that the proposed approach has attained high COVID-19 detection performance, as proven by its comparable accuracy and superior precision/recall values relative to existing studies.
Applied intelligence (Dordrecht, Netherlands)
"2021-11-13T00:00:00"
[ "Hamza OsmanIlhan", "GorkemSerbes", "NizamettinAydin" ]
10.1007/s10489-021-02945-8 10.1109/TPAMI.2015.2500224 10.1038/s41598-019-42294-8 10.1017/dmp.2015.38 10.1016/S0140-6736(20)30460-8 10.1016/j.coastaleng.2019.103593 10.1016/j.compag.2020.105339 10.1109/ACCESS.2020.2992341 10.1109/JBHI.2015.2425041 10.1016/S0140-6736(20)30211-7 10.1148/radiol.2020200230 10.1016/j.patrec.2019.11.025 10.1161/CIRCULATIONAHA.120.046941 10.1016/j.compbiomed.2019.103351 10.1016/j.compmedimag.2007.02.002 10.1109/JSTARS.2018.2878037 10.3390/diagnostics10050329 10.1109/34.927459 10.2307/2333955 10.1016/j.measurement.2019.05.076 10.1016/j.cell.2018.02.010 10.1093/biomet/58.3.433 10.1016/j.neucom.2021.01.085 10.1016/j.bspc.2021.102932 10.1016/j.cmpb.2020.105581 10.1109/34.667881 10.1007/s00392-020-01626-9 10.1109/JBHI.2021.3058293 10.1109/TNN.2004.837780 10.2214/AJR.20.22954 10.1038/s41598-019-38966-0 10.1080/02564602.2015.1015631 10.1016/j.zemedi.2018.11.002 10.11613/BM.2012.031 10.1109/ACCESS.2018.2813079 10.1016/j.compbiomed.2019.103545 10.1016/j.compbiomed.2021.104399 10.3390/app8101715 10.1007/s11263-015-0816-y 10.1016/j.asoc.2018.10.022 10.1016/j.ajem.2012.08.041 10.1016/j.patrec.2019.11.019 10.1109/TMI.2016.2528162 10.1016/j.neucom.2019.12.083 10.1109/TMI.2016.2535302 10.1016/j.bspc.2017.06.018 10.1109/ACCESS.2020.2994762 10.1186/s40537-019-0276-2 10.1109/TMI.2020.3040950 10.1016/j.biosystemseng.2019.01.003
Corona-Nidaan: lightweight deep convolutional neural network for chest X-Ray based COVID-19 infection detection.
The coronavirus COVID-19 pandemic is today's major public health crisis, the worst we have faced since the Second World War. The pandemic is spreading around the globe like a wave, and according to the World Health Organization's recent report, the numbers of confirmed cases and deaths are rising rapidly. The COVID-19 pandemic has created severe social, economic, and political crises, which in turn will leave long-lasting scars. One of the countermeasures for controlling the coronavirus outbreak is a specific, accurate, reliable, and rapid detection technique to identify infected patients. The availability and affordability of RT-PCR kits remain a major bottleneck in many countries in handling the COVID-19 outbreak effectively. Recent findings indicate that chest radiography anomalies can characterize patients with COVID-19 infection. In this study, Corona-Nidaan, a lightweight deep convolutional neural network (DCNN), is proposed to detect COVID-19, Pneumonia, and Normal cases from chest X-ray image analysis, without any human intervention. We introduce a simple minority-class oversampling method for dealing with the imbalanced dataset problem. The impact of transfer learning with pre-trained CNNs on chest X-ray based COVID-19 infection detection is also investigated. Experimental analysis shows that the Corona-Nidaan model outperforms prior works and other pre-trained CNN based models. The model achieved 95% accuracy for three-class classification with 94% precision and recall for COVID-19 cases. While studying the performance of various pre-trained models, it is also found that VGG19 outperforms other pre-trained CNN models by achieving 93% accuracy with 87% recall and 93% precision for COVID-19 infection detection. The model is also evaluated by screening a chest X-ray dataset of COVID-19-infected Indian patients with good accuracy.
Applied intelligence (Dordrecht, Netherlands)
"2021-11-13T00:00:00"
[ "MainakChakraborty", "Sunita VikrantDhavale", "JitendraIngole" ]
10.1007/s10489-020-01978-9 10.1001/jama.2017.14585 10.1001/jama.2016.17216 10.1016/S0140-6736(20)30183-5 10.1148/radiol.2017162326 10.1038/s41586-020-2008-3
Stacked-autoencoder-based model for COVID-19 diagnosis on CT images.
With the outbreak of COVID-19, medical imaging such as computed tomography (CT) based diagnosis has proved to be an effective way to fight against the rapid spread of the virus. Therefore, it is important to study computerized models for infection detection based on CT imaging. New deep learning-based approaches are developed for CT-assisted diagnosis of COVID-19. However, most of the current studies are based on small datasets of COVID-19 CT images, as fewer datasets are publicly available for patient privacy reasons. As a result, the performance of deep learning-based detection models needs to be improved on small datasets. In this paper, a stacked autoencoder detector model is proposed to greatly improve the performance of the detection models, such as precision rate and recall rate. Firstly, four autoencoders are constructed as the first four layers of the whole stacked autoencoder detector model being developed, to extract better features of CT images. Secondly, the four autoencoders are cascaded together and connected to the dense layer and the softmax classifier to constitute the model. Finally, a new classification loss function is constructed by superimposing a reconstruction loss to enhance the detection accuracy of the model. The experiment results show that our model performs well on a small COVID-19 CT image dataset. Our model achieves an average accuracy, precision, recall, and F1-score of 94.7%, 96.54%, 94.1%, and 94.8%, respectively. The results reflect the ability of our model in discriminating COVID-19 images, which might help radiologists in the diagnosis of suspected COVID-19 patients.
Applied intelligence (Dordrecht, Netherlands)
"2021-11-13T00:00:00"
[ "DaqiuLi", "ZhangjieFu", "JunXu" ]
10.1007/s10489-020-02002-w 10.1038/s41564-020-0695-z 10.1016/S1672-0229(03)01031-3 10.1111/ajt.15876 10.1093/cid/ciaa203 10.1002/jmv.25762 10.1038/s41423-020-0407-x 10.1021/acsnano.0c02624 10.1038/d41586-020-00983-9 10.1038/s41591-020-0824-5 10.1109/TMI.2020.2995965 10.1109/TMI.2020.2995508 10.1016/j.eswa.2019.112957 10.1016/j.neunet.2020.01.018 10.1038/s41598-019-42557-4 10.1145/3065386 10.1016/j.amc.2005.09.016 10.1038/s41583-020-0277-3 10.1016/j.ipm.2009.03.002
Automated diagnosis of COVID-19 with limited posteroanterior chest X-ray images using fine-tuned deep neural networks.
The novel coronavirus 2019 (COVID-19) is a respiratory syndrome that resembles pneumonia. The current diagnostic procedure for COVID-19 follows a reverse-transcriptase polymerase chain reaction (RT-PCR) based approach, which, however, is less sensitive in identifying the virus at the initial stage. Hence, a more robust and alternate diagnosis technique is desirable. Recently, with the release of publicly available datasets of corona-positive patients comprising computed tomography (CT) and chest X-ray (CXR) imaging, scientists, researchers and healthcare experts are contributing to faster and automated diagnosis of COVID-19 by identifying pulmonary infections using deep learning approaches to achieve better cure and treatment. These datasets have limited samples concerning positive COVID-19 cases, which raises the challenge of unbiased learning. Following from this context, this article presents a random oversampling and weighted class loss function approach for unbiased fine-tuned learning (transfer learning) in various state-of-the-art deep learning approaches such as baseline ResNet, Inception-v3, Inception ResNet-v2, DenseNet169, and NASNetLarge, to perform binary classification (normal and COVID-19 cases) and also multi-class classification (COVID-19, pneumonia, and normal cases) of posteroanterior CXR images. Accuracy, precision, recall, loss, and area under the curve (AUC) are utilized to evaluate the performance of the models. Considering the experimental results, the performance of each model is scenario dependent; however, NASNetLarge displayed better scores in contrast to the other architectures, and it is further compared with other recently proposed approaches. This article also adds a visual explanation to illustrate the basis of model classification and the perception of COVID-19 in CXR images.
Applied intelligence (Dordrecht, Netherlands)
"2021-11-13T00:00:00"
[ "Narinder SinghPunn", "SonaliAgarwal" ]
10.1007/s10489-020-01900-3 10.1164/ajrccm.150.5.7952571 10.1007/s13246-020-00865-4 10.1016/j.imavis.2009.04.012 10.1148/radiol.2019181960 10.3390/app10020559 10.1016/j.jormas.2019.06.002 10.1007/s11517-019-01965-4 10.1186/s40537-019-0192-5 10.1016/j.cell.2018.02.010 10.1148/radiol.2017162326 10.1016/j.neucom.2016.12.038 10.1016/j.futures.2017.03.006 10.1109/TMI.2020.2993291 10.1145/3376922 10.1371/journal.pmed.1002686 10.1146/annurev-bioeng-071516-044442
Deep learning based detection and analysis of COVID-19 on chest X-ray images.
COVID-19 is a rapidly spreading viral disease that infects not only humans but also animals. The daily life of human beings, their health, and the economy of a country are affected by this deadly viral disease. COVID-19 spreads easily, and at the time of this study no country had yet prepared a vaccine for it. A clinical study of COVID-19 infected patients has shown that these patients mostly suffer from a lung infection after coming in contact with this disease. Chest X-ray (i.e., radiography) and chest CT are effective imaging techniques for diagnosing lung-related problems; still, a chest X-ray is a substantially lower-cost process in comparison to chest CT. Deep learning is the most successful technique of machine learning, which provides useful analysis for studying a large number of chest X-ray images that can critically impact the screening of COVID-19. In this work, we have taken the PA view of chest X-ray scans for COVID-19 affected patients as well as healthy patients. After cleaning up the images and applying data augmentation, we have used deep learning-based CNN models and compared their performance. We have compared the Inception V3, Xception, and ResNeXt models and examined their accuracy. To analyze the model performance, 6432 chest X-ray scan samples were collected from the Kaggle repository, out of which 5467 were used for training and 965 for validation. In the result analysis, the Xception model gives the highest accuracy (i.e., 97.97%) for detecting chest X-ray images as compared to the other models. This work only focuses on possible methods of classifying COVID-19 infected patients and does not claim any medical accuracy.
Applied intelligence (Dordrecht, Netherlands)
"2021-11-13T00:00:00"
[ "RachnaJain", "MeenuGupta", "SohamTaneja", "D JudeHemanth" ]
10.1007/s10489-020-01902-1 10.1148/radiol.2303030853 10.1148/radiol.2282030593 10.1016/S0140-6736(20)30183-5 10.1148/rg.2018170048 10.1148/radiol.2462070712 10.1093/ndt/gfaa069 10.1016/j.idm.2020.02.002 10.2214/AJR.14.13021 10.3348/kjr.2016.17.1.166 10.1148/radiol.2020200230 10.1016/j.jfo.2020.02.001 10.1016/j.clinimag.2020.04.001 10.1016/S1473-3099(20)30086-4 10.1186/s40537-019-0276-2 10.4103/0301-4738.37595
DenResCov-19: A deep transfer learning network for robust automatic classification of COVID-19, pneumonia, and tuberculosis from X-rays.
The global pandemic of coronavirus disease 2019 (COVID-19) is continuing to have a significant effect on the well-being of the global population, thus increasing the demand for rapid testing, diagnosis, and treatment. As COVID-19 can cause severe pneumonia, early diagnosis is essential for correct treatment, as well as to reduce the stress on the healthcare system. Along with COVID-19, other etiologies of pneumonia and tuberculosis (TB) constitute additional challenges to the medical system. Pneumonia (viral as well as bacterial) kills about 2 million infants every year and is consistently estimated as one of the most important factors of childhood mortality (according to the World Health Organization). Chest X-ray (CXR) and computed tomography (CT) scans are the primary imaging modalities for diagnosing respiratory diseases. Although CT scans are the gold standard, they are more expensive, time-consuming, and associated with a small but significant dose of radiation. Hence, CXRs have become more widespread as a first-line investigation. In this regard, the objective of this work is to develop a new deep transfer learning pipeline, named DenResCov-19, to diagnose patients with COVID-19, pneumonia, TB, or healthy status based on CXR images. The pipeline consists of the existing DenseNet-121 and ResNet-50 networks. Since DenseNet and ResNet have orthogonal performances in some instances, in the proposed model we have created an extra layer with convolutional neural network (CNN) blocks to join these two models together and establish superior performance as compared to the two individual networks. This strategy can be applied universally in cases where two competing networks are observed. We have tested the performance of our proposed network on two-class (pneumonia and healthy), three-class (COVID-19 positive, healthy, and pneumonia), as well as four-class (COVID-19 positive, healthy, TB, and pneumonia) classification problems.
We have validated that our proposed network is able to successfully classify these lung diseases on our four datasets, which is one of our novel findings. In particular, the AUC-ROC values are 99.60, 96.51, 93.70, and 96.40%, and the F1 values are 98.21, 87.29, 76.09, and 83.17% on our Dataset X-Ray 1, 2, 3, and 4 (DXR1, DXR2, DXR3, DXR4), respectively.
Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society
"2021-11-12T00:00:00"
[ "MichailMamalakis", "Andrew JSwift", "BartVorselaars", "SurajitRay", "SimonneWeeks", "WeipingDing", "Richard HClayton", "Louise SMackenzie", "AbhirupBanerjee" ]
10.1016/j.compmedimag.2021.102008 10.1016/j.media.2020.101860 10.1016/j.media.2020.101910 10.1016/j.media.2020.101836 10.1016/j.media.2021.102054 10.1016/j.media.2020.101913 10.1016/j.media.2021.101975 10.1016/j.media.2021.101992 10.1016/j.media.2020.101824
Classifying chest CT images as COVID-19 positive/negative using a convolutional neural network ensemble model and uniform experimental design method.
To classify chest computed tomography (CT) images as positive or negative for coronavirus disease 2019 (COVID-19) quickly and accurately, researchers attempted to develop effective models by using medical images. A convolutional neural network (CNN) ensemble model was developed for classifying chest CT images as positive or negative for COVID-19. To classify chest CT images acquired from COVID-19 patients, the proposed COVID19-CNN ensemble model combines the use of multiple trained CNN models with a majority voting strategy. The CNN models were trained to classify chest CT images by transfer learning from well-known pre-trained CNN models and by applying their algorithm hyperparameters as appropriate. The combination of algorithm hyperparameters for a pre-trained CNN model was determined by uniform experimental design. The chest CT images (405 from COVID-19 patients and 397 from healthy patients) used for training and performance testing of the COVID19-CNN ensemble model were obtained from an earlier study by Hu in 2020. Experiments showed that the COVID19-CNN ensemble model achieved 96.7% accuracy in classifying CT images as COVID-19 positive or negative, which was superior to the accuracies obtained by the individual trained CNN models. Other performance measures (i.e., precision, recall, specificity, and F1-score) were also superior. The COVID19-CNN ensemble model had superior accuracy and excellent capability in classifying chest CT images as COVID-19 positive or negative.
BMC bioinformatics
"2021-11-10T00:00:00"
[ "Yao-MeiChen", "Yenming JChen", "Wen-HsienHo", "Jinn-TsongTsai" ]
10.1186/s12859-021-04083-x 10.1101/2020.04.24.20078998 10.1148/radiol.2020200905 10.1101/2020.02.23.20026930 10.1101/2020.02.14.20023028 10.1148/ryct.2020200028 10.1007/s11263-015-0816-y
A novel deep neuroevolution-based image classification method to diagnose coronavirus disease (COVID-19).
COVID-19 has had a detrimental impact on normal activities, public safety, and the global financial system. To identify the presence of this disease within communities and to commence the management of infected patients early, positive cases should be diagnosed as quickly as possible. New results from X-ray imaging indicate that images provide key information about COVID-19. Advanced deep-learning (DL) models can be applied to X-ray radiological images to accurately diagnose this disease and to mitigate the effects of a shortage of skilled medical personnel in rural areas. However, the performance of DL models strongly depends on the methodology used to design their architectures. Therefore, deep neuroevolution (DNE) techniques are introduced to automatically design DL architectures accurately. In this paper, a new paradigm is proposed for the automated diagnosis of COVID-19 from chest X-ray images using a novel two-stage improved DNE Algorithm. The proposed DNE framework is evaluated on a real-world dataset and the results demonstrate that it provides the highest classification performance in terms of different evaluation metrics.
Computers in biology and medicine
"2021-11-09T00:00:00"
[ "SajadAhmadian", "Seyed Mohammad JafarJalali", "Syed Mohammed ShamsulIslam", "AbbasKhosravi", "EbrahimFazli", "SaeidNahavandi" ]
10.1016/j.compbiomed.2021.104994
COVID-19 infection localization and severity grading from chest X-ray images.
The immense spread of coronavirus disease 2019 (COVID-19) has left healthcare systems incapable of diagnosing and testing patients at the required rate. Given the effects of COVID-19 on pulmonary tissues, chest radiographic imaging has become a necessity for screening and monitoring the disease. Numerous studies have proposed Deep Learning approaches for the automatic diagnosis of COVID-19. Although these methods achieved outstanding performance in detection, they have used limited chest X-ray (CXR) repositories for evaluation, usually with only a few hundred COVID-19 CXR images. Thus, such data scarcity prevents reliable evaluation of Deep Learning models, with the potential of overfitting. In addition, most studies showed no or limited capability in infection localization and severity grading of COVID-19 pneumonia. In this study, we address this urgent need by proposing a systematic and unified approach for lung segmentation and COVID-19 localization with infection quantification from CXR images. To accomplish this, we have constructed the largest benchmark dataset with 33,920 CXR images, including 11,956 COVID-19 samples, where the annotation of ground-truth lung segmentation masks is performed on CXRs by an elegant human-machine collaborative approach. An extensive set of experiments was performed using the state-of-the-art segmentation networks, U-Net, U-Net++, and Feature Pyramid Networks (FPN). The developed network, after an iterative process, reached a superior performance for lung region segmentation with Intersection over Union (IoU) of 96.11% and Dice Similarity Coefficient (DSC) of 97.99%. Furthermore, COVID-19 infections of various shapes and types were reliably localized with 83.05% IoU and 88.21% DSC. Finally, the proposed approach has achieved an outstanding COVID-19 detection performance with both sensitivity and specificity values above 99%.
Computers in biology and medicine
"2021-11-09T00:00:00"
[ "Anas MTahir", "Muhammad E HChowdhury", "AmithKhandakar", "TawsifurRahman", "YazanQiblawey", "UzairKhurshid", "SerkanKiranyaz", "NabilIbtehaz", "M SohelRahman", "SomayaAl-Maadeed", "SakibMahmud", "MaymounaEzeddin", "KhaledHameed", "TahirHamid" ]
10.1016/j.compbiomed.2021.105002 10.1002/rmv.2112 10.34740/KAGGLE/DSV/2759090
MT-nCov-Net: A Multitask Deep-Learning Framework for Efficient Diagnosis of COVID-19 Using Tomography Scans.
The localization and segmentation of the novel coronavirus disease of 2019 (COVID-19) lesions from computerized tomography (CT) scans are of great significance for developing an efficient computer-aided diagnosis system. Deep learning (DL) has emerged as one of the best choices for developing such a system. However, several challenges limit the efficiency of DL approaches, including data heterogeneity, considerable variety in the shape and size of the lesions, lesion imbalance, and scarce annotation. In this article, a novel multitask regression network for segmenting COVID-19 lesions is proposed to address these challenges. We name the framework MT-nCov-Net. We formulate lesion segmentation as a multitask shape regression problem that enables sharing the poor-, intermediate-, and high-quality features among the various tasks. A multiscale feature learning (MFL) module is presented to capture the multiscale semantic information, which helps to efficiently learn small and large lesion features while reducing the semantic gap between different scale representations. In addition, a fine-grained lesion localization (FLL) module is introduced to detect infection lesions using an adaptive dual-attention mechanism. The generated location map and the fused multiscale representations are subsequently passed to the lesion regression (LR) module to segment the infection lesions. MT-nCov-Net enables learning complete lesion properties to accurately segment the COVID-19 lesion by regressing its shape. MT-nCov-Net is experimentally evaluated on two public multisource datasets, and the overall performance validates its superiority over the current cutting-edge approaches and demonstrates its effectiveness in tackling the problems facing the diagnosis of COVID-19.
IEEE transactions on cybernetics
"2021-11-09T00:00:00"
[ "WeipingDing", "MohamedAbdel-Basset", "HossamHawash", "Osama MELkomy" ]
10.1109/TCYB.2021.3123173
[Forefront of AI Applications for COVID-19 Imaging Diagnosis].
The intra- and inter-observer variability in the diagnosis of thoracic CT images may affect the diagnosis of COVID-19. Therefore, several studies have been reported that develop artificial intelligence (AI) approaches using deep learning (DL) and radiomics technologies. The difference between them is automatic feature extraction (DL) versus hand-crafted feature extraction (radiomics). The advantages of AI-based imaging approaches for COVID-19 are fast throughput, non-invasiveness, quantification, and integration of PCR results, CT findings, and clinical information. To the best of my knowledge, three types of AI approaches have been studied: detection, severity differentiation, and prognosis prediction of COVID-19. AI technologies for assessment of severity and prediction of prognosis for COVID-19 may be more crucial than detection of COVID-19 pneumonia once COVID-19 becomes one of the common diseases.
Igaku butsuri : Nihon Igaku Butsuri Gakkai kikanshi = Japanese journal of medical physics : an official journal of Japan Society of Medical Physics
"2021-11-09T00:00:00"
[ "HidetakaArimura", "TakahiroIwasaki" ]
10.11323/jjmp.41.3_82
Accuracy of deep learning-based computed tomography diagnostic system for COVID-19: A consecutive sampling external validation cohort study.
Ali-M3, an artificial intelligence program, analyzes chest computed tomography (CT) and detects the likelihood of coronavirus disease (COVID-19) based on scores ranging from 0 to 1. However, Ali-M3 has not been externally validated. Our aim was to evaluate the accuracy of Ali-M3 for detecting COVID-19 and discuss its clinical value. We evaluated the external validity of Ali-M3 using sequential Japanese sampling data. In this retrospective cohort study, COVID-19 infection probabilities for 617 symptomatic patients were determined using Ali-M3. In 11 Japanese tertiary care facilities, these patients underwent reverse transcription-polymerase chain reaction (RT-PCR) testing. They also underwent chest CT to confirm a diagnosis of COVID-19. Of the 617 patients, 289 (46.8%) were RT-PCR-positive. The area under the curve (AUC) of Ali-M3 for predicting a COVID-19 diagnosis was 0.797 (95% confidence interval: 0.762‒0.833) and the goodness-of-fit was P = 0.156. With a cut-off probability of a diagnosis of COVID-19 by Ali-M3 set at 0.5, the sensitivity and specificity were 80.6% and 68.3%, respectively. A cut-off of 0.2 yielded a sensitivity and specificity of 89.2% and 43.2%, respectively. Among the 223 patients who required oxygen, the AUC was 0.825, and sensitivity at cut-offs of 0.5 and 0.2 was 88.7% and 97.9%, respectively. Although the sensitivity was lower when the days from symptom onset were fewer, the sensitivity increased for both cut-off values after 5 days. We evaluated Ali-M3 using external validation with symptomatic patient data from Japanese tertiary care facilities. As Ali-M3 showed sufficient sensitivity performance, despite a lower specificity performance, Ali-M3 could be useful in excluding a diagnosis of COVID-19.
PloS one
"2021-11-05T00:00:00"
[ "TatsuyoshiIkenoue", "YukiKataoka", "YoshinoriMatsuoka", "JunichiMatsumoto", "JunjiKumasawa", "KentaroTochitatni", "HirakuFunakoshi", "TomohiroHosoda", "AikoKugimiya", "MichinoriShirano", "FumikoHamabe", "SachiyoIwata", "ShingoFukuma" ]
10.1371/journal.pone.0258760 10.1016/j.chest.2020.03.063 10.1111/anae.15072 10.2214/AJR.20.22954 10.2214/AJR.20.23034 10.2214/ajr.20.22975 10.1016/j.ejrad.2020.108941 10.1148/radiol.2020200370 10.1148/radiol.2020200823 10.1186/s41747-018-0061-6 10.1148/ryct.2020200075 10.1007/s00330-020-06699-8 10.1136/bmj.m689 10.1136/bmj.m1328 10.7326/M14-0697 10.1515/cclm-2020-0285 10.7326/M20-1495 10.1093/cid/cis403 10.1038/s41591-020-0897-1 10.1148/radiol.2020201365 10.1038/srep34921 10.23736/S0031-0808.20.03938–5 10.1038/s41598-020-74164-z 10.1503/cmaj.050090 10.1148/radiol.2020200642 10.1001/jama.2020.6173
Detection of COVID-19 from Chest CT Images Using CNN with MLP Hybrid Model.
COVID-19, when left undetected, can lead to a hazardous spread of infection, leading to an unfortunate loss of life. It is of utmost importance to diagnose COVID-19 in infected patients at the earliest, to avoid further complications. RT-PCR, the gold standard method, is routinely used for the diagnosis of COVID-19 infection. Yet, this method comes with a few limitations, such as its time-consuming nature, a scarcity of trained manpower, the need for sophisticated laboratory equipment, and the possibility of false positive and negative results. Physicians and global health care centers use CT scans as an alternative for the diagnosis of COVID-19. But this process of detection, too, might demand more manual work, effort, and time. Thus, automating the detection of COVID-19 using an intelligent system has been a recent research topic, in view of the pandemic. This will also help in saving the physician's time for carrying out further treatment. In this paper, a hybrid learning model has been proposed to identify COVID-19 infection using CT scan images. A Convolutional Neural Network (CNN) was used for feature extraction, and a Multilayer Perceptron (MLP) was used for classification. This hybrid learning model's results were also compared with traditional CNN and MLP models in terms of Accuracy, F1-Score, Precision and Recall. The hybrid CNN-MLP model showed an accuracy of 94.89%, compared with 86.95% and 80.77% for the CNN and MLP models, respectively.
Studies in health technology and informatics
"2021-11-05T00:00:00"
[ "Sakthi Jaya SundarRajasekar", "VasumathiNarayanan", "VaralakshmiPerumal" ]
10.3233/SHTI210617
EpistoNet: an ensemble of Epistocracy-optimized mixture of experts for detecting COVID-19 on chest X-ray images.
The Coronavirus has spread across the world and infected millions of people, causing devastating damage to public health and global economies. To mitigate the impact of the coronavirus, a reliable, fast, and accurate diagnostic system should be promptly implemented. In this study, we propose EpistoNet, a decision tree-based ensemble model using two mixtures of discriminative experts to classify COVID-19 lung infection from chest X-ray images. To optimize the architecture and hyper-parameters of the designed neural networks, we employed the Epistocracy algorithm, a recently proposed hyper-heuristic evolutionary method. Using 2500 chest X-ray images consisting of 1250 COVID-19 and 1250 non-COVID-19 cases, we left out 500 images for testing and partitioned the remaining 2000 images into 5 different clusters using the K-means clustering algorithm. We trained multiple deep convolutional neural networks on each cluster to help build a mixture of strong discriminative experts from the top-performing models, supervised by a gating network. The final ensemble model obtained 95% accuracy on COVID-19 images and 93% accuracy on non-COVID-19 images. The experimental results show that EpistoNet can accurately and reliably be used to detect COVID-19 infection in chest X-ray images, and the Epistocracy algorithm can be effectively used to optimize the hyper-parameters of the proposed models.
Scientific reports
"2021-11-05T00:00:00"
[ "Seyed ZiaeMousavi Mojab", "SeyedmohammadShams", "FarshadFotouhi", "HamidSoltanian-Zadeh" ]
10.1038/s41598-021-00524-y 10.1016/j.meegid.2020.104211 10.3389/fmed.2020.00515 10.1038/s41598-020-76550-z 10.7326/M20-1495 10.7861/clinmedicine.13-4-349 10.1148/radiol.2020201160 10.1016/j.eswa.2018.04.021 10.1109/TMI.2017.2655486 10.1038/s41591-019-0447-x 10.1080/07391102.2020.1767212 10.1007/s12539-020-00393-5 10.1109/ACCESS.2020.3016780 10.1016/j.imu.2020.100360 10.1155/2021/8829829 10.1109/ACCESS.2020.2995597 10.1016/j.eswa.2020.114054 10.1007/s10462-012-9338-y
AIoT Used for COVID-19 Pandemic Prevention and Control.
The pandemic of COVID-19 is continuing to wreak havoc in 2021, with at least 170 million victims around the world. Healthcare systems are overwhelmed by the large-scale virus infection. Luckily, the Internet of Things (IoT) is one of the most effective paradigms in the intelligent world, in which artificial intelligence (AI) technology, together with cloud computing and big data analysis, is playing a vital role in preventing the spread of the COVID-19 pandemic. AI and 5G technologies are advancing by leaps and bounds, further strengthening the intelligence and connectivity of IoT applications, and conventional IoT has been gradually upgraded to the more powerful AI + IoT (AIoT). For example, in terms of remote screening and diagnosis of COVID-19 patients, AI technology based on machine learning and deep learning has recently upgraded medical equipment significantly and has reshaped the workflow to require minimal contact with patients, so medical specialists can make clinical decisions more efficiently, providing the best protection not only to patients but also to the specialists themselves. This paper reviews the latest progress made in combating COVID-19 with both IoT and AI, and also provides comprehensive details on how to combat the pandemic of COVID-19, as well as the technologies that may be applied in the future.
Contrast media & molecular imaging
"2021-11-04T00:00:00"
[ "Shu-WenChen", "Xiao-WeiGu", "Jia-JiWang", "Hui-ShengZhu" ]
10.1155/2021/3257035 10.1109/GCWkshps50303.2020.9367584 10.1109/ICECCE49384.2020.9179284 10.1109/JTEHM.2021.3058841 10.2196/preprints.19033 10.32604/cmes.2021.016386 10.1109/JSEN.2021.3062442 10.1109/JBHI.2020.3042523 10.1016/j.bspc.2021.102656 10.1109/JBHI.2020.3037127 10.1109/TIP.2021.3058783 10.1016/j.media.2020.101836 10.1155/2021/5544742 10.26599/bdma.2020.9020012 10.32604/cmes.2021.015807 10.1155/2021/6633755 10.1007/s11390-020-0679-8.2 10.1007/s42979-020-00300-1 10.1109/I-SMAC49090.2020.9243576 10.1109/ACCESS.2021.3058448 10.1109/CONFLUENCE.2019.8776970 10.1109/CBMS.2018.00087 10.1109/ICECOCS50124.2020.9314459 10.1109/BIBM49941.2020.9313088 10.1109/IEMTRONICS51293.2020.9216437 10.1109/IC2IE50715.2020.9274663 10.1007/s42979-020-00400-y 10.1007/s12063-020-00164-x 10.1007/s11356-020-11676-1 10.1007/978-3-030-62412-5_46 10.1109/ICUEMS52408.2021.00026 10.1007/s41062-020-00454-0 10.1186/s13677-020-00215-5 10.19850/j.cnki.2096-4706.2020.13.054 10.1007/s42979-020-00248-2 10.1007/s13167-020-00218-x 10.1186/s11782-020-00087-1 10.1057/s42214-021-00108-7 10.3969/j.issn.1006-723X.2020.03.018 10.14089/j.cnki.cn11-3664/f.2020.03.001 10.1109/MNET.011.2000704 10.1109/JIOT.2020.3041042 10.1109/TMRB.2020.3036461 10.3969/j.issn.1672-8270.2020.06.043 10.1109/ICSTCEE49637.2020.9277223 10.1109/ISMSIT50672.2020.9254906 10.1109/NILES50944.2020.9257919
A Transfer Learning-Based Approach with Deep CNN for COVID-19- and Pneumonia-Affected Chest X-ray Image Classification.
The COVID-19 pandemic has had a significant impact on everyone's life. One of the fundamental steps in coping with this challenge is identifying COVID-19-affected patients as early as possible. In this paper, we classified COVID-19, pneumonia, and healthy cases from chest X-ray images by applying a transfer learning approach to the pre-trained VGG-19 architecture. We used MongoDB as a database to store the original images and their corresponding categories. The analysis was performed on a public dataset of 3797 X-ray images, among them COVID-19-affected (1184 images), pneumonia-affected (1294 images), and healthy (1319 images) (https://www.kaggle.com/tawsifurrahman/covid19-radiography-database/version/3). This research achieved an accuracy of 97.11%, an average precision of 97%, and an average recall of 97% on the test dataset.
SN computer science
"2021-11-02T00:00:00"
[ "SoarovChakraborty", "ShouravPaul", "K M AzharulHasan" ]
10.1007/s42979-021-00881-5 10.1109/ACCESS.2018.2798799 10.1016/j.artmed.2019.07.009 10.1016/j.media.2017.07.005 10.1111/tmi.13383 10.1093/cid/cir1053 10.1145/1773912.1773922 10.1145/1327452.1327492 10.1145/1365815.1365816 10.1016/j.crad.2018.12.015 10.1371/journal.pmed.1002686 10.1016/j.tranon.2018.10.012 10.1007/s13755-018-0057-x 10.3389/fnins.2018.00804 10.1109/ACCESS.2020.3010287 10.1016/j.compbiomed.2021.104319 10.1016/j.ijmedinf.2020.104284 10.1109/TMI.2020.2993291 10.1007/s00330-021-07715-1 10.1155/2020/8828855
Automated Diagnosis of Chest X-Ray for Early Detection of COVID-19 Disease.
In March 2020, the World Health Organization announced the COVID-19 pandemic, its dangers, and its rapid spread throughout the world. In March 2021, the second wave of the pandemic began with a new strain of COVID-19, which was more dangerous for some countries, including India, which recorded 400,000 new cases daily and more than 4,000 deaths per day. This pandemic has overloaded the medical sector, especially radiology. Deep-learning techniques have been used to reduce the burden on hospitals and assist physicians in making accurate diagnoses. In our study, two deep learning models, ResNet-50 and AlexNet, were introduced to diagnose X-ray datasets collected from many sources. Each network diagnosed a multiclass (four-class) and a two-class dataset. The images were processed to remove noise, and a data augmentation technique was applied to the minority classes to create a balance between the classes. The features extracted by the convolutional neural network (CNN) models were combined with traditional Gray-Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP) features into a 1-D vector for each image, which produced more representative features for each disease. Network parameters were tuned for optimum performance. The ResNet-50 network reached accuracy, sensitivity, specificity, and Area Under the Curve (AUC) of 95%, 94.5%, 98%, and 97.10%, respectively, on the multiclass dataset (COVID-19, viral pneumonia, lung opacity, and normal), while it reached accuracy, sensitivity, specificity, and AUC of 99%, 98%, 98%, and 97.51%, respectively, on the binary dataset (COVID-19 and normal).
Computational and mathematical methods in medicine
"2021-11-02T00:00:00"
[ "Ebrahim MohammedSenan", "AliAlzahrani", "Mohammed YAlzahrani", "NizarAlsharif", "Theyazn H HAldhyani" ]
10.1155/2021/6919483 10.1007/s00256-020-03582-x 10.1016/j.ejrad.2020.109075 10.2214/AJR.20.23530 10.1148/radiol.2020200642 10.3390/sym12040651 10.1016/j.compbiomed.2020.103805 10.1109/JBHI.2020.3037127 10.1007/s00330-020-07044-9 10.2196/19569 10.1109/TMI.2020.2995965 10.1109/JBHI.2020.3019505 10.1007/s40846-020-00529-4 10.1016/j.asoc.2020.106691 10.1016/j.eswa.2020.113909 10.1016/j.inffus.2020.10.004 10.1016/j.gltp.2021.01.001 10.3390/ijerph18063056 10.3390/healthcare9050522 10.1109/IIPHDW.2018.8388338 10.1016/j.patcog.2017.10.013 10.1016/S0893-6080(03)00115-1 10.1016/j.bspc.2019.101734 10.1109/TMI.2018.2791721 10.1155/2021/1004767 10.1007/s13246-020-00865-4 10.1016/j.cmpb.2020.105581 10.1016/j.compbiomed.2021.104348 10.1016/j.inffus.2021.02.013 10.1016/j.eswa.2020.114054
WOANet: Whale optimized deep neural network for the classification of COVID-19 from radiography images.
Coronavirus Disease (COVID-19) is a new disease that was declared a global pandemic in 2020. It is characterized by a constellation of traits like fever, dry cough, dyspnea, fatigue, chest pain, etc. Clinical findings have shown that human chest Computed Tomography (CT) images can reveal lung infection in most COVID-19 patients. Visual changes in CT scans due to COVID-19 are subjective and are evaluated by radiologists for diagnosis. Deep Learning (DL) can provide an automatic diagnostic tool to relieve radiologists' burden of quantitative analysis of CT images. However, DL techniques face training problems such as mode collapse and instability. Choosing training hyper-parameters to adjust the weights and biases of a DL network for a given CT image dataset is crucial for achieving the best accuracy. This paper combines the backpropagation algorithm and the Whale Optimization Algorithm (WOA) to optimize such DL networks. Experimental results for the diagnosis of COVID-19 patients on a comprehensive COVID-CT scan dataset show the best performance compared with other recent methods. The proposed network architecture was validated against existing pre-trained networks to prove the efficiency of the network.
Biocybernetics and biomedical engineering
"2021-11-02T00:00:00"
[ "RMurugan", "TriptiGoel", "SeyedaliMirjalili", "Deba KumarChakrabartty" ]
10.1016/j.bbe.2021.10.004
An ensemble learning method based on ordinal regression for COVID-19 diagnosis from chest CT.
Coronavirus disease 2019 (COVID-19) has brought huge losses to the world, and it remains a great threat to public health. X-ray computed tomography (CT) plays a central role in the management of COVID-19. Traditional diagnosis with pulmonary CT images is time-consuming and error-prone, and cannot meet the need for precise and rapid COVID-19 screening. Nowadays, deep learning (DL) has been successfully applied to CT image analysis, which assists radiologists in workflow scheduling and treatment planning for patients with COVID-19. Traditional methods use cross-entropy as the loss function with a Softmax classifier following a fully-connected layer. Most DL-based classification methods target intraclass relationships in a certain class (early, progressive, severe, or dissipative phases), ignoring the natural order of different phases of the disease progression,
Physics in medicine and biology
"2021-10-30T00:00:00"
[ "XiaodongGuo", "YimingLei", "PengHe", "WenbingZeng", "RanYang", "YinjinMa", "PengFeng", "QingLyu", "GeWang", "HongmingShan" ]
10.1088/1361-6560/ac34b2
CO-IRv2: Optimized InceptionResNetV2 for COVID-19 detection from chest CT images.
This paper focuses on the application of deep learning (DL) in the diagnosis of coronavirus disease (COVID-19). The novelty of this work is the introduction of the optimized InceptionResNetV2 for COVID-19 (CO-IRv2) method. Part of the CO-IRv2 scheme is derived from the concepts of InceptionNet and ResNet with hyperparameter tuning, while the remaining part is a new architecture consisting of a global average pooling layer, batch normalization, dense layers, and dropout layers. The proposed CO-IRv2 is applied to a new dataset of 2481 computed tomography (CT) images formed by combining two independent datasets. Data resizing and normalization are performed, and the evaluation is run for up to 25 epochs. Various metrics, including precision, recall, accuracy, F1-score, and area under the receiver operating characteristic (AUC) curve, are used to measure performance. The effectiveness of three optimizers, Adam, Nadam and RMSProp, is evaluated in classifying suspected COVID-19 patients and normal people. Results show that for CO-IRv2 on CT images, the obtained accuracies of the Adam, Nadam and RMSProp optimizers are 94.97%, 96.18% and 96.18%, respectively. Furthermore, it is shown that, for CT images, CO-IRv2 with the Nadam optimizer outperforms existing DL algorithms in the diagnosis of COVID-19 patients. Finally, CO-IRv2 is applied to an X-ray dataset of 1662 images, resulting in a classification accuracy of 99.40%.
PloS one
"2021-10-29T00:00:00"
[ "M Rubaiyat HossainMondal", "SubratoBharati", "PrajoyPodder" ]
10.1371/journal.pone.0259179 10.1111/tmi.13383 10.1016/j.imu.2020.100374 10.1056/NEJMc2001272 10.1056/NEJMc2013020 10.1001/jama.2020.2783 10.7326/M20-1495 10.3233/HIS-210008 10.1016/j.imu.2020.100391 10.1007/s11045-020-00756-7 10.1097/RLI.0000000000000672 10.1016/j.compbiomed.2020.103792 10.2174/1573405617666210713113439 10.1056/NEJMoa2002032 10.1148/radiol.2020200236 10.1148/radiol.2020200230 10.1038/s41591-019-0447-x 10.3390/s18082521 10.1164/rccm.201705-0860OC 10.1038/s41591-018-0177-5 10.1007/s12194-017-0406-5 10.1016/j.ejrad.2020.109041 10.1016/j.eng.2020.04.010 10.1016/j.bspc.2021.102588 10.1007/s10140-020-01886-y 10.1007/s00330-020-07108-w 10.1016/j.jpha.2020.03.004 10.1109/TCBB.2021.3065361 10.1016/j.asoc.2020.106885 10.1007/s11548-020-02286-w 10.1038/s41746-020-00373-5 10.1016/j.compbiomed.2020.104037 10.1016/j.media.2020.101824 10.1007/s10044-021-00984-y 10.1007/s13246-020-00865-4 10.6084/m9.figshare.14818116.v1 10.1371/journal.pone.0228422
Outbreak COVID-19 in Medical Image Processing Using Deep Learning: A State-of-the-Art Review.
Since December 2019, the outbreak of Coronavirus (COVID-19) has caused many deaths and affected every aspect of individual health. COVID-19 has been designated a pandemic by the World Health Organization. The circumstances have placed serious strain on every country worldwide, particularly on health systems and their time-consuming responses. The number of positive COVID-19 cases globally increases every day. The quantity of available diagnostic kits is restricted because of complications in detecting the presence of the illness. Fast and correct diagnosis of COVID-19 is a timely requirement for preventing and controlling the pandemic through suitable isolation and medicinal treatment. The significance of the present work is to outline deep learning techniques with medical imaging, covering outbreak prediction, indications of virus transmission, detection and treatment aspects, and vaccine availability with remedy research. Abundant medical imaging resources, such as X-rays, Computed Tomography scans, and Magnetic Resonance imaging, allow deep learning to formulate high-quality methods to fight against the COVID-19 pandemic. The review presents a comprehensive view of deep learning and its related applications in healthcare over the past decade. Finally, some issues and confrontations in controlling the health crisis and outbreaks are introduced. Progress in technology has contributed to improving individuals' lives. The problems faced by radiologists during medical imaging and the deep learning approaches for diagnosing COVID-19 infections are also discussed.
Archives of computational methods in engineering : state of the art reviews
"2021-10-26T00:00:00"
[ "JaspreetKaur", "PrabhpreetKaur" ]
10.1007/s11831-021-09667-7 10.1016/j.inffus.2017.10.006 10.1016/j.scs.2020.102018 10.1109/ACCESS.2020.3006172 10.1016/j.cell.2018.02.010 10.3390/app8101715 10.1016/j.compbiomed.2020.103792 10.1007/s10489-020-01826-w 10.1007/s13246-020-00865-4 10.1109/TMI.2020.2996645 10.1016/j.asoc.2020.106742 10.1109/ACCESS.2020.3005510 10.1016/j.cmpb.2020.105581 10.1016/j.cmpb.2020.105532 10.1016/j.chaos.2020.110190 10.1007/s00330-020-07044-9 10.1016/j.compbiomed.2021.104575 10.1016/j.asoc.2020.106912 10.1007/s10489-021-02393-4 10.1007/s12652-021-03306-6 10.1016/j.media.2021.102205 10.1155/2021/6680455 10.1007/s10489-020-01831-z 10.1016/j.scs.2021.103252 10.1109/ACCESS.2020.3009328 10.1038/s41591-020-0820-9 10.1016/j.jare.2020.03.005 10.1016/j.jhin.2020.01.022 10.3390/electronics9020274 10.1016/j.ajem.2020.09.013 10.1186/s12880-020-00485-0 10.1016/j.neucom.2019.04.086 10.1007/s12065-020-00403-x 10.1109/TMI.2016.2553401 10.1007/s10278-019-00182-7 10.1016/j.cmpb.2019.105268 10.1002/itl2.187 10.1016/j.scs.2020.102582 10.1016/j.inffus.2014.09.004 10.1109/TMI.2016.2538802 10.1016/j.media.2017.07.005 10.1109/ACCESS.2017.2788044 10.1007/s00138-020-01060-x 10.1016/j.media.2018.11.010 10.1002/mp.13620 10.1109/TMI.2017.2743464 10.1109/TMI.2019.2903562 10.1109/trpms.2018.2890359 10.1109/TMI.2019.2919951 10.1109/ACCESS.2020.3005152 10.1109/ACCESS.2020.2981337 10.1109/TMI.2015.2508280 10.1109/TMI.2018.2872031 10.1109/TMI.2015.2458702 10.1109/TMI.2016.2528120 10.1109/TMI.2016.2528162 10.1109/TNNLS.2019.2892409 10.1109/TMI.2019.2894322 10.1109/TPAMI.2012.277 10.1109/TMI.2013.2256922 10.1167/iovs.16-19964 10.1109/ACCESS.2020.2993937 10.3390/app11010371 10.1038/s41598-020-80261-w 10.1016/j.scs.2020.102589 10.1148/radiol.2020200432 10.1001/jama.2020.2648 10.1111/tmi.13383 10.1007/s12194-017-0406-5 10.1016/j.zemedi.2018.11.002 10.1007/s10916-018-1088-1 10.1016/j.gpb.2017.07.003 10.1038/s41591-020-0824-5 10.1016/j.media.2020.101693 10.1016/j.ijid.2020.03.004 10.1016/S0140-6736(20)30183-5 10.1016/j.ijid.2020.03.021 10.1016/S0140-6736(20)30627-9 10.2196/23996 10.1148/radiol.2020200463 10.1016/j.compbiomed.2019.103387 10.1016/j.cogsys.2019.09.007 10.1007/s11042-018-5714-1 10.1038/s41598-018-22437-z 10.1016/j.patrec.2020.03.011 10.1038/nature14539 10.1016/j.patcog.2019.01.006 10.1007/s11263-015-0816-y 10.1371/journal.pone.0207982 10.1007/s11548-018-1843-2 10.1016/S0140-6736(20)30260-9 10.1016/j.jaut.2020.102433 10.1016/S0140-6736(20)30251-8 10.1007/s11427-020-1637-5 10.1186/s12942-020-00202-8 10.1016/j.ijantimicag.2020.105948 10.1016/j.scs.2020.102372 10.1001/jama.2020.4169 10.1148/radiol.2020200642 10.1148/radiol.2020200823 10.1016/S0140-6736(20)30211-7 10.1007/s00134-020-05996-6 10.1148/radiol.2020200370 10.1007/s00259-020-04734-w 10.1148/radiol.2020200770 10.1016/j.eng.2020.04.010 10.5812/archcid.103232 10.1016/j.dsx.2020.04.012 10.1007/s12539-020-00376-6 10.1016/j.csbj.2020.03.025 10.1007/s00264-020-04609-7 10.1016/j.ijmedinf.2020.104284 10.1109/JBHI.2020.3019505 10.1007/s10489-020-01900-3 10.1038/s41598-020-76550-z 10.1109/TCBB.2021.3065361 10.1016/j.imu.2020.100360 10.1006/viro.1995.0056
A comparative analysis of eleven neural networks architectures for small datasets of lung images of COVID-19 patients toward improved clinical decisions.
The 2019 novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes the highly infectious disease commonly known as COVID-19, has endangered the health of many people around the world. COVID-19, which infects the lungs, is often diagnosed and managed using X-ray or computed tomography (CT) images. For such images, rapid and accurate classification and diagnosis can be performed using deep learning methods trained on existing neural network models. However, at present, there is no standardized method or uniform evaluation metric for image classification, which makes it difficult to compare the strengths and weaknesses of different neural network models. This paper used eleven well-known convolutional neural networks, including VGG-16, ResNet-18, ResNet-50, DenseNet-121, DenseNet-169, Inception-v3, Inception-v4, SqueezeNet, MobileNet, ShuffleNet, and EfficientNet-b0, to classify and distinguish COVID-19 and non-COVID-19 lung images. These eleven models were applied with different batch sizes and numbers of epochs, and their overall performance was compared and discussed. The results of this study can provide decision support for research on processing and analyzing small medical datasets, clarifying which model choices can yield better outcomes in lung image classification, diagnosis, disease management and patient care.
Computers in biology and medicine
"2021-10-25T00:00:00"
[ "YuanYang", "LinZhang", "MingyuDu", "JingyuBo", "HaoleiLiu", "LeiRen", "XiaoheLi", "M JamalDeen" ]
10.1016/j.compbiomed.2021.104887 10.1142/S1793962320410032 10.1142/S1793962321410014 10.1142/S1793962321410014 10.1016/j.ijsu.2020.02.034 10.1016/j.ejrad.2020.109041 10.1101/2020.02.23.20026930 10.1109/CVPR.2016.90 10.1007/s10489-020-01714-3 10.1101/2020.03.19.20039354 10.1109/CVPR.2015.7298594 10.1109/CVPR.2016.308 10.1007/978-3-030-01264-9_8 10.33889/IJMEMS.2020.5.4.052 10.1186/s40537-019-0197-0 10.1186/s40537-019-0197-0 10.1109/ICCV.2019.00140 10.2200/S00010ED1V01Y200508IVM003 10.2200/S00010ED1V01Y200508IVM003 10.4236/jcc.2019.73002 10.1109/TIP.2005.854492 10.1109/TIP.2012.2214050 10.1007/s11263-019-01228-7 10.1007/978-3-319-10590-1_53 10.1109/WACV.2018.00097
A large margin piecewise linear classifier with fusion of deep features in the diagnosis of COVID-19.
The world has experienced epidemics of coronavirus infections several times over the last two decades. Recent studies have shown that medical imaging techniques can be useful in developing automatic computer-aided diagnosis systems to detect pandemic diseases with high accuracy at an early stage. In this study, a large margin piecewise linear (LMPL) classifier was developed to diagnose COVID-19 against a wide range of viral pneumonias, including SARS and MERS, using chest X-ray images. In the proposed method, a preprocessing pipeline was employed. Moreover, deep pre- and post-rectified linear unit (ReLU) features were extracted using the well-known VGG-Net19, which was fine-tuned to optimize transfer learning. Afterward, canonical correlation analysis was performed for feature fusion, and the fused deep features were passed into the LMPL classifier. The introduced method reached the highest performance in comparison with related state-of-the-art methods for two different schemes (normal, COVID-19, and typical viral pneumonia; and COVID-19, SARS, and MERS pneumonia) with 99.39% and 98.86% classification accuracy, respectively.
Computers in biology and medicine
"2021-10-24T00:00:00"
[ "NedaAzouji", "AshkanSami", "MohammadTaheri", "HenningMüller" ]
10.1016/j.compbiomed.2021.104927
Deep learning for lung disease segmentation on CT: Which reconstruction kernel should be used?
The purpose of this study was to determine whether a single reconstruction kernel or both high- and low-frequency kernels should be used for training deep learning models for the segmentation of diffuse lung disease on chest computed tomography (CT). Two annotated datasets of COVID-19 pneumonia (323,960 slices) and interstitial lung disease (ILD) (4,284 slices) were used. Annotated CT images were used to train a U-Net architecture to segment disease. All CT slices were reconstructed using both a lung kernel (LK) and a mediastinal kernel (MK). Three different trainings, resulting in three different models, were compared for each disease: training on LK-only, MK-only or LK+MK images. Dice similarity scores (DSC) were compared using the Wilcoxon signed-rank test. Models trained only on LK images performed better on LK images than on MK images (median DSC = 0.62 [interquartile range (IQR): 0.54, 0.69] vs. 0.60 [IQR: 0.50, 0.70], P < 0.001 for COVID-19 and median DSC = 0.62 [IQR: 0.56, 0.69] vs. 0.50 [IQR: 0.43, 0.57], P < 0.001 for ILD). Similarly, models trained only on MK images performed better on MK images (median DSC = 0.62 [IQR: 0.53, 0.68] vs. 0.54 [IQR: 0.47, 0.63], P < 0.001 for COVID-19 and 0.69 [IQR: 0.61, 0.73] vs. 0.63 [IQR: 0.53, 0.70], P < 0.001 for ILD). Models trained on both kernels performed better than, or similarly to, those trained on only one kernel. For COVID-19, median DSC was 0.67 (IQR: 0.59, 0.73) when applied on LK images and 0.67 (IQR: 0.60, 0.74) when applied on MK images (P < 0.001 for both). For ILD, median DSC was 0.69 (IQR: 0.63, 0.73) when applied on LK images (P = 0.006) and 0.68 (IQR: 0.62, 0.72) when applied on MK images (P > 0.99). Reconstruction kernels impact the performance of deep learning-based models for lung disease segmentation. Training on both LK and MK images improves performance.
Diagnostic and interventional imaging
"2021-10-24T00:00:00"
[ "Trieu-NghiHoang-Thi", "MariaVakalopoulou", "StergiosChristodoulidis", "NikosParagios", "Marie-PierreRevel", "GuillaumeChassagnon" ]
10.1016/j.diii.2021.10.001
A Promising and Challenging Approach: Radiologists' Perspective on Deep Learning and Artificial Intelligence for Fighting COVID-19.
Chest X-rays (CXR) and computed tomography (CT) are the main medical imaging modalities used against the increasing worldwide spread of the 2019 coronavirus disease (COVID-19) epidemic. Machine learning (ML) and artificial intelligence (AI) technologies based on medical imaging, which fully extract and utilize the hidden information in massive medical imaging data, have been used in COVID-19 research for disease diagnosis and classification, treatment decision-making, efficacy evaluation, and prognosis prediction. This review article describes the extensive research on medical image-based ML and AI methods for preventing and controlling COVID-19, and summarizes their characteristics, differences, and significance in terms of application direction, image collection, and algorithm improvement, from the perspective of radiologists. The limitations and challenges faced by these systems and technologies, such as generalization and robustness, are discussed to indicate future research directions.
Diagnostics (Basel, Switzerland)
"2021-10-24T00:00:00"
[ "TianmingWang", "ZhuChen", "QuanliangShang", "CongMa", "XiangyuChen", "EnhuaXiao" ]
10.3390/diagnostics11101924 10.1016/j.bios.2020.112752 10.1148/radiol.2020200463 10.1016/j.diii.2020.10.001 10.1038/nbt.4233 10.7150/thno.38065 10.1016/j.ejrad.2020.109236 10.1007/s00259-020-04795-x 10.1002/14651858.CD013639.pub4 10.1007/s00259-020-04953-1 10.1148/radiol.2020200905 10.1007/s00330-021-07937-3 10.1038/s41467-020-18685-1 10.1038/s41467-020-17971-2 10.1016/j.cell.2020.04.045 10.1038/s41551-021-00704-1 10.1148/radiol.2020203511 10.1007/s00330-021-08050-1 10.1016/j.acra.2020.09.004 10.1183/13993003.00775-2020 10.21037/atm-20-3026 10.1186/s12879-021-05839-9 10.3389/fmed.2021.699984 10.1186/s12967-021-02992-2 10.1148/radiol.2020201874 10.1371/journal.pone.0252440 10.1148/radiol.2020201491 10.1002/mp.14609 10.1148/ryai.2020200079 10.1148/radiol.2020202439 10.1016/j.media.2021.102096 10.1038/s41598-021-95114-3 10.7150/thno.46465 10.1038/s41467-020-17280-8 10.3233/XST-200685 10.2147/TCRM.S280726 10.3389/fimmu.2020.585647 10.1038/s41598-021-86735-9 10.3348/kjr.2020.0146 10.1038/s41591-020-0931-3 10.3348/kjr.2020.1104 10.3390/jpm11060501 10.1016/S2589-7500(21)00039-X 10.1007/s00330-020-07225-6 10.1186/s12911-021-01588-6 10.1016/j.media.2020.101913 10.1007/s00259-020-05075-4 10.3233/XST-200757 10.1109/ACCESS.2020.2994762 10.1016/j.comcom.2021.06.011 10.3389/frai.2021.612914 10.1016/j.imu.2020.100378 10.1080/03014460.2020.1839132 10.1016/j.chaos.2020.109864 10.1016/j.chaos.2020.109853 10.1016/j.chaos.2020.109850 10.3390/ijerph17155330 10.1155/2021/6668985 10.1146/annurev-biophys-062920-063711 10.1016/S2589-7500(20)30192-8 10.3389/fimmu.2020.01581 10.1016/j.bj.2020.05.001 10.1016/j.chest.2020.11.026 10.1016/j.diii.2020.11.008
Predicting Mechanical Ventilation and Mortality in COVID-19 Using Radiomics and Deep Learning on Chest Radiographs: A Multi-Institutional Study.
In this study, we aimed to predict mechanical ventilation requirement and mortality using computational modeling of chest radiographs (CXRs) for coronavirus disease 2019 (COVID-19) patients. This two-center, retrospective study analyzed 530 deidentified CXRs from 515 COVID-19 patients treated at Stony Brook University Hospital and Newark Beth Israel Medical Center between March and August 2020. Linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and random forest (RF) machine learning classifiers to predict mechanical ventilation requirement and mortality were trained and evaluated using radiomic features extracted from patients' CXRs. Deep learning (DL) approaches were also explored for the clinical outcome prediction task and a novel radiomic embedding framework was introduced. All results are compared against radiologist grading of CXRs (zone-wise expert severity scores). Radiomic classification models had mean area under the receiver operating characteristic curve (mAUCs) of 0.78 ± 0.05 (sensitivity = 0.72 ± 0.07, specificity = 0.72 ± 0.06) and 0.78 ± 0.06 (sensitivity = 0.70 ± 0.09, specificity = 0.73 ± 0.09), compared with expert scores mAUCs of 0.75 ± 0.02 (sensitivity = 0.67 ± 0.08, specificity = 0.69 ± 0.07) and 0.79 ± 0.05 (sensitivity = 0.69 ± 0.08, specificity = 0.76 ± 0.08) for mechanical ventilation requirement and mortality prediction, respectively. Classifiers using both expert severity scores and radiomic features for mechanical ventilation (mAUC = 0.79 ± 0.04, sensitivity = 0.71 ± 0.06, specificity = 0.71 ± 0.08) and mortality (mAUC = 0.83 ± 0.04, sensitivity = 0.79 ± 0.07, specificity = 0.74 ± 0.09) demonstrated improvement over either artificial intelligence or radiologist interpretation alone. Our results also suggest instances in which the inclusion of radiomic features in DL improves model predictions over DL alone. The models proposed in this study and the prognostic information they provide might aid physician decision making and efficient resource allocation during the COVID-19 pandemic.
Diagnostics (Basel, Switzerland)
"2021-10-24T00:00:00"
[ "JosephBae", "SaarthakKapse", "GagandeepSingh", "RishabhGattu", "SyedAli", "NealShah", "ColinMarshall", "JonathanPierce", "TejPhatak", "AmitGupta", "JeremyGreen", "NikhilMadan", "PrateekPrasanna" ]
10.3390/diagnostics11101812 10.1016/S1473-3099(20)30120-1 10.1148/radiol.2020201754 10.2196/24018 10.1001/jamainternmed.2020.2033 10.1148/ryct.2020200047 10.1007/s00330-020-07270-1 10.3390/jcm9124129 10.1148/radiol.2020201160 10.1148/radiol.2020200642 10.1038/s42256-020-0180-7 10.1371/journal.pone.0233328 10.7717/peerj.11205 10.1080/23808993.2019.1585805 10.1016/j.compbiomed.2020.103792 10.1016/j.media.2020.101860 10.1016/j.crad.2021.02.005 10.1148/ryai.2020200098 10.1007/s11036-020-01672-7 10.1109/ACCESS.2021.3086020 10.1007/s10278-021-00421-w 10.1109/LGRS.2018.2802944 10.1109/TSMC.1973.4309314 10.1016/0031-3203(91)90143-S 10.1109/34.709601 10.1109/TPAMI.2005.159 10.1038/s41598-019-47765-6 10.1038/s41598-021-88538-4
Stratifying the early radiologic trajectory in dyspneic patients with COVID-19 pneumonia.
This study aimed to stratify the early pneumonia trajectory on chest radiographs and compare patient characteristics in dyspneic patients with coronavirus disease 2019 (COVID-19). We retrospectively included 139 COVID-19 patients with dyspnea (87 men, 62.7±16.3 years) and serial chest radiographs from January to September 2020. Radiographic pneumonia extent was quantified as a percentage using a previously developed deep learning algorithm. A group-based trajectory model was used to categorize the pneumonia trajectory after symptom onset during hospitalization. Clinical findings and outcomes were compared, and Cox regression was performed for survival analysis. Radiographic pneumonia trajectories were categorized into four groups. Group 1 (n = 83, 59.7%) had negligible pneumonia, and group 2 (n = 29, 20.9%) had mild pneumonia. Group 3 (n = 13, 9.4%) and group 4 (n = 14, 10.1%) showed similar considerable pneumonia extents at baseline, but group 3 had decreasing pneumonia extent at 1-2 weeks, while group 4 had increasing pneumonia extent. Intensive care unit admission and mortality were significantly more frequent in groups 3 and 4 than in groups 1 and 2 (P < .05). Groups 3 and 4 shared similar clinical and laboratory findings, but thrombocytopenia (<150×10³/μL) was exclusively observed in group 4 (P = .016). When compared to groups 1 and 2, group 4 (hazard ratio, 63.3; 95% confidence interval, 7.9-504.9) had a two-fold higher risk for mortality than group 3 (hazard ratio, 31.2; 95% confidence interval, 3.5-280.2), and this elevated risk was maintained after adjusting for confounders. Monitoring the early radiologic trajectory beyond baseline further prognosticated at-risk COVID-19 patients, who potentially had thrombo-inflammatory responses.
PloS one
"2021-10-23T00:00:00"
[ "Jin YoungKim", "Keum JiJung", "Seung-JinYoo", "Soon HoYoon" ]
10.1371/journal.pone.0259010 10.1148/radiol.2021203998 10.1001/jama.2020.2648 10.3348/kjr.2020.0132 10.1148/radiol.2020200370 10.1148/radiol.2020203496 10.3348/kjr.2020.0564 10.1148/radiol.2020201365 10.1148/radiol.2020203173 10.1016/j.jinf.2020.04.021 10.3346/jkms.2020.35.e316 10.3346/jkms.2020.35.e413 10.1148/radiol.2020201160 10.1186/s12889-019-7077-6 10.1177/0049124101029003005 10.1146/annurev.clinpsy.121208.131413 10.1093/aje/kwt179 10.2214/AJR.20.22976 10.1016/j.mayocp.2020.04.006 10.1016/S1473-3099(20)30086-4 10.1159/000512007 10.1111/jth.14975 10.1161/CIRCRESAHA.120.317703 10.1080/09537104.2020.1754383 10.1016/j.cca.2020.03.022 10.1093/labmed/lmaa067 10.1016/j.thromres.2020.11.017 10.1186/s13613-020-00706-3
GACDN: generative adversarial feature completion and diagnosis network for COVID-19.
The outbreak of coronavirus disease 2019 (COVID-19) has caused tens of millions of infections worldwide. Many machine learning methods have been proposed for computer-aided diagnosis between COVID-19 and community-acquired pneumonia (CAP) from chest computed tomography (CT) images. Most of these methods utilize location-specific handcrafted features based on segmentation results to improve diagnostic performance. However, the prerequisite segmentation step is time-consuming and requires intervention by many expert radiologists, which cannot be achieved in areas with limited medical resources. We propose a generative adversarial feature completion and diagnosis network (GACDN) that simultaneously generates handcrafted features from their radiomic counterparts and makes accurate diagnoses based on both original and generated features. Specifically, we first calculate the radiomic features from the CT images. Then, to quickly obtain the location-specific handcrafted features, we use the proposed GACDN to generate them from their corresponding radiomic features. Finally, we use both radiomic features and location-specific handcrafted features for COVID-19 diagnosis. Regarding the performance of the generated location-specific handcrafted features, the results of four basic classifiers show an average increase of 3.21% in diagnostic accuracy. Besides, the experimental results on a COVID-19 dataset show that the proposed method achieved superior performance in COVID-19 vs. community-acquired pneumonia (CAP) classification compared with state-of-the-art methods. The proposed method significantly improves the diagnostic accuracy of COVID-19 vs. CAP under the condition of incomplete location-specific handcrafted features. Besides, it is also applicable in regions lacking expert radiologists and high-performance computing resources.
BMC medical imaging
"2021-10-23T00:00:00"
[ "QiZhu", "HaizhouYe", "LiangSun", "ZhongnianLi", "RanWang", "FengShi", "DinggangShen", "DaoqiangZhang" ]
10.1186/s12880-021-00681-6 10.1148/radiol.2020201343 10.1016/j.compbiomed.2020.103792 10.1016/S0140-6736(20)30260-9 10.1016/S1473-3099(20)30086-4 10.1148/radiol.2020200274 10.1148/radiol.2020200432 10.1016/j.ejrad.2020.108961 10.2214/AJR.20.22954 10.1007/s00330-020-06731-x 10.1109/JBHI.2020.3019505 10.1109/TMI.2020.2992546 10.1109/TMI.2016.2582386 10.1088/1361-6560/abe838 10.29080/jhsp.v4i2.375 10.1016/j.neuroimage.2011.06.064 10.1109/MSP.2017.2765202 10.1145/3301282 10.1142/S0218488598000094 10.1109/TMM.2020.3013408
Current limitations to identify covid-19 using artificial intelligence with chest x-ray imaging (part ii). The shortcut learning problem.
Since the outbreak of the COVID-19 pandemic, computer vision researchers have been working on automatic identification of this disease using radiological images. The results achieved by automatic classification methods far exceed those of human specialists, with sensitivity as high as 100% being reported. However, prestigious radiology societies have stated that the use of this type of imaging alone is not recommended as a diagnostic method. According to some experts, the patterns presented in these images are unspecific and subtle, overlapping with other viral pneumonias. This report seeks to evaluate the robustness and generalizability of different approaches using artificial intelligence, deep learning, and computer vision to identify COVID-19 from chest X-ray images. We also seek to alert researchers and reviewers to the issue of "shortcut learning". Recommendations are presented to identify whether COVID-19 automatic classification models are being affected by shortcut learning. Firstly, papers using explainable artificial intelligence methods are reviewed. The results of applying external validation sets are evaluated to determine the generalizability of these methods. Finally, studies that apply traditional computer vision methods to perform the same task are considered. It is evident that, whether using the whole chest X-ray image or the bounding box of the lungs, the image regions that contribute most to the classification often lie outside the lung region, which is not plausible. In addition, in the investigations that evaluated their models on data sets external to the training set, the effectiveness of the models decreased significantly; such external validation may provide a more realistic representation of how a model will perform in the clinic. The results indicate that, so far, the existing models often involve shortcut learning, which makes their use less appropriate in the clinical setting.
Health and technology
"2021-10-19T00:00:00"
[ "José DanielLópez-Cabrera", "RubénOrozco-Morales", "Jorge ArmandoPortal-Díaz", "OrlandoLovelle-Enríquez", "MarlénPérez-Díaz" ]
10.1007/s12553-021-00609-8 10.1016/j.ijantimicag.2020.105924 10.1016/j.cca.2020.03.009 10.1148/radiol.2020200642 10.7326/M20-1495 10.1016/j.ijid.2020.03.071 10.1016/j.chest.2020.04.003 10.1177/0846537120924606 10.1109/ACCESS.2021.3058537 10.1007/s12553-021-00520-2 10.1148/radiol.2020200527 10.1148/ryct.2020200034 10.1148/radiol.2020201160 10.3348/kjr.2020.0132 10.1016/j.ejrad.2020.109092 10.1016/j.crad.2020.03.008 10.1109/JBHI.2020.3037127 10.1016/S2589-7500(20)30079-0 10.1109/ACCESS.2020.3027685 10.1016/j.jiph.2020.06.028 10.15212/bioi-2020-0015 10.1007/s10462-021-09985-z 10.1016/j.bspc.2020.102365 10.1016/j.mehy.2020.109761 10.1007/s12559-020-09795-5 10.1613/jair.1.12162 10.1016/j.csbj.2020.08.003 10.1148/ryai.2019180031 10.1371/journal.pmed.1002683 10.1016/j.inffus.2021.04.008 10.1109/ACCESS.2020.3044858 10.3892/etm.2020.8797 10.1007/s11548-020-02305-w 10.1007/s13755-020-00119-3 10.1007/s00521-020-05636-6 10.1016/j.patrec.2020.09.010 10.1109/ACCESS.2021.3079716 10.1023/B:VISI.0000029664.99615.94 10.1109/TPAMI.2002.1017623 10.1371/journal.pone.0235187 10.1016/j.ins.2020.09.041 10.20944/preprints202003.0300.v1
COVID-19 diagnosis from chest x-rays: developing a simple, fast, and accurate neural network.
Chest x-rays are a fast and inexpensive test that may potentially diagnose COVID-19, the disease caused by the novel coronavirus. However, chest imaging is not a first-line test for COVID-19 due to low diagnostic accuracy and confounding with other viral pneumonias. Recent research using deep learning may help overcome this issue, as convolutional neural networks (CNNs) have demonstrated high accuracy of COVID-19 diagnosis at an early stage. We used the COVID-19 Radiography database, which contains X-ray images of COVID-19, other viral pneumonia, and normal lungs. We developed a CNN in which we added a dense layer on top of a pre-trained baseline CNN (EfficientNetB0), and we trained, validated, and tested the model on 15,153 X-ray images. We used data augmentation to avoid overfitting and address class imbalance; we used fine-tuning to improve the model's performance. From the external test dataset, we calculated the model's accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1-score. Our model differentiated COVID-19 from normal lungs with 95% accuracy, 90% sensitivity, and 97% specificity; it differentiated COVID-19 from other viral pneumonia and normal lungs with 93% accuracy, 94% sensitivity, and 95% specificity. Our parsimonious CNN shows that it is possible to differentiate COVID-19 from other viral pneumonia and normal lungs on X-ray images with high accuracy. Our method may assist clinicians with making more accurate diagnostic decisions and support chest X-rays as a valuable screening tool for the early, rapid diagnosis of COVID-19. The online version contains supplementary material available at 10.1007/s13755-021-00166-4.
Health information science and systems
"2021-10-19T00:00:00"
[ "VasilisNikolaou", "SebastianoMassaro", "MasoudFakhimi", "LamprosStergioulas", "WolfgangGarn" ]
10.1007/s13755-021-00166-4 10.36416/1806-3756/e20200226 10.1148/radiol.2020200642 10.1007/s13755-020-00135-3 10.1016/j.imu.2020.100360 10.1016/j.chaos.2020.109944 10.1016/j.diii.2020.11.008 10.1016/j.cmpb.2020.105608 10.3390/sym12040651 10.1016/j.compbiomed.2020.103792 10.1007/s12559-020-09751-3 10.1007/s00264-020-04609-7 10.1016/j.mehy.2020.109761 10.1016/j.compbiomed.2020.103805 10.1016/j.cmpb.2020.105581 10.18517/ijaseit.10.2.11446 10.1109/ACCESS.2020.2994762 10.1016/j.cmpb.2020.105532 10.1007/s13246-020-00865-4 10.1371/journal.pone.0235187 10.1097/RTI.0000000000000532 10.1007/s13246-020-00888-x 10.1016/S2589-7500(21)00039-X
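The transfer-learning recipe in the abstract above (frozen pretrained backbone plus a trainable dense classification head) can be sketched with plain numpy. This is an illustrative assumption-laden toy: a fixed random ReLU projection stands in for EfficientNetB0, and the data and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen pretrained backbone (EfficientNetB0 in
# the paper): a fixed random ReLU projection from 64 inputs to 16 features.
W_backbone = rng.normal(size=(64, 16))

def extract_features(x):
    """Frozen feature extractor; only the dense head below is trained."""
    return np.maximum(x @ W_backbone, 0.0)

def train_dense_head(X, y, lr=0.1, epochs=300):
    """Fit a single dense (logistic) layer on top of the frozen features."""
    F = extract_features(X)
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(epochs):
        z = np.clip(F @ w + b, -30, 30)
        p = 1.0 / (1.0 + np.exp(-z))   # sigmoid
        g = p - y                      # gradient of the log-loss wrt z
        w -= lr * F.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(X, w, b):
    return (extract_features(X) @ w + b > 0).astype(int)

# Toy stand-in data: two well-separated Gaussian clouds for the two classes.
X = np.vstack([rng.normal(-1.0, 1.0, (50, 64)), rng.normal(1.0, 1.0, (50, 64))])
y = np.array([0] * 50 + [1] * 50)
w, b = train_dense_head(X, y)
train_acc = float((predict(X, w, b) == y).mean())
```

Because only the small head is optimized, training is cheap; in the paper, fine-tuning then unfreezes parts of the backbone to improve performance further.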
COVID-19 Diagnosis from CT Images with Convolutional Neural Network Optimized by Marine Predator Optimization Algorithm.
In recent years, almost every country in the world has struggled against the spread of Coronavirus Disease 2019. If governments and public health systems do not take action against the spread of the disease, it will have a severe impact on human life. A noteworthy technique to stop this pandemic is diagnosing COVID-19 infected patients and isolating them instantly. The present study proposes a method for the diagnosis of COVID-19 from CT images. The method is a hybrid one based on a convolutional neural network optimized by a newly introduced metaheuristic, called the marine predator optimization algorithm. This optimization is performed to improve the system accuracy. The method is then applied to the chest CT scans with COVID-19-related findings (MosMedData) dataset, and the results are compared with three other methods from the literature to indicate the method's performance. The final results indicate that the proposed method with 98.11% accuracy, 98.13% precision, 98.66% sensitivity, and 97.26%
BioMed research international
"2021-10-16T00:00:00"
[ "HuapingJia", "JunlongZhao", "AliArshaghi" ]
10.1155/2021/5122962 10.1007/s10489-020-01826-w 10.1117/12.2588672 10.1016/j.media.2020.101794 10.1016/j.imu.2020.100412 10.1109/ACCESS.2020.3022366 10.1002/ima.22608 10.1016/j.energy.2018.10.153 10.1515/med-2018-0002 10.1007/s13369-021-05688-3 10.1016/j.egyr.2019.11.013 10.1016/j.egyr.2019.10.029 10.1016/j.egyr.2020.03.010 10.1016/j.egyr.2020.04.012 10.1016/j.egyr.2019.09.039 10.1016/j.cma.2020.113609 10.1016/j.knosys.2019.105190 10.1016/j.eswa.2020.113377 10.1038/44831 10.1109/ACCESS.2020.3016780 10.1007/s00330-020-06817-6
Diagnostic Test Accuracy of Deep Learning Detection of COVID-19: A Systematic Review and Meta-Analysis.
To perform a meta-analysis to compare the diagnostic test accuracy (DTA) of deep learning (DL) in detecting coronavirus disease 2019 (COVID-19), and to investigate how network architecture and type of datasets affect DL performance. We searched PubMed, Web of Science and Inspec from January 1, 2020, to December 3, 2020, for retrospective and prospective studies on deep learning detection with at least reported sensitivity and specificity. Pooled DTA was obtained using random-effect models. Sub-group analysis between studies was also carried out for data source and network architectures. The pooled sensitivity and specificity were 91% (95% confidence interval [CI]: 88%, 93%; I The diagnosis of COVID-19 via deep learning has achieved incredible performance, and the source of datasets, as well as network architectures, strongly affect DL performance.
Academic radiology
"2021-10-16T00:00:00"
[ "Temitope EmmanuelKomolafe", "YuzhuCao", "Benedictor AlexanderNguchu", "PatriceMonkam", "Ebenezer ObaloluwaOlaniyi", "HaotianSun", "JianZheng", "XiaodongYang" ]
10.1016/j.acra.2021.08.008 10.1148/radiol.2020200905 10.4103/0970-1591.91444 10.1002/sim.3631 10.1002/sim.1186 10.1007/s11548-020-02286-w 10.1186/s43163-020-00039-9 10.1001/jama.1994.0351033008103
COVID-19 detection from lung CT-Scans using a fuzzy integral-based CNN ensemble.
The COVID-19 pandemic has collapsed public healthcare systems, along with severely damaging the economy of the world. The SARS-CoV-2 virus, also known as the coronavirus, led to community spread, causing the death of more than a million people worldwide. The primary reason for the uncontrolled spread of the virus is the lack of provision for population-wide screening. The apparatus for RT-PCR based COVID-19 detection is scarce and the testing process takes 6-9 h. The test is also not satisfactorily sensitive (only 71% sensitive). Hence, Computer-Aided Detection techniques based on deep learning methods can be used in such a scenario using other modalities, like chest CT-scan images, for more accurate and sensitive screening. In this paper, we propose a method that uses a Sugeno fuzzy integral ensemble of four pre-trained deep learning models, namely, VGG-11, GoogLeNet, SqueezeNet v1.1 and Wide ResNet-50-2, for classification of chest CT-scan images into COVID and Non-COVID categories. The proposed framework has been tested on a publicly available dataset and achieves 98.93% accuracy and 98.93% sensitivity on it. The model outperforms state-of-the-art methods on the same dataset and proves to be a reliable COVID-19 detector. The relevant source codes for the proposed approach can be found at: https://github.com/Rohit-Kundu/Fuzzy-Integral-Covid-Detection.
Computers in biology and medicine
"2021-10-15T00:00:00"
[ "RohitKundu", "Pawan KumarSingh", "SeyedaliMirjalili", "RamSarkar" ]
10.1016/j.compbiomed.2021.104895 10.1088/2632-2153/abf22c 10.1109/TPAMI.2019.2918284 10.1007/s11042-021-11319-8 10.1148/radiol.2020200370 10.1109/ICCV.2017.74 10.25540/e3y2-aqye
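The Sugeno fuzzy integral aggregation named in the abstract above can be sketched as follows. This is a generic three-model illustration with made-up densities and confidences, not the paper's four-CNN configuration; the lambda-fuzzy-measure recursion and the max-min integral are the standard definitions.

```python
import numpy as np

def solve_lambda(g):
    """Solve prod(1 + lam * g_i) = 1 + lam for the lambda-fuzzy-measure
    parameter by bisection; the densities below sum to > 1, so lam is in (-1, 0)."""
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    lo, hi = -1.0 + 1e-9, -1e-9   # f(lo) > 0 and f(hi) < 0 for these densities
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sugeno_integral(scores, dens, lam):
    """Sugeno fuzzy integral: max over i of min(h_(i), g(A_i)), with model
    scores sorted in decreasing order and g(A_i) from the lambda-measure recursion."""
    order = np.argsort(scores)[::-1]
    h, d = scores[order], dens[order]
    G = d[0]
    out = min(h[0], G)
    for i in range(1, len(h)):
        G = G + d[i] + lam * d[i] * G
        out = max(out, min(h[i], G))
    return out

# Hypothetical fuzzy densities for three ensemble members (e.g. their
# validation accuracies; the paper uses four CNNs and its own densities).
dens = np.array([0.90, 0.80, 0.85])
lam = solve_lambda(dens)

# Per-model class confidences for one CT scan (rows: models, cols: classes).
probs = np.array([[0.2, 0.8],
                  [0.3, 0.7],
                  [0.1, 0.9]])
fused = np.array([sugeno_integral(probs[:, c], dens, lam) for c in range(2)])
pred = int(np.argmax(fused))   # ensemble decision
```

Unlike plain averaging, the fuzzy measure lets more reliable models dominate the fused confidence, which is the motivation for this ensembling choice.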
Artificial intelligence on COVID-19 pneumonia detection using chest x-ray images.
Recent studies show the potential of artificial intelligence (AI) as a screening tool to detect COVID-19 pneumonia based on chest x-ray (CXR) images. However, issues on the datasets and study designs from medical and technical perspectives, as well as questions on the vulnerability and robustness of AI algorithms have emerged. In this study, we address these issues with a more realistic development of AI-driven COVID-19 pneumonia detection models by generating our own data through a retrospective clinical study to augment the dataset aggregated from external sources. We optimized five deep learning architectures, implemented development strategies by manipulating data distribution to quantitatively compare study designs, and introduced several detection scenarios to evaluate the robustness and diagnostic performance of the models. At the current level of data availability, the performance of the detection model depends on the hyperparameter tuning and has less dependency on the quantity of data. InceptionV3 attained the highest performance in distinguishing pneumonia from normal CXR in two-class detection scenario with sensitivity (Sn), specificity (Sp), and positive predictive value (PPV) of 96%. The models attained higher general performance of 91-96% Sn, 94-98% Sp, and 90-96% PPV in three-class compared to four-class detection scenario. InceptionV3 has the highest general performance with accuracy, F1-score, and g-mean of 96% in the three-class detection scenario. For COVID-19 pneumonia detection, InceptionV3 attained the highest performance with 86% Sn, 99% Sp, and 91% PPV with an AUC of 0.99 in distinguishing pneumonia from normal CXR. Its capability of differentiating COVID-19 pneumonia from normal and non-COVID-19 pneumonia attained 0.98 AUC and a micro-average of 0.99 for other classes.
PloS one
"2021-10-15T00:00:00"
[ "Lei RigiBaltazar", "Mojhune GabrielManzanillo", "JoverlynGaudillo", "Ethel DominiqueViray", "MarioDomingo", "BeatriceTiangco", "JasonAlbia" ]
10.1371/journal.pone.0257884 10.1080/14737159.2020.1757437 10.1148/radiol.2020200642 10.1038/s41598-020-76550-z 10.1016/j.clinimag.2020.04.001 10.1259/bjr/20276974 10.1016/j.ejrad.2004.03.010 10.1016/j.ejro.2020.100231 10.1007/s10489-020-01888-w 10.1016/j.bspc.2021.102583 10.1016/j.asoc.2020.106885 10.1016/j.media.2020.101794 10.1109/ACCESS.2020.3010287 10.1007/s10489-020-02076-6 10.1007/s13246-020-00865-4 10.1016/j.imu.2020.100360 10.1007/s40846-020-00529-4 10.1007/s10044-021-00984-y 10.1007/s10489-020-01900-3 10.1101/2020.03.20.20039834 10.1371/journal.pmed.1002686 10.1148/radiol.2020201491 10.1016/S2589-7500(20)30003-0
Conditional GAN based augmentation for predictive modeling of respiratory signals.
Respiratory illness is a primary cause of mortality and impairment in the life span of an individual in the current COVID-19 pandemic scenario. The inability to inhale and exhale is one of the most difficult conditions for a person suffering from respiratory disorders. Unfortunately, the diagnosis of respiratory disorders with the presently available imaging and auditory screening modalities is sub-optimal, and the accuracy of diagnosis varies across medical experts. At present, deep neural nets demand a massive amount of data for precise models. In reality, the respiratory data sets are quite limited, and therefore, data augmentation (DA) is employed to enlarge them. In this study, conditional generative adversarial network (cGAN) based DA is utilized for synthetic generation of signals. Publicly available repositories such as the ICBHI 2017 challenge, RALE and the Think Labs Lung Sounds Library are considered for classifying the respiratory signals. To assess the efficacy of the signals artificially created by the DA approach, similarity measures are calculated between original and augmented signals. After that, to quantify the performance of augmentation in classification, scalogram representations of the generated signals are fed as input to different pre-trained deep learning architectures, viz. AlexNet, GoogLeNet and ResNet-50. The experimental results are computed and compared with existing classical approaches to augmentation. The research findings conclude that the proposed cGAN method of augmentation provides better accuracy of 92.50% and 92.68% for the two data sets, respectively, using the ResNet-50 model.
Computers in biology and medicine
"2021-10-13T00:00:00"
[ "SJayalakshmy", "Gnanou FlorenceSudha" ]
10.1016/j.compbiomed.2021.104930
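The conditioning mechanism behind the cGAN augmentation above can be illustrated with an untrained toy generator. Everything here is a hypothetical sketch: the weights are random rather than adversarially trained, and all dimensions are invented. The point is only how the class label steers generation, by concatenating a label embedding to the noise vector.

```python
import numpy as np

rng = np.random.default_rng(1)

N_CLASSES, Z_DIM, SIG_LEN = 3, 8, 32   # hypothetical sizes, not the paper's

# Untrained toy generator weights; in a real cGAN these are learned
# adversarially against a discriminator that also sees the class label.
W_embed = rng.normal(size=(N_CLASSES, 4))    # label embedding table
W1 = rng.normal(size=(Z_DIM + 4, 16))
W2 = rng.normal(size=(16, SIG_LEN))

def generate(z, label):
    """Condition the generator by concatenating a label embedding to the noise."""
    h = np.tanh(np.concatenate([z, W_embed[label]]) @ W1)
    return np.tanh(h @ W2)   # a synthetic 'respiratory signal'

z = rng.normal(size=Z_DIM)
sig_a = generate(z, 0)   # same noise, different class conditions ...
sig_b = generate(z, 1)   # ... yield different synthetic signals
```

After training, sampling fresh noise per label yields class-consistent synthetic signals, which is what allows per-class augmentation of the limited respiratory data sets.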
Detection and analysis of COVID-19 in medical images using deep learning techniques.
The main purpose of this work is to investigate and compare several deep learning enhanced techniques applied to X-ray and CT-scan medical images for the detection of COVID-19. In this paper, we used four powerful pre-trained CNN models, VGG16, DenseNet121, ResNet50, and ResNet152, for the COVID-19 CT-scan binary classification task. The proposed Fast.AI ResNet framework was designed to find the best architecture, pre-processing, and training parameters for the models largely automatically. The accuracy and F1-score were both above 96% in the diagnosis of COVID-19 using CT-scan images. In addition, we applied transfer learning techniques to overcome the insufficient data and to improve the training time. The binary and multi-class classification tasks on X-ray images were performed by utilizing an enhanced VGG16 deep transfer learning architecture. High accuracy of 99% was achieved by the enhanced VGG16 in the detection of X-ray images of COVID-19 and pneumonia. The accuracy and validity of the algorithms were assessed on well-known public X-ray and CT-scan datasets. The proposed methods achieve better results for COVID-19 diagnosis than other related works in the literature. In our opinion, our work can help virologists and radiologists to make a better and faster diagnosis in the struggle against the outbreak of COVID-19.
Scientific reports
"2021-10-06T00:00:00"
[ "DandiYang", "CristhianMartinez", "LaraVisuña", "HardevKhandhar", "ChintanBhatt", "JesusCarretero" ]
10.1038/s41598-021-99015-3 10.1016/j.ijsu.2020.02.034 10.1038/s41598-020-79139-8 10.1038/s41598-021-91305-0 10.1001/jama.2017.14585 10.1016/j.future.2018.04.065 10.3390/ijms17081313 10.1016/j.media.2017.07.005 10.1007/s11548-021-02335-y 10.1016/j.patrec.2019.11.013 10.1007/s11042-020-10010-8 10.1371/journal.pone.0242535 10.3390/info11020108 10.1016/j.cell.2018.02.010 10.1109/ACCESS.2020.3025164 10.1016/j.bspc.2021.102588 10.1109/ACCESS.2020.3025010 10.1016/j.eswa.2020.114054 10.3390/electronics9091388 10.1016/j.compbiomed.2020.103792
Role of standard and soft tissue chest radiography images in deep-learning-based early diagnosis of COVID-19.
Journal of medical imaging (Bellingham, Wash.)
"2021-10-02T00:00:00"
[ "QiyuanHu", "KarenDrukker", "Maryellen LGiger" ]
10.1117/1.JMI.8.S1.014503 10.1371/journal.pone.0242958 10.1136/bmj.m1808 10.1016/j.chest.2020.04.003 10.1016/j.clinimag.2020.04.001 10.1148/ryct.2020200034 10.1148/radiol.2020201874 10.1016/j.cmpb.2020.105581 10.1016/j.compbiomed.2020.103792 10.1016/j.chaos.2020.109944 10.1038/s41598-020-76550-z 10.1148/radiol.2020203511 10.1136/bmj.m1328 10.1109/CVPR.2017.369 10.1117/12.2581977 10.1109/CVPR.2009.5206848 10.1371/journal.pmed.1002686 10.1038/s41598-021-87994-2 10.1117/1.JMI.4.4.041307 10.3978/j.issn.2223-4292.2014.11.20 10.1038/s41598-020-67441-4 10.1080/01621459.1987.10478410 10.1006/jmps.1998.1218 10.2307/2531595 10.1148/radiol.12120725 10.1109/ICCV.2017.74 10.2214/AJR.19.21512
Development of smart camera systems based on artificial intelligence network for social distance detection to fight against COVID-19.
In this work, an artificial intelligence network-based smart camera system prototype, which tracks social distance using a bird's-eye perspective, has been developed. "MobileNet SSD-v3", "Faster-R-CNN Inception-v2", and "Faster-R-CNN ResNet-50" models have been utilized to identify people in video sequences. The final prototype based on the Faster R-CNN model is an integrated embedded system that detects social distance with the camera. The software, developed using the "Nvidia Jetson Nano" development kit and a Raspberry Pi camera module, performs all necessary computation on the device itself, detects social distance violations, issues audible and light warnings, and reports the results to the server. It is predicted that the developed smart camera prototype can be integrated into public spaces within the scope of the "sustainable smart cities" that the world is on the verge of transitioning to.
Applied soft computing
"2021-10-01T00:00:00"
[ "OnurKaraman", "AdiAlhudhaif", "KemalPolat" ]
10.1016/j.asoc.2021.107610 10.1016/j.ijantimicag.2020.105951 10.1007/s11427-020-1637-5 10.1001/jama.2020.1585 10.1056/NEJMoa2001191 10.3906/sag-2004-172 10.1542/peds.2020-0702 10.3345/cep.2020.00493 10.1016/S0140-6736(20)30313-5 10.1136/bmj.m1066 10.1016/S0140-6736(20)30679-6 10.1093/jtm/taaa039 10.1093/jtm/taaa020 10.1136/bmj.m3223 10.1016/j.neucom.2018.01.092 10.1016/j.scitotenv.2020.138858 10.3390/make1030044 10.1016/j.procs.2018.10.335 10.1145/2184319.2184337 10.1016/j.sysarc.2020.101896 10.1093/eurpub/ckn107
MTU-COVNet: A hybrid methodology for diagnosing the COVID-19 pneumonia with optimized features from multi-net.
The aim of this study was to establish and evaluate a fully automatic deep learning system for the diagnosis of COVID-19 using thoracic computed tomography (CT). In this retrospective study, a novel hybrid model (MTU-COVNet) was developed to extract visual features from volumetric thoracic CT scans for the detection of COVID-19. The collected dataset consisted of 3210 CT scans from 953 patients. Of the total 3210 scans in the final dataset, 1327 (41%) were obtained from the COVID-19 group, 929 (29%) from the CAP group, and 954 (30%) from the normal CT group. Diagnostic performance was assessed with the area under the receiver operating characteristic (ROC) curve, sensitivity, and specificity. The proposed approach with the optimized features from concatenated layers reached an overall accuracy of 97.7% for the CT-MTU dataset. The remaining performance metrics, namely specificity, sensitivity, precision, F1 score, and Matthews correlation coefficient, were 98.8%, 97.6%, 97.8%, 97.7%, and 96.5%, respectively. This model showed high diagnostic performance in detecting COVID-19 pneumonia (specificity: 98.0% and sensitivity: 98.2%) and CAP (specificity: 99.1% and sensitivity: 97.1%). The areas under the ROC curves for COVID-19 and CAP were 0.997 and 0.996, respectively. A deep learning-based AI system built on CT imaging can detect COVID-19 pneumonia with high diagnostic efficiency and distinguish it from CAP and normal CT. AI applications can have beneficial effects in the fight against COVID-19.
Clinical imaging
"2021-10-01T00:00:00"
[ "GürkanKavuran", "Erdalİn", "Ayşegül AltıntopGeçkil", "MahmutŞahin", "Nurcan KırıcıBerber" ]
10.1016/j.clinimag.2021.09.007 10.1016/j.rmed.2020.106239 10.1097/MCP.0000000000000671 10.1001/jama.2019.21118 10.1016/S0140-6736(20)30183-5 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1148/radiol.2020200490 10.1016/j.ejrad.2019.108774 10.21037/jtd.2018.02.57 10.1016/j.diii.2020.10.004 10.3390/app8101715 10.1148/radiol.2020200905 10.1101/2020.03.20.20039834 10.1101/2020.02.14.20023028 10.1183/13993003.00775-2020 10.1101/2020.03.12.20027185 10.1061/(ASCE)GT.1943-5606.0001284 10.1109/CVPR.2016.90 10.3390/electronics8101130 10.1023/A:1022627411411 10.1007/978-1-4615-5703-6_3 10.1023/A:1009715923555 10.1080/03007995.2020.1830050 10.1148/radiol.2020200230 10.1155/2020/9756518 10.1148/ryct.2020200034 10.1007/s11547-021-01370-8 10.1016/j.mtcomm.2021.102198
Determining Top Fully Connected Layer's Hidden Neuron Count for Transfer Learning, Using Knowledge Distillation: a Case Study on Chest X-Ray Classification of Pneumonia and COVID-19.
Deep convolutional neural network (CNN)-assisted classification of images is one of the most discussed topics in recent years, and continuous innovation of neural network architectures is making it more accurate and efficient every day. But training a neural network from scratch is very time-consuming and requires a lot of sophisticated computational equipment and power. So, using a pre-trained neural network as a feature extractor for an image classification task, or "transfer learning", is a very popular approach that saves time and computational power for practical use of CNNs. In this paper, an efficient way of building a full model from any pre-trained model with high accuracy and low memory is proposed using knowledge distillation. The distilled knowledge of the last layer of the pre-trained network is passed through fully connected layers with different hidden neuron counts, followed by a softmax layer. The accuracies of the student networks are mildly lower than those of the whole models, but the accuracy of a student model clearly indicates the accuracy of the real network. In this way, the best number of hidden neurons for the dense layer of that pre-trained network, with the best accuracy and no overfitting, can be found in less time. Here, VGG16 and VGG19 (pre-trained on the "ImageNet" dataset) are tested on chest X-rays (pneumonia and COVID-19). For finding the best total number of hidden neurons, this saves nearly 44 min for the VGG19 and 36 min and 37 s for the VGG16 feature extractor.
Journal of digital imaging
"2021-10-01T00:00:00"
[ "RitwickGhosh" ]
10.1007/s10278-021-00518-2 10.1093/cid/cir1051 10.1109/42.34715 10.1109/42.845178 10.1109/TMI.2003.815900 10.1007/978-3-319-10590-1_53 10.1145/3065386 10.1109/TMI.2016.2553401 10.1109/TMI.2016.2528129 10.1109/TMI.2016.2536809 10.1109/TMI.2016.2528162 10.1038/nature14539 10.1109/MSP.2017.2765695 10.1016/j.cell.2018.02.010
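The distillation objective underlying the abstract above is the standard temperature-softened KL divergence between teacher and student outputs (Hinton et al.). Below is a minimal sketch; the candidate hidden-neuron counts and logits are invented purely to illustrate the selection step, and in the paper each candidate would be an actual trained student head.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax used in knowledge distillation."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence from the teacher's softened outputs to the student's."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# Hypothetical selection step: among candidate hidden-neuron counts for the
# dense layer, keep the student whose softened outputs track the teacher best
# (the logits below are made up for illustration).
teacher = [2.0, 0.5, -1.0]
students = {64: [1.9, 0.6, -0.9], 128: [2.0, 0.5, -1.0], 256: [0.0, 0.0, 0.0]}
best_hidden = min(students, key=lambda k: distillation_loss(students[k], teacher))
```

Because each small student is cheap to train, sweeping hidden-neuron counts this way is much faster than retraining the full model per candidate, which is the time saving the abstract reports.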
COVID-view: Diagnosis of COVID-19 using Chest CT.
Significant work has been done towards deep learning (DL) models for automatic lung and lesion segmentation and classification of COVID-19 on chest CT data. However, comprehensive visualization systems focused on supporting the dual visual+DL diagnosis of COVID-19 are non-existent. We present COVID-view, a visualization application specially tailored for radiologists to diagnose COVID-19 from chest CT data. The system incorporates a complete pipeline of automatic lung segmentation and localization/isolation of lung abnormalities, followed by visualization, visual and DL analysis, and measurement/quantification tools. Our system combines the traditional 2D workflow of radiologists with newer 2D and 3D visualization techniques, with DL support for a more comprehensive diagnosis. COVID-view incorporates a novel DL model for classifying patients into positive/negative COVID-19 cases, which acts as a reading aid for the radiologist using COVID-view and provides an attention heatmap as explainable DL for the model output. We designed and evaluated COVID-view through suggestions, close feedback, and case studies of real-world patient data conducted with expert radiologists who have substantial experience diagnosing chest CT scans for COVID-19, pulmonary embolism, and other forms of lung infections. We present requirements and task analysis for the diagnosis of COVID-19 that motivate our design choices and result in a practical system capable of handling real-world patient cases.
IEEE transactions on visualization and computer graphics
"2021-09-30T00:00:00"
[ "ShreerajJadhav", "GaofengDeng", "MarleneZawin", "Arie EKaufman" ]
10.1109/TVCG.2021.3114851 10.1109/TVCG.2020.3020958
COVID Mortality Prediction with Machine Learning Methods: A Systematic Review and Critical Appraisal.
More than a year has passed since the report of the first case of coronavirus disease 2019 (COVID), and deaths continue to increase. Minimizing the time required for resource allocation and clinical decision making, such as triage, choice of ventilation modes and admission to the intensive care unit, is important. Machine learning techniques are acquiring an increasingly sought-after role in predicting the outcome of COVID patients. Particularly, the use of baseline machine learning techniques is rapidly developing in COVID mortality prediction, since a mortality prediction model could rapidly and effectively help clinical decision-making for COVID patients at imminent risk of death. Recent studies have reviewed predictive models for SARS-CoV-2 diagnosis, severity, length of hospital stay, intensive care unit admission, or mechanical ventilation outcomes; however, systematic reviews focused on prediction of COVID mortality with machine learning methods are lacking in the literature. The present review examines the studies that implemented machine learning, including deep learning, methods in COVID mortality prediction, presenting the existing published literature and providing possible explanations of the best results the studies obtained. The study also discusses challenging aspects of current studies, providing suggestions for future developments.
Journal of personalized medicine
"2021-09-29T00:00:00"
[ "FrancescaBottino", "EmanuelaTagliente", "LucaPasquini", "Alberto DiNapoli", "MartinaLucignani", "LorenzoFigà-Talamanca", "AntonioNapolitano" ]
10.3390/jpm11090893 10.1038/s41418-020-00720-9 10.3934/mbe.2021039 10.3390/jpm11040290 10.1183/13993003.00775-2020 10.21037/atm-20-3026 10.1148/radiol.2020202723 10.1136/bmj.m1328 10.1016/j.jiph.2020.06.028 10.1109/RBME.2020.2987975 10.1109/ACCESS.2021.3058537 10.1038/s42256-021-00307-0 10.1016/j.patcog.2009.06.009 10.1002/cem.3226 10.3390/biom10101460 10.1038/s41598-021-86327-7 10.1136/bmjresp-2017-000240 10.1111/j.2517-6161.1996.tb02080.x 10.3389/fpubh.2020.587937 10.1038/s41551-020-00633-5 10.1371/journal.pone.0243262 10.1007/s11548-020-02299-5 10.2196/24018 10.2196/20259 10.1186/s12911-020-01316-6 10.2196/25442 10.1038/s41379-020-00700-x 10.1007/s00521-020-05592-1 10.1002/emp2.12205 10.1371/journal.pone.0249285 10.1038/s41467-020-18684-2 10.1038/s41598-020-75767-2 10.1080/07853890.2020.1868564 10.2196/24207 10.1093/ije/dyaa171 10.1038/s42256-020-0180-7 10.2196/23458 10.1038/s41746-021-00456-x 10.1016/j.mayocpiqo.2021.05.001 10.3390/jpm11050343 10.1136/bmjhci-2020-100235 10.2214/AJR.20.22954 10.1080/01621459.1994.10476866 10.3390/jcm8060799 10.1145/1577069.1577078 10.1016/j.jocs.2016.05.005 10.1017/dmp.2021.82 10.1186/s12889-020-09721-2 10.1007/s00330-020-07270-1 10.1016/j.ijid.2020.05.021 10.1007/s00330-020-07269-8 10.5808/GI.2019.17.4.e41 10.1155/2020/2836236 10.1016/j.dss.2012.01.016 10.1109/TSMCB.2008.2002909 10.1155/2013/239628 10.1109/IJCNN.2016.7727770 10.1016/j.imu.2020.100449 10.1016/j.asoc.2020.106885 10.1109/TKDE.2019.2912815 10.3389/fpubh.2017.00307 10.1186/1471-2288-14-40 10.1016/j.jclinepi.2014.09.007 10.7326/0003-4819-130-6-199903160-00016 10.1097/EDE.0b013e3181c30fb2 10.1016/j.jclinepi.2019.09.016 10.1016/j.jbi.2017.10.008 10.1373/clinchem.2016.255539 10.1371/journal.pone.0245384 10.1016/S2589-7500(20)30217-X 10.2196/23128 10.1002/jmv.26699 10.18632/aging.103770 10.1186/s13098-020-00565-9 10.1016/S2352-4642(21)00066-3
A two-tier feature selection method using Coalition game and Nystrom sampling for screening COVID-19 from chest X-Ray images.
The world is still under the threat of different strains of the coronavirus and the pandemic situation is far from over. The method that is widely used for the detection of COVID-19 is Reverse Transcription Polymerase Chain Reaction (RT-PCR), which is a time-consuming method, is prone to manual errors, and has poor precision. Although many nations across the globe have begun the mass immunization procedure, the COVID-19 vaccine will take a long time to reach everyone. The application of artificial intelligence (AI) and computer-aided diagnosis (CAD) has been used in the domain of medical imaging for a long period. It is quite evident that the use of CAD in the detection of COVID-19 is inevitable. The main objective of this paper is to use convolutional neural networks (CNN) and a novel feature selection technique to analyze Chest X-Ray (CXR) images for the detection of COVID-19. We propose a novel two-tier feature selection method, which increases the accuracy of the overall classification model used for screening COVID-19 CXRs. Filter feature selection models are often more effective than wrapper methods, as wrapper methods tend to be computationally more expensive and are not useful for large datasets dealing with a large number of features. However, most filter methods do not take into consideration how a group of features would work together; rather, they just look at the features individually and decide on a score. We have used the approximate Shapley value, a concept from coalition game theory, to deal with this problem. Further, in the case of a large dataset, it is important to work with shorter embeddings of the features. We have used CUR decomposition and Nystrom sampling to further reduce the feature space. To check the efficacy of this two-tier feature selection method, we have applied it to the features extracted by three standard deep learning models, namely
Journal of ambient intelligence and humanized computing
"2021-09-28T00:00:00"
[ "PratikBhowal", "SubhankarSen", "RamSarkar" ]
10.1007/s12652-021-03491-4 10.1016/j.matpr.2017.11.298 10.1109/ACCESS.2020.3025164 10.3390/diagnostics11020315 10.1098/rsif.2017.0387 10.1016/j.compbiomed 10.1109/ACCESS.2020.3028012 10.1007/s11042-019-07811-x 10.1007/s00500-020-05183-1 10.3390/diagnostics11050895 10.1109/TMI.2020.2993291 10.1038/s41551-018-0195-0 10.1109/TMI.2020.2994459 10.1002/mp.13264 10.1146/annurev-bioeng-071516-044442 10.1007/s10489-020-02149-6 10.1142/S0218001421510046 10.1128/CMR.00133-20 10.1007/s10489-020-01888-w 10.1016/j.mehy.2020.109761 10.1109/JBHI.2020.3023246 10.1038/s41598-018-34455-y
Detection and classification of lung diseases for pneumonia and Covid-19 using machine and deep learning techniques.
Since the arrival of the novel Covid-19, numerous research efforts have been initiated for its accurate prediction across the world. The pre-existing lung disease pneumonia is closely related to Covid-19, as several patients have died due to severe chest congestion (a pneumonic condition). It is challenging even for medical experts to differentiate Covid-19 from pneumonia. Chest X-ray imaging is the most reliable method for lung disease prediction. In this paper, we propose a novel framework for predicting lung diseases, such as pneumonia and Covid-19, from the chest X-ray images of patients. The framework consists of dataset acquisition, image quality enhancement, adaptive and accurate region of interest (ROI) estimation, feature extraction, and disease anticipation. For dataset acquisition, we have used two publicly available chest X-ray image datasets. As image quality degrades during X-ray acquisition, we apply image quality enhancement using median filtering followed by histogram equalization. For accurate ROI extraction of chest regions, we have designed a modified region growing technique that consists of dynamic region selection based on pixel intensity values and morphological operations. For accurate detection of diseases, a robust set of features plays a vital role. We have extracted visual, shape, texture, and intensity features from each ROI image, followed by normalization. For normalization, we formulated a robust technique to enhance the detection and classification results. Soft computing methods such as artificial neural network (ANN), support vector machine (SVM), K-nearest neighbour (KNN), an ensemble classifier, and a deep learning classifier are used for classification. For accurate detection of lung disease, a deep learning architecture has been proposed using a recurrent neural network (RNN) with long short-term memory (LSTM).
Experimental results show the robustness and efficiency of the proposed model in comparison to the existing state-of-the-art methods.
Journal of ambient intelligence and humanized computing
"2021-09-28T00:00:00"
[ "ShimpyGoyal", "RajivSingh" ]
10.1007/s12652-021-03464-7 10.1007/s10489-020-01829-7 10.1155/2018/4168538 10.1007/s13246-020-00865-4 10.1007/s11042-019-08394-3 10.1109/TMI.2010.2095026 10.1007/s42399-020-00383-0 10.1007/s10489-020-01714-3 10.1007/s00415-020-10067-3 10.1007/s00500-020-05275-y 10.1007/s00405-020-06319-7 10.1007/s11042-019-08260-2 10.1007/s12652-020-02669-6 10.1007/s42979-020-00373-y 10.3390/diagnostics10060417 10.1007/s10489-020-02010-w 10.1109/tmi.2013.2284099 10.1007/s10489-020-01902-1 10.1016/j.cell.2018.02.010 10.1049/iet-ipr.2016.1014 10.5373/JARDCS/V11I9/20193162 10.1007/s12652-020-02502-0 10.1016/j.procs.2017.12.016 10.1007/s42399-020-00527-2 10.1109/icsec.2016.7859887 10.1007/s13755-020-00135-3 10.1186/s12890-020-01286-5 10.1007/s00405-020-06284-1 10.1142/s0218001421510046 10.1007/s10489-020-02149-6 10.1007/s42399-020-00603-7 10.1007/s10140-020-01869-z 10.1007/s10489-020-01888-w 10.1186/s43055-020-00296-x 10.1007/s00330-020-07201-0 10.1016/s0933-3657(01)00094-x
A Fine-tuned deep convolutional neural network for chest radiography image classification on COVID-19 cases.
The outbreak of coronavirus disease 2019 (COVID-19) continues to have a catastrophic impact on the living standard of people worldwide. To fight against COVID-19, many countries are using a combination of containment and mitigation activities. Effective screening of contaminated patients is a critical step in the battle against COVID-19. During early medical examinations, it was observed that patients having abnormalities in chest radiography images show the symptoms of COVID-19 infection. Motivated by this, in this article, we propose a unique framework to diagnose COVID-19 infection. Here, we removed the fully connected layers of an already proven model, VGG-16, and placed a new simplified fully connected layer set, initialized with random weights, on top of this deep convolutional neural network, which has already learned discriminative features, namely, edges, colors, geometric changes, shapes, and objects. To avoid the risk of destroying these rich features, we warm up our FC head by freezing all layers in the body of our network, and then unfreeze all the layers in the network body to be fine-tuned. The suggested classification model achieved an accuracy of 97.12% with 99.2% sensitivity and 99.6% specificity for COVID-19 identification. This classification model is superior to the other classification models used to classify COVID-19 infected patients.
Multimedia tools and applications
"2021-09-28T00:00:00"
[ "Amiya KumarDash", "PuspanjaliMohapatra" ]
10.1007/s11042-021-11388-9 10.1007/s13246-020-00865-4 10.1016/j.asoc.2019.105773 10.1016/S0140-6736(20)30211-7 10.1007/s11042-018-5714-1 10.1056/NEJMoa2002032 10.1016/S0140-6736(20)30183-5 10.1016/j.compbiomed.2020.103792 10.1007/s11042-020-08905-7 10.1016/j.idm.2020.02.002 10.1021/acsnano.0c02624 10.1038/s41598-019-56847-4
Weakly Supervised Segmentation of COVID19 Infection with Scribble Annotation on CT Images.
Segmentation of infections from CT scans is important for accurate diagnosis and follow-up in tackling COVID-19. Although convolutional neural networks have great potential to automate the segmentation task, most existing deep learning-based infection segmentation methods require fully annotated ground-truth labels for training, which is time-consuming and labor-intensive. This paper proposes a novel weakly supervised segmentation method for COVID-19 infections in CT slices, which only requires scribble supervision and is enhanced with uncertainty-aware self-ensembling and transformation-consistent techniques. Specifically, to deal with the difficulty caused by the shortage of supervision, an uncertainty-aware mean teacher is incorporated into the scribble-based segmentation method, encouraging the segmentation predictions to be consistent under different perturbations for an input image. This mean teacher model can guide the student model to be trained using information in images without requiring manual annotations. On the other hand, considering that the output of the mean teacher contains both correct and unreliable predictions, equally treating each prediction in the teacher model may degrade the performance of the student network. To alleviate this problem, a pixel-level uncertainty measure on the predictions of the teacher model is calculated, and the student model is then guided only by reliable predictions from the teacher model. To further regularize the network, a transformation-consistent strategy is also incorporated, which requires the prediction to follow the same transformation if a transform is performed on an input image of the network. The proposed method has been evaluated on two public datasets and one local dataset. The experimental results demonstrate that the proposed method is more effective than other weakly supervised methods and achieves performance similar to that of fully supervised methods.
Pattern recognition
"2021-09-28T00:00:00"
[ "XiaomingLiu", "QuanYuan", "YaozongGao", "KeleiHe", "ShuoWang", "XiaoTang", "JinshanTang", "DinggangShen" ]
10.1016/j.patcog.2021.108341
Evaluating Deep Neural Network Architectures with Transfer Learning for Pneumonitis Diagnosis.
Pneumonitis is an infectious disease that causes inflammation of the air sacs. It can be life-threatening to the very young and the elderly. Detection of pneumonitis from X-ray images is a significant challenge, and early detection and assistance with diagnosis can be crucial. Recent developments in the field of deep learning have significantly improved performance in medical image analysis. The superior predictive performance of deep learning methods makes them ideal for pneumonitis classification from chest X-ray images. However, training deep learning models can be cumbersome and resource-intensive. Reusing knowledge representations of public models trained on large-scale datasets through transfer learning can help alleviate these challenges. In this paper, we compare various image classification models based on transfer learning with well-known deep learning architectures. The Kaggle chest X-ray dataset was used to evaluate and compare our models. We apply basic data augmentation and fine-tune our feed-forward classification head on models pretrained on the ImageNet dataset. We observed that the DenseNet201 model outperforms other models with an AUROC score of 0.966 and a recall score of 0.99. We also visualize the class activation maps from the DenseNet201 model to interpret the patterns recognized by the model for prediction.
Computational and mathematical methods in medicine
"2021-09-24T00:00:00"
[ "SuryaKrishnamurthy", "KathiravanSrinivasan", "Saeed MianQaisar", "P M Durai RajVincent", "Chuan-YuChang" ]
10.1155/2021/8036304 10.1109/ICC.2007.637 10.1007/s40747-021-00324-x 10.1007/s11554-020-00987-8 10.1016/j.imed.2021.05.005 10.1007/s12065-020-00540-3 10.1016/j.media.2017.07.005 10.1007/s11684-019-0726-4 10.1016/S2589-7500(19)30123-2 10.1038/s41746-020-00376-2 10.1007/978-3-030-35445-9_20 10.1016/j.compeleceng.2019.08.004 10.1155/2020/8876798 10.3390/diagnostics10090649 10.1016/j.measurement.2020.108046 10.3390/diagnostics11030530 10.1016/j.asoc.2020.106859 10.1016/j.asoc.2020.106744 10.1007/s12652-021-03075-2 10.1155/2019/4180949 10.4108/eai.28-5-2020.166290 10.1038/s41467-020-17971-2 10.1016/j.chaos.2020.110495 10.3390/s21144749 10.3390/s21082852 10.17632/rscbjbr9sj.2 10.32604/cmc.2021.016736 10.1109/TCSVT.2017.2718622 10.1109/ICCE-China.2017.7990985 10.1155/2021/5541134
Non-melanoma skin cancer diagnosis: a comparison between dermoscopic and smartphone images by unified visual and sonification deep learning algorithms.
Non-melanoma skin cancer (NMSC) is the most frequent keratinocyte-origin skin tumor. It is confirmed that dermoscopy of NMSC confers a diagnostic advantage as compared to visual face-to-face assessment. Under COVID-19 restrictions, diagnostics by telemedicine photos, which are analogous to visual inspection, displaced part of in-person visits. This study evaluated, using a dual convolutional neural network (CNN), performance metrics on dermoscopic images (DI) versus smartphone-captured images (SI), and tested whether artificial intelligence narrows the proclaimed gap in diagnostic accuracy. A CNN that receives a raw image and predicts malignancy, overlaid by a second independent CNN which processes a sonification (image-to-sound mapping) of the original image, were combined into a unified malignancy classifier. All images were histopathology-verified in a comparison between NMSC and benign skin lesions excised as suspected NMSCs. Study outcome criteria were sensitivity and specificity for the unified output. Images acquired by DI (n = 132 NMSC, n = 33 benign) were compared to SI (n = 170 NMSC, n = 28 benign). DI and SI analysis metrics resulted in an area under the receiver operating characteristic curve (AUC) of 0.911 and 0.821, respectively. Accuracy was increased by DI (0.88; CI 81.9-92.4) as compared to SI (0.75; CI 68.1-80.6, p < 0.005). Sensitivity of DI was higher than SI (95.3%, CI 90.4-98.3 vs 75.3%, CI 68.1-81.6, p < 0.001), but not specificity (p = NS). Telemedicine use of smartphone images might result in a substantial decrease in diagnostic performance as compared to dermoscopy, which needs to be considered by both healthcare providers and patients.
Journal of cancer research and clinical oncology
"2021-09-22T00:00:00"
[ "ADascalu", "B NWalker", "YOron", "E ODavid" ]
10.1007/s00432-021-03809-x 10.1002/cncr.32969 10.1177/1357633X19874200 10.1016/J.ESWA.2012.07.021 10.1016/j.ebiom.2019.04.055 10.1002/14651858.CD011901.pub2 10.1111/jdv.12434 10.1684/ejd.2012.1727 10.1016/J.JAAD.2016.10.041 10.1136/bmj.m127 10.1200/CCI.17.00159 10.1146/annurev-psych-120709-145346 10.3389/fmed.2020.598903 10.1007/S00432-018-02834-7 10.1200/JCO.19.02031 10.1055/S-0039-1677897 10.1016/j.jaad.2003.07.029 10.1001/jamadermatol.2015.0173 10.1056/NEJMRA1708701 10.1016/j.dib.2020.106221 10.1111/jdv.14782 10.1016/j.jaad.2019.08.012 10.1001/jamadermatol.2015.1187 10.1001/jamadermatol.2013.2139 10.1200/JCO.19.03350 10.1038/s41591-018-0300-7 10.1038/sdata.2018.161 10.1001/jamadermatol.2018.4378 10.1001/archdermatol.2012.893 10.1016/j.ebiom.2019.01.028 10.1111/bjd.16730
Automatic deep learning system for COVID-19 infection quantification in chest CT.
The paper proposes an automatic deep learning system for segmentation of COVID-19 infection areas in chest CT scans. CT imaging has proved its ability to detect COVID-19 disease even in asymptomatic patients, which makes it a trustworthy alternative to PCR. Coronavirus disease has spread globally, and PCR screening is the adopted diagnostic testing method for COVID-19 detection. However, PCR is criticized for its low sensitivity; it is also a time-consuming and complicated manual process. The proposed framework includes different steps; it starts by preparing the region of interest through segmentation of the lung organ, which then undergoes edge-enhancing diffusion (EED) filtering to improve the contrast and intensity homogeneity of infection areas. The proposed FCN is implemented using the U-net architecture with a modified residual block that includes a concatenation skip connection. The block improves the learning of gradient values by forwarding the infection area features through the network. The proposed system is evaluated using different measures and achieved Dice overlap scores of 0.961 and 0.780 for lung and infection area segmentation, respectively. The proposed system is trained and tested using many 2D CT slices extracted from diverse datasets from different sources, which demonstrates the system's generalization and effectiveness. The use of more datasets from different sources helps to enhance the system's accuracy and generalization, which can be accomplished based on data availability in the future.
Multimedia tools and applications
"2021-09-21T00:00:00"
[ "Omar IbrahimAlirr" ]
10.1007/s11042-021-11299-9 10.1109/TMI.2020.2996645 10.1148/radiol.2020200432 10.1007/s10278-019-00227-x 10.4236/jcc.2015.311023 10.1148/radiol.2020200905 10.1109/TMI.2009.2022368 10.1016/j.jinf.2020.04.004 10.1016/j.chest.2020.04.003 10.1016/j.ijsu.2020.02.034
Software system to predict the infection in COVID-19 patients using deep learning and web of things.
Since the end of 2019, a new coronavirus disease 2019 (COVID-19) has been detected and has quickly spread through many countries across the world; computed tomography (CT) images have been used as an important substitute for the time-consuming Reverse Transcriptase polymerase chain reaction (RT-PCR) test. Medical imaging such as computed tomography provides great potential due to growing skepticism toward the sensitivity of RT-PCR as a screening tool. For this purpose, automated image segmentation is highly desired for clinical decision aid and disease monitoring. However, there is limited publicly accessible COVID-19 image knowledge, leading to the overfitting of conventional approaches. To address this issue, the present paper focuses on data augmentation techniques to create synthetic data. Further, a framework has been proposed using WoT and a traditional U-Net with EfficientNet B0 to segment the COVID Radiopedia and Medseg datasets automatically. The framework achieves an
Software: practice & experience
"2021-09-21T00:00:00"
[ "AshimaSingh", "AmritaKaur", "ArwinderDhillon", "SahilAhuja", "HarpreetVohra" ]
10.1002/spe.3011 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1016/j.radi.2005.02.003 10.1016/j.media.2017.07.005 10.1109/rbme.2020.2987975 10.1016/j.chest.2020.04.003 10.14358/PERS.80.2.000 10.1080/17517575.2020.1820583 10.1148/radiol.2020201343 10.1101/2020.03.19.20039354 10.1148/radiol.2020201237 10.1148/radiol.2020200905 10.1007/s11831-021-09547-0 10.1101/2020.03.19.20038315 10.1101/2020.04.22 10.1148/radiol.2020200642 10.5281/zenodo.3757476 10.1038/nature14539 10.1007/978-3-319-46976-8_19 10.1016/j.media.2019.01.012 10.1109/tmi.2018.2845918
DR-MIL: deep represented multiple instance learning distinguishes COVID-19 from community-acquired pneumonia in CT images.
Given that the novel coronavirus disease 2019 (COVID-19) has become a pandemic, a method to accurately distinguish COVID-19 from community-acquired pneumonia (CAP) is urgently needed. However, the spatial uncertainty and morphological diversity of COVID-19 lesions in the lungs, and subtle differences with respect to CAP, make differential diagnosis non-trivial. We propose a deep represented multiple instance learning (DR-MIL) method to fulfill this task. A 3D volumetric CT scan of one patient is treated as one bag and ten CT slices are selected as the initial instances. For each instance, deep features are extracted from the pre-trained ResNet-50 with fine-tuning and represented as one deep represented instance score (DRIS). Each bag with a DRIS for each initial instance is then input into a citation k-nearest neighbor search to generate the final prediction. A total of 141 COVID-19 and 100 CAP CT scans were used. The performance of DR-MIL is compared with other potential strategies and state-of-the-art models. DR-MIL displayed an accuracy of 95% and an area under curve of 0.943, which were superior to those observed for comparable methods. COVID-19 and CAP exhibited significant differences in both the DRIS and the spatial pattern of lesions (p<0.001). As a means of content-based image retrieval, DR-MIL can identify images used as key instances, references, and citers for visual interpretation. DR-MIL can effectively represent the deep characteristics of COVID-19 lesions in CT images and accurately distinguish COVID-19 from CAP in a weakly supervised manner. The resulting DRIS is a useful supplement to visual interpretation of the spatial pattern of lesions when screening for COVID-19.
Computer methods and programs in biomedicine
"2021-09-19T00:00:00"
[ "ShouliangQi", "CaiwenXu", "ChenLi", "BinTian", "ShuyueXia", "JigangRen", "LimingYang", "HanlinWang", "HuiYu" ]
10.1016/j.cmpb.2021.106406 10.1148/radiol.2020201491 10.1109/RBME.2020.2990959 10.1109/RBME.2020.2987975 10.1109/CVPR.2016.90
COVID-19 early detection for imbalanced or low number of data using a regularized cost-sensitive CapsNet.
With the emergence of the novel coronavirus disease at the end of 2019, several approaches were proposed to help physicians detect the disease, such as using deep learning to recognize lung involvement based on the pattern of pneumonia. These approaches rely on analyzing CT images and exploring COVID-19 pathologies in the lung. Most of the successful methods are based on deep learning, which is the state of the art. Nevertheless, the big drawback of deep approaches is their need for many samples, which is not always possible. This work proposes a combined deep architecture that benefits from both of the employed architectures, DenseNet and CapsNet. To better generalize the deep model, we propose a regularization term with far fewer parameters. Network convergence improved significantly, especially when the number of training samples is small. We also propose a novel cost-sensitive loss function for imbalanced data that makes our model feasible under conditions with a limited number of positive samples. These novelties make our approach more intelligent and potent in real-world situations with imbalanced data, as is common in hospitals. We analyzed our approach on two publicly available datasets, HUST and COVID-CT, with different protocols. In the first protocol of HUST, we followed the original paper's setup and outperformed it. With the second protocol of HUST, we show our approach's superiority concerning imbalanced data. Finally, with three different validations of COVID-CT, we provide evaluations in the presence of a low number of samples, along with a comparison with the state of the art.
Scientific reports
"2021-09-18T00:00:00"
[ "MaliheJavidi", "SaeidAbbaasi", "SaraNaybandi Atashi", "MahdiJampour" ]
10.1038/s41598-021-97901-4 10.1038/s41598-020-76282-0 10.1038/s41598-020-76740-9 10.1038/s41467-020-17971-2 10.1038/s41598-020-74539-2 10.1038/s41598-021-84219-4 10.1038/s41598-020-76550-z 10.1038/s41598-020-70479-z 10.1109/CVPR.2017.243 10.1109/TPAMI.2019.2913372 10.3390/sym12040651 10.1007/s00521-020-05437-x 10.1016/j.media.2020.101910 10.1038/s41598-020-71294-2 10.1016/j.patrec.2020.10.001 10.1007/s00330-020-06801-0 10.1148/radiol.2020200343 10.1155/2020/9756518 10.1148/radiol.2020201491 10.2196/19569 10.1016/j.imu.2020.100427 10.1038/s41598-021-93658-y 10.1038/s41598-020-74164-z 10.1007/s00500-020-05424-3 10.32604/cmc.2021.012955 10.3390/s18093153 10.1016/j.patcog.2021.107851 10.1038/s41551-020-00633-5 10.1109/JBHI.2020.3023246 10.1016/j.media.2021.101978 10.1038/s41467-020-20657-4 10.1038/s41551-021-00704-1
CARes-UNet: Content-aware residual UNet for lesion segmentation of COVID-19 from chest CT images.
Coronavirus disease 2019 (COVID-19) has caused a serious global health crisis. It has been proven that deep learning methods have great potential to assist doctors in diagnosing COVID-19 by automatically segmenting the lesions in computed tomography (CT) slices. However, several challenges still restrict the application of these methods, including high variation in lesion characteristics and low contrast between lesion areas and healthy tissues. Moreover, the lack of high-quality labeled samples and the large number of patients make it urgent to develop a high-accuracy model that performs well not only under supervision but also with semi-supervised methods. We propose a content-aware lung infection segmentation deep residual network (content-aware residual UNet (CARes-UNet)) to segment the lesion areas of COVID-19 from chest CT slices. In our CARes-UNet, a residual connection is used in the convolutional block, which alleviates the degradation problem during training. Content-aware upsampling modules are then introduced to improve the performance of the model while reducing the computation cost. Moreover, to achieve faster convergence, an advanced optimizer named Ranger is utilized to update the model's parameters during training. Finally, we employ a semi-supervised segmentation framework to deal with the lack of pixel-level labeled data. We evaluated our approach using three public datasets with multiple metrics and compared its performance to several models. Our method outperforms other models on multiple indicators; for instance, in terms of Dice coefficient on the COVID-SemiSeg dataset, CARes-UNet achieved a score of 0.731, and semi-CARes-UNet further boosted it to 0.776. Ablation studies further validated the effectiveness of each key component of our proposed model.
Compared with the existing neural network methods applied to COVID-19 lesion segmentation tasks, our CARes-UNet achieves more accurate segmentation results, and semi-CARes-UNet further improves them using semi-supervised learning, while offering a possible way to address the lack of high-quality annotated samples. Our CARes-UNet and semi-CARes-UNet can be used in artificial intelligence-empowered computer-aided diagnosis systems to improve diagnostic accuracy in this ongoing COVID-19 pandemic.
Medical physics
"2021-09-17T00:00:00"
[ "XinhuaXu", "YuhangWen", "LuZhao", "YiZhang", "YoujunZhao", "ZixuanTang", "ZiduoYang", "Calvin Yu-ChianChen" ]
10.1002/mp.15231 10.1109/CVPR42600.2020.01070
Densely connected attention network for diagnosing COVID-19 based on chest CT.
To fully enhance the feature extraction capabilities of deep learning models, and thus accurately diagnose coronavirus disease 2019 (COVID-19) from chest CT images, a densely connected attention network (DenseANet) was constructed by utilizing the self-attention mechanism in deep learning. During construction of the DenseANet, we not only densely connected attention features within and between the feature extraction blocks of the same scale, but also densely connected attention features of different scales at the end of the deep model, thereby further enhancing the high-order features. In this way, as the depth of the deep model increases, the spatial attention features generated by different layers can be densely connected and gradually transferred to deeper layers. The DenseANet takes CT images of the lung fields segmented by an improved U-Net as inputs and outputs the probability of the patients suffering from COVID-19. Compared with existing attention networks, DenseANet can maximize the utilization of self-attention features at different depths in the model. A five-fold cross-validation experiment was performed on a dataset containing 2993 CT scans of 2121 patients, and experiments showed that the DenseANet can effectively locate the lung lesions of patients infected with SARS-CoV-2, and distinguish COVID-19, common pneumonia and normal controls with an average accuracy of 96.06% and an AUC of 0.989. The DenseANet we propose generates strong attention features and achieves the best diagnosis results. In addition, the proposed method of densely connecting attention features can easily be extended to other advanced deep learning methods to improve their performance in related tasks.
Computers in biology and medicine
"2021-09-15T00:00:00"
[ "YuFu", "PengXue", "EnqingDong" ]
10.1016/j.compbiomed.2021.104857 10.1109/JBHI.2021.3094578 10.1016/j.cmpb.2021.106381 10.1016/j.media.2020.101913 10.1016/j.compbiomed.2021.104744 10.3389/fmed.2020.608525 10.1038/s41467-020-17971-2 10.1002/mp.15044 10.1109/CVPR.2016.90 10.1007/978-3-319-24574-4_28
Genetic-based adaptive momentum estimation for predicting mortality risk factors for COVID-19 patients using deep learning.
The mortality risk factors for coronavirus disease (COVID-19) must be predicted early, especially for severe cases, so that intensive care can be provided before patients become critically ill. This paper aims to develop an optimized convolutional neural network (CNN) for predicting mortality risk factors for COVID-19 patients. The proposed model supports two types of input data: clinical variables and computed tomography (CT) scans. Features are extracted in the optimized CNN phase and then applied to the classification phase. The CNN model's hyperparameters were optimized using a proposed genetic-based adaptive momentum estimation (GB-ADAM) algorithm. The GB-ADAM algorithm employs the genetic algorithm (GA) to optimize the Adam optimizer's configuration parameters, consequently improving the classification accuracy. The model is validated using three recent cohorts from New York, Mexico, and Wuhan, consisting of 3055, 7497, and 504 patients, respectively. The results indicated that the most significant mortality risk factors are: CD
International journal of imaging systems and technology
"2021-09-15T00:00:00"
[ "Sally MElghamrawy", "Aboul EllaHassanien", "Athanasios VVasilakos" ]
10.1002/ima.22644 10.1016/j.ijid.2020.01.009 10.1016/S0140-6736(20)30566-3 10.1038/s41467-020-17280-8 10.1016/j.measurement.2019.107459 10.1101/2020.02.27.20028027v2 10.1101/2020.04.21.20074591v1 10.1101/2020.04.11.20056523v1 10.1101/2020.02.27.20028027v3 10.1101/2020.02.20.20025510v1
A novel and efficient deep learning approach for COVID-19 detection using X-ray imaging modality.
With the exponential growth of COVID-19 cases, medical practitioners are searching for accurate and quick automated detection methods to prevent Covid from spreading while trying to reduce the computational requirements of devices. In this research article, an accurate and efficient ensemble model based on deep learning Convolutional Neural Networks (CNNs) is proposed, using 2161 COVID-19, 2022 pneumonia, and 5863 normal chest X-ray images collected from previous publications and other online resources. To improve detection accuracy, contrast enhancement and image normalization are performed at the pre-processing level to produce better quality images. Further, data augmentation methods are used, creating modified versions of images in the dataset, to train four efficient CNN models (Inceptionv3, DenseNet121, Xception, InceptionResNetv2). Experimental results provide 98.33% accuracy for binary class and 92.36% for multiclass. The performance evaluation metrics reveal that this tool can be very helpful for early disease diagnosis.
International journal of imaging systems and technology
"2021-09-15T00:00:00"
[ "PrashantBhardwaj", "AmanpreetKaur" ]
10.1002/ima.22627 10.1016/S2213-2600(20)30056-4 10.1016/j.scs.2020.102589 10.1002/jmv.25681 10.1016/j.ijsu.2020.04.001 10.1001/jama.2020.1585 10.3390/app10020559 10.1109/IVCNZ.2018.8634671 10.1016/j.compbiomed.2020.103795 10.1016/j.irbm.2020.05.003 10.1016/S1473-3099(20)30134-1 10.1016/j.inffus.2017.10.006 10.1016/j.scs.2020.102018 10.1007/s11042-017-4637-6 10.1148/radiol.2020200642 10.1016/j.clinimag.2020.04.001 10.1038/s41598-020-76550-z 10.1148/radiol.2020200905 10.1016/j.compbiomed.2020.103792 10.1109/JBHI.2016.2636665 10.1109/ACCESS.2020.3010287 10.1016/j.cmpb.2020.105581 10.1007/s11042-019-07820-w 10.1016/j.media.2020.101794 10.1016/j.chaos.2020.109944 10.1109/CVPR.2017.243 10.1109/CVPR.2018.00474 10.1007/s11263-015-0816-y 10.1016/j.cell.2018.02.010 10.20944/preprints202003.0300.v1 10.1007/s13246-020-00865-4 10.1016/j.cmpb.2020.105608 10.1109/JSEN.2018.2807245 10.1016/j.imu.2020.100360 10.1109/TII.2021.3057683 10.3390/sym13010113 10.1109/CIBCB48159.2020.9277695 10.1109/EBBT.2019.8742050 10.1007/s00521-020-05410-8 10.1016/j.patrec.2019.11.013 10.1101/2020.04.13.20063461 10.1109/JIOT.2020.3047662 10.1007/s11042-017-5537-5 10.1016/j.media.2021.101993 10.1016/j.patrec.2018.08.010 10.1007/s10462-020-09825-6 10.1109/ICPR.1996.547205 10.35940/ijeat.b3957.129219 10.1007/s13202-020-00839-y 10.1016/j.measurement.2019.106965 10.1007/s13246-020-00888-x 10.1167/9.8.1037 10.1007/s12652-020-02669-6 10.1016/j.asoc.2020.106580
A Novel Multicolor-thresholding Auto-detection Method to Detect the Location and Severity of Inflammation in Confirmed SARS-COV-2 Cases using Chest X-Ray Images.
Since late 2019, Coronavirus Disease 2019 (COVID-19) has spread around the world. It has been determined that the disease is very contagious and can cause Acute Respiratory Distress (ARD). Medical imaging has the potential to help identify, detect, and quantify the severity of this infection. This work seeks to develop a novel auto-detection technique for verified COVID-19 cases that can detect aberrant alterations in traditional X-ray images. Nineteen separately colored layers were created from X-ray scans of patients diagnosed with COVID-19. Each layer groups objects of similar contrast and is represented by a single color. A single color image was created by extracting all the objects from all the layers. The prototype model could recognize a wide range of abnormal changes in the image texture based on color differentiation, even when the contrast values of the detected unclear abnormalities varied slightly. The results indicate that the proposed novel method is 91% accurate in detecting and grading COVID-19 lung infections compared to the opinions of three experienced radiologists evaluating chest X-ray images. Additionally, the method can be used to determine the infection site and severity of the disease by categorizing X-rays into five severity levels. By comparing affected tissue to healthy tissue, the proposed COVID-19 auto-detection method can identify locations and indicate the severity of the disease, as well as predict where the disease may spread.
Current medical imaging
"2021-09-14T00:00:00"
[ "Mohammed S Alqahtani", "Mohamed A Abbas", "Ali M Alqahtani", "Mohammad Y Alshahrani", "Abdulhadi J Alkulib", "Magbool A Alelyani", "Awad M Almarhaby" ]
10.2174/1573405617666210910150119
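The layer-splitting idea in the abstract above (quantizing an X-ray into 19 contrast bands and rendering each band in its own color before merging) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function name, the color palette, and the equal-width quantization rule are assumptions:

```python
import numpy as np

def multicolor_layers(gray, n_layers=19):
    """Split a grayscale image into n_layers contrast bands and merge
    them into a single color-coded image (one color per band)."""
    # Assign each pixel to one of n_layers equal-width intensity bands.
    bands = np.minimum((gray.astype(np.float64) / 256.0 * n_layers).astype(int),
                       n_layers - 1)
    # Hypothetical palette: one distinct RGB color per band.
    palette = np.stack([
        np.linspace(0, 255, n_layers),       # R ramps up across bands
        np.linspace(255, 0, n_layers),       # G ramps down across bands
        (np.arange(n_layers) * 37) % 256,    # B spreads pseudo-randomly
    ], axis=1).astype(np.uint8)
    return palette[bands]  # (H, W, 3) color-coded image

demo = np.array([[0, 128], [200, 255]], dtype=np.uint8)  # toy 2x2 "X-ray"
colored = multicolor_layers(demo)
```

Regions whose contrast differs end up in different bands and hence different colors, which is what lets the downstream model separate subtle abnormalities by color rather than by raw intensity.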
Classification of Lung Disease in Children by Using Lung Ultrasound Images and Deep Convolutional Neural Network.
Bronchiolitis is the most common cause of hospitalization of children in the first year of life, and pneumonia is the leading cause of infant mortality worldwide. Lung ultrasound technology (LUS) is a novel imaging diagnostic tool for the early detection of respiratory distress and offers several advantages due to its low cost, relative safety, portability, and easy repeatability. More precise and efficient diagnostic and therapeutic strategies are needed. Deep-learning-based computer-aided diagnosis (CADx) systems, using chest X-ray images, have recently demonstrated their potential as a screening tool for pulmonary disease (such as COVID-19 pneumonia). We present the first computer-aided diagnostic scheme for LUS images of pulmonary diseases in children. In this study, we trained from scratch four state-of-the-art deep-learning models (VGG19, Xception, Inception-v3, and Inception-ResNet-v2) for detecting children with bronchiolitis and pneumonia. In our experiments we used a dataset consisting of 5,907 images from 33 healthy infants, 3,286 images from 22 infants with bronchiolitis, and 4,769 images from 7 children suffering from bacterial pneumonia. Using four-fold cross-validation, we implemented one binary classification (healthy vs. bronchiolitis) and one three-class classification (healthy vs. bronchiolitis vs. bacterial pneumonia). Affine transformations were applied for data augmentation, and hyperparameters were optimized for the learning rate, dropout regularization, batch size, and epoch count. The Inception-ResNet-v2 model provided the highest classification performance among the models compared on the test sets: for healthy vs. bronchiolitis it achieved 97.75% accuracy, 97.75% sensitivity, and 97% specificity, whereas for healthy vs. bronchiolitis vs. bacterial pneumonia the Inception-v3 model provided the best results, with 91.5% accuracy, 91.5% sensitivity, and 95.86% specificity.
We performed a gradient-weighted class activation mapping (Grad-CAM) visualization, and the results were qualitatively evaluated by a pediatrician expert in LUS imaging: the heatmaps highlight areas containing diagnostically relevant LUS imaging artifacts, e.g., A-lines, B-lines, pleural lines, and consolidations. These complex patterns are learnt automatically from the data, thus avoiding the use of hand-crafted features. The proposed framework might aid in the development of an accessible and rapid decision-support method for diagnosing pulmonary diseases in children using LUS imaging.
Frontiers in physiology
"2021-09-14T00:00:00"
[ "Silvia Magrelli", "Piero Valentini", "Cristina De Rose", "Rosa Morello", "Danilo Buonsenso" ]
10.3389/fphys.2021.693448
Novel ensemble of optimized CNN and dynamic selection techniques for accurate Covid-19 screening using chest CT images.
The world is significantly affected by infectious coronavirus disease (covid-19). Timely prognosis and treatment are important to control the spread of this infection. Unreliable screening systems and a limited number of clinical facilities are the major hurdles in controlling the spread of covid-19. Nowadays, many automated detection systems based on deep learning techniques using computed tomography (CT) images have been proposed to detect covid-19. However, these systems have the following drawbacks: (i) the limited-data problem poses a major hindrance to training a deep neural network model that provides an accurate diagnosis, (ii) a random choice of the hyperparameters of a Convolutional Neural Network (CNN) significantly affects the classification performance, since the hyperparameters have to be application dependent, and (iii) the generalization ability of the CNN classifier is usually not validated. To address these issues, we propose two models: (i) one based on a transfer learning approach, and (ii) one using a novel strategy that optimizes the CNN hyperparameters with a Whale-optimization-based BAT algorithm and an AdaBoost classifier built using dynamic ensemble selection techniques. In the second method, the classifier is chosen depending on the characteristics of the test sample, thereby reducing the risk of overfitting while producing promising results. Our proposed methodologies were developed using 746 CT images. Our method obtained a sensitivity, specificity, accuracy, F-1 score, and precision of 0.98, 0.97, 0.98, 0.98, and 0.98, respectively, with a five-fold cross-validation strategy. Our developed prototype is ready to be tested on a large chest CT image database before its real-world application.
Computers in biology and medicine
"2021-09-12T00:00:00"
[ "Sameena Pathan", "P C Siddalingaswamy", "Preetham Kumar", "Manohara Pai M M", "Tanweer Ali", "U Rajendra Acharya" ]
10.1016/j.compbiomed.2021.104835
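The per-sample classifier choice described above can be illustrated with a generic local-accuracy selection rule. This sketch does not reproduce the paper's WOA-BAT-optimized CNN or AdaBoost ensemble; the function name, the k-NN neighbourhood rule, and the toy classifiers are all assumptions used only to show the dynamic-selection idea:

```python
import numpy as np

def select_classifier(x, val_X, val_y, classifiers, k=5):
    """Pick, for one test sample, the classifier that is most accurate
    on the k nearest validation samples (local-accuracy selection)."""
    # k nearest validation points by Euclidean distance
    idx = np.argsort(np.linalg.norm(val_X - x, axis=1))[:k]
    # local accuracy of each candidate classifier on that neighbourhood
    local_acc = [np.mean(clf(val_X[idx]) == val_y[idx]) for clf in classifiers]
    best = int(np.argmax(local_acc))
    return best, classifiers[best](x[None])[0]

# Toy demo: clf_a is only right for negative x, clf_b only for positive x.
rng = np.random.default_rng(0)
val_X = rng.normal(size=(40, 1))
val_y = (val_X[:, 0] > 0).astype(int)
clf_a = lambda X: np.zeros(len(X), dtype=int)   # always predicts class 0
clf_b = lambda X: np.ones(len(X), dtype=int)    # always predicts class 1
chosen, pred = select_classifier(np.array([2.0]), val_X, val_y, [clf_a, clf_b])
```

Because the test point 2.0 lies in a region where class 1 dominates, the rule selects clf_b there, which is the sense in which "the classifier is chosen depending on the characteristics of the test sample."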
A multi-scale gated multi-head attention depthwise separable CNN model for recognizing COVID-19.
Coronavirus disease 2019 (COVID-19) is a new acute respiratory disease that has spread rapidly throughout the world. In this paper, a lightweight convolutional neural network (CNN) model named multi-scale gated multi-head attention depthwise separable CNN (MGMADS-CNN) is proposed, which is based on an attention mechanism and depthwise separable convolution. A multi-scale gated multi-head attention mechanism is designed to extract effective feature information from COVID-19 X-ray and CT images for classification. Moreover, depthwise separable convolution layers are adopted as MGMADS-CNN's backbone to reduce the model size and parameter count. The LeNet-5, AlexNet, GoogLeNet, ResNet, VGGNet-16, and three MGMADS-CNN models were trained, validated, and tested with tenfold cross-validation on X-ray and CT images. The results show that MGMADS-CNN with three attention layers (MGMADS-3) achieved an accuracy of 96.75% on X-ray images and 98.25% on CT images. The specificity and sensitivity are 98.06% and 96.6% on X-ray images, and 98.17% and 98.05% on CT images. The size of the MGMADS-3 model is only 43.6 MB. In addition, the detection speeds of MGMADS-3 are 6.09 ms per X-ray image and 4.23 ms per CT image. These results show that MGMADS-3 can detect and classify COVID-19 faster, with higher accuracy and efficiency.
Scientific reports
"2021-09-12T00:00:00"
[ "Geng Hong", "Xiaoyan Chen", "Jianyong Chen", "Miao Zhang", "Yumeng Ren", "Xinyu Zhang" ]
10.1038/s41598-021-97428-8
Recognition of COVID-19 from CT Scans Using Two-Stage Deep-Learning-Based Approach: CNR-IEMN.
Since the appearance of the COVID-19 pandemic (at the end of 2019, Wuhan, China), the recognition of COVID-19 with medical imaging has become an active research topic for the machine learning and computer vision community. This paper is based on the results obtained from the 2021 COVID-19 SPGC challenge, which aims to classify volumetric CT scans into normal, COVID-19, or community-acquired pneumonia (Cap) classes. To this end, we proposed a deep-learning-based approach (CNR-IEMN) that consists of two main stages. In the first stage, we trained four deep learning architectures with a multi-tasks strategy for slice-level classification. In the second stage, we used the previously trained models with an XG-boost classifier to classify the whole CT scan into normal, COVID-19, or Cap classes. Our approach achieved a good result on the validation set, with an overall accuracy of 87.75% and 96.36%, 52.63%, and 95.83% sensitivities for COVID-19, Cap, and normal, respectively. On the other hand, our approach achieved fifth place on the three test datasets of SPGC in the COVID-19 challenge, where our approach achieved the best result for COVID-19 sensitivity. In addition, our approach achieved second place on two of the three testing sets.
Sensors (Basel, Switzerland)
"2021-09-11T00:00:00"
[ "Fares Bougourzi", "Riccardo Contino", "Cosimo Distante", "Abdelmalik Taleb-Ahmed" ]
10.3390/s21175878
On the Use of Deep Learning for Imaging-Based COVID-19 Detection Using Chest X-rays.
The global COVID-19 pandemic that started in 2019 and created major disruptions around the world demonstrated the imperative need for quick, inexpensive, accessible and reliable diagnostic methods that would allow the detection of infected individuals with minimal resources. Radiography, and more specifically, chest radiography, is a relatively inexpensive medical imaging modality that can potentially offer a solution for the diagnosis of COVID-19 cases. In this work, we examined eleven deep convolutional neural network architectures for the task of classifying chest X-ray images as belonging to healthy individuals, individuals with COVID-19 or individuals with viral pneumonia. All the examined networks are established architectures that have been proven to be efficient in image classification tasks, and we evaluated three different adjustments to modify the architectures for the task at hand by expanding them with additional layers. The proposed approaches were evaluated for all the examined architectures on a dataset with real chest X-ray images, reaching the highest classification accuracy of 98.04% and the highest F1-score of 98.22% for the best-performing setting.
Sensors (Basel, Switzerland)
"2021-09-11T00:00:00"
[ "Gabriel Iluebe Okolo", "Stamos Katsigiannis", "Turke Althobaiti", "Naeem Ramzan" ]
10.3390/s21175702
AIforCOVID: Predicting the clinical outcomes in patients with COVID-19 applying AI to chest-X-rays. An Italian multicentre study.
Recent epidemiological data report that worldwide more than 53 million people have been infected by SARS-CoV-2, resulting in 1.3 million deaths. The disease has been spreading very rapidly, and a few months after the identification of the first infected patients, shortage of hospital resources quickly became a problem. In this work we investigate whether artificial intelligence working with chest X-ray (CXR) scans and clinical data can be used as a possible tool for the early identification of patients at risk of severe outcome, like intensive care or death. Indeed, besides involving a lower radiation dose than computed tomography (CT), CXR is a simpler, faster, and more widespread radiological technique. In this respect, we present three approaches that use features extracted from CXR images, either handcrafted or automatically learnt by convolutional neural networks, which are then integrated with the clinical data. As a further contribution, this work introduces a repository that collects data from 820 patients enrolled in six Italian hospitals in spring 2020 during the first COVID-19 emergency. The dataset includes CXR images, several clinical attributes, and clinical outcomes. Exhaustive evaluation shows promising performance both in 10-fold and leave-one-centre-out cross-validation, suggesting that clinical data and images have the potential to provide useful information for the management of patients and hospital resources.
Medical image analysis
"2021-09-08T00:00:00"
[ "Paolo Soda", "Natascha Claudia D'Amico", "Jacopo Tessadori", "Giovanni Valbusa", "Valerio Guarrasi", "Chandra Bortolotto", "Muhammad Usman Akbar", "Rosa Sicilia", "Ermanno Cordelli", "Deborah Fazzini", "Michaela Cellina", "Giancarlo Oliva", "Giovanni Callea", "Silvia Panella", "Maurizio Cariati", "Diletta Cozzi", "Vittorio Miele", "Elvira Stellato", "Gianpaolo Carrafiello", "Giulia Castorani", "Annalisa Simeone", "Lorenzo Preda", "Giulio Iannello", "Alessio Del Bue", "Fabio Tedoldi", "Marco Alí", "Diego Sona", "Sergio Papa" ]
10.1016/j.media.2021.102216
Pneumonia detection in chest X-ray images using an ensemble of deep learning models.
Pneumonia is a respiratory infection caused by bacteria or viruses; it affects many individuals, especially in developing and underdeveloped nations, where high levels of pollution, unhygienic living conditions, and overcrowding are relatively common, together with inadequate medical infrastructure. Pneumonia causes pleural effusion, a condition in which fluids fill the lung, causing respiratory difficulty. Early diagnosis of pneumonia is crucial to ensure curative treatment and increase survival rates. Chest X-ray imaging is the most frequently used method for diagnosing pneumonia. However, the examination of chest X-rays is a challenging task and is prone to subjective variability. In this study, we developed a computer-aided diagnosis system for automatic pneumonia detection using chest X-ray images. We employed deep transfer learning to handle the scarcity of available data and designed an ensemble of three convolutional neural network models: GoogLeNet, ResNet-18, and DenseNet-121. A weighted average ensemble technique was adopted, wherein the weights assigned to the base learners were determined using a novel approach. The scores of four standard evaluation metrics (precision, recall, F1-score, and the area under the curve) are fused to form the weight vector, which in previous studies was frequently set experimentally, an error-prone approach. The proposed approach was evaluated on two publicly available pneumonia X-ray datasets, provided by Kermany et al. and the Radiological Society of North America (RSNA), respectively, using a five-fold cross-validation scheme. The proposed method achieved accuracy rates of 98.81% and 86.85% and sensitivity rates of 98.80% and 87.02% on the Kermany and RSNA datasets, respectively. The results were superior to those of state-of-the-art methods, and our method performed better than the widely used ensemble techniques.
Statistical analyses on the datasets using McNemar's and ANOVA tests showed the robustness of the approach. The codes for the proposed work are available at https://github.com/Rohit-Kundu/Ensemble-Pneumonia-Detection.
PloS one
"2021-09-08T00:00:00"
[ "Rohit Kundu", "Ritacheta Das", "Zong Woo Geem", "Gi-Tae Han", "Ram Sarkar" ]
10.1371/journal.pone.0256630
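A minimal sketch of the weighted-average ensemble described above, assuming the weight vector is formed by simply averaging and normalizing the four metric scores per base learner (the paper's exact fusion rule may differ, and the function names are hypothetical):

```python
import numpy as np

def fused_weights(metric_scores):
    """Fuse per-model evaluation scores (precision, recall, F1, AUC)
    into normalized ensemble weights."""
    m = np.asarray(metric_scores, dtype=float)   # shape (n_models, 4)
    raw = m.mean(axis=1)                          # one raw weight per model
    return raw / raw.sum()                        # weights sum to 1

def weighted_ensemble(probs, weights):
    """Weighted average of per-model class-probability outputs."""
    probs = np.asarray(probs)                     # (n_models, n_samples, n_classes)
    return np.tensordot(weights, probs, axes=1)   # (n_samples, n_classes)

# Toy example: three base learners, one two-class sample each.
metrics = [[0.98, 0.98, 0.98, 0.99],
           [0.97, 0.96, 0.97, 0.98],
           [0.99, 0.99, 0.99, 0.99]]
w = fused_weights(metrics)
p = weighted_ensemble([[[0.9, 0.1]], [[0.8, 0.2]], [[0.7, 0.3]]], w)
```

Deriving the weights from validation metrics, rather than tuning them by hand, is the design choice the abstract contrasts with experimentally set weights.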
Cardiovascular CT and MRI in 2020: Review of Key Articles.
Despite the global coronavirus pandemic, cardiovascular imaging continued to evolve throughout 2020. It was an important year for cardiac CT and MRI, with increasing prominence in cardiovascular research, use in clinical decision making, and in guidelines. This review summarizes key publications in 2020 relevant to current and future clinical practice. In cardiac CT, these have again predominated in assessment of patients with chest pain and structural heart diseases, although more refined CT techniques, such as quantitative plaque analysis and CT perfusion, are also maturing. In cardiac MRI, the major developments have been in patients with cardiomyopathy and myocarditis, although coronary artery disease applications remain well represented. Deep learning applications in cardiovascular imaging have continued to advance in both CT and MRI, and these are now closer than ever to routine clinical adoption. Perhaps most important has been the rapid deployment of MRI in enhancing understanding of the impact of COVID-19 infection on the heart. Although this review focuses primarily on articles published in
Radiology
"2021-09-08T00:00:00"
[ "Gaurav S Gulsin", "Niall McVeigh", "Jonathon A Leipsic", "Jonathan D Dodd" ]
10.1148/radiol.2021211002
COVID-19 detection method based on SVRNet and SVDNet in lung x-rays.
Journal of medical imaging (Bellingham, Wash.)
"2021-09-03T00:00:00"
[ "Kedong Rao", "Kai Xie", "Ziqi Hu", "Xiaolong Guo", "Chang Wen", "Jianbiao He" ]
10.1117/1.JMI.8.S1.017504
Deep learning for distinguishing normal versus abnormal chest radiographs and generalization to two unseen diseases tuberculosis and COVID-19.
Chest radiography (CXR) is the most widely-used thoracic clinical imaging modality and is crucial for guiding the management of cardiothoracic conditions. The detection of specific CXR findings has been the main focus of several artificial intelligence (AI) systems. However, the wide range of possible CXR abnormalities makes it impractical to detect every possible condition by building multiple separate systems, each of which detects one or more pre-specified conditions. In this work, we developed and evaluated an AI system to classify CXRs as normal or abnormal. For training and tuning the system, we used a de-identified dataset of 248,445 patients from a multi-city hospital network in India. To assess generalizability, we evaluated our system using 6 international datasets from India, China, and the United States. Of these datasets, 4 focused on diseases that the AI was not trained to detect: 2 datasets with tuberculosis and 2 datasets with coronavirus disease 2019. Our results suggest that the AI system trained using a large dataset containing a diverse array of CXR abnormalities generalizes to new patient populations and unseen diseases. In a simulated workflow where the AI system prioritized abnormal cases, the turnaround time for abnormal cases reduced by 7-28%. These results represent an important step towards evaluating whether AI can be safely used to flag cases in a general setting where previously unseen abnormalities exist. Lastly, to facilitate the continued development of AI models for CXR, we release our collected labels for the publicly available dataset.
Scientific reports
"2021-09-03T00:00:00"
[ "Zaid Nabulsi", "Andrew Sellergren", "Shahar Jamshy", "Charles Lau", "Edward Santos", "Atilla P Kiraly", "Wenxing Ye", "Jie Yang", "Rory Pilgrim", "Sahar Kazemzadeh", "Jin Yu", "Sreenivasa Raju Kalidindi", "Mozziyar Etemadi", "Florencia Garcia-Vicente", "David Melnick", "Greg S Corrado", "Lily Peng", "Krish Eswaran", "Daniel Tse", "Neeral Beladia", "Yun Liu", "Po-Hsuan Cameron Chen", "Shravya Shetty" ]
10.1038/s41598-021-93967-2
Auto informing COVID-19 detection result from x-ray/CT images based on deep learning.
It is no secret that the coronavirus pandemic has caused a decline in nearly every aspect of life worldwide; therefore, offering an accurate automatic diagnostic system is very important. This paper proposes an accurate COVID-19 system built by testing various deep learning models on x-ray/computed tomography (CT) medical images. A preprocessing procedure with two filters and segmentation was applied to improve classification results. According to the results obtained, accuracy of 99.94%, sensitivity of 98.70%, and specificity of 100% were obtained by the Xception model on the x-ray dataset and by the InceptionV3 model on the CT scan images. The comparative results demonstrate that the proposed model outperforms the deep learning algorithms of previous studies. Moreover, it can automatically report the examination results to the patient, the health authority, and the community after any x-ray or CT image is taken.
The Review of scientific instruments
"2021-09-03T00:00:00"
[ "Ahlam Fadhil Mahmood", "Saja Waleed Mahmood" ]
10.1063/5.0059829
CORONA-Net: Diagnosing COVID-19 from X-ray Images Using Re-Initialization and Classification Networks.
The COVID-19 pandemic has been deemed a global health pandemic. The early detection of COVID-19 is key to combating its outbreak and could help bring this pandemic to an end. One of the biggest challenges in combating COVID-19 is accurate testing for the disease. Utilizing the power of Convolutional Neural Networks (CNNs) to detect COVID-19 from chest X-ray images can help radiologists compare and validate their results with an automated system. In this paper, we propose a carefully designed network, dubbed CORONA-Net, that can accurately detect COVID-19 from chest X-ray images. CORONA-Net is divided into two phases: (1) the reinitialization phase and (2) the classification phase. In the reinitialization phase, the network consists of encoder and decoder networks. The objective of this phase is to train and initialize the encoder and decoder networks with a distribution derived from medical images. In the classification phase, the decoder network is removed from CORONA-Net, and the encoder network acts as a backbone network to fine-tune the classification phase based on the learned weights from the reinitialization phase. Extensive experiments were performed on a publicly available dataset, COVIDx, and the results show that CORONA-Net significantly outperforms the current state-of-the-art networks with an overall accuracy of 95.84%.
Journal of imaging
"2021-08-31T00:00:00"
[ "Sherif Elbishlawi", "Mohamed H Abdelpakey", "Mohamed S Shehata", "Mostafa M Mohamed" ]
10.3390/jimaging7050081
PM₂.₅ Monitoring: Use Information Abundance Measurement and Wide and Deep Learning.
This article devises a photograph-based monitoring model to estimate the real-time PM₂.₅
IEEE transactions on neural networks and learning systems
"2021-08-31T00:00:00"
[ "Ke Gu", "Hongyan Liu", "Zhifang Xia", "Junfei Qiao", "Weisi Lin", "Daniel Thalmann" ]
10.1109/TNNLS.2021.3105394
Research on Classification of COVID-19 Chest X-Ray Image Modal Feature Fusion Based on Deep Learning.
Most detection methods of coronavirus disease 2019 (COVID-19) use classic image classification models, which have problems of low recognition accuracy and inaccurate capture of modal features when detecting chest X-rays of COVID-19. This study proposes a COVID-19 detection method based on image modal feature fusion. This method first performs small-sample enhancement processing on chest X-rays, such as rotation, translation, and random transformation. Five classic pretraining models are used when extracting modal features. A global average pooling layer reduces training parameters and prevents overfitting. The model is trained and fine-tuned, the machine learning evaluation standard is used to evaluate the model, and the receiver operating characteristic (ROC) curve is drawn. Experiments show that compared with the classic model, the classification method in this study can more effectively detect COVID-19 image modal information, and it achieves the expected effect of accurately detecting cases.
Journal of healthcare engineering
"2021-08-31T00:00:00"
[ "Dongsheng Ji", "Zhujun Zhang", "Yanzhong Zhao", "Qianchuan Zhao" ]
10.1155/2021/6799202
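The global average pooling layer mentioned in the record above can be expressed in a few lines; this minimal NumPy sketch (the function name is an assumption) shows why GAP adds no trainable parameters, which is how it reduces training parameters and helps prevent overfitting:

```python
import numpy as np

def global_average_pooling(feature_map):
    """Collapse an (H, W, C) convolutional feature map to a length-C
    vector by averaging over the spatial dimensions; unlike a dense
    layer, this operation has no weights of its own."""
    return feature_map.mean(axis=(0, 1))

fmap = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)  # toy 2x2x3 map
vec = global_average_pooling(fmap)
```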
ANFIS-Net for automatic detection of COVID-19.
Among the leading causes of mortality across the globe are infectious diseases, which have cost tremendous numbers of lives, the most recent challenge being coronavirus disease (COVID-19). The extreme nature of this infectious virus and its ability to spread without control have made it mandatory to find an efficient auto-diagnosis system to assist the people who work in contact with patients. As fuzzy logic is considered a powerful technique for modeling vagueness in medical practice, an Adaptive Neuro-Fuzzy Inference System (ANFIS) is proposed in this paper as a key rule for automatic COVID-19 detection from chest X-ray images, based on characteristics derived by texture analysis using the gray level co-occurrence matrix (GLCM) technique. Unlike previously proposed methods, especially deep-learning-based approaches, the proposed ANFIS-based method can work on small datasets. The results showed promising accuracy; compared with other state-of-the-art techniques, the proposed method gives the same performance as deep learning models with complex architectures using many backbones.
Scientific reports
"2021-08-29T00:00:00"
[ "Afnan Al-Ali", "Omar Elharrouss", "Uvais Qidwai", "Somaya Al-Maaddeed" ]
10.1038/s41598-021-96601-3
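The GLCM texture analysis feeding the ANFIS above can be sketched as follows. This is a simplified NumPy illustration with an assumed 8-level quantization and a single displacement; the ANFIS stage itself and the authors' exact feature set are omitted:

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Gray level co-occurrence matrix for one displacement (dx, dy),
    normalized into a joint probability table."""
    g = (gray.astype(np.float64) / 256.0 * levels).astype(int).clip(0, levels - 1)
    M = np.zeros((levels, levels))
    h, w = g.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[g[y, x], g[y + dy, x + dx]] += 1  # count pixel-pair co-occurrence
    return M / M.sum()

def glcm_features(P):
    """Two classic Haralick-style texture features from a normalized GLCM."""
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    homogeneity = (P / (1.0 + (i - j) ** 2)).sum()
    return contrast, homogeneity

flat = np.full((8, 8), 100, dtype=np.uint8)       # perfectly uniform patch
contrast, homogeneity = glcm_features(glcm(flat))
```

A uniform patch yields zero contrast and maximal homogeneity; infected lung regions, with their mottled texture, would score differently, which is what gives the fuzzy classifier something to discriminate on.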
Pulmonary COVID-19: Learning Spatiotemporal Features Combining CNN and LSTM Networks for Lung Ultrasound Video Classification.
Deep Learning is a very active and important area for building Computer-Aided Diagnosis (CAD) applications. This work aims to present a hybrid model to classify lung ultrasound (LUS) videos captured by convex transducers to diagnose COVID-19. A Convolutional Neural Network (CNN) performed the extraction of spatial features, and the temporal dependence was learned using a Long Short-Term Memory (LSTM). Different types of convolutional architectures were used for feature extraction. The hyperparameters of the hybrid model (CNN-LSTM) were optimized using the Optuna framework. The best hybrid model was composed of an Xception pre-trained on ImageNet and an LSTM containing 512 units, configured with a dropout rate of 0.4, two fully connected layers containing 1024 neurons each, and a sequence of 20 frames in the input layer (20×2018). The model presented an average accuracy of 93% and sensitivity of 97% for COVID-19, outperforming models based purely on spatial approaches. Furthermore, feature extraction using transfer learning with models pre-trained on ImageNet provided comparable results to models pre-trained on LUS images. The results corroborate other studies showing that this model for LUS classification can be an important tool in the fight against COVID-19 and other lung diseases.
Sensors (Basel, Switzerland)
"2021-08-29T00:00:00"
[ "Bruno Barros", "Paulo Lacerda", "Célio Albuquerque", "Aura Conci" ]
10.3390/s21165486 10.1056/NEJMoa2001017 10.1101/2021.03.19.21253946 10.1101/2020.12.30.20249034 10.1016/S0140-6736(21)00183-5 10.1038/d41586-021-01274-7 10.1155/2020/5714714 10.1016/j.chaos.2020.110152 10.1007/S00521-020-05626-8 10.1016/j.chaos.2020.109945 10.1016/j.chaos.2020.110182 10.1038/s41746-021-00453-0 10.3892/etm.2020.8797 10.1016/j.chaos.2020.110027 10.3390/ijerph17103730 10.1155/2020/1846926 10.1371/journal.pntd.0008280 10.1136/bmj.m1091 10.1016/S2213-2600(20)30120-X 10.3390/s21062174 10.1590/s1678-9946202062044 10.1136/bmj.m1808 10.1136/bmjresp-2018-000354 10.1186/s12245-018-0170-2 10.1016/j.jemermed.2021.01.041 10.1590/0100-3984.2020.0051 10.1093/cid/ciaa1408 10.1016/j.pulmoe.2021.02.004 10.1016/j.eng.2020.09.007 10.1121/10.0002183 10.1016/j.ultrasmedbio.2020.07.003 10.1097/00000542-200401000-00006 10.1016/j.cjca.2020.05.008 10.2214/ajr.159.1.1609716 10.1016/j.ultrasmedbio.2020.04.033 10.1016/j.afjem.2020.04.007 10.3389/fdata.2021.612561 10.1109/TUFFC.2021.3068190 10.1016/j.eng.2018.11.020 10.1109/JPROC.2021.3054390 10.1002/emp2.12018 10.1590/0100-3984.2020.53.5e3 10.1016/S2589-7500(19)30123-2 10.1038/s41746-020-00376-2 10.1016/j.scs.2020.102589 10.1007/s12065-020-00540-3 10.1155/2018/5137904 10.1016/j.ibmed.2020.100013 10.1016/j.media.2020.101913 10.1016/j.asoc.2020.106912 10.1109/ACCESS.2020.3016780 10.2196/23811 10.1016/j.chaos.2020.109947 10.1016/j.chaos.2020.110338 10.1016/j.inffus.2020.11.005 10.1007/s10044-020-00950-0 10.1159/000509763 10.1002/14651858.CD013639.PUB4 10.1007/BF02551274 10.1148/radiol.2017171183 10.1002/mp.13764 10.1007/3-540-46805-6_19 10.1109/CVPR.2017.369 10.3390/s21155192 10.1038/nature14539 10.1016/j.patcog.2017.10.013 10.1109/ICCSRE.2019.8807741 10.1007/s10462-020-09825-6 10.1109/CVPR.2016.90 10.1109/CVPR.2016.308 10.1109/CVPR.2017.195 10.1038/323533a0 10.1142/S0218488598000094 10.1016/j.physd.2019.132306 10.1162/neco.1997.9.8.1735 10.1162/089976600300015015 10.3115/v1/d14-1179 10.1007/s11263-015-0816-y 
10.1049/iet-ipr.2019.0561 10.1007/978-3-662-38527-2_55 10.1007/978-3-642-25566-3_40 10.21105/joss.00431 10.1001/jama.2016.17216 10.1038/nature21056 10.1001/jama.2017.14585 10.1007/978-3-030-01045-4_8 10.1109/TUFFC.2020.3002249 10.1016/j.ejmp.2021.02.023 10.1016/j.compbiomed.2021.104296 10.1109/TMI.2020.2994459 10.1109/CCISP51026.2020.9273469 10.1016/j.inffus.2021.02.013 10.1136/bmjopen-2020-045120 10.3390/app11020672 10.1016/j.rcae.2015.04.008 10.1145/3292500.3330701
Precise Segmentation of COVID-19 Infected Lung from CT Images Based on Adaptive First-Order Appearance Model with Morphological/Anatomical Constraints.
A new segmentation technique is introduced for delineating the lung region in 3D computed tomography (CT) images. To accurately model the distribution of Hounsfield scale values within both the chest and lung regions, a new probabilistic model is developed that depends on a linear combination of Gaussians (LCG). Moreover, we modified the conventional expectation-maximization (EM) algorithm to run sequentially, estimating both the dominant Gaussian components (one for the lung region and one for the chest region) and the subdominant Gaussian components, which are used to refine the final estimated joint density. To estimate the marginal density from the mixed density, a modified k-means clustering approach is employed to classify the subdominant Gaussian components, determining which components belong to the lung and which belong to the chest. The initial LCG-based segmentation is then refined by imposing 3D morphological constraints based on a 3D Markov-Gibbs random field (MGRF) with analytically estimated potentials. The proposed approach was tested on CT data from 32 coronavirus disease 2019 (COVID-19) patients. Segmentation quality was quantitatively evaluated using four metrics:
Sensors (Basel, Switzerland)
"2021-08-29T00:00:00"
[ "Ahmed Sharafeldeen", "Mohamed Elsharkawy", "Norah Saleh Alghamdi", "Ahmed Soliman", "Ayman El-Baz" ]
10.3390/s21165482 10.1007/s11481-020-09944-5 10.1001/jamainternmed.2020.3596 10.17762/turcomat.v12i2.1102 10.1109/iembs.2007.4353317 10.1016/j.acra.2006.02.039 10.1016/S1076-6332(03)00380-5 10.1109/42.929615 10.1118/1.598898 10.1109/42.650879 10.1118/1.3147146 10.1016/j.dsp.2014.09.002 10.1155/2014/479154 10.1109/TMI.2012.2219881 10.1007/978-3-319-10404-1_100 10.1016/j.patcog.2020.107747 10.1118/1.3003066 10.1118/1.3222872 10.1016/j.compmedimag.2007.03.002 10.1016/j.media.2012.08.002 10.1109/TMI.2014.2337057 10.1007/s00521-021-06273-3 10.1155/2013/942353 10.1186/s12880-020-00529-5 10.1109/TPAMI.2016.2644615 10.1007/978-3-319-24574-4_28 10.1016/j.compbiomed.2020.104037 10.1109/TMI.2020.2996645 10.1016/j.knosys.2020.106647 10.1109/TPAMI.2019.2938758 10.1016/j.patcog.2021.108071 10.1109/3dv.2016.79 10.1148/radiol.2020200905 10.1109/cvpr.2016.90 10.3390/diagnostics11020158 10.1002/mp.14609 10.1038/s41598-020-80936-4 10.1038/s41598-020-80261-w 10.1002/mp.14676 10.1016/j.media.2020.101693 10.1109/isit.2004.1365067 10.1007/3-540-45468-3_62 10.1063/1.4825026 10.1055/a-1388-8147 10.1007/978-3-319-46723-8_49
The Applications of Artificial Intelligence in Chest Imaging of COVID-19 Patients: A Literature Review.
Diagnostic imaging is regarded as fundamental in the clinical work-up of patients with a suspected or confirmed COVID-19 infection. Recent progress has been made in diagnostic imaging with the integration of artificial intelligence (AI) and machine learning (ML) algorithms, leading to an increase in the accuracy of exam interpretation and to the extraction of prognostic information useful in the decision-making process. Considering the ever-expanding imaging data generated amid this pandemic, COVID-19 has catalyzed a rapid expansion in the application of AI to combat disease. In this context, many recent studies have explored the role of AI in each of the presumed applications of chest imaging for COVID-19 infection, suggesting that implementing AI applications for chest imaging can be a great asset for fast and precise disease screening, identification, and characterization. However, various biases should be overcome in the development of further ML-based algorithms to give them sufficient robustness and reproducibility for their integration into clinical practice. As a result, in this literature review, we focus on the application of AI in chest imaging, in particular deep learning, radiomics, and advanced imaging such as quantitative CT.
Diagnostics (Basel, Switzerland)
"2021-08-28T00:00:00"
[ "Maria Elena Laino", "Angela Ammirabile", "Alessandro Posa", "Pierandrea Cancian", "Sherif Shalaby", "Victor Savevski", "Emanuele Neri" ]
10.3390/diagnostics11081317 10.1148/radiol.2020204267 10.1186/s13244-021-00962-2 10.1148/radiol.2020203173 10.1016/j.clinimag.2020.04.001 10.1186/s12967-020-02324-w 10.1016/j.jacr.2020.03.006 10.1002/jmv.25822 10.26355/eurrev_202003_20549 10.1016/j.radi.2020.04.005 10.1002/jum.15284 10.1007/s11547-020-01135-9 10.1007/s11547-020-01197-9 10.1007/s11547-020-01305-9 10.1007/s11547-021-01389-x 10.1590/0100-3984.2019.0049 10.1155/2021/6677314 10.1109/RBME.2020.2987975 10.1148/ryct.2020200082 10.1148/ryct.2020200075 10.21037/atm-20-3026 10.1148/ryct.2020200044 10.1002/mp.14609 10.1109/TKDE.2009.191 10.1007/s13246-020-00865-4 10.1109/ACCESS.2020.3010287 10.1016/j.imu.2020.100412 10.1016/j.asoc.2020.106580 10.1109/TMI.2020.2993291 10.1016/j.compbiomed.2020.103792 10.1016/j.cmpb.2020.105532 10.1038/s41598-020-76550-z 10.3348/kjr.2020.0132 10.1148/radiol.2020201160 10.1016/j.ejro.2020.100231 10.12788/fp.0045 10.1007/s13755-020-00119-3 10.1016/j.chaos.2020.110122 10.1016/j.chaos.2020.110245 10.1016/j.mehy.2020.109577 10.1016/j.compbiomed.2020.103805 10.1007/s42600-021-00132-9 10.1007/s00138-020-01128-8 10.1007/s13755-020-00116-6 10.1016/j.chaos.2020.110170 10.1038/s41598-020-71294-2 10.11604/pamj.supp.2020.35.2.24258 10.1148/radiol.2020201874 10.7150/ijbs.53982 10.1007/s00330-020-06863-0 10.1148/radiol.2020201754 10.1016/j.acra.2021.01.016 10.1148/ryai.2020200079 10.1007/s00330-020-07269-8 10.1371/journal.pone.0236621 10.1016/j.ins.2020.09.041 10.1148/radiol.2020200823 10.1016/j.compbiomed.2021.104252 10.1155/2020/8889023 10.3892/etm.2020.8797 10.3348/kjr.2020.0146 10.1109/RBME.2020.2990959 10.1016/j.ejrad.2020.109233 10.21037/atm.2020.03.132 10.1038/s41467-020-17971-2 10.1007/s00330-020-07044-9 10.1038/s41598-020-76282-0 10.1007/s00330-020-06801-0 10.1007/s00259-020-04953-1 10.1007/s10140-020-01856-4 10.21037/qims-20-531 10.1148/radiol.2020201473 10.1148/radiol.2020202439 10.1186/s12967-020-02692-3 10.1007/s00330-020-07032-z 10.7150/ijms.48432 10.1101/2020.05.08.20094664 
10.1016/j.cmpb.2021.106004 10.1186/s12880-020-00529-5 10.1007/s10044-020-00950-0 10.1007/s10489-020-01943-6 10.1016/j.ejrad.2020.109402 10.1038/s41591-020-0931-3 10.1007/s10140-020-01821-1 10.1007/s00330-020-07033-y 10.2214/AJR.20.22976 10.1097/RLI.0000000000000672 10.1164/rccm.201908-1581ST 10.3390/ijerph18062842 10.1016/j.media.2020.101824 10.1183/13993003.00775-2020 10.1109/JBHI.2020.3034296 10.2196/21604 10.2196/24973 10.21037/jtd-20-1584 10.1007/s00330-020-07042-x 10.1371/journal.pone.0236858 10.1038/s41598-020-80261-w 10.1016/j.jrid.2020.04.004 10.1007/s00330-020-07271-0 10.1016/j.ejro.2020.100272 10.1007/s00330-020-07013-2 10.24875/RIC.20000451 10.3390/jcm9051514 10.7150/thno.45985 10.1016/j.accpm.2020.10.014 10.5152/dir.2020.20407 10.21037/atm-20-3554 10.1007/s12539-020-00410-7 10.1097/RCT.0000000000001094 10.1097/RTI.0000000000000544 10.1259/bjr.20200634 10.7150/thno.46428 10.1109/JBHI.2020.3036722 10.1186/s12880-020-00521-z 10.1016/j.ijid.2020.03.017 10.1007/s00592-020-01654-x 10.18632/aging.103000 10.1186/s40001-020-00450-1 10.1007/s00259-020-04929-1 10.1016/j.chaos.2020.110153 10.1186/s12938-020-00809-9 10.3233/XST-200735 10.1148/radiol.2020201491 10.1148/radiol.2020200905 10.1007/s00330-020-07087-y 10.1002/mco2.14 10.1038/s42256-021-00307-0 10.1007/s10479-021-04006-2 10.1109/ACCESS.2021.3085418 10.1109/TCBB.2021.3066331 10.1007/s12553-021-00520-2
Microscopic segmentation and classification of COVID-19 infection with ensemble convolutional neural network.
As per the World Health Organization, the detection of viral RNA from sputum has a comparatively poor positive rate in the initial/early stages of COVID-19. The infection has a different morphological structure compared to healthy tissue, as manifested on computed tomography (CT) images. COVID-19 diagnosis at an early stage can aid in the timely cure of patients, lowering the mortality rate. In this research, a three-phase model is proposed for COVID-19 detection. In Phase I, noise is removed from CT images using a denoising convolutional neural network (DnCNN). In Phase II, the actual lesion region is segmented from the enhanced CT images using DeepLabv3 and ResNet-18. In Phase III, the segmented images are passed to a stacked sparse autoencoder (SSAE) deep learning model comprising two sparse autoencoders (SAE) with selected hidden layers. The designed SSAE model is based on both SAE and softmax layers for COVID-19 classification. The proposed method is evaluated on actual patient data from Pakistan Ordnance Factories and other public benchmark data sets acquired with different scanners/mediums. The proposed method achieved a global accuracy of 0.96 for segmentation and 0.97 for classification.
Microscopy research and technique
"2021-08-27T00:00:00"
[ "Javeria Amin", "Muhammad Almas Anjum", "Muhammad Sharif", "Amjad Rehman", "Tanzila Saba", "Rida Zahra" ]
10.1002/jemt.23913 10.1007/s11042-019-7324-y 10.1109/ACCESS.2020.3016627 10.1002/jemt 10.1109/MITP.2020.3036820 10.1109/MITP.2020.3042379 10.1002/jemt.23326 10.1002/jemt.23702
Detection of COVID-19 from chest x-ray images using transfer learning.
Journal of medical imaging (Bellingham, Wash.)
"2021-08-27T00:00:00"
[ "Jenita Manokaran", "Fatemeh Zabihollahy", "Andrew Hamilton-Wright", "Eranga Ukwatta" ]
10.1117/1.JMI.8.S1.017503 10.1063/5.0015626 10.1001/jama.2020.3786 10.1016/j.measurement.2019.05.076 10.1148/radiol.2019194005 10.1109/TMI.2020.2995965 10.2196/19569 10.1016/j.ajem.2012.08.041 10.1016/j.clinimag.2020.04.001 10.1109/ACCESS.2018.2810849 10.3233/JIFS-190861 10.1007/s11045-019-00686-z 10.1016/j.compbiomed.2020.103792 10.3389/fmed.2020.00427 10.1007/s00330-020-07044-9 10.1038/s41598-020-76282-0 10.1007/s13246-020-00865-4 10.33889/IJMEMS.2020.5.4.02 10.1016/j.eswa.2020.114054 10.1038/s41598-020-76550-z 10.1016/j.chaos.2020.109944 10.1016/j.bbe.2020.08.008 10.1007/s10044-021-00984-y 10.1080/07391102.2020.1767212 10.1117/12.2581314 10.1007/978-3-642-35289-8_26 10.1007/978-3-030-50420-5_47 10.3390/jimaging6120131
Application of deep learning to identify COVID-19 infection in posteroanterior chest X-rays.
The objective of this study was to assess seven configurations of six convolutional deep neural network architectures for classification of chest X-rays (CXRs) as COVID-19 positive or negative. The primary dataset consisted of 294 COVID-19 positive and 294 COVID-19 negative CXRs, the latter comprising roughly equal numbers of pneumonia, emphysema, fibrosis, and healthy images. We used six common convolutional neural network architectures: VGG16, DenseNet121, DenseNet201, MobileNet, NasNetMobile, and InceptionV3. We studied six models (one for each architecture) which were pre-trained on a vast repository of generic (non-CXR) images, as well as a seventh DenseNet121 model, which was pre-trained on a repository of CXR images. For each model, we replaced the output layers with custom fully connected layers for the task of binary classification of images as COVID-19 positive or negative. Performance metrics were calculated on a hold-out test set with CXRs from patients who were not included in the training/validation set. When pre-trained on generic images, the VGG16, DenseNet121, DenseNet201, MobileNet, NasNetMobile, and InceptionV3 architectures respectively produced hold-out test set areas under the receiver operating characteristic curve (AUROCs) of 0.98, 0.95, 0.97, 0.95, 0.99, and 0.96 for the COVID-19 classification of CXRs. The X-ray pre-trained DenseNet121 model, in comparison, had a test set AUROC of 0.87. Common convolutional neural network architectures with parameters pre-trained on generic images yield high-performing and well-calibrated COVID-19 CXR classification.
Clinical imaging
"2021-08-24T00:00:00"
[ "Jenish Maharjan", "Jacob Calvert", "Emily Pellegrini", "Abigail Green-Saxena", "Jana Hoffman", "Andrea McCoy", "Qingqing Mao", "Ritankar Das" ]
10.1016/j.clinimag.2021.07.004 10.1016/j.molmed.2020.02.008 10.1001/jama.2020.2565 10.1007/s11427-020-1661-4 10.2807/1560-7917.ES.2020.25.10.2000180 10.1148/radiol.2020200642 10.1097/RLI.0000000000000670 10.1148/radiol.2020200490 10.3760/cma.j.issn.1005-1201.2020.0001 10.1109/TMI.2020.2995965 10.1148/radiol.2020200241 10.1148/ryct.2020200034 10.1148/radiol.2020200236 10.1148/radiol.2020200230 10.1016/j.patcog.2020.107613 10.1038/s41598-020-76550-z 10.1016/j.compbiomed.2020.103792 10.1016/j.cmpb.2020.105581 10.3389/fmed.2020.00427 10.1109/ACCESS.2020.3010287 10.1016/j.compbiomed.2020.103805 10.1007/s10916-020-01562-1 10.1007/s13246-020-00888-x 10.1007/s12559-020-09775-9 10.1007/s10489-020-01943-6 10.1109/JBHI.2021.3069169 10.1145/3431804 10.1016/j.media.2021.102046 10.1097/RTI.0000000000000347 10.1007/s12098-020-03263-6 10.1016/j.asoc.2020.106897 10.1007/s13755-020-00119-3 10.3390/ijerph17186933 10.1016/j.irbm.2019.10.006 10.20944/preprints202003.0300.v1 10.1101/2020.03.19.20039354 10.1101/2020.02.23.20026930 10.1101/2020.03.26.20044610 10.1007/s13755-020-00116-6
COVID-19 lung infection segmentation with a novel two-stage cross-domain transfer learning framework.
With the global outbreak of COVID-19 in early 2020, rapid diagnosis of COVID-19 has become the urgent need to control the spread of the epidemic. In clinical settings, lung infection segmentation from computed tomography (CT) images can provide vital information for the quantification and diagnosis of COVID-19. However, accurate infection segmentation is a challenging task due to (i) the low boundary contrast between infections and the surroundings, (ii) large variations of infection regions, and, most importantly, (iii) the shortage of large-scale annotated data. To address these issues, we propose a novel two-stage cross-domain transfer learning framework for the accurate segmentation of COVID-19 lung infections from CT images. Our framework consists of two major technical innovations, including an effective infection segmentation deep learning model, called nCoVSegNet, and a novel two-stage transfer learning strategy. Specifically, our nCoVSegNet conducts effective infection segmentation by taking advantage of attention-aware feature fusion and large receptive fields, aiming to resolve the issues related to low boundary contrast and large infection variations. To alleviate the shortage of the data, the nCoVSegNet is pre-trained using a two-stage cross-domain transfer learning strategy, which makes full use of the knowledge from natural images (i.e., ImageNet) and medical images (i.e., LIDC-IDRI) to boost the final training on CT images with COVID-19 infections. Extensive experiments demonstrate that our framework achieves superior segmentation accuracy and outperforms the cutting-edge models, both quantitatively and qualitatively.
Medical image analysis
"2021-08-24T00:00:00"
[ "Jiannan Liu", "Bo Dong", "Shuai Wang", "Hui Cui", "Deng-Ping Fan", "Jiquan Ma", "Geng Chen" ]
10.1016/j.media.2021.102205
Deep transfer learning for COVID-19 detection and infection localization with superpixel based segmentation.
The evolution of the novel coronavirus disease (COVID-19) into a pandemic has inflicted several thousand deaths per day, endangering the lives of millions of people across the globe. In addition to thermal scanning mechanisms, chest imaging examinations provide valuable insights into the detection of this virus and the diagnosis and prognosis of the infections. Though Chest CT and Chest X-ray imaging are common in the clinical protocols of COVID-19 management, the latter is highly preferred, attributed to its simple image acquisition procedure and the mobility of the imaging mechanism. However, Chest X-ray images are found to be less sensitive than Chest CT images in detecting infections in the early stages. In this paper, we propose a deep learning based framework to enhance the diagnostic value of these images for improved clinical outcomes. It is realized as a variant of the conventional SqueezeNet classifier with segmentation capabilities, which is trained with deep features extracted from the Chest X-ray images of a standard dataset for binary and multi-class classification. The binary classifier achieves an accuracy of 99.53% in discriminating COVID-19 from Non-COVID-19 images. Similarly, the multi-class classifier performs classification of COVID-19, Viral Pneumonia and Normal cases with an accuracy of 99.79%. This model, called the COVID-19 Superpixel SqueezeNet (COVID-SSNet), performs superpixel segmentation of the activation maps to extract the regions of interest which carry perceptual image features, and constructs an overlay of the Chest X-ray images with these regions. The proposed classifier model adds significant value to the Chest X-rays by enabling an integral examination of the image features and the image regions influencing the classifier decisions, expediting the COVID-19 treatment regimen.
Sustainable cities and society
"2021-08-24T00:00:00"
[ "N B Prakash", "M Murugappan", "G R Hemalakshmi", "M Jayalakshmi", "Mufti Mahmud" ]
10.1016/j.scs.2021.103252 10.1007/s12559-020-09774-w
Exploiting Shared Knowledge From Non-COVID Lesions for Annotation-Efficient COVID-19 CT Lung Infection Segmentation.
The novel Coronavirus disease (COVID-19) is a highly contagious disease that has spread all over the world, posing an extremely serious threat to all countries. Automatic lung infection segmentation from computed tomography (CT) plays an important role in the quantitative analysis of COVID-19. However, the major challenge lies in the inadequacy of annotated COVID-19 datasets. Currently, there are several public non-COVID lung lesion segmentation datasets, providing the potential for generalizing useful information to the related COVID-19 segmentation task. In this paper, we propose a novel relation-driven collaborative learning model to exploit shared knowledge from non-COVID lesions for annotation-efficient COVID-19 CT lung infection segmentation. The model consists of a general encoder to capture general lung lesion features based on multiple non-COVID lesions, and a target encoder to focus on task-specific features based on COVID-19 infections. We develop a collaborative learning scheme to regularize feature-level relation consistency of a given input and encourage the model to learn a more general and discriminative representation of COVID-19 infections. Extensive experiments demonstrate that, when trained with limited COVID-19 data, exploiting shared knowledge from non-COVID lesions can further improve state-of-the-art performance, with gains of up to 3.0% in dice similarity coefficient and 4.2% in normalized surface dice. In addition, experimental results on a large-scale 2D dataset of CT slices show that our method significantly outperforms cutting-edge segmentation methods. Our method promotes new insights into annotation-efficient deep learning and illustrates strong potential for real-world applications in the global fight against COVID-19 in the absence of sufficient high-quality annotations.
IEEE journal of biomedical and health informatics
"2021-08-21T00:00:00"
[ "Yichi Zhang", "Qingcheng Liao", "Lin Yuan", "He Zhu", "Jiezhen Xing", "Jicong Zhang" ]
10.1109/JBHI.2021.3106341 10.1109/TPAMI.2021.3100536 10.21203/rs.3.rs-571332/v1
An explainable AI system for automated COVID-19 assessment and lesion categorization from CT-scans.
COVID-19 infection, caused by the SARS-CoV-2 pathogen, has been a catastrophic pandemic outbreak all over the world, with an exponential increase in confirmed cases and, unfortunately, deaths. In this work we propose an AI-powered pipeline, based on the deep-learning paradigm, for automated COVID-19 detection and lesion categorization from CT scans. We first propose a new segmentation module aimed at automatically identifying lung parenchyma and lobes. Next, we combine the segmentation network with classification networks for COVID-19 identification and lesion categorization. We compare the model's classification results with those obtained by three expert radiologists on a dataset of 166 CT scans. Results showed a sensitivity of 90.3% and a specificity of 93.5% for COVID-19 detection, at least on par with those yielded by the expert radiologists, and an average lesion categorization accuracy of about 84%. Moreover, a significant role is played by prior lung and lobe segmentation, which allowed us to enhance classification performance by over 6 percentage points. The interpretation of the trained AI models reveals that the most significant areas for supporting the decision on COVID-19 identification are consistent with the lesions clinically associated with the virus, i.e., crazy paving, consolidation and ground glass. This means that the artificial models are able to discriminate a positive patient from a negative one (both controls and patients with interstitial pneumonia who tested negative for COVID) by evaluating the presence of those lesions in CT scans. Finally, the AI models are integrated into a user-friendly GUI to support AI explainability for radiologists, which is publicly available at http://perceivelab.com/covid-ai.
The whole AI system is unique since, to the best of our knowledge, it is the first publicly available AI-based software that attempts to explain to radiologists what information is used by AI methods for making decisions, and that proactively involves them in the decision loop to further improve COVID-19 understanding.
Artificial intelligence in medicine
"2021-08-21T00:00:00"
[ "Matteo Pennisi", "Isaak Kavasidis", "Concetto Spampinato", "Vincenzo Schinina", "Simone Palazzo", "Federica Proietto Salanitri", "Giovanni Bellitto", "Francesco Rundo", "Marco Aldinucci", "Massimo Cristofaro", "Paolo Campioni", "Elisa Pianura", "Federica Di Stefano", "Ada Petrone", "Fabrizio Albarello", "Giuseppe Ippolito", "Salvatore Cuzzocrea", "Sabrina Conoci" ]
10.1016/j.artmed.2021.102114 10.1109/TCYB.2020.2990162 10.1118/1.3528204 10.1145/3203217.3205340
Prediction of COVID Criticality Score with Laboratory, Clinical and CT Images using Hybrid Regression Models.
Rapid and precise diagnosis of COVID-19 is very critical in hotspot regions. The main aim of this proposed work is to investigate the baseline, laboratory, and CT features of COVID-19 affected patients in two groups (Early and Critical stages). The detection model for COVID-19 is built depending upon the manifestations that define the severity of the disease. The CT scan images are fed into various deep learning, machine learning, and hybrid learning models to mine the necessary features and predict the CT Score. The predicted CT score, along with the other clinical, laboratory, and CT scan image features, is then passed to train various regression models for predicting the COVID Criticality (CC) Score. These baseline, laboratory, and CT features of COVID-19 are reduced using statistical analysis and univariate logistic regression analysis. When analysing the prediction of CT scores using images alone, AlexNet+Lasso yields the better outcome, with a regression score of 0.9643 and an RMSE of 0.0023, compared with a Decision tree (RMSE of 0.0034; regression score of 0.9578) and a GRU (RMSE of 0.1253; regression score of 0.9323). When analysing the prediction of CC scores using CT scores and the other baseline, laboratory, and CT features, VGG-16+Linear Regression yields the better results, with a regression score of 0.9911 and an RMSE of 0.0002, compared with Linear SVR (RMSE of 0.0006; regression score of 0.9911) and LSTM (RMSE of 0.0005; regression score of 0.9877). Correlation analysis is performed to identify the significance of utilizing the other features in the prediction of the CC Score. The correlation coefficient of the CT scores with the actual values is 0.93 for the Early stage group and 0.92 for the Critical stage group. The correlation coefficient of the CC scores with the actual values is 0.96 for the Early stage group and 0.95 for the Critical stage group. The classification of COVID-19 patients is carried out with the help of the predicted CC Scores.
This proposed work is carried out with the aim of helping radiologists categorize COVID-19 patients faster as Early or Critical staged using CC Scores. The automated prediction of the COVID Criticality Score using our diagnostic model can save radiologists and physicians time in carrying out further treatment and procedures.
Computer methods and programs in biomedicine
"2021-08-18T00:00:00"
[ "Varalakshmi Perumal", "Vasumathi Narayanan", "Sakthi Jaya Sundar Rajasekar" ]
10.1016/j.cmpb.2021.106336 10.1148/radiol.2020200642 10.1007/s13246-020-00865-4 10.1148/radiol.2020201237 10.3760/cma.j.issn.1001-0939.2020.0005 10.1016/S0140-6736(20)30211-7 10.1148/radiol.2020200230 10.1148/radiol.2020200432 10.1016/S0140-6736(20)30183-5 10.1016/j.cmpb.2020.105581 10.1007/s00330-020-06748-2 10.1148/radiol.2020200236 10.1097/RLI.0000000000000672 10.1148/radiol.2020200905 10.1002/jmv.25786 10.1016/j.cmpb.2020.105532 10.1148/ryct.2020200034 10.1007/s00330-020-06713-z 10.1007/s00330-020-06731-x 10.1017/ice.2020.61 10.1148/radiol.2020200274 10.1001/jama.2020.1585 10.1093/cid/ciaa225 10.1148/ryct.2020200031 10.1001/jama.2020.2648 10.1148/radiol.2020200343 10.4103/ijmr.IJMR66320 10.1007/s00330-020-06801-0 10.1371/journal.pone.0236621 10.1056/NEJMoa2001017 10.1148/radiol.2020200490
Multidimensional Evaluation of All-Cause Mortality Risk and Survival Analysis for Hospitalized Patients with COVID-19.
International journal of medical sciences
"2021-08-18T00:00:00"
[ "Jingwen Li", "Hu Luo", "Gang Deng", "Jinying Chang", "Xiaoming Qiu", "Chen Liu", "Bo Qin" ]
10.7150/ijms.58889
iCOVID: interpretable deep learning framework for early recovery-time prediction of COVID-19 patients.
Most prior studies focused on developing models for the severity or mortality prediction of COVID-19 patients. However, effective models for recovery-time prediction are still lacking. Here, we present a deep learning solution named iCOVID that can successfully predict the recovery-time of COVID-19 patients based on predefined treatment schemes and heterogeneous multimodal patient information collected within 48 hours after admission. Meanwhile, an interpretable mechanism termed FSR is integrated into iCOVID to reveal the features greatly affecting the prediction of each patient. Data from a total of 3008 patients were collected from three hospitals in Wuhan, China, for large-scale verification. The experiments demonstrate that iCOVID can achieve a time-dependent concordance index of 74.9% (95% CI: 73.6-76.3%) and an average day error of 4.4 days (95% CI: 4.2-4.6 days). Our study reveals that treatment schemes, age, symptoms, comorbidities, and biomarkers are highly related to recovery-time predictions.
NPJ digital medicine
"2021-08-18T00:00:00"
[ "Jun Wang", "Chen Liu", "Jingwen Li", "Cheng Yuan", "Lichi Zhang", "Cheng Jin", "Jianwei Xu", "Yaqi Wang", "Yaofeng Wen", "Hongbing Lu", "Biao Li", "Chang Chen", "Xiangdong Li", "Dinggang Shen", "Dahong Qian", "Jian Wang" ]
10.1038/s41746-021-00496-3 10.1007/s42979-020-00401-x 10.1109/TMI.2020.2994908 10.1109/TMI.2020.2996645 10.1109/ACCESS.2021.3058537 10.1016/j.imu.2020.100412 10.1016/j.imu.2020.100505 10.1038/s41746-021-00399-3 10.1038/s41746-020-00369-1 10.1007/s42979-020-00335-4 10.1038/s41746-020-00372-6 10.1007/s42979-020-00300-1 10.1002/dmrr.3319 10.1016/j.clinthera.2020.04.009 10.1007/s00330-020-06978-4 10.1007/s42979-020-00216-w 10.1038/s41467-020-20657-4 10.1016/j.jaci.2020.04.006 10.1038/s41551-020-00633-5 10.1093/cid/ciaa1012 10.1001/jamainternmed.2020.3539 10.1016/j.inffus.2019.12.012 10.1073/pnas.2005615117 10.1038/s42256-021-00307-0 10.1080/01621459.1989.10478874 10.1002/sim.2427 10.1214/08-AOAS169 10.1016/j.ajem.2020.10.013 10.1515/cclm-2020-0198 10.1186/s12874-020-01153-1 10.1038/s41467-020-20816-7 10.1038/s41467-020-18297-9 10.1038/s41467-020-18786-x 10.21037/atm-20-3026 10.1038/nrclinonc.2017.141 10.1080/01621459.1988.10478612 10.1097/CM9.0000000000000819 10.2214/AJR.20.22954 10.2214/AJR.20.22976 10.1183/13993003.00775-2020 10.1001/jama.1982.03320430047030 10.1214/ss/1032280214
Hybrid Deep-Learning and Machine-Learning Models for Predicting COVID-19.
The COVID-19 pandemic has had a significant impact on public life and health worldwide, putting the world's healthcare systems at risk. The first step in stopping this outbreak is to detect the infection in its early stages, which will reduce the risk, control the outbreak's spread, and restore full functionality to the world's healthcare systems. Currently, PCR is the most prevalent diagnosis tool for COVID-19. However, chest X-ray images may play an essential role in detecting this disease, as they have been successful for many other viral pneumonia diseases. Unfortunately, there are common features between COVID-19 and other viral pneumonias, and hence manual differentiation between them is a critical problem that needs the aid of artificial intelligence. This research employs deep- and transfer-learning techniques to develop accurate, general, and robust models for detecting COVID-19. The developed models utilize either convolutional neural networks or transfer-learning models, or hybridize them with powerful machine-learning techniques to exploit their full potential. For experimentation, we applied the proposed models to two data sets: the COVID-19 Radiography Database from Kaggle and a local data set from Asir Hospital, Abha, Saudi Arabia. The proposed models achieved promising results in detecting COVID-19 cases and discriminating them from normal and other viral pneumonia with excellent accuracy. The hybrid models extracted features from the flatten layer or the first hidden layer of the neural network and then fed these features into a classification algorithm. This approach enhanced the results further, to full accuracy for binary COVID-19 classification and 97.8% for multiclass classification.
Computational intelligence and neuroscience
"2021-08-17T00:00:00"
[ "Talal S. Qaid", "Hussein Mazaar", "Mohammad Yahya H. Al-Shamri", "Mohammed S. Alqahtani", "Abeer A. Raweh", "Wafaa Alakwaa" ]
10.1155/2021/9996737 10.1016/j.patcog.2020.107613 10.1016/j.bcp.2020.114184 10.1186/s40537-021-00444-8 10.1148/rg.2018170048 10.1016/j.compbiomed.2020.103792 10.1177/2472630320958376 10.1148/rg.2020200097 10.1007/s10489-020-01867-1 10.1007/s00521-020-05410-8 10.1371/journal.pone.0247839 10.1109/42.476112 10.1007/s13246-020-00865-4 10.1016/j.compbiomed.2020.104181 10.1016/j.chaos.2021.110713 10.1016/j.chaos.2020.110495 10.1016/j.imu.2020.100505 10.3390/v12070769 10.1016/j.media.2020.101794 10.1007/s10489-020-01902-1 10.1186/s12938-020-00831-x 10.1038/s41598-020-76550-z 10.1371/journal.pone.0242535 10.1016/j.imu.2020.100360 10.1016/j.scs.2020.102589 10.1109/JPROC.2020.3004555 10.1109/ACCESS.2018.2871027 10.17632/3pxjb8knp7.3
Pareto optimization of deep networks for COVID-19 diagnosis from chest X-rays.
The year 2020 was characterized by the COVID-19 pandemic, which had caused, by the end of March 2021, more than 2.5 million deaths worldwide. Since the beginning, besides the laboratory test used as the gold standard, many approaches have applied deep learning algorithms to chest X-ray images to recognize COVID-19 infected patients. In this context, we found that convolutional neural networks perform well on a single dataset but struggle to generalize to other data sources. To overcome this limitation, we propose a late fusion approach in which we combine the outputs of several state-of-the-art CNNs, introducing a novel method that allows us to construct an optimum ensemble by determining which and how many base learners should be aggregated. This choice is driven by a two-objective function that maximizes, on a validation set, both the accuracy and the diversity of the ensemble itself. A wide set of experiments on several publicly available datasets, accounting for more than 92,000 images, shows that the proposed approach provides average recognition rates of up to 93.54% when tested on external datasets.
Pattern recognition
"2021-08-17T00:00:00"
[ "Valerio Guarrasi", "Natascha Claudia D'Amico", "Rosa Sicilia", "Ermanno Cordelli", "Paolo Soda" ]
10.1016/j.patcog.2021.108242