Dynamic deformable attention network (DDANet) for COVID-19 lesions semantic segmentation.
Deep learning based medical image segmentation is an important step in diagnosis, and it relies strongly on capturing sufficient spatial context without requiring overly complex models that are hard to train with limited labelled data. Training data is particularly scarce for segmenting infection regions in CT images of COVID-19 patients. Attention models help gather contextual information within deep networks and benefit semantic segmentation tasks. The recent criss-cross attention module approximates global self-attention while remaining memory- and time-efficient by separating horizontal and vertical self-similarity computations. However, capturing attention from all non-local locations can adversely impact the accuracy of semantic segmentation networks. We propose a new Dynamic Deformable Attention Network (DDANet) that enables more accurate contextual information computation in a similarly efficient way. Our novel technique is based on a deformable criss-cross attention block that learns both attention coefficients and attention offsets in a continuous way. A deep U-Net (Schlemper et al., 2019) segmentation network that employs this attention mechanism is able to capture attention from pertinent non-local locations and improves performance on a challenging COVID-19 lesion segmentation task compared to criss-cross attention within a U-Net. Our validation experiments show that the performance gain of the recursively applied dynamic deformable attention blocks comes from their ability to capture dynamic and precise attention context. Our DDANet achieves Dice scores of 73.4% and 61.3% for ground-glass opacity and consolidation lesions in COVID-19 segmentation, improving accuracy by 4.9 percentage points over a baseline U-Net and by 24.4 percentage points over current state-of-the-art methods (Fan et al., 2020).
Journal of biomedical informatics
"2021-05-23T00:00:00"
[ "Kumar TRajamani", "HannaSiebert", "Mattias PHeinrich" ]
10.1016/j.jbi.2021.103816
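The criss-cross attention that DDANet extends can be sketched in miniature. The toy below is a single-channel, pure-Python illustration that uses raw feature products in place of learned query/key projections (all assumptions, not the authors' implementation); it shows each position attending only over its own row and column rather than the full map:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def criss_cross_attention(feat):
    """Minimal criss-cross attention over a 2-D feature map (H x W).

    Each position aggregates values from its own row and column,
    weighted by softmax-normalised similarity, approximating global
    self-attention at lower memory and time cost.
    """
    H, W = len(feat), len(feat[0])
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            # gather the criss-cross neighbourhood: row i and column j
            neigh = [feat[i][k] for k in range(W)] + \
                    [feat[k][j] for k in range(H) if k != i]
            weights = softmax([feat[i][j] * v for v in neigh])
            out[i][j] = sum(w * v for w, v in zip(weights, neigh))
    return out

attn = criss_cross_attention([[1.0, 2.0], [3.0, 4.0]])
```

Applying the block recursively (as in the paper) lets information propagate beyond the criss-cross path to the full map.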
COVID-19-CT-CXR: A Freely Accessible and Weakly Labeled Chest X-Ray and CT Image Collection on COVID-19 From Biomedical Literature.
The latest threat to global health is the COVID-19 outbreak. Although there exist large datasets of chest X-rays (CXR) and computed tomography (CT) scans, few COVID-19 image collections are currently available due to patient privacy. At the same time, there is a rapid growth of COVID-19-relevant articles in the biomedical literature, including those that report findings on radiographs. Here, we present COVID-19-CT-CXR, a public database of COVID-19 CXR and CT images, which are automatically extracted from COVID-19-relevant articles from the PubMed Central Open Access (PMC-OA) Subset. We extracted figures, associated captions, and relevant figure descriptions in the article and separated compound figures into subfigures. Because a large portion of figures in COVID-19 articles are not CXR or CT, we designed a deep-learning model to distinguish them from other figure types and to classify them accordingly. The final database includes 1,327 CT and 263 CXR images (as of May 9, 2020) with their relevant text. To demonstrate the utility of COVID-19-CT-CXR, we conducted four case studies. (1) We show that COVID-19-CT-CXR, when used as additional training data, is able to contribute to improved deep-learning (DL) performance for the classification of COVID-19 and non-COVID-19 CT. (2) We collected CT images of influenza, another common infectious respiratory illness that may present similarly to COVID-19, and fine-tuned a baseline deep neural network to distinguish a diagnosis of COVID-19, influenza, or normal or other types of diseases on CT. (3) We fine-tuned an unsupervised one-class classifier from non-COVID-19 CXR and performed anomaly detection to detect COVID-19 CXR. (4) From text-mined captions and figure descriptions, we compared 15 clinical symptoms and 20 clinical findings of COVID-19 versus those of influenza to demonstrate the disease differences in the scientific publications. 
Our database is unique, as the figures are retrieved along with relevant text with fine-grained descriptions, and it can be extended easily in the future. We believe that our work is complementary to existing resources and hope that it will contribute to medical image analysis of the COVID-19 pandemic. The dataset, code, and DL models are publicly available at https://github.com/ncbi-nlp/COVID-19-CT-CXR.
IEEE transactions on big data
"2021-05-18T00:00:00"
[ "YifanPeng", "YuxingTang", "SungwonLee", "YingyingZhu", "Ronald MSummers", "ZhiyongLu" ]
10.1109/tbdata.2020.3035935 10.1109/RBME.2020.2987975 10.1101/2020.04.13.20063941 10.1101/2020.03.20.20039834 10.1101/2020.02.14.20023028 10.1016/j.eng.2020.04.010
Light-weighted ensemble network with multilevel activation visualization for robust diagnosis of COVID19 pneumonia from large-scale chest radiographic database.
Currently, the coronavirus disease 2019 (COVID19) pandemic has killed more than one million people worldwide. In the present outbreak, radiological imaging modalities such as computed tomography (CT) and X-rays are being used to diagnose this disease, particularly in the early stage. However, the assessment of radiographic images involves a subjective evaluation that is time-consuming and requires substantial clinical skill. Nevertheless, the recent evolution of artificial intelligence (AI) has further strengthened computer-aided diagnosis tools and supported medical professionals in making effective diagnostic decisions. Therefore, in this study, the strength of various AI algorithms was analyzed to diagnose COVID19 infection from large-scale radiographic datasets. Based on this analysis, a light-weighted deep network is proposed, which is the first ensemble design (based on MobileNet, ShuffleNet, and FCNet) in the medical domain (particularly for COVID19 diagnosis) that has a reduced number of trainable parameters (a total of 3.16 million) and outperforms various existing models. Moreover, the addition of a multilevel activation visualization layer in the proposed network further visualizes the lesion patterns as multilevel class activation maps (ML-CAMs) along with the diagnostic result (either COVID19 positive or negative). Such additional output as ML-CAMs provides visual insight into the computer's decision and may assist radiologists in validating it, particularly in uncertain situations. Additionally, a novel hierarchical training procedure was adopted to train the proposed network. It trains the network for an adaptive number of epochs based on the validation dataset rather than using a fixed number of epochs. The quantitative results show the better performance of the proposed training method over the conventional end-to-end training procedure.
A large collection of CT-scan and X-ray datasets (based on six publicly available datasets) was used to evaluate the performance of the proposed model and other baseline methods. The experimental results of the proposed network exhibit a promising performance in terms of diagnostic decision. An average F1 score (F1) of 94.60% and 95.94% and area under the curve (AUC) of 97.50% and 97.99% are achieved for the CT-scan and X-ray datasets, respectively. Finally, the detailed comparative analysis reveals that the proposed model outperforms the various state-of-the-art methods in terms of both quantitative and computational performance.
Applied soft computing
"2021-05-18T00:00:00"
[ "MuhammadOwais", "Hyo SikYoon", "TahirMahmood", "AdnanHaider", "HaseebSultan", "Kang RyoungPark" ]
10.1016/j.asoc.2021.107490 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1148/ryct.2020200034 10.1109/TMI.2020.2993291 10.1007/s13246-020-00888-x 10.1007/s10096-020-03901-z 10.1016/j.cmpb.2020.105532 10.1016/j.cmpb.2020.105581 10.1080/07391102.2020.1767212 10.1016/j.cmpb.2020.105608 10.1109/TMI.2020.2996256 10.1016/j.compbiomed.2020.103869 10.1016/j.media.2020.101794 10.3390/info11090419 10.1007/s13246-020-00865-4 10.1080/07391102.2020.1788642 10.1016/j.compbiomed.2020.103795 10.3892/etm.2020.8797 10.1101/2020.04.24.20078998 10.3390/electronics9091388 10.1016/j.asoc.2020.106580 10.1016/j.asoc.2020.106610 10.1109/CVPR.2016.90 10.1109/CVPR.2016.308 10.1109/CVPR.2017.195 10.1109/CVPR.2018.00474 10.1109/CVPR.2018.00716 10.1109/TNNLS.2017.2672978 10.1016/j.asoc.2020.106859 10.2196/21790 10.1016/j.asoc.2020.106742 10.1148/radiol.2020200370 10.1148/radiol.2020201160 10.1007/s10278-013-9622-7 10.5121/ijdkp.2015.5201 10.1016/j.sigpro.2011.12.005 10.3390/s18030699 10.5120/2968-3968 10.1006/jcss.1997.1504 10.1109/72.991427 10.1023/A:1010933404324.pdf 10.1109/TIT.1967.1053964
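The adaptive-epoch idea in the hierarchical training procedure above, running epochs until validation accuracy stops improving rather than for a fixed budget, can be sketched as a minimal early-stopping loop. The `patience` parameter and the callback functions are hypothetical stand-ins, not the paper's exact procedure:

```python
def train_adaptive(run_epoch, validate, patience=3, max_epochs=50):
    """Train for an adaptive number of epochs: stop once validation
    accuracy fails to improve for `patience` consecutive epochs,
    instead of running a fixed epoch budget."""
    best, stale, epoch = -1.0, 0, 0
    while epoch < max_epochs and stale < patience:
        run_epoch()                  # one pass over the training data
        acc = validate()             # accuracy on the validation set
        if acc > best:
            best, stale = acc, 0     # improvement: reset the counter
        else:
            stale += 1               # no improvement this epoch
        epoch += 1
    return epoch, best

# toy validation curve that plateaus after epoch 4
curve = iter([0.6, 0.7, 0.8, 0.85, 0.85, 0.85, 0.85, 0.85, 0.85, 0.85])
epochs_run, best_acc = train_adaptive(lambda: None, lambda: next(curve))
```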
COVID-19 Classification Based on Deep Convolution Neural Network Over a Wireless Network.
Coronavirus Disease 2019 (COVID-19) first spread in China in December 2019 and then spread rapidly around the world. Therefore, rapid diagnosis of COVID-19 has become a very active research topic. One of the possible diagnostic tools is to use a deep convolution neural network (DCNN) to classify patient images. Chest X-ray is one of the most widely-used imaging techniques for classifying COVID-19 cases. This paper presents a proposed wireless communication and classification system for X-ray images to detect COVID-19 cases. Different modulation techniques are compared to select the most reliable one with the least required bandwidth. The proposed DCNN architecture consists of deep feature extraction and classification layers. First, the proposed DCNN hyper-parameters are adjusted in the training phase. Then, the tuned hyper-parameters are utilized in the testing phase. These hyper-parameters are the optimization algorithm, the learning rate, the mini-batch size and the number of epochs. Simulation results show that the proposed scheme outperforms other related pre-trained networks. The performance metrics are accuracy, loss, confusion matrix, sensitivity, and precision.
Wireless personal communications
"2021-05-18T00:00:00"
[ "Wafaa AShalaby", "WaleedSaad", "MonaShokair", "Fathi EAbd El-Samie", "Moawad IDessouky" ]
10.1007/s11277-021-08523-y 10.1007/s12098-020-03263-6 10.1056/nejmoa2001017 10.1016/S0140-6736(20)30183-5 10.1097/RTI.0000000000000404 10.1186/s40537-014-0007-7 10.1109/ACCESS.2014.2325029 10.3390/e21020168 10.3390/electronics8030292 10.1016/j.imu.2020.100360 10.1016/j.chaos.2020.109947 10.1148/radiol.2020200905 10.1007/s12652-021-02967-7
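The tune-then-reuse hyper-parameter workflow described above (adjust the settings in the training phase, then apply the winning values in the testing phase) can be illustrated with a minimal grid search. The scoring function and grid values below are hypothetical, chosen only to make the example runnable:

```python
from itertools import product

def tune_hyperparameters(evaluate, grid):
    """Try every hyper-parameter combination and keep the one with
    the best validation score (training phase); the winning setting
    is then reused unchanged in the testing phase."""
    best_cfg, best_acc = None, -1.0
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        acc = evaluate(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc

# hypothetical scoring function standing in for a full training run
score = lambda c: 0.9 - abs(c["lr"] - 0.001) - abs(c["batch"] - 16) / 1000
grid = {"lr": [0.01, 0.001], "batch": [8, 16], "epochs": [10]}
cfg, acc = tune_hyperparameters(score, grid)
```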
Quantitative evaluation of COVID-19 pneumonia severity by CT pneumonia analysis algorithm using deep learning technology and blood test results.
To evaluate whether early chest computed tomography (CT) lesions quantified by artificial intelligence (AI)-based commercial software and blood test values at the initial presentation can differentiate the severity of COVID-19 pneumonia. This retrospective study included 100 SARS-CoV-2-positive patients with mild (n = 23), moderate (n = 37) or severe (n = 40) pneumonia classified according to the Japanese guidelines. Univariate Kruskal-Wallis and multivariate ordinal logistic analyses were used to examine whether CT parameters (opacity score, volume of opacity, % opacity, volume of high opacity, % high opacity and mean HU total on CT) as well as blood test parameters [procalcitonin, estimated glomerular filtration rate (eGFR), C-reactive protein, % lymphocyte, ferritin, aspartate aminotransferase, lactate dehydrogenase, alanine aminotransferase, creatine kinase, hemoglobin A1c, prothrombin time, activated partial thromboplastin time (APTT), white blood cell count and creatinine] differed by disease severity. All CT parameters and all blood test parameters except procalcitonin and APTT were significantly different among the mild, moderate and severe groups. By multivariate analysis, mean HU total and eGFR were the two independent factors associated with severity (p < 0.0001). Cutoff values for mean HU total and eGFR were -801 HU and 77 ml/min/1.73 m², respectively. The mean HU total of the whole lung, determined by the AI algorithm, and eGFR reflect the severity of COVID-19 pneumonia.
Japanese journal of radiology
"2021-05-15T00:00:00"
[ "TomohisaOkuma", "ShinichiHamamoto", "TetsunoriMaebayashi", "AkishigeTaniguchi", "KyokoHirakawa", "ShuMatsushita", "KazukiMatsushita", "KatsukoMurata", "TakaoManabe", "YukioMiki" ]
10.1007/s11604-021-01134-4 10.1016/S0140-6736(20)30211-7 10.1007/s11604-020-00958-w 10.1007/s11604-020-01010-7 10.1148/radiol.2020200642 10.1371/journal.pone.0230548 10.1007/s00330-020-06817-6 10.21037/atm-20-3421 10.1097/RLI.0000000000000674 10.1007/s00330-020-07042-x 10.1371/journal.pone.0236858 10.1038/s41598-020-79097-1 10.1186/s43055-020-00309-9 10.1016/j.ejro.2020.100272 10.7150/thno.45985 10.1148/ryct.2020200389 10.1001/jamainternmed.2020.2033 10.1111/all.14496 10.1016/S1473-3099(20)30086-4 10.21037/jtd-20-1743 10.1183/13993003.00547-2020 10.1111/all.14657 10.1016/j.jinf.2020.04.021 10.1681/ASN.2020030276
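The two independent severity factors and their reported cutoffs (mean HU total of -801 HU and eGFR of 77 ml/min/1.73 m²) can be turned into a toy screening flag. The additive scoring below is illustrative only and is not the study's ordinal logistic model:

```python
def flag_severity(mean_hu_total, egfr, hu_cutoff=-801.0, egfr_cutoff=77.0):
    """Toy risk flag from the study's two independent factors:
    denser lungs (mean HU total above the cutoff, i.e. less aerated)
    and reduced kidney function (eGFR below the cutoff) each add one
    point; a score of 2 means both risk factors are present."""
    return int(mean_hu_total > hu_cutoff) + int(egfr < egfr_cutoff)

assert flag_severity(-850.0, 90.0) == 0   # well-aerated lungs, normal eGFR
assert flag_severity(-750.0, 60.0) == 2   # both risk factors present
```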
Digital holographic deep learning of red blood cells for field-portable, rapid COVID-19 screening.
Rapid screening of red blood cells for active infection of COVID-19 is presented using a compact and field-portable, 3D-printed shearing digital holographic microscope. Video holograms of thin blood smears are recorded, individual red blood cells are segmented for feature extraction, then a bi-directional long short-term memory network is used to classify between healthy and COVID positive red blood cells based on their spatiotemporal behavior. Individuals are then classified based on the simple majority of their cells' classifications. The proposed system may be beneficial for under-resourced healthcare systems. To the best of our knowledge, this is the first report of digital holographic microscopy for rapid screening of COVID-19.
Optics letters
"2021-05-15T00:00:00"
[ "TimothyO'Connor", "Jian-BingShen", "Bruce TLiang", "BahramJavidi" ]
10.1364/OL.426152
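The patient-level decision rule described above, a simple majority over the per-cell classifications, is straightforward to sketch (True standing for a COVID-positive cell classification):

```python
def classify_patient(cell_predictions):
    """Patient-level call by simple majority vote over per-cell
    classifications (True = cell classified as COVID-positive)."""
    positive = sum(cell_predictions)
    return positive * 2 > len(cell_predictions)

assert classify_patient([True, True, False]) is True
assert classify_patient([False, True, False, False]) is False
```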
Joint Learning of 3D Lesion Segmentation and Classification for Explainable COVID-19 Diagnosis.
Given the outbreak of the COVID-19 pandemic and the shortage of medical resources, extensive deep learning models have been proposed for automatic COVID-19 diagnosis based on 3D computed tomography (CT) scans. However, existing models process 3D lesion segmentation and disease classification independently, ignoring the inherent correlation between these two tasks. In this paper, we propose a joint deep learning model of 3D lesion segmentation and classification for diagnosing COVID-19, called DeepSC-COVID, as the first attempt in this direction. Specifically, we establish a large-scale CT database containing 1,805 3D CT scans with fine-grained lesion annotations, and reveal 4 findings about lesion differences between COVID-19 and community acquired pneumonia (CAP). Inspired by our findings, DeepSC-COVID is designed with 3 subnets: a cross-task feature subnet for feature extraction, a 3D lesion subnet for lesion segmentation, and a classification subnet for disease diagnosis. In addition, a task-aware loss is proposed for learning the task interaction across the 3D lesion and classification subnets. Different from all existing models for COVID-19 diagnosis, our model is interpretable with fine-grained 3D lesion distribution. Finally, extensive experimental results show that the joint learning framework in our model significantly improves both the efficiency and efficacy of 3D lesion segmentation and disease classification.
IEEE transactions on medical imaging
"2021-05-14T00:00:00"
[ "XiaofeiWang", "LaiJiang", "LiuLi", "MaiXu", "XinDeng", "LisongDai", "XiangyangXu", "TianyiLi", "YichenGuo", "ZulinWang", "Pier LuigiDragotti" ]
10.1109/TMI.2021.3079709
COVID19-CT-dataset: an open-access chest CT image repository of 1000+ patients with confirmed COVID-19 diagnosis.
The ongoing Coronavirus disease 2019 (COVID-19) pandemic has drastically impacted global health and the economy. Computed tomography (CT) is the prime imaging modality for diagnosis of lung infections in COVID-19 patients. Data-driven and Artificial intelligence (AI)-powered solutions for automatic processing of CT images predominantly rely on large-scale, heterogeneous datasets. Owing to privacy and data availability issues, open-access and publicly available COVID-19 CT datasets are difficult to obtain, thus limiting the development of AI-enabled automatic diagnostic solutions. To tackle this problem, large CT image datasets encompassing diverse patterns of lung infections are in high demand. In the present study, we provide an open-source repository containing 1000+ CT images of COVID-19 lung infections established by a team of board-certified radiologists. CT images were acquired from two main general university hospitals in Mashhad, Iran from March 2020 until January 2021. COVID-19 infections were confirmed with matching tests including reverse transcription polymerase chain reaction (RT-PCR) and accompanying clinical symptoms. All data are 16-bit grayscale images composed of 512 × 512 pixels and are stored in the DICOM standard. Patient privacy is preserved by removing all patient-specific information from image headers. Subsequently, all images corresponding to each patient are compressed and stored in RAR format.
BMC research notes
"2021-05-14T00:00:00"
[ "ShokouhShakouri", "Mohammad AminBakhshali", "ParvanehLayegh", "BehzadKiani", "FaridMasoumi", "SaeedehAtaei Nakhaei", "Sayyed MostafaMostafavi" ]
10.1186/s13104-021-05592-x 10.4081/gh.2020.953 10.1109/RBME.2020.2987975 10.1038/s41597-021-00900-3 10.1148/radiol.2462070712 10.7910/DVN/6ACUZJ
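The privacy step described above, stripping patient-specific information from image headers, can be sketched with a plain dictionary standing in for a DICOM header. A real pipeline would operate on DICOM tags via a library such as pydicom; the tag names below are illustrative:

```python
# illustrative set of patient-identifying header fields
PATIENT_TAGS = {"PatientName", "PatientID", "PatientBirthDate",
                "PatientAddress", "OtherPatientIDs"}

def anonymise_header(header):
    """Drop patient-identifying fields from a DICOM-style header dict
    while keeping acquisition metadata, mirroring the repository's
    step of removing patient-specific information from image headers."""
    return {k: v for k, v in header.items() if k not in PATIENT_TAGS}

hdr = {"PatientName": "DOE^JANE", "PatientID": "123",
       "Modality": "CT", "Rows": 512, "Columns": 512}
clean = anonymise_header(hdr)
```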
Segmenting lung lesions of COVID-19 from CT images via pyramid pooling improved Unet.
Segmenting lesion regions of Coronavirus Disease 2019 (COVID-19) from computed tomography (CT) images is challenging because COVID-19 lesions are characterized by high variation, low contrast between infection lesions and surrounding normal tissue, and blurred infection boundaries. Moreover, a shortage of available CT datasets hinders the application of deep learning techniques to COVID-19. To address these issues, we propose a deep learning-based approach known as PPM-Unet for segmenting COVID-19 lesions from CT images. Our method improves a U-Net by adopting pyramid pooling modules instead of the conventional skip connection and then enhances the representation of the neural network by adding a global attention mechanism. We first pre-train PPM-Unet on a COVID-19 dataset with pseudo labels containing 1600 samples, producing a coarse model. Then we fine-tune the coarse PPM-Unet on the standard COVID-19 dataset consisting of 100 pairs of samples to obtain a fine PPM-Unet. Qualitative and quantitative results demonstrate that our method can accurately segment COVID-19 infection regions from CT images and achieves higher performance than the other state-of-the-art segmentation models in this study. It offers a promising tool and lays a foundation for quantitatively detecting COVID-19 lesions.
Biomedical physics & engineering express
"2021-05-13T00:00:00"
[ "YinjinMa", "PengFeng", "PengHe", "YongRen", "XiaodongGuo", "XiaoliuYu", "BiaoWei" ]
10.1088/2057-1976/ac008a
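The pyramid pooling module that replaces the conventional skip connection can be sketched on a single-channel map: average-pool the map into progressively coarser grids and concatenate the pooled values, giving the decoder multi-scale global context. This is a toy on a square map, not the PPM-Unet implementation:

```python
def pyramid_pool(feat, scales=(1, 2)):
    """Pyramid pooling sketch: average-pool a square 2-D feature map
    into several coarse grids (1x1, 2x2, ...) and concatenate the
    pooled values into one multi-scale context vector."""
    n = len(feat)
    pooled = []
    for s in scales:
        step = n // s                       # side length of each block
        for bi in range(s):
            for bj in range(s):
                block = [feat[i][j]
                         for i in range(bi * step, (bi + 1) * step)
                         for j in range(bj * step, (bj + 1) * step)]
                pooled.append(sum(block) / len(block))
    return pooled

feats = pyramid_pool([[1.0, 2.0], [3.0, 4.0]], scales=(1, 2))
```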
The diagnostic accuracy of Artificial Intelligence-Assisted CT imaging in COVID-19 disease: A systematic review and meta-analysis.
Artificial intelligence (AI) systems have become critical in support of decision-making. This systematic review summarizes all the data currently available on AI-assisted CT-scan prediction accuracy for COVID-19. The ISI Web of Science, Cochrane Library, PubMed, Scopus, CINAHL, Science Direct, PROSPERO, and EMBASE were systematically searched. We used the revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool to assess the quality and potential bias of all included studies. A hierarchical summary receiver-operating characteristic (HSROC) curve and a summary receiver operating characteristic (SROC) curve were implemented. The area under the curve (AUC) was computed to determine the diagnostic accuracy. Finally, 36 studies (a total of 39,246 image data) were selected for inclusion in the final meta-analysis. The pooled sensitivity for AI was 0.90 (95% CI, 0.90-0.91), specificity was 0.91 (95% CI, 0.90-0.92) and the AUC was 0.96 (95% CI, 0.91-0.98). For the deep learning (DL) methods, the pooled sensitivity was 0.90 (95% CI, 0.90-0.91), specificity was 0.88 (95% CI, 0.87-0.88) and the AUC was 0.96 (95% CI, 0.93-0.97). In the case of machine learning (ML), the pooled sensitivity was 0.90 (95% CI, 0.90-0.91), specificity was 0.95 (95% CI, 0.94-0.95) and the AUC was 0.97 (95% CI, 0.96-0.99). AI in COVID-19 patients is useful in identifying symptoms of lung involvement. More prospective real-time trials are required to confirm AI's role in rapid COVID-19 diagnosis due to possible selection bias and the retrospective nature of currently available studies.
Informatics in medicine unlocked
"2021-05-13T00:00:00"
[ "MeisamMoezzi", "KiarashShirbandi", "Hassan KianiShahvandi", "BabakArjmand", "FakherRahim" ]
10.1016/j.imu.2021.100591 10.1007/s11548-020-02299-5 10.1002/ima.22525
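The AUC summary statistic pooled in the meta-analysis is the area under an ROC curve. For a single summary operating point, such as sensitivity 0.90 and specificity 0.91 (hence a false-positive rate of 0.09), the area can be computed by the trapezoidal rule; this is a generic illustration, not the review's HSROC fitting procedure:

```python
def auc_trapezoid(roc_points):
    """Area under an ROC curve by the trapezoidal rule, given
    (false-positive-rate, sensitivity) points sorted by FPR."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(roc_points, roc_points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# one summary operating point: sens 0.90, spec 0.91 -> FPR 0.09
auc = auc_trapezoid([(0.0, 0.0), (0.09, 0.90), (1.0, 1.0)])
```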
COVID-19 diagnosis from CT scans and chest X-ray images using low-cost Raspberry Pi.
The diagnosis of COVID-19 is vitally important. Several studies have been conducted to decide whether the chest X-ray and computed tomography (CT) scans of patients indicate COVID-19. While these efforts resulted in successful classification systems, the design of a portable and cost-effective COVID-19 diagnosis system has not yet been addressed. The memory requirements of the current state-of-the-art COVID-19 diagnosis systems are not suitable for embedded systems due to their large memory footprints (e.g., hundreds of megabytes). Thus, the current work is motivated to design a similar system with minimal memory requirements. In this paper, we propose a diagnosis system using a Raspberry Pi Linux embedded system. First, local features are extracted using the local binary pattern (LBP) algorithm. Second, global features are extracted from the chest X-ray or CT scans using multi-channel fractional-order Legendre-Fourier moments (MFrLFMs). Finally, the most significant features (local and global) are selected. The proposed system steps are integrated to fit the low computational and memory capacities of the embedded system. Among existing state-of-the-art deep learning (DL)-based methods, the proposed method requires the smallest computational and memory resources, two to three orders of magnitude less than the state-of-the-art methods.
PloS one
"2021-05-12T00:00:00"
[ "Khalid MHosny", "Mohamed MDarwish", "KenliLi", "AhmadSalah" ]
10.1371/journal.pone.0250688 10.1016/j.clinimag.2020.04.001 10.1056/NEJMoa2002032 10.1007/s10115-020-01495-8 10.1016/j.ijmedinf.2020.104284 10.1016/j.ijmedinf.2020.104340 10.1007/s13246-020-00865-4 10.1016/j.asoc.2020.106504 10.1016/j.patcog.2020.107324 10.1109/TPAMI.2002.1017623 10.1109/TAFFC.2017.2713359 10.1016/j.patcog.2018.11.014 10.1007/s10044-018-0740-1 10.1007/s11554-016-0622-y 10.1007/s11554-009-0135-z 10.1016/j.procs.2015.02.025 10.1007/s11227-016-1933-2 10.1007/s13244-018-0639-9 10.1016/j.compbiomed.2020.103792 10.1371/journal.pone.0235187 10.1007/s12559-020-09795-5 10.1016/j.media.2020.101824 10.1088/1361-6560/abbf9e 10.1109/JBHI.2020.3019505 10.1080/07391102.2020.1788642 10.1109/TMI.2020.2996645 10.1109/TIP.2021.3058783 10.1007/s10140-020-01886-y 10.1007/s00500-020-05275-y 10.1101/2020.04.24.20078584
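The local binary pattern (LBP) features used as the local descriptor above can be sketched for one 3x3 patch: threshold the 8 neighbours against the centre pixel and read the bits out as a byte. The clockwise read-out from the top-left is one common convention, assumed here rather than taken from the paper:

```python
def lbp_code(patch):
    """Local binary pattern of one 3x3 patch: each neighbour that is
    >= the centre pixel contributes a 1-bit, read clockwise from the
    top-left corner, yielding an 8-bit texture code."""
    c = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for (i, j) in order:
        code = (code << 1) | (1 if patch[i][j] >= c else 0)
    return code

code = lbp_code([[9, 9, 9],
                 [1, 5, 1],
                 [1, 1, 1]])
```

A full LBP image descriptor is then the histogram of these codes over all pixels.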
Classification of COVID-19 chest X-Ray and CT images using a type of dynamic CNN modification method.
Understanding and classifying Chest X-Ray (CXR) and computed tomography (CT) images are of great significance for COVID-19 diagnosis. The existing research on the classification of COVID-19 cases faces the challenges of data imbalance, insufficient generalisability, the lack of comparative study, etc. To address these problems, this paper proposes a type of modified MobileNet to classify COVID-19 CXR images and a modified ResNet architecture for CT image classification. In particular, a modification method of convolutional neural networks (CNN) is designed to solve the gradient vanishing problem and improve the classification performance through dynamically combining features in different layers of a CNN. The modified MobileNet is applied to the classification of COVID-19, Tuberculosis, viral pneumonia (with the exception of COVID-19), bacterial pneumonia and normal controls using CXR images. Also, the proposed modified ResNet is used for the classification of COVID-19, non-COVID-19 infections and normal controls using CT images. The results show that the proposed methods achieve 99.6% test accuracy on the five-category CXR image dataset and 99.3% test accuracy on the CT image dataset. Six advanced CNN architectures and two specific COVID-19 detection models, i.e., COVID-Net and COVIDNet-CT, are used in comparative studies. Two benchmark datasets and a CXR image dataset which combines eight different CXR image sources are employed to evaluate the performance of the above models. The results show that the proposed methods outperform the comparative models in classification accuracy, sensitivity, and precision, which demonstrates their potential in computer-aided diagnosis for healthcare applications.
Computers in biology and medicine
"2021-05-11T00:00:00"
[ "GuangyuJia", "Hak-KeungLam", "YujiaXu" ]
10.1016/j.compbiomed.2021.104425
On the role of artificial intelligence in medical imaging of COVID-19.
Although a plethora of research articles on AI methods for COVID-19 medical imaging have been published, their clinical value remains unclear. We conducted the largest systematic review of the literature addressing the utility of AI in imaging for COVID-19 patient care. By keyword searches on PubMed and preprint servers throughout 2020, we identified 463 manuscripts and performed a systematic meta-analysis to assess their technical merit and clinical relevance. Our analysis reveals a significant disparity between the clinical and AI communities, in the focus on both imaging modalities (AI experts neglected CT and ultrasound, favoring X-ray) and performed tasks (71.9% of AI papers centered on diagnosis). The vast majority of manuscripts were found to be deficient regarding potential use in clinical practice, but 2.7% (n = 12) of publications were assigned a high maturity level and are summarized in greater detail. We provide an itemized discussion of the challenges in developing clinically relevant AI solutions, with recommendations and remedies.
Patterns (New York, N.Y.)
"2021-05-11T00:00:00"
[ "JannisBorn", "DavidBeymer", "DeeptaRajan", "AdamCoy", "Vandana VMukherjee", "MatteoManica", "PrasanthPrasanna", "DeddehBallah", "MichalGuindy", "DorithShaham", "Pallav LShah", "EmmanouilKarteris", "Jan LRobertus", "MariaGabrani", "MichalRosen-Zvi" ]
10.1016/j.patter.2021.100269 10.1109/RBME.2020.2990959 10.1016/j.chest.2020.04.003 10.1016/j.ejro.2020.100231 10.1109/RBME.2020.2987975 10.1016/j.jiph.2020.06.028 10.1097/RLI.0000000000000763 10.1101/2020.04.24.20078584 10.3390/app11020672 10.1148/radiol.2021219004 10.1128/JCM.00512-20 10.1148/radiol.2020200905 10.1007/s00330-020-07225-6 10.1109/PIMRC.2017.8292361 10.1093/intqhc/mzab010 10.3174/ajnr.A2742 10.1148/radiol.2020203173 10.5811/westjem.2020.5.47743 10.1183/23120541.00539-2020 10.1016/j.ultrasmedbio.2020.07.003 10.1016/S2213-2600(20)30120-X 10.1136/bmj.m689 10.1186/s12916-019-1426-2
Clinical Factors and Quantitative CT Parameters Associated With ICU Admission in Patients of COVID-19 Pneumonia: A Multicenter Study.
The clinical spectrum of COVID-19 pneumonia is varied. Thus, it is important to identify, at an early stage, risk factors that predict deterioration requiring transfer of patients to the ICU. A retrospective multicenter study was conducted on COVID-19 patients admitted to designated hospitals in China from Jan 17, 2020, to Feb 17, 2020. Clinical presentation, laboratory data, and quantitative CT parameters were collected. The results showed that an increased risk of ICU admission was associated with age > 60 years (odds ratio [OR], 12.72; 95% confidence interval [CI], 2.42-24.61;
Frontiers in public health
"2021-05-11T00:00:00"
[ "ChengxiYan", "YingChang", "HuanYu", "JingxuXu", "ChencuiHuang", "MingleiYang", "YiqiaoWang", "DiWang", "TianYu", "ShuqinWei", "ZhenyuLi", "FeifeiGong", "MingqingKou", "WenjingGou", "QiliZhao", "PenghuiSun", "XiuqinJia", "ZhaoyangFan", "JialiXu", "SijieLi", "QiYang" ]
10.3389/fpubh.2021.648360 10.1101/2020.02.06.20020974 10.1038/s41586-020-2008-3 10.1038/s41421-020-0147-1 10.1186/s40779-020-00240-0 10.1016/S0140-6736(20)30211-7 10.1056/NEJMoa2001316 10.1148/radiol.2020200370 10.1097/RTI.0000000000000524 10.1001/jama.2020.5394 10.1016/S2213-2600(20)30079-5 10.1111/jebm.12418 10.1148/radiol.2020200642 10.1016/j.jinf.2020.03.005 10.1097/RLI.0000000000000674 10.1136/bmjopen-2020-044500 10.7150/thno.46465 10.1038/s41467-020-18786-x 10.1016/j.ejrad.2019.108774 10.1177/2333794X21991531 10.1148/radiol.11092149 10.1016/S0140-6736(20)30183-5 10.1001/jama.2020.1585 10.1016/S0140-6736(20)30566-3 10.1007/s11427-020-1661-4 10.1016/j.antiviral.2016.11.006 10.1007/BF01651146 10.4081/cp.2020.1271 10.2174/138161212799504731 10.1016/S2468-1253(20)30084-4 10.1148/radiol.2020200343 10.1148/ryct.2020200110 10.1097/RLI.0000000000000672 10.3892/etm.2017.4449 10.1016/j.compbiomed.2020.103792 10.1148/radiol.2020200905 10.1038/s41467-020-17280-8
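The odds ratios reported above (e.g. for age > 60 years versus ICU admission) come from 2x2 tables of exposure versus outcome. The computation is shown below with hypothetical counts, not the study's data:

```python
def odds_ratio(exposed_cases, exposed_controls,
               unexposed_cases, unexposed_controls):
    """Odds ratio from a 2x2 table: the cross-product ratio
    (a*d)/(b*c), the effect measure used to relate risk factors
    such as age > 60 to ICU admission."""
    return (exposed_cases * unexposed_controls) / \
           (exposed_controls * unexposed_cases)

# hypothetical counts for illustration only:
# 30 of 40 patients aged > 60 admitted to ICU, 20 of 60 younger patients
or_age = odds_ratio(30, 10, 20, 40)
```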
Computed Tomography Image Processing Analysis in COVID-19 Patient Follow-Up Assessment.
The COVID-19 pandemic has spread rapidly, infecting patients around the world in a short space of time. Chest computed tomography (CT) images of patients who are infected with COVID-19 can offer early diagnosis and efficient forecast monitoring at a low cost. Automated diagnosis of COVID-19 on CT can speed up many tasks and the application of medical treatments, and can help complement reverse transcription-polymerase chain reaction (RT-PCR) diagnosis. The aim of this work is to develop a system that automatically identifies ground-glass opacity (GGO) and pulmonary infiltrates (PIs) on CT images from patients with COVID-19, in order to assess disease progression during the patient's follow-up assessment and evaluation. We propose an efficient methodology that incorporates mean-shift oversegmentation followed by the superpixel-SLIC (simple linear iterative clustering) algorithm on COVID-19 CT images for pulmonary parenchyma segmentation. To identify the pulmonary parenchyma, we described each superpixel cluster according to its position, grey intensity, second-order texture, and spatial-context-saliency features and classified it with a tree random forest (TRF). Second, watershed segmentation was applied to the mean-shift clusters within the identified pulmonary parenchyma, and GGO and PI zones were identified by describing each watershed cluster by its position, grey intensity, gradient entropy, second-order texture, Euclidean distance to the border of the PI zone, and global saliency features, again using a TRF. Our classification results for pulmonary parenchyma identification on COVID-19 CT images had a precision of over 92% and recall of over 92% on twofold cross validation. For GGO and PI identification, the method showed 96% precision and 96% recall on twofold cross validation.
Journal of healthcare engineering
"2021-05-11T00:00:00"
[ "SantiagoTello-Mijares", "LuisaWoo" ]
10.1155/2021/8869372 10.1016/j.chest.2020.04.003 10.1109/RBME.2020.2987975 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1148/ryct.2020200034 10.36227/techrxiv.12212516.v1 10.1101/2020.02.14.20023028 10.1101/2020.02.25.20021568 10.1148/radiol.2020200905 10.1007/s10489-020-01714-3 10.1101/2020.04.13.20063479 10.1101/2020.04.13.20063941 10.1101/2020.02.23.20026930 10.1109/TMI.2020.2996645 10.1101/2020.04.16.20064709 10.1109/TIT.1975.1055330 10.1109/34.1000236 10.1109/TPAMI.2012.120 10.1109/PROC.1979.11328 10.1155/2015/586928 10.1155/2014/536308 10.1109/CVPR.2012.6247743 10.1145/957013.957094 10.1111/j.1467-8659.2009.01645.x 10.1109/ANZIIS.1994.396988 10.1023/a:1010933404324 10.1109/TPAMI.2014.2345401
Prediction of COVID-19 with Computed Tomography Images using Hybrid Learning Techniques.
Reverse transcription polymerase chain reaction (RT-PCR), used for diagnosing COVID-19, has been found to give a low detection rate during the early stages of infection. Radiological analysis of CT images has given a higher prediction rate than the RT-PCR technique. In this paper, hybrid learning models are used to classify COVID-19 CT images, community-acquired pneumonia (CAP) CT images, and normal CT images with high specificity and sensitivity. The proposed system is compared with various machine learning and deep learning classifiers for better data analysis. The outcome of this study is also compared with other recent studies on COVID-19 classification for further analysis. The proposed model outperforms the others, with an accuracy of 96.69%, sensitivity of 96%, and specificity of 98%.
Disease markers
"2021-05-11T00:00:00"
[ "Varalakshmi Perumal", "Vasumathi Narayanan", "Sakthi Jaya Sundar Rajasekar" ]
10.1155/2021/5522729 10.1056/NEJMoa2001017 10.1016/S1473-3099(20)30086-4 10.1148/radiol.2020200370 10.1148/radiol.2020200343 10.1148/radiol.2020200330 10.1148/radiol.2020200432 10.3348/kjr.2020.0157 10.1148/radiol.2020200642 10.1101/2020.02.14.20023028 10.1007/s10140-020-01886-y 10.1109/ACCESS.2020.3016780 10.1007/s13246-020-00865-4 10.20944/preprints202003.0300.v1
Pneumonia Detection Using an Improved Algorithm Based on Faster R-CNN.
Pneumonia remains a threat to human health; the coronavirus disease 2019 (COVID-19) that began at the end of 2019 has had a major impact on the world. It is still raging in many countries and has caused great losses to people's lives and property. In this paper, we present a method based on DeepConv-DilatedNet for identifying and localizing pneumonia in chest X-ray (CXR) images. The two-stage detector Faster R-CNN is adopted as the structure of the network. A Feature Pyramid Network (FPN) is integrated into a residual neural network with dilated bottlenecks so that deep features are expanded while preserving the deep feature and position information of the object. In DeepConv-DilatedNet, a deconvolution network is used to restore high-level feature maps to their original size, further retaining the target information. DeepConv-DilatedNet also uses a fully convolutional architecture with computation shared across the entire image. Soft-NMS is then used to screen boxes and ensure sample quality, and K-Means++ is used to generate anchor boxes to improve localization accuracy. The algorithm obtained 39.23% mean average precision (mAP) on the X-ray image dataset from the Radiological Society of North America (RSNA) and 38.02% mAP on the ChestX-ray14 dataset, surpassing other detection algorithms. In summary, this paper proposes an improved algorithm that can provide doctors with the locations of pneumonia lesions.
Computational and mathematical methods in medicine
"2021-05-11T00:00:00"
[ "Shangjie Yao", "Yaowu Chen", "Xiang Tian", "Rongxin Jiang" ]
10.1155/2021/8854892 10.1016/S0140-6736(16)31593-8
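The K-Means++ anchor-generation step described above can be sketched as clustering the (width, height) of ground-truth boxes; the box sizes below are synthetic placeholders, not the RSNA data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic (width, height) pairs of ground-truth lesion boxes, in pixels,
# drawn around three typical scales.
wh = np.vstack([
    rng.normal([60.0, 80.0], 8.0, size=(100, 2)),
    rng.normal([150.0, 120.0], 15.0, size=(100, 2)),
    rng.normal([250.0, 300.0], 20.0, size=(100, 2)),
])
# k-means++ seeding picks well-spread initial centres, so the cluster
# centres converge to representative anchor shapes for the detector.
km = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0).fit(wh)
anchors = km.cluster_centers_[np.argsort(km.cluster_centers_[:, 0])]
```

In a real Faster R-CNN setup these centres would replace the hand-set anchor scales and aspect ratios.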
Automatic prediction of COVID-19 from chest images using modified ResNet50.
Coronavirus disease 2019 (COVID-19), discovered in Wuhan, China in December 2019, was announced as a world pandemic by the World Health Organization (WHO). It has had catastrophic impacts on daily life, public health, and the global economy. The detection of COVID-19 is now a critical task for medical specialists. Laboratory methods for detecting the virus, such as polymerase chain reaction, antigen, and antibody tests, have pros and cons in terms of the time required to obtain results, accuracy, cost, and suitability to the phase of infection. The need for accurate, fast, and cheap auxiliary diagnostic tools has become a necessity, as no accurate automated toolkits are available. Other medical investigations, such as chest X-ray and computerized tomography scans, are imaging techniques that play an important role in the diagnosis of COVID-19. The application of advanced artificial intelligence techniques to radiological imaging can help with the accurate detection of this virus. However, due to the small dataset available for COVID-19, transfer learning from pre-trained convolutional neural networks (CNNs) is a promising solution for diagnosis. Transfer learning is an effective mechanism for transferring knowledge from generic object recognition tasks to domain-specific tasks. Hence, the main contribution of this paper is to exploit pre-trained deep learning CNN architectures as a cornerstone to build an automated tool for the detection and diagnosis of COVID-19 in chest X-ray and computerized tomography images. The main idea is to make use of their convolutional structure and weights learned on large datasets such as ImageNet. Moreover, a modification to ResNet50 is proposed to classify patients as COVID-infected or not. This modification includes adding three new layers, named,
Multimedia tools and applications
"2021-05-11T00:00:00"
[ "Marwa Elpeltagy", "Hany Sallam" ]
10.1007/s11042-021-10783-6 10.1016/S0140-6736(20)30211-7 10.1016/j.compbiomed.2020.103792 10.1007/s10096-019-03782-x 10.1016/j.ejrad.2020.109041 10.1148/ryct.2020200047
Determination of COVID-19 pneumonia based on generalized convolutional neural network model from chest X-ray images.
X-ray units have become one of the most advantageous candidates for triaging patients infected with the new coronavirus disease COVID-19, thanks to their relatively low radiation dose, ease of access, practicality, reduced price, and quick imaging process. This research intended to develop a reliable convolutional neural network (CNN) model for the classification of COVID-19 from chest X-ray views, while preventing bias issues due to the database. A transfer learning-based CNN model was developed using a total of 1,218 chest X-ray images (CXIs), consisting of 368 COVID-19 pneumonia and 850 other pneumonia cases, with pre-trained architectures including DenseNet-201, ResNet-18, and SqueezeNet. The chest X-ray images were acquired from publicly available databases, and each individual image was carefully selected to prevent any bias problem. A stratified 5-fold cross-validation approach was utilized with a ratio of 90% for training and 10% for testing (unseen folds), in which 20% of the training data was used as a validation set to prevent overfitting. The binary classification performance of the proposed CNN models was evaluated on the testing data. The activation mapping approach was implemented to improve the causality and visual interpretability of the radiographs. The outcomes demonstrated that the proposed CNN model built on the DenseNet-201 architecture outperformed the others, with the highest accuracy, precision, recall, and F1-scores of 94.96%, 89.74%, 94.59%, and 92.11%, respectively. The results indicated that reliable diagnosis of COVID-19 pneumonia from CXIs based on the CNN model opens the door to accelerating triage, saving critical time, and prioritizing resources, besides assisting radiologists.
Expert systems with applications
"2021-05-11T00:00:00"
[ "Adi Alhudhaif", "Kemal Polat", "Onur Karaman" ]
10.1016/j.eswa.2021.115141 10.1148/radiol.2020200642 10.1155/2020/8828855 10.1016/S0140-6736(20)30211-7 10.1007/s11554-021-01086-y 10.1542/peds.2020-0702 10.1109/JSAC.2020.3020598 10.1136/bmj.m1066 10.1080/07391102.2020.1767212 10.1016/j.scitotenv.2020.138858 10.1164/rccm.2014P7 10.1007/s10489-020-01902-1 10.1148/radiol.2020200527 10.1016/j.acra.2019.10.001 10.1016/S1473-3099(20)30134 10.1016/j.ijantimicag.2020.105951 10.1016/j.media.2020.101794 10.1016/j.bspc.2020.102365 10.1109/JAS.2020.1003393 10.1016/j.chaos.2020.110245 10.1016/j.compbiomed.2020.103792 10.1109/JIOT.2020.3038009 10.3233/XST-200757 10.1016/j.eswa.2014.07.013 10.1109/ACCESS.2018.2817614 10.1007/s12559-021-09848-3 10.1007/s00264-020-04609-7 10.1001/jama.2020.1585 10.1038/s41598-020-76550-z 10.1101/2020.02.14.20023028 10.1148/radiol.2020200343 10.1007/s11427-020-1637-5
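The evaluation protocol this abstract describes, stratified 5-fold cross-validation with 20% of each training portion held out for validation, can be sketched as below. The data is synthetic, and note that plain 5-fold splitting yields an 80/20 outer split per fold.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))      # stand-in for image feature vectors
y = np.array([0] * 30 + [1] * 70)  # imbalanced classes, as in the paper

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    # Hold out 20% of the training portion as a validation set,
    # preserving the class ratio, to monitor overfitting; the model
    # would be trained on (X_tr, y_tr), early-stopped on (X_val, y_val),
    # and scored on the untouched test fold.
    X_tr, X_val, y_tr, y_val = train_test_split(
        X[train_idx], y[train_idx], test_size=0.2,
        stratify=y[train_idx], random_state=0)
```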
FBSED based automatic diagnosis of COVID-19 using X-ray and CT images.
This work introduces the Fourier-Bessel series expansion-based decomposition (FBSED) method, an implementation of the wavelet packet decomposition approach in the Fourier-Bessel series expansion domain. The proposed method has been used for the diagnosis of pneumonia caused by the 2019 novel coronavirus disease (COVID-19) using chest X-ray images (CXIs) and chest computed tomography images (CCTIs). The FBSED method decomposes CXI and CCTI into sub-band images (SBIs). The SBIs are then used to train various pre-trained convolutional neural network (CNN) models separately using a transfer learning approach; the combination of an SBI and a CNN is termed one channel. Deep features from each channel are fused to obtain a feature vector. Different classifiers are used to distinguish pneumonia caused by COVID-19 from other viral and bacterial pneumonia and healthy subjects using the extracted feature vector. Different combinations of channels have also been analyzed to make the process computationally efficient. For the CXI and CCTI databases, the best performance was obtained with only one and four channels, respectively. The proposed model was evaluated using 5-fold and 10-fold cross-validation. The average accuracy for the CXI database was 100% for both 5-fold and 10-fold cross-validation, and for the CCTI database it was 97.6% for 5-fold cross-validation. Therefore, the proposed method may be used by radiologists to rapidly diagnose patients with COVID-19.
Computers in biology and medicine
"2021-05-10T00:00:00"
[ "Pradeep Kumar Chaudhary", "Ram Bilas Pachori" ]
10.1016/j.compbiomed.2021.104454 10.20944/preprints202003.0300.v1 10.1109/34.142909
ai-corona: Radiologist-assistant deep learning framework for COVID-19 diagnosis in chest CT scans.
The development of medical assisting tools based on artificial intelligence advances is essential in the global fight against the COVID-19 outbreak and for the future of medical systems. In this study, we introduce ai-corona, a radiologist-assistant deep learning framework for COVID-19 infection diagnosis using chest CT scans. Our framework incorporates an EfficientNetB3-based feature extractor. We employed three datasets: the CC-CCII set, the Masih Daneshvari Hospital (MDH) cohort, and the MosMedData cohort. Overall, these datasets comprise 7184 scans from 5693 subjects and include the COVID-19, non-COVID abnormal (NCA), common pneumonia (CP), non-pneumonia, and normal classes. We evaluate ai-corona on test sets from the CC-CCII set, the MDH cohort, and the entirety of the MosMedData cohort, on which it attained AUC scores of 0.997, 0.989, and 0.954, respectively. Our results indicate that ai-corona outperforms all the alternative models. Lastly, our framework's diagnostic capabilities were evaluated as an assistant to several experts. Accordingly, we observed an increase in both the speed and accuracy of expert diagnosis when incorporating ai-corona's assistance.
PloS one
"2021-05-08T00:00:00"
[ "Mehdi Yousefzadeh", "Parsa Esfahanian", "Seyed Mohammad Sadegh Movahed", "Saeid Gorgin", "Dara Rahmati", "Atefeh Abedini", "Seyed Alireza Nadji", "Sara Haseli", "Mehrdad Bakhshayesh Karam", "Arda Kiani", "Meisam Hoseinyazdi", "Jafar Roshandel", "Reza Lashgari" ]
10.1371/journal.pone.0250952 10.1016/S0140-6736(20)30183-5 10.1016/S0140-6736(20)30211-7 10.1038/s41418-020-00720-9 10.1001/jama.2020.3786 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1016/S2213-2600(20)30453-7 10.1016/S1473-3099(20)30086-4 10.1007/s00330-020-06865-y 10.1148/radiol.2020200463 10.1016/S2589-7500(19)30123-2 10.1038/s41591-019-0447-x 10.1038/s41591-018-0177-5 10.1016/j.jormas.2019.06.002 10.1109/ACCESS.2020.3010287 10.1371/journal.pmed.1002686 10.1371/journal.pmed.1002699 10.1016/j.cell.2020.04.045 10.1038/s41467-020-18685-1 10.1118/1.3528204 10.2807/1560-7917.ES.2020.25.3.2000045 10.1109/ACCESS.2018.2877890 10.1016/j.ebiom.2020.102903
Addressing Biodisaster X Threats With Artificial Intelligence and 6G Technologies: Literature Review and Critical Insights.
With advances in science and technology, biotechnology is becoming more accessible to people of all demographics. These advances inevitably hold the promise to improve personal and population well-being and welfare substantially. It is paradoxical that while greater access to biotechnology on a population level has many advantages, it may also increase the likelihood and frequency of biodisasters due to accidental or malicious use. Similar to "Disease X" (describing unknown naturally emerging pathogenic diseases with pandemic potential), we term this unknown risk from biotechnologies "Biodisaster X." To date, no studies have examined the potential role of information technologies in preventing and mitigating Biodisaster X. This study aimed to explore (1) what Biodisaster X might entail and (2) solutions that use artificial intelligence (AI) and emerging 6G technologies to help monitor and manage Biodisaster X threats. A review of the literature on applying AI and 6G technologies for monitoring and managing biodisasters was conducted on PubMed, using articles published from database inception through to November 16, 2020. Our findings show that Biodisaster X has the potential to upend lives and livelihoods and destroy economies, essentially posing a looming risk for civilizations worldwide. To shed light on Biodisaster X threats, we detailed effective AI- and 6G-enabled strategies, ranging from natural language processing to deep learning-based image analysis, to address issues ranging from early Biodisaster X detection (eg, identification of suspicious behaviors) and remote design and development of pharmaceuticals (eg, treatment development) to public health interventions (eg, reactive shelter-at-home mandate enforcement) and disaster recovery (eg, sentiment analysis of social media posts to shed light on the public's feelings and readiness for recovery building). Biodisaster X is a looming but avoidable catastrophe.
Considering the potential human and economic consequences Biodisaster X could cause, actions that can effectively monitor and manage Biodisaster X threats must be taken promptly and proactively. Rather than solely depending on overstretched professional attention of health experts and government officials, it is perhaps more cost-effective and practical to deploy technology-based solutions to prevent and control Biodisaster X threats. This study discusses what Biodisaster X could entail and emphasizes the importance of monitoring and managing Biodisaster X threats by AI techniques and 6G technologies. Future studies could explore how the convergence of AI and 6G systems may further advance the preparedness for high-impact, less likely events beyond Biodisaster X.
Journal of medical Internet research
"2021-05-08T00:00:00"
[ "Zhaohui Su", "Dean McDonnell", "Barry L Bentley", "Jiguang He", "Feng Shi", "Ali Cheshmehzangi", "Junaid Ahmad", "Peng Jia" ]
10.2196/26109 10.1016/S1473-3099(13)70323-2 10.2307/600071 10.1007/s13280-016-0809-2 10.1111/j.1523-1739.2006.00524.x 10.1111/j.1539-6924.2007.00960.x 10.1017/ice.2021.26 10.2196/26111 10.2196/26111 10.1016/j.bbih.2020.100159 10.1186/s12992-020-00654-4 10.1186/s12992-020-00654-4 10.1016/j.bbih.2021.100204 10.1016/j.pdisas.2020.100091 10.1016/j.pdisas.2020.100091 10.1017/err.2020.34 10.1111/1468-0009.12463 10.1016/j.aucc.2020.07.006 10.1038/d41586-020-01079-0 10.1016/S2214-109X(20)30120-0 10.1029/2020GH000303 10.1016/S0140-6736(19)30803-7 10.1071/MA20028 10.1016/S1473-3099(20)30123-7 10.1056/NEJMp058068 10.1016/S0140-6736(12)61684-5 10.1128/CMR.00033-15 10.1016/S1473-3099(18)30298-6 10.1126/science.283.5406.1279 10.1017/S0963180113000753 10.1016/s0140-6736(02)11797-1 10.1136/bmj.1.4490.148 10.1056/NEJM199005173222006 10.1016/j.respol.2007.10.003 10.1016/j.respol.2007.10.003 10.1111/1469-0691.12699 10.1177/000271622311000112 10.1177/000271622311000112 10.2307/1914185 10.1016/j.avb.2020.101483 10.1016/j.avb.2020.101483 10.1016/j.jen.2014.03.001 10.1001/jama.288.5.622 10.1016/s0733-8627(02)00004-4 10.1093/phr/116.S2.9 10.1080/08998280.2004.11928002 10.1002/0471686786.ebd0138 10.1002/0471686786.ebd0138 10.1001/jama.2020.4169 10.1371/journal.pmed.1003144 10.1371/journal.pmed.1003144 10.1016/S2468-2667(20)30101-8 10.1016/S2214-109X(20)30234-5 10.1016/j.socscimed.2008.06.020 10.1056/NEJMra1208802 10.1136/bmjgh-2020-003746 10.18006/2021.9(2).108.116 10.1093/qjmed/hcaa343 10.1016/S1473-3099(18)30359-1 10.1086/590567 10.1038/nrmicro1027 10.7150/ijbs.45472 10.1038/srep14830 10.1038/srep14830 10.1111/1556-4029.12312 10.1111/1556-4029.12312 10.1016/j.prevetmed.2014.11.004 10.1002/cpmc.23 10.1038/nature22975 10.1371/journal.pone.0188453 10.1371/journal.pone.0188453 10.1089/hs.2017.0061 10.1056/NEJMsb2021088 10.1016/j.bbih.2020.100144 10.1016/S2542-5196(18)30245-6 10.1016/j.tim.2021.03.003 10.1007/s11747-019-00696-0 10.2196/17234 10.1016/S2589-7500(20)30079-0 
10.1016/j.telpol.2020.101976 10.1038/s41591-020-1123-x 10.1080/14616688.2020.1762118 10.1002/aisy.202000071 10.1002/aisy.202000071 10.1093/jtm/taaa080 10.1007/s10916-020-01617-3 10.3389/frai.2020.00065 10.3389/frai.2020.00065 10.1109/RBME.2020.2987975 10.2196/25314 10.1038/s41591-020-0921-5 10.3390/s18051341 10.3390/smartcities2030025 10.1016/j.scs.2020.102364 10.1016/j.scs.2020.102364 10.1109/MNET.001.1900287 10.1016/j.dcan.2020.05.003 10.1016/j.dcan.2020.05.003 10.14569/IJACSA.2020.0110281 10.3390/sym12040676 10.1038/s41928-019-0355-6 10.1109/OJCOMS.2020.3010270 10.1109/MCOM.2019.1900271 10.1016/j.pt.2020.12.004 10.1136/bmjgh-2020-002925 10.1371/journal.pone.0239694 10.1371/journal.pone.0239694 10.1089/jwh.2020.8721 10.1017/S1351324916000383 10.1126/science.aaa8685 10.4137/bii.s4706?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed 10.4137/bii.s4706 10.1109/BHI.2017.7897288 10.1155/2016/8708434 10.1155/2016/8708434 10.2196/22635 10.1016/j.scitotenv.2020.139298 10.1109/ACCESS.2017.2756872 10.1007/s11356-018-1438-z 10.1016/j.scitotenv.2020.141145 10.1016/j.jes.2015.01.007 10.1136/bmj.m2599 10.1016/j.scitotenv.2020.140980 10.1093/infdis/jiaa175 10.1631/FITEE.1700808 10.1109/MCE.2016.2640698 10.1155/2020/9756518 10.1155/2020/9756518 10.4103/ijcm.IJCM_366_20 10.1016/j.measurement.2020.108288 10.1109/ACCESS.2020.2977386
Lung Infection Segmentation for COVID-19 Pneumonia Based on a Cascade Convolutional Network from CT Images.
The COVID-19 pandemic is a global public health concern that has caused significant outbreaks in all countries and regions around the world. Automated detection of lung infections and their boundaries from medical images offers great potential to augment patient treatment and healthcare strategies for tackling COVID-19 and its impacts. Detecting this disease from lung CT scan images is perhaps one of the fastest ways to diagnose patients. However, finding infected tissues and segmenting them from CT slices faces numerous challenges, including similar adjacent tissues, vague boundaries, and erratic infections. To overcome these obstacles, we propose a two-route convolutional neural network (CNN) that extracts global and local features for detecting and classifying COVID-19 infection from CT images. Each pixel of the image is classified as normal or infected tissue. To improve the classification accuracy, we used two different strategies including fuzzy
BioMed research international
"2021-05-07T00:00:00"
[ "Ramin Ranjbarzadeh", "Saeid Jafarzadeh Ghoushchi", "Malika Bendechache", "Amir Amirabadi", "Mohd Nizam Ab Rahman", "Soroush Baseri Saadi", "Amirhossein Aghamohammadi", "Mersedeh Kooshki Forooshani" ]
10.1155/2021/5544742 10.1080/07391102.2020.1788642 10.1016/j.scitotenv.2020.138705 10.1109/TMI.2020.2995965 10.1007/s00500-019-04507-0 10.1109/TMI.2020.3001810 10.3390/math8081268 10.1016/j.chest.2020.04.003 10.1109/TMI.2020.3000314 10.1016/j.compbiomed.2020.103795 10.1109/TMI.2020.2996645 10.1016/j.measurement.2019.107086 10.1109/TMI.2020.2995508 10.1016/j.media.2020.101794 10.1371/journal.pone.0137016 10.1109/TIFS.2019.2904844 10.3390/sym12020310 10.1080/23080477.2020.1799135 10.1016/j.measurement.2019.107230 10.1007/s11042-020-08699-8 10.1016/j.measurement.2020.107989 10.1016/j.chemolab.2020.104054 10.1109/ACCESS.2018.2888856 10.1016/j.eswa.2020.114549 10.1016/j.measurement.2017.05.009 10.1109/TIP.2016.2522378 10.1016/j.patcog.2015.08.025 10.1016/j.compeleceng.2017.04.019 10.4103/jmss.JMSS_62_18 10.1016/j.imu.2020.100412 10.1504/IJLSM.2016.076473 10.1016/j.matpr.2020.06.245 10.1016/j.chaos.2020.110170 10.1016/j.asoc.2020.106580 10.1016/j.mehy.2020.109761 10.1109/TIM.2017.2775345 10.1162/tacl_a_00097 10.1109/TIM.2018.2871353 10.1016/j.neuroimage.2017.04.039 10.1002/jnm.2682 10.1016/j.eswa.2019.01.055 10.1016/j.measurement.2019.01.060 10.1007/s11135-019-00882-w 10.1016/j.patcog.2015.04.019 10.1016/j.eswa.2014.09.020 10.1016/j.media.2016.05.004 10.1109/TIP.2016.2547588 10.1109/ACCESS.2020.3005510 10.1016/j.compbiomed.2017.04.012 10.1016/j.eswa.2020.113909 10.1016/j.ejmp.2016.10.002 10.1016/j.ijleo.2013.10.049 10.1002/cpe.5293 10.1016/j.eng.2020.04.010
A novel augmented deep transfer learning for classification of COVID-19 and other thoracic diseases from X-rays.
Deep learning has provided numerous breakthroughs in natural imaging tasks. However, its successful application to medical images is severely handicapped by the limited amount of annotated training data. Transfer learning is commonly adopted for medical imaging tasks; however, a large covariate shift between the source domain of natural images and the target domain of medical images results in poor transfer learning, and the scarcity of annotated data for medical imaging tasks causes further problems. To address these problems, we develop an augmented ensemble transfer learning technique that leads to significant performance gains over conventional transfer learning. Our technique uses an ensemble of deep learning models, where the architecture of each network is modified with extra layers to account for the dimensionality change between the images of the source and target data domains. Moreover, the model is hierarchically tuned to the target domain with augmented training data. Along with the network ensemble, we also utilize an ensemble of dictionaries based on features extracted from the augmented models. The dictionary ensemble provides an additional performance boost to our method. We first establish the effectiveness of our technique on the challenging ChestX-ray14 radiography dataset. Our experimental results show more than a 50% reduction in the error rate with our method compared to the baseline transfer learning technique. We then apply our technique to a recent COVID-19 dataset for binary and multi-class classification tasks. Our technique achieves 99.49% accuracy for binary classification and 99.24% for multi-class classification.
Neural computing & applications
"2021-05-06T00:00:00"
[ "Fouzia Altaf", "Syed M S Islam", "Naeem Khalid Janjua" ]
10.1007/s00521-021-06044-0 10.1016/j.patcog.2016.12.017 10.1109/ACCESS.2019.2929365 10.1016/j.cmpb.2019.105162 10.3390/app10020559 10.1109/JBHI.2016.2636929 10.1007/s11263-009-0275-4 10.3390/app9194130 10.1016/j.cell.2018.02.010 10.1148/radiol.2017162326 10.1038/nature14539 10.1016/j.media.2017.07.005 10.1016/j.compbiomed.2017.08.001 10.1016/j.compbiomed.2018.05.018 10.1016/j.compbiomed.2020.103792 10.1016/j.chaos.2020.109944 10.1007/s11263-015-0816-y 10.1109/TMI.2016.2528162 10.1016/j.cogsys.2018.12.007 10.1016/j.compbiomed.2020.103805 10.1016/j.chaos.2020.110122 10.1109/MSP.2010.939537 10.1016/j.mehy.2020.109761 10.1109/TPAMI.2015.2502579
Classification of COVID-19 Chest CT Images Based on Ensemble Deep Learning.
Novel coronavirus pneumonia (NCP) has become a global pandemic disease, and computed tomography (CT)-based image analysis and recognition is one of the important tools for clinical diagnosis. To assist medical personnel in achieving an efficient and fast diagnosis of patients with novel coronavirus pneumonia, this paper proposes an assisted-diagnosis algorithm based on ensemble deep learning. The method combines stacked-generalization ensemble learning with the VGG16 deep network to form a cascade classifier; the information constituting the cascade classifier comes from multiple subsets of the training set, each of which is used to collect deviation information about the generalization behavior of the dataset, and this deviation information feeds the cascade classifier. The algorithm was experimentally validated for classifying patients with novel coronavirus pneumonia, patients with common pneumonia (CP), and normal controls, achieving a prediction accuracy of 93.57%, sensitivity of 94.21%, specificity of 93.93%, precision of 89.40%, and F1-score of 91.74% over the three categories. The results show that the proposed method has good classification performance and can significantly improve the performance of deep neural networks on multicategory prediction tasks.
Journal of healthcare engineering
"2021-05-04T00:00:00"
[ "Xiaoshuo Li", "Wenjun Tan", "Pan Liu", "Qinghua Zhou", "Jinzhu Yang" ]
10.1155/2021/5528441 10.1016/j.cell.2020.04.045 10.1002/jmv.25763 10.1016/j.jinf.2020.03.033 10.1007/s00330-020-06880-z 10.1148/radiol.2462070712 10.1109/tmi.2020.2995965 10.1109/tmi.2020.2996256 10.1109/access.2020.2994762 10.1007/s10096-020-03901-z 10.1109/TMI.2020.2993291 10.1109/tmi.2020.2995508 10.1088/1742-6596/1722/1/012072 10.1007/s00138-020-01128-8 10.1016/s0893-6080(05)80023-1
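A stacked-generalization cascade of the kind this abstract describes can be sketched with scikit-learn's StackingClassifier; the base learners here are generic stand-ins for the paper's VGG16-derived models, and the three-class data is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Three-class toy problem standing in for NCP / CP / normal CT features.
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           n_classes=3, random_state=0)

# cv=5: each base model is fit on cross-validation subsets of the training
# set, and the meta-learner is trained on their out-of-fold predictions --
# the core idea of stacked generalization.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5)
stack.fit(X, y)
pred = stack.predict(X)
```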
Machine learning automatically detects COVID-19 using chest CTs in a large multicenter cohort.
To investigate machine learning classifiers and interpretable models using chest CT for detection of COVID-19 and differentiation from other pneumonias, interstitial lung disease (ILD), and normal CTs. Our retrospective multi-institutional study obtained 2446 chest CTs from 16 institutions (including 1161 COVID-19 patients). Training/validation/testing cohorts included 1011/50/100 COVID-19, 388/16/33 ILD, 189/16/33 other pneumonias, and 559/17/34 normal (no pathologies) CTs. A metric-based approach for the classification of COVID-19 used interpretable features, relying on logistic regression and random forests. A deep learning-based classifier differentiated COVID-19 via 3D features extracted directly from CT attenuation and the probability distribution of airspace opacities. The most discriminative features of COVID-19 are the percentage of airspace opacity and peripheral and basal predominant opacities, concordant with the typical characterization of COVID-19 in the literature. Unsupervised hierarchical clustering compares feature distributions across the COVID-19 and control cohorts. The metrics-based classifier achieved AUC = 0.83, sensitivity = 0.74, and specificity = 0.79, versus 0.93, 0.90, and 0.83, respectively, for the DL-based classifier. Most of the ambiguity comes from non-COVID-19 pneumonia with manifestations that overlap with COVID-19, as well as mild COVID-19 cases. Non-COVID-19 classification performance is 91% for ILD, 64% for other pneumonias, and 94% for no pathologies, which demonstrates the robustness of our method against different compositions of control groups. Our new method accurately discriminates COVID-19 from other types of pneumonia, ILD, and CTs with no pathologies, using quantitative imaging features derived from chest CT, while balancing interpretability of results and classification performance, and may therefore be useful to facilitate diagnosis of COVID-19.
• Unsupervised clustering reveals the key tomographic features including percent airspace opacity and peripheral and basal opacities most typical of COVID-19 relative to control groups. • COVID-19-positive CTs were compared with COVID-19-negative chest CTs (including a balanced distribution of non-COVID-19 pneumonia, ILD, and no pathologies). Classification accuracies for COVID-19, pneumonia, ILD, and CT scans with no pathologies are respectively 90%, 64%, 91%, and 94%. • Our deep learning (DL)-based classification method demonstrates an AUC of 0.93 (sensitivity 90%, specificity 83%). Machine learning methods applied to quantitative chest CT metrics can therefore improve diagnostic accuracy in suspected COVID-19, particularly in resource-constrained environments.
European radiology
"2021-05-03T00:00:00"
[ "Eduardo J Mortani Barbosa", "Bogdan Georgescu", "Shikha Chaganti", "Gorka Bastarrika Aleman", "Jordi Broncano Cabrero", "Guillaume Chabin", "Thomas Flohr", "Philippe Grenier", "Sasa Grbic", "Nakul Gupta", "François Mellot", "Savvas Nicolaou", "Thomas Re", "Pina Sanelli", "Alexander W Sauter", "Youngjin Yoo", "Valentin Ziebandt", "Dorin Comaniciu" ]
10.1007/s00330-021-07937-3
CT-Based COVID-19 triage: Deep multitask learning improves joint identification and severity quantification.
The current COVID-19 pandemic overloads healthcare systems, including radiology departments. Though several deep learning approaches have been developed to assist in CT analysis, none has considered study triage directly as a computer science problem. We describe two basic setups: identification of COVID-19, to prioritize studies of potentially infected patients and isolate them as early as possible; and severity quantification, to highlight patients with severe COVID-19 and direct them to a hospital or provide emergency medical care. We formalize these tasks as binary classification and estimation of the affected lung percentage. Though similar problems have been well studied separately, we show that existing methods provide reasonable quality only for one of these setups. We employ a multitask approach to consolidate both triage tasks and propose a convolutional neural network that leverages all available labels within a single model. In contrast with related multitask approaches, we show the benefit of applying the classification layers to the most spatially detailed feature map at the upper part of U-Net instead of the less detailed latent representation at the bottom. We train our model on approximately 1500 publicly available CT studies and test it on a holdout dataset of 123 chest CT studies of patients drawn from the same healthcare system: 32 COVID-19, 30 bacterial pneumonia, 30 cancerous-nodule, and 31 healthy-control cases. The proposed multitask model outperforms the other approaches and achieves ROC AUC scores of 0.87±0.01 vs. bacterial pneumonia, 0.93±0.01 vs. cancerous nodules, and 0.97±0.01 vs. healthy controls in identification of COVID-19, and 0.97±0.01 Spearman correlation in severity quantification. We have released our code and shared annotated lesion masks for 32 CT images of patients with COVID-19 from the test dataset.
Medical image analysis
"2021-05-02T00:00:00"
[ "Mikhail Goncharov", "Maxim Pisov", "Alexey Shevtsov", "Boris Shirokikh", "Anvar Kurmukov", "Ivan Blokhin", "Valeria Chernina", "Alexander Solovev", "Victor Gombolevskiy", "Sergey Morozov", "Mikhail Belyaev" ]
10.1016/j.media.2021.102054
Combining Initial Radiographs and Clinical Variables Improves Deep Learning Prognostication in Patients with COVID-19 from the Emergency Department.
To train a deep learning classification algorithm to predict chest radiograph severity scores and clinical outcomes in patients with coronavirus disease 2019 (COVID-19). In this retrospective cohort study, patients aged 21-50 years who presented to the emergency department (ED) of a multicenter urban health system from March 10 to 26, 2020, with COVID-19 confirmed at real-time reverse-transcription polymerase chain reaction screening were identified. The initial chest radiographs, clinical variables, and outcomes, including admission, intubation, and survival, were collected within 30 days. The model trained on the chest radiograph severity score produced the following areas under the receiver operating characteristic curves (AUCs): 0.80 (95% CI: 0.73, 0.88) for the chest radiograph severity score, 0.76 (95% CI: 0.68, 0.84) for admission, 0.66 (95% CI: 0.56, 0.75) for intubation, and 0.59 (95% CI: 0.49, 0.69) for death. The model trained on clinical variables produced an AUC of 0.64 (95% CI: 0.55, 0.73) for intubation and an AUC of 0.59 (95% CI: 0.50, 0.68) for death. Combining chest radiography and clinical variables increased the AUCs for intubation and death to 0.88 (95% CI: 0.79, 0.96) and 0.82 (95% CI: 0.72, 0.91), respectively. The combination of imaging and clinical information improves outcome predictions.
Radiology. Artificial intelligence
"2021-05-01T00:00:00"
[ "Young Joon Fred Kwon", "Danielle Toussie", "Mark Finkelstein", "Mario A Cedillo", "Samuel Z Maron", "Sayan Manna", "Nicholas Voutsinas", "Corey Eber", "Adam Jacobi", "Adam Bernheim", "Yogesh Sean Gupta", "Michael S Chung", "Zahi A Fayad", "Benjamin S Glicksberg", "Eric K Oermann", "Anthony B Costa" ]
10.1148/ryai.2020200098
U-survival for prognostic prediction of disease progression and mortality of patients with COVID-19.
The rapid increase of patients with coronavirus disease 2019 (COVID-19) has introduced major challenges to healthcare services worldwide. Therefore, fast and accurate clinical assessment of COVID-19 progression and mortality is vital for the management of COVID-19 patients. We developed an automated image-based survival prediction model, called U-survival, which combines deep learning of chest CT images with the established survival analysis methodology of an elastic-net Cox survival model. In an evaluation of 383 COVID-19 positive patients from two hospitals, the prognostic bootstrap prediction performance of U-survival was significantly higher (P < 0.0001) than those of existing laboratory and image-based reference predictors both for COVID-19 progression (maximum concordance index: 91.6% [95% confidence interval 91.5, 91.7]) and for mortality (88.7% [88.6, 88.9]), and the separation between the Kaplan-Meier survival curves of patients stratified into low- and high-risk groups was largest for U-survival (P < 3 × 10
Scientific reports
"2021-05-01T00:00:00"
[ "Janne J Näppi", "Tomoki Uemura", "Chinatsu Watari", "Toru Hironaka", "Tohru Kamiya", "Hiroyuki Yoshida" ]
10.1038/s41598-021-88591-z 10.1148/radiol.2020203173 10.1038/s41591-020-0931-3 10.1038/s41467-020-17971-2 10.1148/radiol.2020201365 10.1007/s00330-020-07033-y 10.1186/s12931-020-01411-2 10.1148/radiol.2020201433 10.1148/radiol.2020200463 10.7150/thno.45985 10.7150/thno.46428 10.1148/ryct.2020200075 10.21037/atm-20-3554 10.7150/thno.46465 10.1016/j.cell.2020.04.045 10.1007/s00330-020-07013-2 10.1186/s41747-020-00167-0 10.1148/ryct.2020200322 10.3389/fbioe.2020.00898 10.1148/ryai.2020200053 10.1038/s42256-020-0180-7 10.1097/RLI.0000000000000672 10.1093/cid/ciaa414 10.1002/scj.20178 10.1007/s00330-020-06817-6 10.1002/(SICI)1097-0258(19960229)15:4<361::AID-SIM168>3.0.CO;2-4 10.1007/s12350-014-9908-2 10.1186/s12859-020-3431-z 10.1007/s00330-020-07034-x 10.7326/M14-0698 10.1093/aje/kwu140 10.1002/(SICI)1097-0258(19960229)15:4<361::AID-SIM168>3.0.CO;2-4 10.1007/978-3-319-24574-4_28 10.1109/RBME.2020.2987975 10.1109/ACCESS.2017.2788044 10.1038/nature14539 10.18637/jss.v039.i05 10.1080/01621459.1958.10501452 10.1038/s42256-019-0019-2 10.1007/s11548-019-02071-4
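The U-survival record above compares Kaplan-Meier survival curves of low- and high-risk patient groups. For readers unfamiliar with the estimator, a minimal pure-Python sketch (an illustrative implementation, not the authors' code) steps the survival probability down at each observed event time by the fraction of at-risk patients who died, leaving censored patients out of the numerator:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up time per patient; events: 1 = event observed, 0 = censored.
    Returns a list of (time, survival probability) at each distinct time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ts = [times[i] for i in order]
    es = [events[i] for i in order]
    at_risk = len(ts)
    s = 1.0
    curve = []
    i = 0
    while i < len(ts):
        t = ts[i]
        deaths = removed = 0
        while i < len(ts) and ts[i] == t:  # group tied times
            deaths += es[i]
            removed += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / at_risk    # step down only at event times
        at_risk -= removed                  # censored patients leave the risk set
        curve.append((t, s))
    return curve
```

For example, `kaplan_meier([1, 2, 3], [1, 0, 1])` yields survival 2/3 after the first death, unchanged at the censoring time, and 0 after the last death.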
COVID-CT-MD, COVID-19 computed tomography scan dataset applicable in machine learning and deep learning.
Novel Coronavirus (COVID-19) has drastically overwhelmed more than 200 countries, affecting millions and claiming almost 2 million lives, since its emergence in late 2019. This highly contagious disease can easily spread and, if not controlled in a timely fashion, can rapidly incapacitate healthcare systems. The current standard diagnosis method, the Reverse Transcription Polymerase Chain Reaction (RT-PCR), is time consuming and subject to low sensitivity. Chest Radiograph (CXR), the first imaging modality to be used, is readily available and gives immediate results. However, it has notoriously lower sensitivity than Computed Tomography (CT), which can be used efficiently to complement other diagnostic methods. This paper introduces a new COVID-19 CT scan dataset, referred to as COVID-CT-MD, consisting not only of COVID-19 cases, but also of healthy participants and participants infected by Community Acquired Pneumonia (CAP). The COVID-CT-MD dataset, which is accompanied by lobe-level, slice-level and patient-level labels, has the potential to facilitate COVID-19 research; in particular, COVID-CT-MD can assist in the development of advanced Machine Learning (ML) and Deep Neural Network (DNN) based solutions.
Scientific data
"2021-05-01T00:00:00"
[ "Parnian Afshar", "Shahin Heidarian", "Nastaran Enshaei", "Farnoosh Naderkhani", "Moezedin Javad Rafiee", "Anastasia Oikonomou", "Faranak Babaki Fard", "Kaveh Samimi", "Konstantinos N Plataniotis", "Arash Mohammadi" ]
10.1038/s41597-021-00900-3 10.1136/bmjopen-2020-042946 10.21037/qims-20-564 10.1137/S0036139901387186 10.1016/j.jmir.2010.04.001 10.4103/ijri.IJRI_34_19 10.1007/s00330-020-07033-y 10.1007/s42399-020-00341-w 10.6084/m9.figshare.12991592 10.1016/j.compbiomed.2020.103792 10.1016/j.patrec.2020.09.010 10.1109/TMI.2020.2996645 10.5281/zenodo.3757476 10.1183/13993003.01809-2020
Radiation-Induced Pneumonitis in the Era of the COVID-19 Pandemic: Artificial Intelligence for Differential Diagnosis.
(1) Aim: To test the performance of a deep learning algorithm in discriminating radiation therapy-related pneumonitis (RP) from COVID-19 pneumonia. (2) Methods: In this retrospective study, we enrolled three groups of subjects: pneumonia-free (control group), COVID-19 pneumonia and RP patients. CT images were analyzed by means of an artificial intelligence (AI) algorithm based on a novel deep convolutional neural network structure. The cut-off value of risk probability of COVID-19 was 30%; values higher than 30% were classified as COVID-19 High Risk, and values below 30% as COVID-19 Low Risk. The statistical analysis included the Mann-Whitney U test (significance threshold at
Cancers
"2021-05-01T00:00:00"
[ "Francesco Maria Giordano", "Edy Ippolito", "Carlo Cosimo Quattrocchi", "Carlo Greco", "Carlo Augusto Mallio", "Bianca Santo", "Pasquale D'Alessio", "Pierfilippo Crucitti", "Michele Fiore", "Bruno Beomonte Zobel", "Rolando Maria D'Angelillo", "Sara Ramella" ]
10.3390/cancers13081960 10.1016/S0140-6736(09)60737-6 10.1093/jnci/djr325 10.1200/JCO.2005.04.6110 10.3389/fonc.2019.00877 10.1016/j.ijrobp.2012.04.043 10.1056/NEJMoa2002032 10.1016/j.radonc.2020.04.009 10.1007/s11547-020-01178-y 10.1016/S1470-2045(20)30096-6 10.1007/s11547-020-01232-9 10.1007/s11547-020-01272-1 10.1007/s11547-020-01200-3 10.1007/s11547-020-01236-5 10.1007/s11547-020-01202-1 10.1007/s11547-020-01179-x 10.1016/j.cell.2018.02.010 10.1007/s11547-020-01195-x 10.1007/s11547-020-01197-9 10.1148/radiol.2020200905 10.1148/ryct.2020200075 10.1007/s11547-020-01291-y 10.1016/S2589-7500(20)30199-0 10.3390/cancers13040652 10.21037/qims-20-782
Blockchain-Federated-Learning and Deep Learning Models for COVID-19 Detection Using CT Imaging.
With the increase of COVID-19 cases worldwide, an effective way is required to diagnose COVID-19 patients. The primary problem in diagnosing COVID-19 patients is the shortage and reliability of testing kits: due to the quick spread of the virus, medical practitioners are facing difficulty in identifying positive cases. The second real-world problem is to share the data among hospitals globally while keeping in view the privacy concerns of the organizations. Building a collaborative model and preserving privacy are the major concerns for training a global deep learning model. This paper proposes a framework that collects a small amount of data from different sources (various hospitals) and trains a global deep learning model using blockchain-based federated learning. Blockchain technology authenticates the data, and federated learning trains the model globally while preserving the privacy of the organization. First, we propose a data normalization technique that deals with the heterogeneity of data, as the data is gathered from different hospitals having different kinds of Computed Tomography (CT) scanners. Secondly, we use Capsule Network-based segmentation and classification to detect COVID-19 patients. Thirdly, we design a method that can collaboratively train a global model using blockchain technology with federated learning while preserving privacy. Additionally, we collected real-life COVID-19 patients' data and made it open to the research community. The proposed framework can utilize up-to-date data, which improves the recognition of CT images. Finally, we conducted comprehensive experiments to validate the proposed method. Our results demonstrate better performance for detecting COVID-19 patients.
IEEE sensors journal
"2021-04-30T00:00:00"
[ "Rajesh Kumar", "Abdullah Aman Khan", "Jay Kumar", "Zakria", "Noorbakhsh Amiri Golilarz", "Simin Zhang", "Yang Ting", "Chengyu Zheng", "Wenyong Wang" ]
10.1109/JSEN.2021.3076767 10.1109/JIOT.2020.3024180
[Research progress in lung parenchyma segmentation based on computed tomography].
Lung diseases such as lung cancer and COVID-19 seriously endanger human health and life safety, so early screening and diagnosis are particularly important. Computed tomography (CT) is one of the important means of screening for lung diseases, and lung parenchyma segmentation based on CT images is the key step in such screening; high-quality lung parenchyma segmentation can effectively improve the level of early diagnosis and treatment of lung diseases. Automatic, fast and accurate segmentation of lung parenchyma based on CT images can effectively compensate for the low efficiency and strong subjectivity of manual segmentation, and has become one of the research hotspots in this field. In this paper, the research progress in lung parenchyma segmentation is reviewed based on related literature published at home and abroad in recent years. Traditional machine learning methods and deep learning methods are compared and analyzed, and progress in improving the network structures of deep learning models is emphatically introduced. Some unsolved problems in lung parenchyma segmentation are discussed and future prospects are outlined, providing a reference for researchers in related fields.
Sheng wu yi xue gong cheng xue za zhi = Journal of biomedical engineering = Shengwu yixue gongchengxue zazhi
"2021-04-30T00:00:00"
[ "Hanguang Xiao", "Zhiqiang Ran", "Jinfeng Huang", "Huijiao Ren", "Chang Liu", "Banglin Zhang", "Bolong Zhang", "Jun Dang" ]
10.7507/1001-5515.202008032 10.1002/cncr.32802 10.2174/1573405615666190206153321 10.1186/s12938-018-0619-9 10.3390/app8050832 10.1016/j.procs.2018.04.330 10.1109/ACCESS.2020.2987925 10.1007/s12149-017-1223-y 10.1109/TMI.2018.2890510 10.1016/j.bspc.2018.08.008 10.1109/TMI.2020.3000314 10.1109/ACCESS.2020.2993953 10.1016/j.neucom.2018.09.038 10.1007/s10278-018-0052-4 10.1007/s10278-020-00388-0 10.1016/j.neucom.2019.02.003 10.1109/JBHI.2018.2818620 10.1186/s41747-020-00173-2 10.1016/j.artmed.2020.101792 10.1109/ACCESS.2020.2987932 10.1007/s10278-019-00254-8 10.1016/j.bbe.2020.07.007 10.1016/j.cmpb.2020.105395 10.1080/0284186X.2018.1529421 10.1007/s10278-019-00223-1 10.1093/jrr/rrz086 10.1002/mp.13458
A Deep Learning Radiomics Model to Identify Poor Outcome in COVID-19 Patients With Underlying Health Conditions: A Multicenter Study.
Coronavirus disease 2019 (COVID-19) has caused considerable morbidity and mortality, especially in patients with underlying health conditions. A precise prognostic tool to identify poor outcomes among such cases is desperately needed. A total of 400 COVID-19 patients with underlying health conditions were retrospectively recruited from 4 centers, including 54 dead cases (labeled as poor outcomes) and 346 patients discharged or hospitalized for at least 7 days since the initial CT scan. Patients were allocated to a training set (n = 271), a test set (n = 68), and an external test set (n = 61). We proposed an initial CT-derived hybrid model by combining a 3D-ResNet10 based deep learning model and a quantitative 3D radiomics model to predict the probability of COVID-19 patients reaching a poor outcome. The model performance was assessed by area under the receiver operating characteristic curve (AUC), survival analysis, and subgroup analysis. The hybrid model achieved AUCs of 0.876 (95% confidence interval: 0.752-0.999) and 0.864 (0.766-0.962) in the test and external test sets, outperforming other models. The survival analysis verified the hybrid model as a significant risk factor for mortality (hazard ratio, 2.049 [1.462-2.871], P < 0.001) that could well stratify patients into high and low risk of reaching poor outcomes (P < 0.001). The hybrid model that combined deep learning and radiomics could accurately identify poor outcomes in COVID-19 patients with underlying health conditions from initial CT scans. Its strong risk stratification ability could help alert clinicians to the risk of death and allow for timely surveillance plans.
IEEE journal of biomedical and health informatics
"2021-04-28T00:00:00"
[ "Siwen Wang", "Di Dong", "Liang Li", "Hailin Li", "Yan Bai", "Yahua Hu", "Yuanyi Huang", "Xiangrong Yu", "Sibin Liu", "Xiaoming Qiu", "Ligong Lu", "Meiyun Wang", "Yunfei Zha", "Jie Tian" ]
10.1109/JBHI.2021.3076086
DenseCapsNet: Detection of COVID-19 from X-ray images using a capsule neural network.
At present, the global situation of the novel coronavirus pneumonia pandemic remains very difficult. Owing to the recency of the outbreak, chest X-ray (CXR) images of novel coronavirus pneumonia that can be used for deep learning analysis are very rare. To solve this problem, we propose a deep learning framework that integrates a convolutional neural network and a capsule network. DenseCapsNet, a new deep learning framework, is formed by the fusion of a dense convolutional network (DenseNet) and the capsule neural network (CapsNet), leveraging their respective advantages and reducing the dependence of convolutional neural networks on a large amount of data. Using 750 CXR images of lungs of healthy patients as well as those of patients with other pneumonia and novel coronavirus pneumonia, the method can obtain an accuracy of 90.7% and an F1 score of 90.9%, and the sensitivity for detecting COVID-19 can reach 96%. These results show that the deep fusion neural network DenseCapsNet has good performance in novel coronavirus pneumonia CXR radiography detection.
Computers in biology and medicine
"2021-04-24T00:00:00"
[ "Hao Quan", "Xiaosong Xu", "Tingting Zheng", "Zhi Li", "Mingfang Zhao", "Xiaoyu Cui" ]
10.1016/j.compbiomed.2021.104399 10.1007/s13246-020-00865-4 10.1109/ACCESS.2020.3010287 10.1016/j.compbiomed.2020.103869
RANDGAN: Randomized generative adversarial network for detection of COVID-19 in chest X-ray.
COVID-19 spread across the globe at an immense rate and has left healthcare systems incapacitated to diagnose and test patients at the needed rate. Studies have shown promising results for detection of COVID-19 versus viral and bacterial pneumonia in chest X-rays. Automation of COVID-19 testing using medical images can speed up the testing process of patients where healthcare systems lack sufficient numbers of reverse-transcription polymerase chain reaction tests. Supervised deep learning models such as convolutional neural networks need enough labeled data for all classes to correctly learn the task of detection. Gathering labeled data is a cumbersome task and requires time and resources which could further strain healthcare systems and radiologists at the early stages of a pandemic such as COVID-19. In this study, we propose a randomized generative adversarial network (RANDGAN) that detects images of an unknown class (COVID-19) from known and labeled classes (Normal and Viral Pneumonia) without the need for labels and training data from the unknown class of images (COVID-19). We used the largest publicly available COVID-19 chest X-ray dataset, COVIDx, which comprises Normal, Pneumonia, and COVID-19 images from multiple public databases. In this work, we use transfer learning to segment the lungs in the COVIDx dataset. Next, we show why segmentation of the region of interest (lungs) is vital to correctly learn the task of classification, specifically in datasets that contain images from different sources, as is the case for the COVIDx dataset. Finally, we show improved results in detection of COVID-19 cases using our generative model (RANDGAN) compared to conventional generative adversarial networks for anomaly detection in medical images, improving the area under the ROC curve from 0.71 to 0.77.
Scientific reports
"2021-04-23T00:00:00"
[ "Saman Motamed", "Patrik Rogalla", "Farzad Khalvati" ]
10.1038/s41598-021-87994-2 10.1183/13993003.00524-2020 10.1016/j.ejim.2012.04.016 10.1148/ryai.2020200053 10.7326/M20-1495 10.2200/S00196ED1V01Y200906AIM006 10.1148/ryct.2020200034
COVID-19 Automatic Diagnosis With Radiographic Imaging: Explainable Attention Transfer Deep Neural Networks.
Researchers seek help from deep learning methods to alleviate the enormous burden of reading radiological images by clinicians during the COVID-19 pandemic. However, clinicians are often reluctant to trust deep models due to their black-box characteristics. To automatically differentiate COVID-19 and community-acquired pneumonia from healthy lungs in radiographic imaging, we propose an explainable attention-transfer classification model based on the knowledge distillation network structure. The attention transfer direction always goes from the teacher network to the student network. Firstly, the teacher network extracts global features and concentrates on the infection regions to generate attention maps. It uses a deformable attention module to strengthen the response of infection regions and to suppress noise in irrelevant regions with an expanded reception field. Secondly, an image fusion module combines attention knowledge transferred from teacher network to student network with the essential information in original input. While the teacher network focuses on global features, the student branch focuses on irregularly shaped lesion regions to learn discriminative features. Lastly, we conduct extensive experiments on public chest X-ray and CT datasets to demonstrate the explainability of the proposed architecture in diagnosing COVID-19.
IEEE journal of biomedical and health informatics
"2021-04-22T00:00:00"
[ "Wenqi Shi", "Li Tong", "Yuanda Zhu", "May D Wang" ]
10.1109/JBHI.2021.3074893
Artificial intelligence (AI) for medical imaging to combat coronavirus disease (COVID-19): a detailed review with direction for future research.
Since early 2020, the whole world has been facing the deadly and highly contagious coronavirus disease (COVID-19), and the World Health Organization declared the pandemic on 11 March 2020. Over 23 million positive cases of COVID-19 had been reported by late August 2020. Medical images such as chest X-rays and Computed Tomography scans are becoming one of the main leading clinical diagnosis tools in fighting against COVID-19, underpinned by Artificial Intelligence based techniques, resulting in rapid decision-making in saving lives. This article provides an extensive review of AI-based methods to assist medical practitioners with comprehensive knowledge of the efficient AI-based methods for efficient COVID-19 diagnosis. Nearly all the methods reported so far, along with their pros and cons as well as recommendations for improvements, are discussed, including image acquisition, segmentation, classification, and follow-up diagnosis phases developed between 2019 and 2020. AI and machine learning technologies have boosted the accuracy of COVID-19 diagnosis, and most of the widely used deep learning methods have been implemented and worked well with a small amount of data for COVID-19 diagnosis. This review presents a detailed methodological analysis for the evaluation of AI-based methods used in the process of detecting COVID-19 from medical images. However, due to the quick outbreak of COVID-19, there are not many ground-truth datasets available to the community. It is necessary to combine clinical experts' observations and information from images to have a reliable and efficient COVID-19 diagnosis. This paper suggests that future research may focus on multi-modality based models as well as on how to select the best model architecture, where AI can introduce more intelligence to medical systems to capture the characteristics of diseases by learning from multi-modality data, so as to obtain reliable results for COVID-19 diagnosis and timely treatment.
Artificial intelligence review
"2021-04-21T00:00:00"
[ "Toufique A Soomro", "Lihong Zheng", "Ahmed J Afifi", "Ahmed Ali", "Ming Yin", "Junbin Gao" ]
10.1007/s10462-021-09985-z 10.1007/s13246-020-00865-4 10.1007/s00330-018-5745-z 10.1007/s11548-018-1895-3 10.1038/s41598-019-56847-4 10.1016/j.eswa.2019.113114 10.1007/s11831-019-09344-w 10.1016/j.measurement.2019.05.076 10.1007/s11042-019-7327-8 10.1145/3411760 10.2214/AJR.06.0370 10.1007/s10489-019-01511-7 10.1145/2816795.2818013 10.1016/j.jpha.2020.03.004 10.1148/ryct.2020200044 10.1038/s41598-019-56847-4 10.1016/j.eng.2020.04.010
Convolutional Sparse Support Estimator-Based COVID-19 Recognition From X-Ray Images.
Coronavirus disease (COVID-19) has been the main agenda of the whole world ever since it came into sight. X-ray imaging is a common and easily accessible tool that has great potential for COVID-19 diagnosis and prognosis. Deep learning techniques can generally provide state-of-the-art performance in many classification tasks when trained properly over large data sets. However, data scarcity can be a crucial obstacle when using them for COVID-19 detection. Alternative approaches such as representation-based classification [collaborative or sparse representation (SR)] might provide satisfactory performance with limited-size data sets, but they generally fall short in performance or speed compared to neural network (NN)-based methods. To address this deficiency, the convolution support estimation network (CSEN) has recently been proposed as a bridge between representation-based and NN approaches by providing a noniterative real-time mapping from a query sample to the ideal SR coefficient support, which is critical information for the class decision in representation-based techniques. The main premises of this study can be summarized as follows: 1) A benchmark X-ray data set, namely QaTa-Cov19, containing over 6200 X-ray images is created. The data set covers 462 X-ray images from COVID-19 patients along with three other classes: bacterial pneumonia, viral pneumonia, and normal. 2) The proposed CSEN-based classification scheme, equipped with feature extraction from the state-of-the-art deep NN solution for X-ray images, CheXNet, achieves over 98% sensitivity and over 95% specificity for COVID-19 recognition directly from raw X-ray images when the average performance of 5-fold cross validation over the QaTa-Cov19 data set is calculated. 3) Having such an elegant COVID-19 assistive diagnosis performance, this study further provides evidence that COVID-19 induces a unique pattern in X-rays that can be discriminated with high accuracy.
IEEE transactions on neural networks and learning systems
"2021-04-20T00:00:00"
[ "Mehmet Yamac", "Mete Ahishali", "Aysen Degerli", "Serkan Kiranyaz", "Muhammad E H Chowdhury", "Moncef Gabbouj" ]
10.1109/TNNLS.2021.3070467
CovidXrayNet: Optimizing data augmentation and CNN hyperparameters for improved COVID-19 detection from CXR.
To mitigate the spread of the current coronavirus disease 2019 (COVID-19) pandemic, it is crucial to have an effective screening of infected patients so that they can be isolated and treated. Chest X-Ray (CXR) radiological imaging coupled with Artificial Intelligence (AI) applications, in particular Convolutional Neural Networks (CNNs), can speed up the COVID-19 diagnostic process. In this paper, we optimize the data augmentation and the CNN hyperparameters for detecting COVID-19 from CXRs in terms of validation accuracy. This optimization increases the accuracy of popular CNN architectures such as the Visual Geometry Group network (VGG-19) and the Residual Neural Network (ResNet-50) by 11.93% and 4.97%, respectively. We then propose the CovidXrayNet model, which is based on EfficientNet-B0 and our optimization results. We evaluated CovidXrayNet on two datasets, including our generated balanced COVIDcxr dataset (960 CXRs) and the benchmark COVIDx dataset (15,496 CXRs). With only 30 epochs of training, CovidXrayNet achieves state-of-the-art accuracy of 95.82% on the COVIDx dataset in the three-class classification task (COVID-19, normal or pneumonia). The CovidXrayNet model, the COVIDcxr dataset, and several optimization experiments are publicly available at https://github.com/MaramMonshi/CovidXrayNet.
Computers in biology and medicine
"2021-04-19T00:00:00"
[ "Maram Mahmoud A Monshi", "Josiah Poon", "Vera Chung", "Fahad Mahmoud Monshi" ]
10.1016/j.compbiomed.2021.104375
BS-Net: Learning COVID-19 pneumonia severity on a large chest X-ray dataset.
In this work we design an end-to-end deep learning architecture for predicting, on chest X-ray images (CXR), a multi-regional score conveying the degree of lung compromise in COVID-19 patients. This semi-quantitative scoring system, namely the Brixia score, is applied in serial monitoring of such patients and has shown significant prognostic value in one of the hospitals that experienced one of the highest pandemic peaks in Italy. To solve such a challenging visual task, we adopt a weakly supervised learning strategy structured to handle different tasks (segmentation, spatial alignment, and score estimation) trained with a "from-the-part-to-the-whole" procedure involving different datasets. In particular, we exploit a clinical dataset of almost 5,000 CXR annotated images collected in the same hospital. Our BS-Net demonstrates self-attentive behavior and a high degree of accuracy in all processing stages. Through inter-rater agreement tests and a gold standard comparison, we show that our solution outperforms single human annotators in rating accuracy and consistency, thus supporting the possibility of using this tool in contexts of computer-assisted monitoring. Highly resolved (super-pixel level) explainability maps are also generated, with an original technique, to visually help the understanding of the network activity on the lung areas. We also consider other scores proposed in the literature and provide a comparison with a recently proposed non-specific approach. We eventually test the performance robustness of our model on an assorted public COVID-19 dataset, for which we also provide Brixia score annotations, observing good direct generalization and fine-tuning capabilities that highlight the portability of BS-Net to other clinical settings. The CXR dataset along with the source code and the trained model are publicly released for research purposes.
Medical image analysis
"2021-04-17T00:00:00"
[ "Alberto Signoroni", "Mattia Savardi", "Sergio Benini", "Nicola Adami", "Riccardo Leonardi", "Paolo Gibellini", "Filippo Vaccher", "Marco Ravanelli", "Andrea Borghesi", "Roberto Maroldi", "Davide Farina" ]
10.1016/j.media.2021.102046 10.1148/radiol.2020202439 10.1007/s00330-020-07504-2
Automated detection of pneumonia cases using deep transfer learning with paediatric chest X-ray images.
Pneumonia is a lung infection and causes the inflammation of the small air sacs (alveoli) in one or both lungs. Proper and faster diagnosis of pneumonia at an early stage is imperative for optimal patient care. Currently, chest X-ray is considered the best imaging modality for diagnosing pneumonia. However, the interpretation of chest X-ray images is challenging. To this end, we aimed to use an automated convolutional neural network-based transfer-learning approach to detect pneumonia in paediatric chest radiographs. Herein, an automated convolutional neural network-based transfer-learning approach using four different pre-trained models was applied. All proposed models provide accuracy greater than 83.0% for binary classification. The pre-trained DenseNet121 model provides the highest classification performance for automated pneumonia classification with 86.8% accuracy, followed by the Xception model with an accuracy of 86.0%. The sensitivity of the proposed models was greater than 91.0%. The Xception and DenseNet121 models achieve the highest classification performance with F1-scores greater than 89.0%. The areas under the receiver operating characteristic curves of the VGG19, Xception, ResNet50, and DenseNet121 models are 0.78, 0.81, 0.81, and 0.86, respectively. Our data showed that the proposed models achieve high accuracy for binary classification. Transfer learning was used to accelerate training of the proposed models and resolve the problem associated with insufficient data. We hope that these proposed models can help radiologists with a quick diagnosis of pneumonia in radiology departments. Moreover, our proposed models may be useful for detecting other chest-related diseases such as the novel Coronavirus 2019.
The British journal of radiology
"2021-04-17T00:00:00"
[ "Mohammad Salehi", "Reza Mohammadi", "Hamed Ghaffari", "Nahid Sadighi", "Reza Reiazi" ]
10.1259/bjr.20201263 10.3390/s20041068 10.1016/S0140-6736(10)61459-6 10.1016/j.compbiomed.2020.103898 10.1016/j.cmpb.2019.06.023 10.1016/j.crad.2018.12.015 10.3390/s19122781 10.1148/rg.2017160130 10.1148/radiol.2018171820 10.3348/kjr.2019.0312 10.1155/2018/4168538 10.1148/radiol.2017162326 10.1109/TKDE.2009.191 10.1186/s40537-016-0043-6 10.1016/j.cell.2018.02.010 10.1613/jair.953 10.31661/jbpe.v0i0.2008-1153 10.1007/978-981-15-6315-7_14 10.2196/19104
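The accuracy, sensitivity, and F1 figures reported in the record above all derive from the binary confusion matrix. As a small reference sketch (illustrative names, not the paper's code), the standard definitions are:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)           # recall / true-positive rate
    specificity = tn / (tn + fp)           # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1, "accuracy": accuracy}
```

With balanced counts such as `binary_metrics(8, 2, 8, 2)`, all five metrics come out to 0.8, which makes the symmetry of the definitions easy to check.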
A deep-learning pipeline for the diagnosis and discrimination of viral, non-viral and COVID-19 pneumonia from chest X-ray images.
Common lung diseases are first diagnosed using chest X-rays. Here, we show that a fully automated deep-learning pipeline for the standardization of chest X-ray images, for the visualization of lesions and for disease diagnosis can identify viral pneumonia caused by coronavirus disease 2019 (COVID-19) and assess its severity, and can also discriminate between viral pneumonia caused by COVID-19 and other types of pneumonia. The deep-learning system was developed using a heterogeneous multicentre dataset of 145,202 images, and tested retrospectively and prospectively with thousands of additional images across four patient cohorts and multiple countries. The system generalized across settings, discriminating between viral pneumonia, other types of pneumonia and the absence of disease with areas under the receiver operating characteristic curve (AUCs) of 0.94-0.98; between severe and non-severe COVID-19 with an AUC of 0.87; and between COVID-19 pneumonia and other viral or non-viral pneumonia with AUCs of 0.87-0.97. In an independent set of 440 chest X-rays, the system performed comparably to senior radiologists and improved the performance of junior radiologists. Automated deep-learning systems for the assessment of pneumonia could facilitate early intervention and provide support for clinical decision-making.
Nature biomedical engineering
"2021-04-17T00:00:00"
[ "Guangyu Wang", "Xiaohong Liu", "Jun Shen", "Chengdi Wang", "Zhihuan Li", "Linsen Ye", "Xingwang Wu", "Ting Chen", "Kai Wang", "Xuan Zhang", "Zhongguo Zhou", "Jian Yang", "Ye Sang", "Ruiyun Deng", "Wenhua Liang", "Tao Yu", "Ming Gao", "Jin Wang", "Zehong Yang", "Huimin Cai", "Guangming Lu", "Lingyan Zhang", "Lei Yang", "Wenqin Xu", "Winston Wang", "Andrea Olvera", "Ian Ziyar", "Charlotte Zhang", "Oulan Li", "Weihua Liao", "Jun Liu", "Wen Chen", "Wei Chen", "Jichan Shi", "Lianghong Zheng", "Longjiang Zhang", "Zhihan Yan", "Xiaoguang Zou", "Guiping Lin", "Guiqun Cao", "Laurance L Lau", "Long Mo", "Yong Liang", "Michael Roberts", "Evis Sala", "Carola-Bibiane Schönlieb", "Manson Fok", "Johnson Yiu-Nam Lau", "Tao Xu", "Jianxing He", "Kang Zhang", "Weimin Li", "Tianxin Lin" ]
10.1038/s41551-021-00704-1
Machine learning based on clinical characteristics and chest CT quantitative measurements for prediction of adverse clinical outcomes in hospitalized patients with COVID-19.
To develop and validate a machine learning model for the prediction of adverse outcomes in hospitalized patients with COVID-19. We included 424 patients with non-severe COVID-19 on admission from January 17, 2020, to February 17, 2020, in the primary cohort of this retrospective multicenter study. The extent of lung involvement was quantified on chest CT images by a deep learning-based framework. The composite endpoint was the occurrence of severe or critical COVID-19 or death during hospitalization. The optimal machine learning classifier and feature subset were selected for model construction. The performance was further tested in an external validation cohort consisting of 98 patients. There was no significant difference in the prevalence of adverse outcomes (8.7% vs. 8.2%, p = 0.858) between the primary and validation cohorts. The machine learning method extreme gradient boosting (XGBoost) and the optimal feature subset, including lactic dehydrogenase (LDH), presence of comorbidity, CT lesion ratio (lesion%), and hypersensitive cardiac troponin I (hs-cTnI), were selected for model construction. The XGBoost classifier based on the optimal feature subset performed well for the prediction of developing adverse outcomes in the primary and validation cohorts, with AUCs of 0.959 (95% confidence interval [CI]: 0.936-0.976) and 0.953 (95% CI: 0.891-0.986), respectively. Furthermore, the XGBoost classifier also showed clinical usefulness. We presented a machine learning model that could be effectively used as a predictor of adverse outcomes in hospitalized patients with COVID-19, opening up the possibility for patient stratification and treatment allocation. • Developing an individualized prognostic model for COVID-19 has the potential to allow efficient allocation of medical resources. • We proposed a deep learning-based framework for accurate lung involvement quantification on chest CT images.
• Machine learning based on clinical and CT variables can facilitate the prediction of adverse outcomes of COVID-19.
European radiology
"2021-04-16T00:00:00"
[ "ZhichaoFeng", "HuiShen", "KaiGao", "JianpoSu", "ShanhuYao", "QinLiu", "ZhiminYan", "JunhongDuan", "DaliYi", "HuafeiZhao", "HuilingLi", "QizhiYu", "WenmingZhou", "XiaowenMao", "XinOuyang", "JiMei", "QiuhuaZeng", "LindyWilliams", "XiaoqianMa", "PengfeiRong", "DewenHu", "WeiWang" ]
10.1007/s00330-021-07957-z 10.1056/NEJMoa2002032 10.1016/S2213-2600(20)30079-5 10.1164/rccm.202002-0445OC 10.1016/S0140-6736(20)31022-9 10.1056/NEJMoa2007016 10.1016/S0140-6736(20)31042-4 10.14336/AD.2020.0630 10.1001/jama.2019.20153 10.1001/jamainternmed.2020.0994 10.1136/bmj.m1328 10.1093/cid/ciaa414 10.1038/s41467-020-18786-x 10.1016/S1473-3099(20)30086-4 10.1016/j.cell.2020.04.045 10.7326/M14-0697 10.1016/j.media.2020.101836 10.1016/j.ebiom.2019.05.023 10.1016/j.jchf.2019.06.013 10.1001/jamainternmed.2020.2033 10.1371/journal.pone.0143486 10.1111/bjh.14830 10.1053/j.gastro.2020.03.065 10.1002/hep.31301 10.1038/s41569-020-0360-5 10.7326/M18-0670 10.1161/CIRCULATIONAHA.120.047008 10.1183/13993003.00524-2020 10.1016/j.chest.2020.04.010 10.1056/NEJMsr2005760 10.1042/CS20200363 10.1007/s00330-020-07042-x 10.1016/S0140-6736(20)30183-5
An Insight of the First Community Infected COVID-19 Patient in Beijing by Imported Case: Role of Deep Learning-Assisted CT Diagnosis.
In the era of the coronavirus disease 2019 (COVID-19) pandemic, imported COVID-19 cases pose great challenges to many countries. Chest CT examination is considered complementary to nucleic acid testing for COVID-19 detection and diagnosis. We report the first community-infected COVID-19 patient caused by an imported case in Beijing, which manifested as nodular lesions on chest CT imaging at the early stage. Deep Learning (DL)-based diagnostic systems quantitatively monitored the progression of pulmonary lesions over 6 days and raised a timely alert for suspected pneumonia, so that prompt medical isolation was undertaken. The patient was confirmed as a COVID-19 case by nucleic acid testing, and community transmission was thereby prevented in time. The role of DL-assisted diagnosis in helping radiologists screen suspected COVID-19 cases is discussed.
Chinese medical sciences journal = Chung-kuo i hsueh k'o hsueh tsa chih
"2021-04-16T00:00:00"
[ "Da ShengLi", "Da WeiWang", "Na NaWang", "Hai WangXu", "HeHuang", "Jian PingDong", "ChenXia" ]
10.24920/003788 10.23750/abm.v91il.9397 10.3348/kjr.2020.0146 10.1186/s12967-020-02324-W 10.3348/kjr.2020.0132 10.1093/infdis/jiaa119 10.1007/s11427-020-1661-4 10.3348/kjr.2020.0195 10.1148/ryai.2019180084 10.1148/ryct.2020200075 10.1016/j.ebiom.2019.05.040 10.1016/S2589-7500(20)30199-0
Deep Convolutional Neural Network-Based Computer-Aided Detection System for COVID-19 Using Multiple Lung Scans: Design and Implementation Study.
Owing to the COVID-19 pandemic and the imminent collapse of health care systems following the exhaustion of financial, hospital, and medicinal resources, the World Health Organization changed the alert level of the COVID-19 pandemic from high to very high. Meanwhile, more cost-effective and precise COVID-19 detection methods are being preferred worldwide. Machine vision-based COVID-19 detection methods, especially deep learning as a diagnostic method in the early stages of the pandemic, have been assigned great importance during the pandemic. This study aimed to design a highly efficient computer-aided detection (CAD) system for COVID-19 by using a neural search architecture network (NASNet)-based algorithm. NASNet, a state-of-the-art pretrained convolutional neural network for image feature extraction, was adopted to identify patients with COVID-19 in their early stages of the disease. A local data set, comprising 10,153 computed tomography scans of 190 patients with and 59 without COVID-19 was used. After fitting on the training data set, hyperparameter tuning, and topological alterations of the classifier block, the proposed NASNet-based model was evaluated on the test data set and yielded remarkable results. The proposed model's performance achieved a detection sensitivity, specificity, and accuracy of 0.999, 0.986, and 0.996, respectively. The proposed model achieved acceptable results in the categorization of 2 data classes. Therefore, a CAD system was designed on the basis of this model for COVID-19 detection using multiple lung computed tomography scans. The system differentiated all COVID-19 cases from non-COVID-19 ones without any error in the application phase. Overall, the proposed deep learning-based CAD system can greatly help radiologists detect COVID-19 in its early stages. During the COVID-19 pandemic, the use of a CAD system as a screening tool would accelerate disease detection and prevent the loss of health care resources.
Journal of medical Internet research
"2021-04-14T00:00:00"
[ "MustafaGhaderzadeh", "FarkhondehAsadi", "RamezanJafari", "DavoodBashash", "HassanAbolghasemi", "MehradAria" ]
10.2196/27468 10.3390/jcm9020330 10.1148/radiol.2020200230 10.1007/s00330-020-06801-0 10.1056/NEJMoa2001316 10.1038/s41598-020-76550-z 10.1038/s41598-020-76550-z 10.1016/S0140-6736(20)30183-5 10.1148/ryct.2020200107 10.1001/jama.2020.1585 10.1148/radiol.2020200642 10.1007/s13244-018-0639-9 10.1007/s00330-021-07715-1 10.1007/s00330-020-07044-9 10.1016/j.cell.2020.04.045 10.1183/13993003.00775-2020 10.3390/e22050517 10.1016/j.irbm.2020.05.003 10.1016/j.compbiomed.2020.103795 10.1148/radiol.2020200905 10.1038/s41591-020-0931-3 10.1007/s00259-020-04929-1 10.1109/cvpr.2018.00907 10.1016/j.media.2017.07.005 10.17148/IARJSET.2015.2305 10.1111/j.1467-9892.1994.tb00184.x 10.1186/s40537-019-0197-0 10.1080/07391102.2020.1788642 10.1016/j.chaos.2020.109944 10.1016/j.media.2020.101794 10.1109/JBHI.2020.3023246 10.1109/3dv.2016.79 10.1007/s10489-020-02051-1 10.1155/2021/6677314 10.1155/2021/6677314
Fusion of convolution neural network, support vector machine and Sobel filter for accurate detection of COVID-19 patients using X-ray images.
The coronavirus disease (COVID-19) is currently the most common contagious disease and is prevalent all over the world. The main challenge of this disease is primary diagnosis, to prevent secondary infections and its spread from one person to another. It is therefore essential to use an automatic diagnosis system alongside clinical procedures for the rapid diagnosis of COVID-19 to prevent its spread. Artificial intelligence techniques using computed tomography (CT) images of the lungs and chest radiography have the potential to achieve high diagnostic performance for COVID-19 diagnosis. In this study, a fusion of a convolutional neural network (CNN), a support vector machine (SVM), and a Sobel filter is proposed to detect COVID-19 using X-ray images. A new X-ray image dataset was collected and subjected to high-pass filtering using a Sobel filter to obtain the edges of the images. These images are then fed to a CNN deep learning model followed by an SVM classifier with a ten-fold cross-validation strategy. The method is designed to learn from limited data. Our results show that the proposed CNN-SVM with Sobel filter (CNN-SVM + Sobel) achieved the highest classification accuracy, sensitivity and specificity of 99.02%, 100% and 95.23%, respectively, in automated detection of COVID-19. This showed that using a Sobel filter can improve the performance of the CNN. Unlike most other studies, this method does not use a pre-trained network. We have also validated our developed model using
Biomedical signal processing and control
"2021-04-14T00:00:00"
[ "DanialSharifrazi", "RoohallahAlizadehsani", "MohamadRoshanzamir", "Javad HassannatajJoloudari", "AfshinShoeibi", "MahboobehJafari", "SadiqHussain", "Zahra AlizadehSani", "FereshtehHasanzadeh", "FahimeKhozeimeh", "AbbasKhosravi", "SaeidNahavandi", "MaryamPanahiazar", "AssefZare", "Sheikh Mohammed SharifulIslam", "U RajendraAcharya" ]
10.1016/j.bspc.2021.102622 10.1016/S0140-6736(20)30211-7 10.1001/jama.2020.1585 10.1056/NEJMoa2001316 10.1056/NEJMoa2001191 10.1148/radiol.2020200432 10.1007/s42600-020-00091-7 10.1007/s00330-019-06163-2 10.1016/j.eswa.2020.113788 10.1038/s41597-019-0206-3 10.1109/TMI.2016.2535865 10.1109/TMI.2020.2993291 10.1080/07391102.2020.1767212 10.1016/j.compbiomed.2020.103795 10.1109/ACCESS.2019.2952946 10.17632/2fxz4px6d8.4
A multi-center study of COVID-19 patient prognosis using deep learning-based CT image analysis and electronic health records.
As of August 30th, there were in total 25.1 million confirmed cases and 845 thousand deaths caused by coronavirus disease 2019 (COVID-19) worldwide. With overwhelming demands on medical resources, patient stratification based on risk is essential. In this multi-center study, we built prognosis models to predict severity outcomes, combining patients' electronic health records (EHR), which included vital signs and laboratory data, with deep learning- and CT-based severity prediction. We first developed a CT segmentation network using datasets from multiple institutions worldwide. Two biomarkers were extracted from the CT images: total opacity ratio (TOR) and consolidation ratio (CR). After obtaining TOR and CR, further prognosis analysis was conducted on datasets from INSTITUTE-1, INSTITUTE-2 and INSTITUTE-3. For each data cohort, a generalized linear model (GLM) was applied for prognosis prediction. For the deep learning model, the correlation coefficient between the network prediction and manual segmentation was 0.755, 0.919, and 0.824 for the three cohorts, respectively. The AUC (95 % CI) of the final prognosis models was 0.85 (0.77, 0.92), 0.93 (0.87, 0.98), and 0.86 (0.75, 0.94) for the INSTITUTE-1, INSTITUTE-2 and INSTITUTE-3 cohorts, respectively. Either TOR or CR appears in all three final prognosis models. Age, white blood cell count (WBC), and platelet count (PLT) were selected as predictors in two cohorts. Oxygen saturation (SpO2) was a selected predictor in one cohort. The developed deep learning method can segment lung infection regions. The prognosis results indicated that age, SpO2, CT biomarkers, PLT, and WBC were the most important prognostic predictors of COVID-19 in our prognosis model.
European journal of radiology
"2021-04-14T00:00:00"
[ "KuangGong", "DufanWu", "Chiara DanielaArru", "FatemehHomayounieh", "NirNeumark", "JiahuiGuan", "VarunBuch", "KyungsangKim", "Bernardo CanedoBizzo", "HuiRen", "Won YoungTak", "Soo YoungPark", "Yu RimLee", "Min KyuKang", "Jung GilPark", "AlessandroCarriero", "LucaSaba", "MahsaMasjedi", "HamidrezaTalari", "RosaBabaei", "Hadi KarimiMobin", "ShadiEbrahimian", "NingGuo", "Subba RDigumarthy", "IttaiDayan", "Mannudeep KKalra", "QuanzhengLi" ]
10.1016/j.ejrad.2021.109583 10.1148/radiol.2020200642 10.1148/radiol.2020200432 10.1148/radiol.2020200463 10.2214/AJR.20.22976 10.1148/radiol.2020200370 10.1371/journal.pone.0093885 10.1371/journal.pone.0230548 10.1148/radiol.2020200905 10.1007/s10489-020-01714-3 10.1109/TMI.2020.2994908 10.1148/ryct.2020200075 10.1109/TMI.2020.2992546 10.1148/ryct.2020200082 10.1007/s00330-020-07087-y 10.1016/j.compbiomed.2020.104037 10.1101/2020.02.29.20029603 10.2139/ssrn.3557984 10.1016/j.compbiomed.2020.103949 10.1016/S2214-109X(20)30068-1 10.1038/s41591-020-0895-3 10.1109/CVPR.2017.243 10.1109/3DV.2016.79 10.1109/TMI.2020.2996645 10.1109/TMI.2020.3000314 10.1109/JBHI.2020.3030224 10.1088/1361-6560/ab440d 10.1097/00003246-200006000-00031 10.1016/j.cca.2020.03.022 10.1016/S0140-6736(20)30566-3 10.1186/s13054-020-2833-7 10.1038/s41392-020-0148-4 10.1038/s42256-020-0180-7 10.2139/ssrn.3551365 10.2139/ssrn.3543603 10.2139/ssrn.3562456 10.1101/2020.02.27.20028027 10.1016/j.cell.2020.04.045
Multilevel Deep-Aggregated Boosted Network to Recognize COVID-19 Infection from Large-Scale Heterogeneous Radiographic Data.
In the present epidemic of the coronavirus disease 2019 (COVID-19), radiological imaging modalities, such as X-ray and computed tomography (CT), have been identified as effective diagnostic tools. However, the subjective assessment of radiographic examinations is a time-consuming task and demands expert radiologists. Recent advancements in artificial intelligence have enhanced the diagnostic power of computer-aided diagnosis (CAD) tools and assisted medical specialists in making efficient diagnostic decisions. In this work, we propose an optimal multilevel deep-aggregated boosted network to recognize COVID-19 infection from heterogeneous radiographic data, including X-ray and CT images. Our method leverages multilevel deep-aggregated features and multistage training via a mutually beneficial approach to maximize the overall CAD performance. To improve the interpretation of CAD predictions, these multilevel deep features are visualized as additional outputs that can assist radiologists in validating the CAD results. A total of six publicly available datasets were fused to build a single large-scale heterogeneous radiographic collection that was used to analyze the performance of the proposed technique and other baseline methods. To preserve the generality of our method, we selected different patient data for training, validation, and testing, so that data from the same patient were not shared across the training, validation, and testing subsets. In addition, fivefold cross-validation was performed in all the experiments for a fair evaluation. Our method exhibits promising performance values of 95.38%, 95.57%, 92.53%, 98.14%, 93.16%, and 98.55% in terms of average accuracy, F-measure, specificity, sensitivity, precision, and area under the curve, respectively, and outperforms various state-of-the-art methods.
IEEE journal of biomedical and health informatics
"2021-04-10T00:00:00"
[ "MuhammadOwais", "Young WonLee", "TahirMahmood", "AdnanHaider", "HaseebSultan", "Kang RyoungPark" ]
10.1109/JBHI.2021.3072076 10.1109/JBHI.2020.3045274 10.1109/JBHI.2020.3042069
An automated and fast system to identify COVID-19 from X-ray radiograph of the chest using image processing and machine learning.
A type of coronavirus disease called COVID-19 is spreading all over the globe. Researchers and scientists are endeavoring to find new and effective methods to diagnose and treat this disease. This article presents an automated and fast system that identifies COVID-19 from X-ray radiographs of the chest using image processing and machine learning algorithms. Initially, the system extracts feature descriptors from the radiographs of both healthy and COVID-19-affected patients using the speeded up robust features algorithm. Then, a visual vocabulary is built by reducing the number of feature descriptors via quantization of the feature space using the K-means clustering algorithm. The visual vocabulary trains the support vector machine (SVM) classifier. During testing, an X-ray radiograph's visual vocabulary is sent to the trained SVM classifier to detect the absence or presence of COVID-19. The study used a dataset of 340 X-ray radiographs, with 170 images each of the Healthy and COVID-19-positive classes. During simulations, the dataset was split into training and testing parts at various ratios. After training, the system does not require any human intervention and can process thousands of images with high precision in a few minutes. The performance of the system is measured using the standard parameters of accuracy and the confusion matrix. We compared the performance of the proposed SVM-based classifier with deep-learning-based convolutional neural networks (CNN). The SVM yields better results than the CNN and achieves a maximum accuracy of up to 94.12%.
International journal of imaging systems and technology
"2021-04-07T00:00:00"
[ "Murtaza AliKhan" ]
10.1002/ima.22564 10.1007/s13369-020-04447-0 10.1155/2020/8828855 10.1148/radiol.2020200905 10.1016/j.imu.2020.100378 10.1007/11744023_32 10.1017/CBO9780511801389 10.1038/s41598-020-74539-2
Convolutional capsule network for COVID-19 detection using radiography images.
The novel coronavirus COVID-19 has spread rapidly all over the world. Due to the increasing number of COVID-19 cases, there is a dearth of testing kits. Therefore, there is an urgent need for an automatic recognition system as a solution to reduce the spread of the COVID-19 virus. This work offers a decision support system based on X-ray images to diagnose the presence of the COVID-19 virus. A deep learning-based computer-aided decision support system will be capable of differentiating between COVID-19 and pneumonia. Recently, the convolutional neural network (CNN) has been designed for the diagnosis of COVID-19 patients through
International journal of imaging systems and technology
"2021-04-07T00:00:00"
[ "ShamikTiwari", "AnuragJain" ]
10.1002/ima.22566 10.34740/KAGGLE/DSV/1019469
Future IoT tools for COVID-19 contact tracing and prediction: A review of the state-of-the-science.
In 2020 the world is facing unprecedented challenges due to COVID-19. To address these challenges, many digital tools are being explored and developed to contain the spread of the disease. With the lack of availability of vaccines, there is an urgent need to avert a resurgence of infections by putting measures, such as contact tracing, in place. While digital tools, such as phone applications, are advantageous, they also pose challenges and have limitations (e.g., wireless coverage could be an issue in some cases). On the other hand, wearable devices, when coupled with the Internet of Things (IoT), are expected to influence lifestyle and healthcare directly, and they may be useful for health monitoring during the global pandemic and beyond. In this work, we conduct a literature review of contact tracing methods and applications. Based on the literature review, we found limitations in gathering health data, such as insufficient network coverage. To address these shortcomings, we propose a novel intelligent tool that will be useful for contact tracing and the prediction of COVID-19 clusters. The solution comprises a phone application combined with a wearable device, infused with unique intelligent IoT features (complex data analysis and intelligent data visualization) embedded within the system to aid in COVID-19 analysis. Contact tracing applications must establish both data collection and data interpretation. Intelligent data interpretation can assist epidemiological scientists in anticipating clusters and enable them to take the necessary action to improve public health management. Our proposed tool could also be used to curb disease incidence in future global health crises.
International journal of imaging systems and technology
"2021-04-07T00:00:00"
[ "VicneshJahmunah", "Vidya KSudarshan", "Shu LihOh", "RajGururajan", "RashmiGururajan", "XujuanZhou", "XiaohuiTao", "OliverFaust", "Edward JCiaccio", "Kwan HoongNg", "U RajendraAcharya" ]
10.1002/ima.22552 10.1186/s40537-019-0268-2 10.7326/M20-1033 10.1016/j.dcan.2017.10.002 10.1016/j.compbiomed.2017.09.017 10.1016/j.cmpb.2018.04.012 10.1162/neco.1997.9.8.1735
Automatic detection and localization of COVID-19 pneumonia using axial computed tomography images and deep convolutional neural networks.
COVID-19 was first reported as an unknown cluster of pneumonia cases in Wuhan City, Hubei province of China, in late December 2019. The rapid increase in the number of cases diagnosed with COVID-19 and the lack of experienced radiologists can cause diagnostic errors in the interpretation of the images, along with the exceptional workload occurring in this process. Therefore, the urgent development of automated diagnostic systems that can scan radiological images quickly and accurately is important in combating the pandemic. With this motivation, a deep convolutional neural network (CNN)-based model that can automatically detect patterns related to lesions caused by COVID-19 from chest computed tomography (CT) images is proposed in this study. In this context, the image ground truth regarding the COVID-19 lesions delineated by the radiologist was used as the main criterion of the segmentation process. A total of 16,040 CT image segments were obtained by applying segmentation to the 102 raw CT images. Then, 10,420 CT image segments related to healthy lung regions were labeled as COVID-negative, and 5620 CT image segments, in which findings related to the lesions were detected in various forms, were labeled as COVID-positive. With the proposed CNN architecture, a diagnostic accuracy of 93.26% was achieved. The sensitivity and specificity metrics for the proposed automatic diagnosis model were 93.27% and 93.24%, respectively. Additionally, it has been shown that by scanning small regions of the lungs, COVID-19 pneumonia can be localized automatically with high resolution and the lesion densities can be successfully evaluated quantitatively.
International journal of imaging systems and technology
"2021-04-07T00:00:00"
[ "HasanPolat", "Mehmet SiraçÖzerdem", "FaysalEkici", "VeysiAkpolat" ]
10.1002/ima.22558 10.1016/j.compbiomed.2020.103805 10.1016/j.cmpb.2020.105532 10.1016/j.amjsurg.2020.04.018 10.1016/S0140-6736(20)30183-5 10.1148/radiol.2020200463 10.2214/AJR.20.22976 10.2214/AJR.20.23034 10.1007/s00330-020-06731-x 10.1148/radiol.2020200370 10.1016/j.cmpb.2020.105320 10.1016/j.acra.2019.11.007 10.1016/j.compbiomed.2019.103345 10.1016/j.imu.2019.100173 10.1016/j.crad.2020.01.010 10.1016/j.neucom.2018.12.086 10.1016/j.compbiomed.2020.103795 10.1101/2020.04.16.20064709 10.1016/j.mehy.2020.109761 10.1109/TMI.2020.2995965 10.1097/RTI.0000000000000387 10.1016/j.ins.2018.01.051 10.1016/j.compag.2019.104932 10.1016/j.compmedimag.2017.06.001 10.1016/j.compbiomed.2017.09.017 10.1016/j.csite.2020.100625 10.1109/ISCAS.2010.5537907 10.1162/NECO_a_00990 10.1007/978-3-642-15825-4_10 10.1016/j.ecoinf.2018.10.002 10.1016/j.neunet.2018.07.011 10.1101/2020.02.23.20026930 10.20944/preprints202003.0300.v1 10.1016/j.compbiomed.2020.103792 10.1007/s10489-020-01714-3
A novel perceptual two layer image fusion using deep learning for imbalanced COVID-19 dataset.
COVID-19 is caused by a new viral strain that has brought life to a standstill worldwide. At this time, the new coronavirus COVID-19 is spreading rapidly across the world and poses a threat to people's health. Experimental medical tests and analyses have shown that lung infection occurs in almost all COVID-19 patients. Although computed tomography of the chest is a useful imaging method for diagnosing diseases related to the lung, chest X-ray (CXR) is more widely available, mainly due to its lower price and faster results. Deep learning (DL), one of the most popular artificial intelligence techniques, is an effective way to help doctors analyze CXR images, and a large number of CXR images is crucial to its performance. In this article, we propose a novel perceptual two-layer image fusion using DL to obtain more informative CXR images for a COVID-19 dataset. To assess the proposed algorithm's performance, the dataset used for this work includes 87 CXR images acquired from 25 cases, all of which were confirmed with COVID-19. Dataset preprocessing is needed to facilitate the role of convolutional neural networks (CNN). Thus, a hybrid decomposition and fusion of the Nonsubsampled Contourlet Transform (NSCT) and CNN_VGG19 as a feature extractor was used. Our experimental results show that imbalanced COVID-19 datasets can be reliably handled by the algorithm established here. Compared to the COVID-19 dataset used, the fused images have more features and characteristics. For performance evaluation, six metrics are applied, such as Q A novel image fusion algorithm using DL for an imbalanced COVID-19 dataset is the crucial contribution of this work. Extensive experimental results show that the proposed algorithm NSCT + CNN_VGG19 outperforms competing image fusion algorithms.
PeerJ. Computer science
"2021-04-06T00:00:00"
[ "Omar MElzeki", "MohamedAbd Elfattah", "HanaaSalem", "Aboul EllaHassanien", "MahmoudShams" ]
10.7717/peerj-cs.364 10.1007/s10489-020-01829-7 10.1016/j.inffus.2019.02.003 10.3390/diagnostics10010027 10.1007/s11042-018-6229-5 10.1007/s40964-019-00108-3 10.1109/JSEN.2015.2465935 10.1109/TMM.2013.2244870 10.1016/j.imavis.2007.12.002 10.1007/s13755-020-00119-3 10.1007/s11517-012-0943-3 10.1016/j.neucom.2015.07.160 10.1016/j.knosys.2016.09.008 10.1016/j.eij.2015.09.002 10.1016/j.scitotenv.2020.138858 10.1007/s10278-015-9806-4 10.4236/cs.2016.78139 10.1007/s00138-020-01060-x 10.1007/s00521-018-3441-1 10.1049/el:20081754 10.1109/ACCESS.2020.2974242 10.3390/electronics9010190 10.1155/2020/8279342 10.1016/j.asoc.2008.05.001 10.1007/s10462-020-09825-6 10.1364/OL.33.000738 10.1145/3065386 10.3390/s18041019 10.1016/j.inffus.2016.05.004 10.1109/TIP.2013.2244222 10.1016/j.artmed.2019.101744 10.1016/j.inffus.2019.07.009 10.1016/j.inffus.2016.12.001 10.1016/j.inffus.2017.10.007 10.1016/j.inffus.2014.09.004 10.1007/s11045-015-0343-6 10.1016/j.compmedimag.2019.05.005 10.1016/j.jneumeth.2019.108520 10.1016/j.inffus.2019.12.001 10.1016/j.chaos.2020.110190 10.1016/j.cmpb.2020.105532 10.34172/aim.2020.02 10.1038/s41598-019-56847-4 10.1109/RBME.2020.2987975 10.1049/iet-cvi.2015.0251 10.1016/j.ins.2017.12.043 10.1177/1460458218824711 10.1016/j.inffus.2020.10.004 10.1016/j.ijleo.2019.163497 10.1016/j.infrared.2015.01.002 10.1049/el:20000267 10.1016/j.inffus.2006.09.001 10.1109/TIM.2018.2838778 10.1016/j.ins.2017.09.010 10.1109/ACCESS.2019.2898111
A multi-task pipeline with specialized streams for classification and segmentation of infection manifestations in COVID-19 scans.
We are concerned with the challenge of coronavirus disease (COVID-19) detection in chest X-ray and Computed Tomography (CT) scans, and the classification and segmentation of related infection manifestations. Even though it is arguably not an established diagnostic tool, using machine learning-based analysis of COVID-19 medical scans has shown the potential to provide a preliminary digital second opinion. This can help in managing the current pandemic, and thus has been attracting significant research attention. In this research, we propose a multi-task pipeline that takes advantage of the growing advances in deep neural network models. In the first stage, we fine-tuned an Inception-v3 deep model for COVID-19 recognition using multi-modal learning, that is, using X-ray and CT scans. In addition to outperforming other deep models on the same task in the recent literature, with an attained accuracy of 99.4%, we also present comparative analysis for multi-modal learning against learning from X-ray scans alone. The second and the third stages of the proposed pipeline complement one another in dealing with different types of infection manifestations. The former features a convolutional neural network architecture for recognizing three types of manifestations, while the latter transfers learning from another knowledge domain, namely, pulmonary nodule segmentation in CT scans, to produce binary masks for segmenting the regions corresponding to these manifestations. Our proposed pipeline also features specialized streams in which multiple deep models are trained separately to segment specific types of infection manifestations, and we show the significant impact that this framework has on various performance metrics. 
We evaluate the proposed models on widely adopted datasets and demonstrate an increase of approximately 2.5% and 4.5% in the Dice coefficient and mean intersection-over-union (mIoU), respectively, while achieving a 60% reduction in computational time compared to the recent literature.
PeerJ. Computer science
"2021-04-06T00:00:00"
[ "ShimaaEl-Bana", "AhmadAl-Kabbany", "MahaSharkas" ]
10.7717/peerj-cs.303 10.1148/radiol.2020200642 10.1101/2020.04.16.20064709 10.1016/S0140-6736(20)30154-9 10.2214/AJR.20.23012 10.1093/clinchem/hvaa029 10.3390/diagnostics10030131 10.1101/2020.04.22.20074948 10.1148/radiol.2020200432 10.1016/S0140-6736(20)30728-5 10.1016/j.jacr.2020.03.011 10.1007/s00330-020-06817-6 10.1056/NEJMoa2001316 10.1016/S2589-7500(19)30123-2 10.3390/sym12040651 10.3390/rs10071119 10.3390/s19173722 10.1038/s41746-018-0076-7 10.3390/app9050940 10.33889/IJMEMS.2020.5.4.052 10.1001/jama.2020.3864 10.2196/10010 10.1101/2020.02.23.20026930 10.1080/14737159.2020.1757437 10.1001/jama.2020.1585 10.1016/j.molcel.2018.03.004 10.1016/S0140-6736(20)30260-9 10.1007/s00330-020-06801-0 10.1109/TMI.2020.2995965 10.1109/TMI.2019.2959609 10.1148/radiol.2020200490
Comparing a deep learning model's diagnostic performance to that of radiologists to detect Covid-19 features on chest radiographs.
Whether the sensitivity of Deep Learning (DL) models in screening chest radiographs (CXR) for COVID-19 can approximate that of radiologists, so that they can be adopted and used when real-time review of CXRs by radiologists is not possible, has not been explored before. To evaluate the diagnostic performance of a doctor-trained DL model (Svita_DL8) to screen for COVID-19 on CXR, and to compare the performance of the DL model with that of expert radiologists. We used a pre-trained convolutional neural network to develop a publicly available online DL model to evaluate CXR examinations saved in .jpeg or .png format. The initial model was subsequently curated and trained by an internist and a radiologist using 1062 chest radiographs to classify a submitted CXR as normal, COVID-19, or a non-COVID-19 abnormality. For validation, we collected a separate set of 430 CXR examinations from numerous publicly available datasets from 10 different countries, case presentations, and two hospital repositories. These examinations were assessed for COVID-19 by the DL model and by two independent radiologists. Diagnostic performance was compared between the model and the radiologists, and the correlation coefficient was calculated. For detecting COVID-19 on CXR, our DL model demonstrated a sensitivity of 91.5%, specificity of 55.3%, PPV of 60.9%, NPV of 77.9%, accuracy of 70.1%, and AUC of 0.73 (95% CI: 0.86, 0.95). There was a significant correlation (r = 0.617, The DL model demonstrated high sensitivity for detecting COVID-19 on CXR. The doctor-trained DL tool Svita_DL8 can be used in resource-constrained settings to quickly triage patients with suspected COVID-19 for further in-depth review and testing.
The Indian journal of radiology & imaging
"2021-04-06T00:00:00"
[ "Sabitha Krishnamoorthy", "Sudhakar Ramakrishnan", "Lanson Brijesh Colaco", "Akshay Dias", "Indu K Gopi", "Gautham A G Gowda", "K C Aishwarya", "Veena Ramanan", "Manju Chandran" ]
10.4103/ijri.IJRI_914_20
Pneumocystis pneumonia: An important consideration when investigating artificial intelligence-based methods in the radiological diagnosis of COVID-19.
null
Clinical imaging
"2021-04-04T00:00:00"
[ "Temi Lampejo" ]
10.1016/j.clinimag.2021.02.044 10.1016/j.clinimag.2021.01.019 10.1148/radiol.2020200905 10.1259/bjr.20200703 10.1002/jia2.25533 10.7861/CLINMED.2020-0565 10.1016/j.medin.2020.07.007 10.21203/rs.3.rs-53350/v1 10.4414/smw.2020.20312 10.1136/bmj.m1808 10.1016/j.cmi.2020.12.007 10.1128/CMR.00013-12 10.1093/ofid/ofaa633
A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images.
Recent studies indicate that detecting radiographic patterns on CT chest scans can yield high sensitivity and specificity for COVID-19 identification. In this paper, we scrutinize the effectiveness of deep learning models for semantic segmentation of pneumonia-infected areas in CT images for the detection of COVID-19. Traditional methods for CT scan segmentation exploit a supervised learning paradigm, so they (a) require large volumes of data for their training, and (b) assume fixed (static) network weights once the training procedure has been completed. Recently, to overcome these difficulties, few-shot learning (FSL) has been introduced as a general concept of network model training using a very small amount of samples. In this paper, we explore the efficacy of few-shot learning in U-Net architectures, allowing for a dynamic fine-tuning of the network weights as a few new samples are being fed into the U-Net. Experimental results indicate improvement in the segmentation accuracy of identifying COVID-19 infected regions. In particular, using 4-fold cross-validation results of the different classifiers, we observed an improvement of 5.388 ± 3.046% for all test data regarding the IoU metric and a similar increment of 5.394 ± 3.015% for the F1 score. Moreover, the statistical significance of the improvement obtained using our proposed few-shot U-Net architecture compared with the traditional U-Net model was confirmed by applying the Kruskal-Wallis test (
Sensors (Basel, Switzerland)
"2021-04-04T00:00:00"
[ "Athanasios Voulodimos", "Eftychios Protopapadakis", "Iason Katsamenis", "Anastasios Doulamis", "Nikolaos Doulamis" ]
10.3390/s21062215 10.1016/S0140-6736(20)30183-5 10.2139/ssrn.3557504 10.1148/radiol.2020200432 10.1109/RBME.2020.2987975 10.1016/j.neunet.2020.03.007 10.1109/TMI.2020.2995965 10.1101/2020.05.08.20094664 10.1145/3386252 10.2214/AJR.20.22954 10.1016/j.ejrad.2020.109009 10.1016/j.jinf.2020.04.004 10.1155/2018/7068349 10.1021/acs.jproteome.9b00411 10.1148/radiol.2020200905 10.1016/j.imu.2020.100412 10.1109/TMI.2020.2996645 10.1016/j.ejrad.2020.109041 10.1016/j.neucom.2019.01.110 10.1038/s41598-020-76282-0 10.1016/j.compbiomed.2020.103795 10.1007/s10096-020-03901-z 10.14299/ijser.2020.03.02 10.1109/ACCESS.2020.3005510 10.1007/s10044-020-00950-0 10.1016/j.eswa.2020.114142 10.1016/j.knosys.2020.106647 10.21037/atm-20-2464 10.1088/1361-6560/abe838 10.1007/s10489-018-01396-y 10.5281/zenodo.3757476 10.1016/j.cmpb.2020.105581 10.1017/S0950268820001727 10.1504/IJDMMM.2019.10019369 10.1007/s11042-015-2512-x 10.1109/ACCESS.2017.2776349 10.1016/j.eswa.2005.09.019 10.1016/j.compbiomed.2020.103805 10.1080/01621459.1952.10483441 10.1016/j.scitotenv.2016.06.201 10.1016/j.patrec.2005.10.010
Role of Hybrid Deep Neural Networks (HDNNs), Computed Tomography, and Chest X-rays for the Detection of COVID-19.
The COVID-19 syndrome has escalated extensively worldwide since the start of 2020 and has resulted in the illness of millions of people. COVID-19 patients bear an elevated risk once their symptoms deteriorate. Hence, early recognition of diseased patients can facilitate early intervention and avoid disease progression. This article intends to develop hybrid deep neural networks (HDNNs), using computed tomography (CT) and X-ray imaging, to predict the risk of the onset of disease in patients suffering from COVID-19. To be precise, the subjects were classified into 3 categories, namely normal, pneumonia, and COVID-19. Initially, the CT and chest X-ray images, denoted as 'hybrid images' (with resolution 1080 × 1080), were collected from different sources, including GitHub, the COVID-19 radiography database, Kaggle, the COVID-19 image data collection, and the Actual Med COVID-19 Chest X-ray Dataset, which are open-source and publicly available data repositories. 80% of the hybrid images were used to train the hybrid deep neural network model and the remaining 20% were used for testing. The capability and prediction accuracy of the HDNNs were calculated using the confusion matrix. The hybrid deep neural network showed a 99% classification accuracy on the test set data.
International journal of environmental research and public health
"2021-04-04T00:00:00"
[ "Muhammad Irfan", "Muhammad Aksam Iftikhar", "Sana Yasin", "Umar Draz", "Tariq Ali", "Shafiq Hussain", "Sarah Bukhari", "Abdullah Saeed Alwadie", "Saifur Rahman", "Adam Glowacz", "Faisal Althobiani" ]
10.3390/ijerph18063056 10.1093/cid/ciaa461 10.1148/radiol.2020200642 10.1148/radiol.2020200905 10.1007/s00330-019-06163-2 10.1080/14737159.2020.1757437 10.21106/ijma.421 10.1101/2020.10.29.339317 10.1007/s10916-020-1536-6 10.2214/AJR.18.20509 10.1016/j.compmedimag.2019.101688 10.1016/j.measurement.2019.05.028 10.1109/TMI.2019.2963248 10.1007/s13246-020-00957-1 10.1108/WJE-10-2020-0529 10.1016/j.neuroimage.2011.01.008 10.3390/diagnostics10080565 10.1001/jamasurg.2020.4998 10.1016/j.neucom.2015.09.034 10.1038/s41568-018-0016-5 10.1007/s00247-017-3943-5 10.1101/2020.06.12.20129643 10.1016/j.compbiomed.2020.103792 10.1101/2020.08.14.20170290 10.1101/2020.02.23.20026930 10.1101/2020.07.11.20151332 10.3389/fbioe.2020.00898 10.1007/s00500-020-05275-y 10.1016/j.cell.2020.04.045 10.1038/s42256-019-0057-9 10.1109/TMI.2018.2832217 10.1109/ACCESS.2020.3009908 10.14569/IJACSA.2018.090543 10.3390/e22121370
Hyperparameter Optimization for COVID-19 Pneumonia Diagnosis Based on Chest CT.
Convolutional Neural Networks (CNNs) have been successfully applied in the medical diagnosis of different types of diseases. However, selecting the architecture and the best set of hyperparameters among the possible combinations can be a significant challenge. The purpose of this work is to investigate the use of the Hyperband optimization algorithm in the process of optimizing a CNN applied to the diagnosis of SARS-CoV-2 disease (COVID-19). The test was performed with the Optuna framework, and the optimization process aimed to optimize four hyperparameters: (1) backbone architecture, (2) the number of inception modules, (3) the number of neurons in the fully connected layers, and (4) the learning rate. CNNs were trained on 2175 computed tomography (CT) images. The CNN that was proposed by the optimization process was a VGG16 with five inception modules, 128 neurons in the two fully connected layers, and a learning rate of 0.0027. The proposed method achieved a sensitivity, precision, and accuracy of 97%, 82%, and 88%, outperforming the sensitivity of Real-Time Polymerase Chain Reaction (RT-PCR) tests (53-88%) and the accuracy of the diagnosis performed by human experts (72%).
Sensors (Basel, Switzerland)
"2021-04-04T00:00:00"
[ "Paulo Lacerda", "Bruno Barros", "Célio Albuquerque", "Aura Conci" ]
10.3390/s21062174 10.1590/s1678-9946202062044 10.1016/j.rmed.2020.105980 10.1038/s41568-018-0016-5 10.1016/j.eswa.2019.01.060 10.1007/s10044-020-00950-0 10.1038/s41591-020-0931-3 10.1016/j.eng.2020.04.010 10.1016/j.inffus.2020.11.005 10.3390/diagnostics10010024 10.1016/j.apacoust.2020.107549 10.1007/s11263-015-0816-y 10.1016/j.drudis.2018.01.039 10.1016/j.media.2017.06.015 10.1148/radiol.2020200463 10.1007/s00330-020-07347-x 10.1148/radiol.2020201237 10.1162/neco.1997.9.8.1735
COVID-19 Recognition Using Ensemble-CNNs in Two New Chest X-ray Databases.
The recognition of COVID-19 infection from X-ray images is an emerging field in the machine learning and computer vision community. Despite the great efforts that have been made in this field since the emergence of COVID-19 (2019), the field still suffers from two drawbacks. First, the number of available X-ray scans labeled as COVID-19-infected is relatively small. Second, all the works that have been carried out in the field are separate; there are no unified data, classes, and evaluation protocols. In this work, based on public and newly collected data, we propose two X-ray COVID-19 databases: a three-class COVID-19 dataset and a five-class COVID-19 dataset. For both databases, we evaluate different deep learning architectures. Moreover, we propose an Ensemble-CNNs approach which outperforms the individual deep learning architectures and shows promising results in both databases. In other words, our proposed Ensemble-CNNs achieved high performance in the recognition of COVID-19 infection, resulting in accuracies of 100% and 98.1% in the three-class and five-class scenarios, respectively. In addition, our approach achieved promising results in the overall recognition accuracy of 75.23% and 81.0% for the three-class and five-class scenarios, respectively. We make our databases of COVID-19 X-ray scans publicly available to encourage other researchers to use them as a benchmark for their studies and comparisons.
Sensors (Basel, Switzerland)
"2021-04-04T00:00:00"
[ "Edoardo Vantaggiato", "Emanuela Paladini", "Fares Bougourzi", "Cosimo Distante", "Abdenour Hadid", "Abdelmalik Taleb-Ahmed" ]
10.3390/s21051742 10.1007/s10489-020-01888-w 10.7326/M20-1495 10.1148/radiol.2020200527 10.3389/fmed.2020.00427 10.1038/s41598-020-76550-z 10.3390/app10093233 10.1016/j.patrec.2021.01.010 10.1155/2020/8889023 10.1016/j.eswa.2020.113459 10.1049/iet-ipr.2018.6235 10.1007/s13246-020-00865-4 10.1145/3065386 10.1007/BF00994018 10.1038/s41598-020-71294-2 10.1016/j.cell.2018.02.010 10.1148/ryai.2019180041 10.1088/1757-899X/428/1/012043 10.1016/j.inffus.2020.11.007
Volume-of-Interest Aware Deep Neural Networks for Rapid Chest CT-Based COVID-19 Patient Risk Assessment.
Since December 2019, the world has been devastated by the Coronavirus Disease 2019 (COVID-19) pandemic. Emergency Departments have been experiencing situations of urgency where clinical experts, without long experience and mature means in the fight against COVID-19, have to rapidly decide the most proper patient treatment. In this context, we introduce an artificially intelligent tool for effective and efficient Computed Tomography (CT)-based risk assessment to improve treatment and patient care. In this paper, we introduce a data-driven approach built on top of volume-of-interest aware deep neural networks for automatic COVID-19 patient risk assessment (discharged, hospitalized, intensive care unit) based on lung infection quantization through segmentation and, subsequently, CT classification. We tackle the high and varying dimensionality of the CT input by detecting and analyzing only a sub-volume of the CT, the Volume-of-Interest (VoI). Differently from recent strategies that consider infected CT slices without requiring any spatial coherency between them, or use the whole lung volume by applying abrupt and lossy volume down-sampling, we assess only the "most infected volume" composed of slices at its original spatial resolution. To achieve the above, we create, present and publish a new labeled and annotated CT dataset with 626 CT samples from COVID-19 patients. The comparison against such strategies proves the effectiveness of our VoI-based approach. We achieve remarkable performance on patient risk assessment evaluated on balanced data by reaching 88.88%, 89.77%, 94.73% and 88.88% accuracy, sensitivity, specificity and F1-score, respectively.
International journal of environmental research and public health
"2021-04-04T00:00:00"
[ "Anargyros Chatzitofis", "Pierandrea Cancian", "Vasileios Gkitsas", "Alessandro Carlucci", "Panagiotis Stalidis", "Georgios Albanis", "Antonis Karakottas", "Theodoros Semertzidis", "Petros Daras", "Caterina Giannitto", "Elena Casiraghi", "Federica Mrakic Sposta", "Giulia Vatteroni", "Angela Ammirabile", "Ludovica Lofino", "Pasquala Ragucci", "Maria Elena Laino", "Antonio Voza", "Antonio Desai", "Maurizio Cecconi", "Luca Balzarini", "Arturo Chiti", "Dimitrios Zarpalas", "Victor Savevski" ]
10.3390/ijerph18062842 10.1038/s41591-020-0820-9 10.21037/atm.2020.02.06 10.1056/NEJMoa2001017 10.1056/NEJMoa2001316 10.1148/radiol.2020200230 10.1148/radiol.2020200343 10.1007/s11547-020-01269-w 10.1109/ACCESS.2020.3034032 10.7150/thno.45985 10.14245/ns.1938396.198 10.1038/s41591-018-0316-z 10.1109/ACCESS.2020.2971576 10.1136/bmj.m1328 10.1148/radiol.2018172986 10.1148/ryai.2019180091 10.1038/s41591-020-0931-3 10.1007/s10916-020-01562-1 10.1038/s41467-020-17971-2 10.1148/ryct.2020200034 10.1007/s13042-017-0645-0 10.1016/j.acra.2019.07.006 10.1155/2020/9756518 10.1109/TMI.2020.2994908 10.1148/radiol.2020202439 10.1038/s41591-019-0447-x 10.1148/radiol.2020201473 10.1016/j.eng.2020.04.010 10.1148/radiol.2020200905 10.1016/j.media.2020.101860 10.1109/5254.708428 10.1016/j.bj.2020.08.003 10.1023/A:1010933404324 10.1016/j.jksuci.2020.12.010 10.1038/s41598-020-71294-2 10.1007/s10044-020-00950-0 10.1016/j.media.2020.101844 10.1109/TMI.2020.2996645 10.1007/s10462-020-09825-6 10.1016/j.bbe.2019.03.001 10.1007/s00330-020-07013-2
Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images.
Computer-aided diagnosis for the reliable and fast detection of coronavirus disease (COVID-19) has become a necessity to prevent the spread of the virus during the pandemic and to ease the burden on the healthcare system. Chest X-ray (CXR) imaging has several advantages over other imaging and detection techniques. Numerous works have been reported on COVID-19 detection from smaller sets of original X-ray images. However, the effect of image enhancement and lung segmentation on COVID-19 detection in a large dataset has not been reported in the literature. We have compiled a large X-ray dataset (COVQU) consisting of 18,479 CXR images with 8851 normal, 6012 non-COVID lung infection, and 3616 COVID-19 CXR images and their corresponding ground truth lung masks. To the best of our knowledge, this is the largest public COVID-19-positive database with lung masks. Five different image enhancement techniques: histogram equalization (HE), contrast limited adaptive histogram equalization (CLAHE), image complement, gamma correction, and balance contrast enhancement technique (BCET) were used to investigate the effect of image enhancement techniques on COVID-19 detection. A novel U-Net model was proposed and compared with the standard U-Net model for lung segmentation. Six different pre-trained Convolutional Neural Networks (CNNs) (ResNet18, ResNet50, ResNet101, InceptionV3, DenseNet201, and ChexNet) and a shallow CNN model were investigated on the plain and segmented lung CXR images. The novel U-Net model showed an accuracy, Intersection over Union (IoU), and Dice coefficient of 98.63%, 94.3%, and 96.94%, respectively, for lung segmentation. The gamma correction-based enhancement technique outperforms other techniques in detecting COVID-19 from both the plain and the segmented lung CXR images.
Classification performance from plain CXR images is slightly better than from the segmented lung CXR images; however, the reliability of network performance is significantly improved for the segmented lung images, as observed using the visualization technique. The accuracy, precision, sensitivity, F1-score, and specificity were 95.11%, 94.55%, 94.56%, 94.53%, and 95.59%, respectively, for the segmented lung images. The proposed approach, with very reliable and comparable performance, will boost fast and robust COVID-19 detection using chest X-ray images.
Computers in biology and medicine
"2021-04-03T00:00:00"
[ "Tawsifur Rahman", "Amith Khandakar", "Yazan Qiblawey", "Anas Tahir", "Serkan Kiranyaz", "Saad Bin Abul Kashem", "Mohammad Tariqul Islam", "Somaya Al Maadeed", "Susu M Zughaier", "Muhammad Salman Khan", "Muhammad E H Chowdhury" ]
10.1016/j.compbiomed.2021.104319 10.1016/j.chest.2020.04.010 10.1109/ACCESS.2020.3010287
Semi-supervised learning for an improved diagnosis of COVID-19 in CT images.
Coronavirus disease 2019 (COVID-19) has spread all over the world. Although the real-time reverse-transcription polymerase chain reaction (RT-PCR) test has been used as the primary diagnostic tool for COVID-19, CT-based diagnostic tools have been suggested to improve diagnostic accuracy and reliability. Herein we propose a semi-supervised deep neural network for improved detection of COVID-19. The proposed method utilizes CT images in a supervised and unsupervised manner to improve the accuracy and robustness of COVID-19 diagnosis. Both labeled and unlabeled CT images are employed. Labeled CT images are used for supervised learning. Unlabeled CT images are utilized for unsupervised learning in a way that the feature representations are invariant to perturbations in CT images. To systematically evaluate the proposed method, two COVID-19 CT datasets and three public CT datasets with no COVID-19 CT images are employed. In distinguishing COVID-19 from non-COVID-19 CT images, the proposed method achieves an overall accuracy of 99.83%, sensitivity of 0.9286, specificity of 0.9832, and positive predictive value (PPV) of 0.9192. The results are consistent between the COVID-19 challenge dataset and the public CT datasets. For discriminating between COVID-19 and common pneumonia CT images, the proposed method obtains 97.32% accuracy, 0.9971 sensitivity, 0.9598 specificity, and 0.9326 PPV. Moreover, the comparative experiments with respect to supervised learning and training strategies demonstrate that the proposed method is able to improve diagnostic accuracy and robustness without exhaustive labeling. The proposed semi-supervised method, exploiting both supervised and unsupervised learning, facilitates an accurate and reliable diagnosis for COVID-19, leading to improved patient care and management.
PloS one
"2021-04-02T00:00:00"
[ "Chang Hee Han", "Misuk Kim", "Jin Tae Kwak" ]
10.1371/journal.pone.0249450 10.1056/NEJMoa2001017 10.1038/s41591-020-0820-9 10.1002/jmv.25689 10.1016/S0140-6736(20)30185-9 10.3346/jkms.2020.35.e223 10.1148/radiol.2020200642 10.1148/radiol.2020200274 10.2214/AJR.20.22954 10.1148/radiol.2020200463 10.1148/radiol.2020200330 10.1148/radiol.2020200343 10.1146/annurev-bioeng-071516-044442 10.1109/TMI.2016.2528162 10.1016/S0140-6736(18)31645-3 10.1001/jama.2017.18152 10.1001/jama.2017.14585 10.1016/j.media.2019.101563 10.1038/nature14539 10.1109/RBME.2020.2987975 10.2196/19569 10.1016/j.eng.2020.04.010 10.1183/13993003.00775-2020 10.1109/TMI.2020.2994908 10.1109/TMI.2020.2995965 10.1109/TMI.2020.2996256 10.1109/TMI.2020.2995508 10.1109/TMI.2020.2996645 10.1148/ryct.2020200075 10.1007/s10994-019-05855-6 10.1109/TMI.2018.2876510 10.1016/j.chaos.2020.110153
Automated Detection of COVID-19 Cases on Radiographs using Shape-Dependent Fibonacci-p Patterns.
The coronavirus (COVID-19) pandemic has been adversely affecting people's health globally. To diminish the effect of this widespread pandemic, it is essential to detect COVID-19 cases as quickly as possible. Chest radiographs are less expensive and are a widely available imaging modality for detecting chest pathology compared with CT images. They play a vital role in early prediction and developing treatment plans for suspected or confirmed COVID-19 chest infection patients. In this paper, a novel shape-dependent Fibonacci-p patterns-based feature descriptor using a machine learning approach is proposed. Computer simulations show that the presented system (1) increases the effectiveness of differentiating COVID-19, viral pneumonia, and normal conditions, (2) is effective on small datasets, and (3) has faster inference time compared to deep learning methods with comparable performance. Computer simulations are performed on two publicly available datasets: (a) the Kaggle dataset, and (b) the COVIDGR dataset. To assess the performance of the presented system, various evaluation parameters, such as accuracy, recall, specificity, precision, and F1-score are used. Nearly 100% differentiation between normal and COVID-19 radiographs is observed for the three-class classification scheme using the lung area-specific Kaggle radiographs, while a recall of 72.65 ± 6.83 and a specificity of 77.72 ± 8.06 are observed for the COVIDGR dataset.
IEEE journal of biomedical and health informatics
"2021-04-01T00:00:00"
[ "Karen Panetta", "Foram Sanghavi", "Sos Agaian", "Neel Madan" ]
10.1109/JBHI.2021.3069798 10.1016/j.compbiomed.2020.103792
Federated deep learning for detecting COVID-19 lung abnormalities in CT: a privacy-preserving multinational validation study.
Data privacy mechanisms are essential for rapidly scaling medical training databases to capture the heterogeneity of patient data distributions toward robust and generalizable machine learning systems. In the current COVID-19 pandemic, a major focus of artificial intelligence (AI) is interpreting chest CT, which can be readily used in the assessment and management of the disease. This paper demonstrates the feasibility of a federated learning method for detecting COVID-19 related CT abnormalities with external validation on patients from a multinational study. We recruited 132 patients from seven different centers across multiple countries, with three internal hospitals from Hong Kong for training and testing, and four external, independent datasets from Mainland China and Germany for validating model generalizability. We also conducted case studies on longitudinal scans for automated estimation of lesion burden for hospitalized COVID-19 patients. We explore federated learning algorithms to develop a privacy-preserving AI model for COVID-19 medical image diagnosis with good generalization capability on unseen multinational datasets. Federated learning could provide an effective mechanism during pandemics to rapidly develop clinically useful AI across institutions and countries, overcoming the burden of central aggregation of large amounts of sensitive data.
NPJ digital medicine
"2021-03-31T00:00:00"
[ "Qi Dou", "Tiffany Y So", "Meirui Jiang", "Quande Liu", "Varut Vardhanabhuti", "Georgios Kaissis", "Zeju Li", "Weixin Si", "Heather H C Lee", "Kevin Yu", "Zuxin Feng", "Li Dong", "Egon Burian", "Friederike Jungmann", "Rickmer Braren", "Marcus Makowski", "Bernhard Kainz", "Daniel Rueckert", "Ben Glocker", "Simon C H Yu", "Pheng Ann Heng" ]
10.1038/s41746-021-00431-6 10.1038/s41591-020-0824-5 10.1038/s42256-020-0181-6 10.1038/s42256-020-0184-3 10.1038/s41591-018-0107-6 10.1038/s42256-020-0186-1 10.1038/s41746-020-00323-1 10.1038/s41746-019-0148-3 10.1038/s41591-018-0316-z 10.1038/s41598-020-69250-1 10.1038/s42256-020-0180-7 10.1038/s42256-020-0185-2 10.3390/jcm9051514 10.1148/radiol.2020200463 10.1016/j.cell.2020.04.045 10.1109/RBME.2020.2987975 10.1117/1.JMI.5.3.036501 10.1148/radiology.143.1.7063747 10.2307/2531595 10.1093/biomet/26.4.404 10.1109/TMI.2020.2974574 10.1371/journal.pmed.1002683 10.1148/rg.2017170077 10.1186/s41747-020-00173-2
Deep learning for diagnosis of COVID-19 using 3D CT scans.
A new pneumonia-type coronavirus, COVID-19, recently emerged in Wuhan, China. COVID-19 has subsequently infected many people and caused many deaths worldwide. Isolating infected people is one of the methods of preventing the spread of this virus. CT scans provide detailed imaging of the lungs and assist radiologists in diagnosing COVID-19 in hospitals. However, a person's CT scan contains hundreds of slices, and the diagnosis of COVID-19 using such scans can lead to delays in hospitals. Artificial intelligence techniques could assist radiologists with rapidly and accurately detecting COVID-19 infection from these scans. This paper proposes an artificial intelligence (AI) approach to classify COVID-19 and normal CT volumes. The proposed AI method uses the ResNet-50 deep learning model to predict COVID-19 on each CT image of a 3D CT scan. Then, this AI method fuses the image-level predictions to diagnose COVID-19 on a 3D CT volume. We show that the proposed deep learning model provides an AUC value of 96% for detecting COVID-19 on CT scans.
Computers in biology and medicine
"2021-03-30T00:00:00"
[ "Sertan Serte", "Hasan Demirel" ]
10.1016/j.compbiomed.2021.104306
A Cascade-SEME network for COVID-19 detection in chest x-ray images.
The worldwide spread of the SARS-CoV-2 virus poses unprecedented challenges to medical resources and infection prevention and control measures around the world. In this context, a rapid and effective detection method for COVID-19 can not only relieve the pressure on the medical system but also find and isolate patients in time, which can, to a certain extent, slow down the development of the epidemic. In this paper, we propose a method that can quickly and accurately determine whether a case of pneumonia is viral, and classify viral pneumonia in a fine-grained way to diagnose COVID-19. We propose a Cascade Squeeze-Excitation and Moment Exchange (Cascade-SEME) framework that can effectively detect COVID-19 cases by evaluating chest x-ray images, where SE is an attention structure we designed within the network, and ME is a method for image enhancement in the feature dimension. The framework integrates a model for coarse-level detection of virus cases among other forms of lung infection, and a model for fine-grained categorisation of pneumonia types identifying COVID-19 cases. In addition, a Regional Learning approach is proposed to mitigate the impact of non-lesion features on network training. The network output is also visualised, highlighting the likely areas of lesion, to assist experts' assessment and diagnosis of COVID-19. Three datasets were used: a set of chest x-ray images for classification with bacterial pneumonia, viral pneumonia and normal chest x-rays, a COVID chest x-ray dataset with COVID-19, and a Lung Segmentation dataset containing 1000 chest x-rays with masks in the lung region. We evaluated all the models on the test set.
The results show that the proposed SEME structure significantly improves the performance of the models: in the task of pneumonia infection type diagnosis, the sensitivity, specificity, accuracy and F1 score of ResNet50 with the SEME structure are significantly improved in each category, and the accuracy and AUC on the whole test set are also enhanced; in the COVID-19 detection task, the evaluation results show that when the SEME structure is added, the sensitivities, specificities, accuracy and F1 scores of ResNet50 and DenseNet169 are improved. Although the sensitivities and specificities are not significantly promoted, SEME balances these two significant indicators well. Regional Learning also plays an important role. Experiments show that Regional Learning can effectively correct the impact of non-lesion features on the network, which can be seen with the Grad-CAM method. Experiments show that after the application of the SEME structure in the network, the performance of SEME-ResNet50 and SEME-DenseNet169 on both datasets shows a clear enhancement, and the proposed Regional Learning method effectively directs the network's attention to relevant pathological regions in the lung radiograph, ensuring the performance of the proposed framework even when a small training set is used. The visual interpretation step using Grad-CAM finds that the regions of attention on radiographs of different types of pneumonia are located in different regions of the lungs.
Medical physics
"2021-03-30T00:00:00"
[ "Dailin Lv", "Yaqi Wang", "Shuai Wang", "Qianni Zhang", "Wuteng Qi", "Yunxiang Li", "Lingling Sun" ]
10.1002/mp.14711 10.1007/s11263-019-01228-7 10.1109/Confluence47617.2020.9057809
Regarding "Serial Quantitative Chest CT Assessment of COVID-19: Deep-Learning Approach".
null
Radiology. Cardiothoracic imaging
"2021-03-30T00:00:00"
[ "Marcelo Straus Takahashi", "Matheus Ribeiro Furtado de Mendonça", "Ian Pan", "Rogerio Zaia Pinetti", "Felipe C Kitamura" ]
10.1148/ryct.2020200242
Longitudinal Assessment of COVID-19 Using a Deep Learning-based Quantitative CT Pipeline: Illustration of Two Cases.
null
Radiology. Cardiothoracic imaging
"2021-03-30T00:00:00"
[ "Yukun Cao", "Zhanwei Xu", "Jianjiang Feng", "Cheng Jin", "Xiaoyu Han", "Hanping Wu", "Heshui Shi" ]
10.1148/ryct.2020200082
Serial Quantitative Chest CT Assessment of COVID-19: A Deep Learning Approach.
To quantitatively evaluate lung burden changes in patients with coronavirus disease 2019 (COVID-19) using serial CT scans and an automated deep learning method. Patients with COVID-19 who underwent chest CT between January 1 and February 3, 2020, were retrospectively evaluated. The patients were divided into mild, moderate, severe, and critical types, according to their baseline clinical, laboratory, and CT findings. CT lung opacification percentages of the whole lung and five lobes were automatically quantified by a commercial deep learning software and compared with those at follow-up CT scans. Longitudinal changes of the CT quantitative parameter were also compared among the four clinical types. A total of 126 patients with COVID-19 (mean age, 52 years ± 15 [standard deviation]; 53.2% males) were evaluated, including six mild, 94 moderate, 20 severe, and six critical cases. The CT-derived opacification percentage was significantly different among clinical groups at baseline, gradually progressing from mild to critical type (all The quantification of lung opacification in COVID-19 measured at chest CT by using a commercially available deep learning-based tool was significantly different among groups with different clinical severity. This approach could potentially eliminate the subjectivity in the initial assessment and follow-up of pulmonary findings in COVID-19.
Radiology. Cardiothoracic imaging
"2021-03-30T00:00:00"
[ "Lu Huang", "Rui Han", "Tao Ai", "Pengxin Yu", "Han Kang", "Qian Tao", "Liming Xia" ]
10.1148/ryct.2020200075 10.1016/S2213-2600(20)30076-x
Deep-chest: Multi-classification deep learning model for diagnosing COVID-19, pneumonia, and lung cancer chest diseases.
Coronavirus Disease (COVID-19) has been declared a pandemic and is spreading rapidly throughout the world. Early detection of COVID-19 may protect many infected people. Unfortunately, COVID-19 can be mistakenly diagnosed as pneumonia or lung cancer, which, with fast spread in the chest cells, can lead to patient death. The most commonly used diagnosis methods for these three diseases are chest X-ray and computed tomography (CT) images. In this paper, a multi-classification deep learning model for diagnosing COVID-19, pneumonia, and lung cancer from a combination of chest X-ray and CT images is proposed. This combination has been used because chest X-ray is less powerful in the early stages of the disease, while a CT scan of the chest is useful even before symptoms appear, and CT can precisely detect the abnormal features that are identified in images. In addition, using these two types of images will increase the dataset size, which will increase the classification accuracy. To the best of our knowledge, no other deep learning model distinguishing between these diseases is found in the literature. In the present work, the performance of four architectures is considered, namely: VGG19-CNN, ResNet152V2, ResNet152V2 + Gated Recurrent Unit (GRU), and ResNet152V2 + Bidirectional GRU (Bi-GRU). A comprehensive evaluation of different deep learning architectures is provided using public digital chest X-ray and CT datasets with four classes (i.e., Normal, COVID-19, Pneumonia, and Lung cancer). From the results of the experiments, it was found that the VGG19+CNN model outperforms the three other proposed models. The VGG19+CNN model achieved 98.05% accuracy (ACC), 98.05% recall, 98.43% precision, 99.5% specificity (SPC), 99.3% negative predictive value (NPV), 98.24% F1 score, 97.7% Matthew's correlation coefficient (MCC), and 99.66% area under the curve (AUC) based on X-ray and CT images.
Computers in biology and medicine
"2021-03-29T00:00:00"
[ "Dina MIbrahim", "Nada MElshennawy", "Amany MSarhan" ]
10.1016/j.compbiomed.2021.104348
Prognostication of patients with COVID-19 using artificial intelligence based on chest x-rays and clinical data: a retrospective study.
Chest x-ray is a relatively accessible, inexpensive, fast imaging modality that might be valuable in the prognostication of patients with COVID-19. We aimed to develop and evaluate an artificial intelligence system using chest x-rays and clinical data to predict disease severity and progression in patients with COVID-19. We did a retrospective study in multiple hospitals in the University of Pennsylvania Health System in Philadelphia, PA, USA, and Brown University affiliated hospitals in Providence, RI, USA. Patients who presented to a hospital in the University of Pennsylvania Health System via the emergency department, with a diagnosis of COVID-19 confirmed by RT-PCR and with an available chest x-ray from their initial presentation or admission, were retrospectively identified and randomly divided into training, validation, and test sets (7:1:2). Using the chest x-rays as input to an EfficientNet deep neural network and clinical data, models were trained to predict the binary outcome of disease severity (ie, critical or non-critical). The deep-learning features extracted from the model and clinical data were used to build time-to-event models to predict the risk of disease progression. The models were externally tested on patients who presented to an independent multicentre institution, Brown University affiliated hospitals, and compared with severity scores provided by radiologists. 1834 patients who presented via the University of Pennsylvania Health System between March 9 and July 20, 2020, were identified and assigned to the model training (n=1285), validation (n=183), or testing (n=366) sets. 475 patients who presented via the Brown University affiliated hospitals between March 1 and July 18, 2020, were identified for external testing of the models. 
When chest x-rays were added to clinical data for severity prediction, area under the receiver operating characteristic curve (ROC-AUC) increased from 0·821 (95% CI 0·796-0·828) to 0·846 (0·815-0·852; p<0·0001) on internal testing and 0·731 (0·712-0·738) to 0·792 (0·780-0·803; p<0·0001) on external testing. When deep-learning features were added to clinical data for progression prediction, the concordance index (C-index) increased from 0·769 (0·755-0·786) to 0·805 (0·800-0·820; p<0·0001) on internal testing and 0·707 (0·695-0·729) to 0·752 (0·739-0·764; p<0·0001) on external testing. The image and clinical data combined model had significantly better prognostic performance than combined severity scores and clinical data on internal testing (C-index 0·805 vs 0·781; p=0·0002) and external testing (C-index 0·752 vs 0·715; p<0·0001). In patients with COVID-19, artificial intelligence based on chest x-rays had better prognostic performance than clinical data or radiologist-derived severity scores. Using artificial intelligence, chest x-rays can augment clinical data in predicting the risk of progression to critical illness in patients with COVID-19. Brown University, Amazon Web Services Diagnostic Development Initiative, Radiological Society of North America, National Cancer Institute and National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health.
The Lancet. Digital health
"2021-03-29T00:00:00"
[ "ZhichengJiao", "Ji WhaeChoi", "KaseyHalsey", "Thi My LinhTran", "BenHsieh", "DongcuiWang", "FeyisopeEweje", "RobinWang", "KenChang", "JingWu", "Scott ACollins", "Thomas YYi", "Andrew TDelworth", "TaoLiu", "Terrance THealey", "ShaoleiLu", "JianxinWang", "XueFeng", "Michael KAtalay", "LiYang", "MichaelFeldman", "Paul J LZhang", "Wei-HuaLiao", "YongFan", "Harrison XBai" ]
10.1016/S2589-7500(21)00039-X 10.1101/2020.05.20.20108159 10.1101/2020.05.09.20096560
COVID-19 in CXR: From Detection and Severity Scoring to Patient Disease Monitoring.
This work estimates the severity of pneumonia in COVID-19 patients and reports the findings of a longitudinal study of disease progression. It presents a deep learning model for simultaneous detection and localization of pneumonia in chest X-ray (CXR) images, which is shown to generalize to COVID-19 pneumonia. The localization maps are utilized to calculate a "Pneumonia Ratio" which indicates disease severity. The assessment of disease severity serves to build a temporal disease extent profile for hospitalized patients. To validate the model's applicability to the patient monitoring task, we developed a validation strategy which involves a synthesis of Digital Reconstructed Radiographs (DRRs - synthetic X-ray) from serial CT scans; we then compared the disease progression profiles that were generated from the DRRs to those that were generated from CT volumes.
IEEE journal of biomedical and health informatics
"2021-03-27T00:00:00"
[ "MaayanFrid-Adar", "RulaAmer", "OphirGozes", "JannetteNassar", "HayitGreenspan" ]
10.1109/JBHI.2021.3069169 10.1109/RBME.2020.2987975 10.1109/ACCESS.2020.3003810 10.3390/electronics9091439 10.1007/s10489-020-01867-1
Generalized chest CT and lab curves throughout the course of COVID-19.
A better understanding of temporal relationships between chest CT and labs may provide a reference for disease severity over the disease course. Generalized curves of lung opacity volume and density over time can be used as standardized references from well before symptoms develop to over a month after recovery, when residual lung opacities remain. 739 patients with COVID-19 underwent CT and RT-PCR in an outbreak setting between January 21st and April 12th, 2020. 29 of 739 patients had serial exams (121 CTs and 279 laboratory measurements) over 50 ± 16 days, with an average of 4.2 sequential CTs each. Sequential volumes of total lung, overall opacity and opacity subtypes (ground glass opacity [GGO] and consolidation) were extracted using deep learning and manual segmentation. Generalized temporal curves of CT and laboratory measurements were correlated. Lung opacities appeared 3.4 ± 2.2 days prior to symptom onset. Opacity peaked 1 day after symptom onset. GGO onset was earlier and resolved later than consolidation. Lactate dehydrogenase and C-reactive protein peaked earlier than procalcitonin and leukopenia. The temporal relationships of quantitative CT features and clinical labs have distinctive patterns and peaks in relation to symptom onset, which may inform early clinical course in patients with mild COVID-19 pneumonia, or may shed light upon chronic lung effects or mechanisms of medical countermeasures in clinical trials.
Scientific reports
"2021-03-27T00:00:00"
[ "Michael TKassin", "NicoleVarble", "MaximeBlain", "ShengXu", "Evrim BTurkbey", "StephanieHarmon", "DongYang", "ZiyueXu", "HolgerRoth", "DaguangXu", "MonaFlores", "AmelAmalou", "KaiyunSun", "SameerKadri", "FrancescaPatella", "MaurizioCariati", "AliceScarabelli", "ElviraStellato", "Anna MariaIerardi", "GianpaoloCarrafiello", "PengAn", "BarisTurkbey", "Bradford JWood" ]
10.1038/s41598-021-85694-5 10.1148/radiol.2020201365 10.1016/S0140-6736(20)30728-5 10.1148/radiol.2020203173 10.7326/M20-1495 10.1148/radiol.2020201433 10.7150/thno.46465 10.1007/s00330-020-06817-6 10.1097/RLI.0000000000000689 10.1007/s42058-020-00034-2 10.1148/radiol.2020200843 10.1148/radiol.2020200370 10.1148/radiol.2020200463 10.1016/j.ejrad.2020.109009 10.1148/radiol.2020200230 10.1001/jama.2020.8259 10.1007/s00330-020-07013-2 10.1016/j.ijid.2020.05.006 10.7150/thno.45985 10.3348/kjr.2020.0293 10.1016/j.ejrad.2020.109202 10.1038/s41598-020-68057-4 10.1007/s00330-020-06976-6 10.1038/s41591-020-0869-5 10.2214/ajr.175.5.1751329 10.1371/journal.pone.0205490 10.1016/j.ejcts.2004.02.004 10.1148/radiol.2016142975 10.1007/s00330-016-4317-3
Metaheuristic-based Deep COVID-19 Screening Model from Chest X-Ray Images.
COVID-19 has affected the whole world drastically. A huge number of people have lost their lives due to this pandemic. Early detection of COVID-19 infection is helpful for treatment and quarantine. Therefore, many researchers have designed deep learning models for the early diagnosis of COVID-19-infected patients. However, deep learning models suffer from overfitting and hyperparameter-tuning issues. To overcome these issues, in this paper, a metaheuristic-based deep COVID-19 screening model is proposed for X-ray images. The modified AlexNet architecture is used for feature extraction and classification of the input images. The Strength Pareto evolutionary algorithm-II (SPEA-II) is used to tune the hyperparameters of the modified AlexNet. The proposed model is tested on a four-class (i.e., COVID-19, tuberculosis, pneumonia, or healthy) dataset. Finally, comparisons are drawn between the existing and the proposed models.
Journal of healthcare engineering
"2021-03-26T00:00:00"
[ "ManjitKaur", "VijayKumar", "VaishaliYadav", "DilbagSingh", "NareshKumar", "Nripendra NarayanDas" ]
10.1155/2021/8829829 10.1148/radiol.2303030853 10.1148/radiol.2282030593 10.1007/11677437_22 10.1109/tmi.2020.2993291 10.3390/sym12040651 10.2807/1560-7917.es.2020.25.3.2000045 10.1109/tcbb.2020.2986544 10.1016/j.radi.2020.04.017 10.1146/annurev-bioeng-071516-044442 10.1016/j.compbiomed.2020.103792 10.1016/j.irbm.2020.07.001 10.1016/j.chemolab.2020.104054 10.1016/j.compbiomed.2020.103805 10.1016/j.compbiomed.2020.103869 10.1101/2020.06.21.20136.598 10.1016/j.imu.2020.100360 10.3390/app10020559 10.1007/s13246-020-00888-x 10.1049/trit.2019.0028 10.1049/trit.2019.0051 10.1049/trit.2018.1006 10.1504/ijhm.2019.098951 10.1504/ijhm.2019.102893 10.1504/ijhm.2019.098949 10.1016/j.compeleceng.2019.03.004 10.1109/4235.797969 10.1016/j.camwa.2012.01.063 10.1007/s12652-020-02669-6
Mini-COVIDNet: Efficient Lightweight Deep Neural Network for Ultrasound Based Point-of-Care Detection of COVID-19.
Lung ultrasound (US) imaging has the potential to be an effective point-of-care test for detection of COVID-19, due to its ease of operation with minimal personal protection equipment along with easy disinfection. The current state-of-the-art deep learning models for detection of COVID-19 are heavy models that may not be easy to deploy in commonly utilized mobile platforms in point-of-care testing. In this work, we develop a lightweight, mobile-friendly, efficient deep learning model for detection of COVID-19 using lung US images. Three different classes including COVID-19, pneumonia, and healthy were included in this task. The developed network, named Mini-COVIDNet, was benchmarked with other lightweight neural network models along with a state-of-the-art heavy model. It was shown that the proposed network can achieve the highest accuracy of 83.2% and requires a training time of only 24 min. The proposed Mini-COVIDNet has 4.39 times fewer parameters than its next best performing network and requires a memory of only 51.29 MB, making the point-of-care detection of COVID-19 using lung US imaging plausible on a mobile platform. Deployment of these lightweight networks on embedded platforms shows that the proposed Mini-COVIDNet is highly versatile and provides optimal performance in terms of being accurate as well as having latency in the same order as other lightweight networks. The developed lightweight models are available at https://github.com/navchetan-awasthi/Mini-COVIDNet.
IEEE transactions on ultrasonics, ferroelectrics, and frequency control
"2021-03-24T00:00:00"
[ "NavchetanAwasthi", "AveenDayal", "Linga ReddyCenkeramaddi", "Phaneendra KYalavarthy" ]
10.1109/TUFFC.2021.3068190
COVID_SCREENET: COVID-19 Screening in Chest Radiography Images Using Deep Transfer Stacking.
Infectious diseases are highly contagious due to rapid transmission and very challenging to diagnose in the early stage. Artificial Intelligence and Machine Learning have now become strategic weapons in assisting infectious disease prevention, rapid response in diagnosis, surveillance, and management. In this paper, a bifold COVID_SCREENET architecture is introduced for providing COVID-19 screening solutions using Chest Radiography (CR) images. Transfer learning using nine pre-trained ImageNet models to extract the features of Normal, Pneumonia, and COVID-19 images is adapted in the first fold and classified using a baseline Convolutional Neural Network (CNN). A Modified Stacked Ensemble Learning (MSEL) approach is proposed in the second fold by stacking the top five pre-trained models and then combining their predictions. Experimentation is carried out in two folds: in the first fold, open-source samples are considered, and in the second fold 2216 real-time samples collected from Tamilnadu Government Hospitals, India; the screening results for COVID-19 data are 100% accurate in both cases. The proposed approach is also validated and blind-reviewed with the help of two radiologists at Thanjavur Medical College & Hospitals by collecting 2216 chest X-ray images between April and May. Based on the reports, the measures are calculated for COVID_SCREENET and it showed 100% accuracy in performing multi-class classification.
Information systems frontiers : a journal of research and innovation
"2021-03-24T00:00:00"
[ "RElakkiya", "PandiVijayakumar", "MarimuthuKaruppiah" ]
10.1007/s10796-021-10123-x 10.1016/j.asoc.2020.106642 10.1016/j.techfore.2020.120431 10.1016/j.knosys.2020.106647 10.1007/s10916-017-0861-x 10.1016/j.jpdc.2017.08.014 10.1093/clinchem/hvaa029 10.1016/j.cell.2018.02.010 10.1007/s10796-020-10028-1 10.1109/TKDE.2009.191 10.1007/s10796-020-10023-6
Deep Learning in the Detection and Diagnosis of COVID-19 Using Radiology Modalities: A Systematic Review.
The early detection and diagnosis of COVID-19 and the accurate separation of non-COVID-19 cases at the lowest cost and in the early stages of the disease are among the main challenges in the current COVID-19 pandemic. Given the novelty of the disease, diagnostic methods based on radiological images suffer from shortcomings despite their many applications in diagnostic centers. Accordingly, medical and computer researchers tend to use machine-learning models to analyze radiology images. This review study provides an overview of the current state of all models for the detection and diagnosis of COVID-19 through radiology modalities and their processing based on deep learning. According to the findings, deep learning-based models have an extraordinary capacity to offer an accurate and efficient system for the detection and diagnosis of COVID-19, the use of which in the processing of modalities would lead to a significant increase in sensitivity and specificity values. The application of deep learning in the field of COVID-19 radiologic image processing reduces false-positive and false-negative errors in the detection and diagnosis of this disease and offers a unique opportunity to provide fast, cheap, and safe diagnostic services to patients.
Journal of healthcare engineering
"2021-03-23T00:00:00"
[ "MustafaGhaderzadeh", "FarkhondehAsadi" ]
10.1155/2021/6677314 10.1016/s0140-6736(20)30211-7 10.1016/s0140-6736(20)30183-5 10.1590/0100-3984.2020.53.2e1 10.1002/14651858.CD013665 10.1016/j.xinn.2020.04.001 10.1056/nejmoa2001316 10.21037/atm.2018.04.02 10.1515/cclm-2020-0285 10.1038/d41587-020-00002-2 10.1097/moh.0000000000000322 10.1148/radiol.2020200642 10.1590/S1678-9946202062044 10.1148/ryct.2020200034 10.1101/2020.02.14.20023028 10.1016/j.diii.2020.11.008 10.2214/AJR.20.23034 10.1183/13993003.00775-2020 10.1016/j.jacr.2007.03.002 10.36416/1806-3756/e20200226 10.1016/j.chest.2020.04.003 10.1016/j.crad.2018.12.015 10.3390/s20040957 10.1126/science.1127647 10.1001/jamanetworkopen.2019.7416 10.1007/s13246-020-00865-4 10.1016/j.compbiomed.2020.103795 10.1016/j.cmpb.2020.105608 10.3390/s20113089 10.1080/07391102.2020.1788642 10.1007/s00330-020-07044-9 10.1016/j.compbiomed.2020.103792 10.1016/j.chaos.2020.109944 10.1007/s00259-020-04929-1 10.1007/s00264-020-04609-7 10.1109/access.2020.2994762 10.1097/RTI.0000000000000532 10.1007/s40846-020-00529-4 10.1007/s10489-020-01714-3 10.1007/s13246-020-00888-x 10.3390/e22050517 10.2196/19569 10.1148/radiol.2020200905 10.1016/j.compbiomed.2020.103869 10.1038/s41591-020-0931-3 10.1016/j.cmpb.2020.105532 10.1016/j.imu.2020.100360 10.33889/ijmems.2020.5.4.052 10.1016/j.compbiomed.2020.103805 10.1016/j.mehy.2020.109761 10.1007/s00330-020-06976-6 10.1016/j.ejrad.2020.109041 10.1016/j.cmpb.2020.105581 10.1080/07391102.2020.1767212 10.21037/atm.2020.03.132 10.1371/journal.pone.0235187 10.1007/s12559-020-09751-3 10.9781/ijimai.2020.04.003 10.3390/sym12040651 10.18517/ijaseit.10.2.11446 10.1016/j.irbm.2020.05.003
Computer-Aided Diagnosis of COVID-19 CT Scans Based on Spatiotemporal Information Fusion.
Coronavirus disease (COVID-19) is highly contagious and pathogenic. Currently, the diagnosis of COVID-19 is based on nucleic acid testing, but this suffers from false negatives and reporting delays. The use of lung CT scans can help screen and effectively monitor diagnosed cases. The application of computer-aided diagnosis technology can reduce the burden on doctors, which is conducive to rapid and large-scale diagnostic screening. In this paper, we proposed an automatic detection method for COVID-19 based on spatiotemporal information fusion. Using the segmentation network in the deep learning method to segment the lung area and the lesion area, the spatiotemporal information features of multiple CT scans are extracted to perform auxiliary diagnosis analysis. The performance of this method was verified on the collected dataset. We achieved the classification of COVID-19 CT scans and non-COVID-19 CT scans and analyzed the development of the patients' condition through the CT scans. The average accuracy rate is 96.7%, sensitivity is 95.2%, and F1 score is 95.9%. Each scan takes about 30 seconds for detection.
Journal of healthcare engineering
"2021-03-23T00:00:00"
[ "TianyiLi", "WeiWei", "LidanCheng", "ShengjieZhao", "ChuanjunXu", "XiaZhang", "YiZeng", "JihuaGu" ]
10.1155/2021/6649591 10.1056/nejmoa2001017 10.1126/science.367.6475.234 10.1016/j.jgar.2020.02.021 10.1016/j.jaut.2020.102433 10.1016/j.ajem.2020.03.036 10.1148/radiol.2020200330 10.1148/ryct.2020200034 10.1016/j.jinf.2020.03.007 10.1177/0846537120913033 10.1109/tbme.2018.2845706 10.1109/tmi.2019.2951439 10.1109/tbme.2018.2814538 10.1016/S0140-6736(20)30183-5 10.1148/radiol.2020200236 10.1148/radiol.2020200823 10.1148/radiol.2020200343 10.1148/radiol.2020200463 10.21203/rs.3.rs-96782/v1 10.1016/j.media.2017.06.015 10.1007/978-3-319-24574-4_28
Domain adaptation based self-correction model for COVID-19 infection segmentation in CT images.
The capability of generalization to unseen domains is crucial for deep learning models when considering real-world scenarios. However, current available medical image datasets, such as those for COVID-19 CT images, have large variations of infections and domain shift problems. To address this issue, we propose a prior knowledge driven domain adaptation and a dual-domain enhanced self-correction learning scheme. Based on the novel learning scheme, a domain adaptation based self-correction model (DASC-Net) is proposed for COVID-19 infection segmentation on CT images. DASC-Net consists of a novel attention and feature domain enhanced domain adaptation model (AFD-DA) to solve the domain shifts and a self-correction learning process to refine segmentation results. The innovations in AFD-DA include an image-level activation feature extractor with attention to lung abnormalities and a multi-level discrimination module for hierarchical feature domain alignment. The proposed self-correction learning process adaptively aggregates the learned model and corresponding pseudo labels for the propagation of aligned source and target domain information to alleviate the overfitting to noises caused by pseudo labels. Extensive experiments over three publicly available COVID-19 CT datasets demonstrate that DASC-Net consistently outperforms state-of-the-art segmentation, domain shift, and coronavirus infection segmentation methods. Ablation analysis further shows the effectiveness of the major components in our model. The DASC-Net enriches the theory of domain adaptation and self-correction learning in medical imaging and can be generalized to multi-site COVID-19 infection segmentation on CT images for clinical deployment.
Expert systems with applications
"2021-03-23T00:00:00"
[ "QiangguoJin", "HuiCui", "ChangmingSun", "ZhaopengMeng", "LeyiWei", "RanSu" ]
10.1016/j.eswa.2021.114848
Evaluation of lung involvement in COVID-19 pneumonia based on ultrasound images.
Lung ultrasound (LUS) can be an important imaging tool for the diagnosis and assessment of lung involvement. Ultrasound sonograms have been confirmed to illustrate damage to a person's lungs, which means that the correct classification and scoring of a patient's sonogram can be used to assess lung involvement. The purpose of this study was to establish a lung involvement assessment model based on deep learning. A novel multimodal channel and receptive field attention network combined with ResNeXt (MCRFNet) was proposed to classify sonograms, and the network can automatically fuse shallow features and determine the importance of different channels and receptive fields. Finally, sonogram classes were transformed into scores to evaluate lung involvement from the initial diagnosis to rehabilitation. Using multicenter and multimodal ultrasound data from 104 patients, the diagnostic model achieved 94.39% accuracy, 82.28% precision, 76.27% sensitivity, and 96.44% specificity. The lung involvement severity and the trend of COVID-19 pneumonia were evaluated quantitatively.
Biomedical engineering online
"2021-03-22T00:00:00"
[ "ZhaoyuHu", "ZhenhuaLiu", "YijieDong", "JianjianLiu", "BinHuang", "AihuaLiu", "JingjingHuang", "XujuanPu", "XiaShi", "JinhuaYu", "YangXiao", "HuiZhang", "JianqiaoZhou" ]
10.1186/s12938-021-00863-x 10.1016/j.tmaid.2020.101627 10.1148/radiol.2020200463 10.1148/radiol.2020200370 10.1378/chest.1806646 10.1136/emermed-2013-203039 10.1016/j.jcrc.2014.11.021 10.21037/jtd.2016.09.38 10.1007/s13246-020-00865-4 10.1101/2020.03.12.20027185 10.1080/03772063.2019.1575292 10.1007/s40031-019-00398-9 10.1109/Cvpr.2016.90 10.1109/Cvpr.2017.634 10.1109/TPAMI.2019.2913372 10.1007/s00134-012-2513-4 10.1164/rccm.201802-0227LE 10.1186/s13054-019-2569-4 10.1109/JBHI.2019.2936151 10.3791/58990 10.1590/S1679-45082016MD3557 10.1109/cvpr.2015.7298594 10.1097/Mcp.0000000000000468
Transfer learning-based ensemble support vector machine model for automated COVID-19 detection using lung computerized tomography scan data.
The newly discovered coronavirus disease, popularly known as COVID-19, is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and was declared a pandemic by the World Health Organization (WHO). Early-stage detection of COVID-19 is crucial for the containment of the pandemic it has caused. In this study, a transfer learning-based COVID-19 screening technique is proposed. The motivation of this study is to design an automated system that can assist medical staff, especially in areas where trained staff are outnumbered. The study investigates the potential of transfer learning-based models for automatically diagnosing diseases like COVID-19 to assist the medical force, especially in times of an outbreak. In the proposed work, a deep learning model, i.e., truncated VGG16 (Visual Geometry Group from Oxford), is implemented to screen COVID-19 CT scans. The VGG16 architecture is fine-tuned and used to extract features from CT scan images. Further, principal component analysis (PCA) is used for feature selection. For the final classification, four different classifiers, namely deep convolutional neural network (DCNN), extreme learning machine (ELM), online sequential ELM, and bagging ensemble with support vector machine (SVM), are compared. The best performing classifier, the bagging ensemble with SVM, achieved an accuracy of 95.7%, a precision of 95.8%, an area under curve (AUC) of 0.958, and an F1 score of 95.3% within 385 ms on 208 test images. The results obtained on diverse datasets prove the superiority and robustness of the proposed work. A pre-processing technique has also been proposed for radiological data. The study further compares pre-trained CNN architectures and classification models against the proposed technique.
Medical & biological engineering & computing
"2021-03-20T00:00:00"
[ "MukulSingh", "ShreyBansal", "SakshiAhuja", "Rahul KumarDubey", "Bijaya KetanPanigrahi", "NilanjanDey" ]
10.1007/s11517-020-02299-2 10.1007/s10916-020-01582-x 10.1080/14737159.2020.1757437 10.1148/radiol.2020200642 10.1016/j.cmpb.2020.105617 10.1142/S0219622020500285 10.1007/s00330-020-06801-0 10.1016/j.jiph.2020.06.028 10.1038/s41598-019-56847-4 10.1007/s10096-020-03901-z 10.1016/j.ejrad.2020.108941 10.1148/radiol.2020200527 10.1186/s40537-019-0197-0 10.1016/j.imu.2020.100427 10.1109/ACCESS.2020.3016780 10.1007/s13246-020-00888-x 10.1109/ACCESS.2020.3016780 10.1109/ACCESS.2020.3019095 10.1016/j.neucom.2005.12.126 10.1109/TNN.2006.880583
AI detection of mild COVID-19 pneumonia from chest CT scans.
An artificial intelligence model was adopted to identify mild COVID-19 pneumonia from computed tomography (CT) volumes, and its diagnostic performance was then evaluated. In this retrospective multicenter study, an atrous convolution-based deep learning model was established for the computer-assisted diagnosis of mild COVID-19 pneumonia. The dataset included 2087 chest CT exams collected from four hospitals between 1 January 2019 and 31 May 2020. The true positive rate, true negative rate, receiver operating characteristic curve, area under the curve (AUC) and convolutional feature map were used to evaluate the model. The proposed deep learning model was trained on 1538 patients and tested on an independent testing cohort of 549 patients. The overall sensitivity was 91.5% (195/213; p < 0.001, 95% CI: 89.2-93.9%), the overall specificity was 90.5% (304/336; p < 0.001, 95% CI: 88.0-92.9%) and the general AUC value was 0.955 (p < 0.001). A deep learning model can accurately detect COVID-19 and serve as an important supplement to the COVID-19 reverse transcription-polymerase chain reaction (RT-PCR) test. • The implementation of a deep learning model to identify mild COVID-19 pneumonia was confirmed to be effective and feasible. • The strategy of using a binary code instead of the region of interest label to identify mild COVID-19 pneumonia was verified. • This AI model can assist in the early screening of COVID-19 without interfering with normal clinical examinations.
European radiology
"2021-03-20T00:00:00"
[ "Jin-CaoYao", "TaoWang", "Guang-HuaHou", "DiOu", "WeiLi", "Qiao-DanZhu", "Wen-CongChen", "ChenYang", "Li-JingWang", "Li-PingWang", "Lin-YinFan", "Kai-YuanShi", "JieZhang", "DongXu", "Ya-QingLi" ]
10.1007/s00330-021-07797-x 10.1016/S0140-6736(20)30183-5 10.1038/s41586-020-2179-y 10.1001/jama.2020.2648 10.1126/science.abb4557 10.1002/jmv.26060 10.1007/s00330-020-06731-x 10.1148/radiol.2020200463 10.1007/s00330-020-06748-2 10.1056/NEJMoa2001017 10.1148/radiol.2020200905 10.1109/TMI.2020.2996645 10.1148/radiol.2019182465 10.1016/j.media.2017.06.014 10.1016/j.media.2019.07.004 10.1109/TMI.2019.2934577 10.1109/TPAMI.2017.2699184 10.1007/s00330-020-07044-9 10.1007/s00330-020-07042-x
Integration of CNN, CBMIR, and Visualization Techniques for Diagnosis and Quantification of Covid-19 Disease.
Diagnosis techniques based on medical image modalities have higher sensitivities compared to conventional RT-PCR tests. We propose two methods for diagnosing COVID-19 disease using X-ray images and differentiating it from viral pneumonia. The diagnosis section is based on deep neural networks, and the discrimination uses an image retrieval approach. Both units were trained on healthy, pneumonia, and COVID-19 images. In COVID-19 patients, the maximum intensity projection of the lung CT is visualized to a physician, and the CT Involvement Score is calculated. The performance of the CNN and image retrieval algorithms was improved by transfer learning and hashing functions. We achieved an accuracy of 97% and an overall prec@10 of 87%, respectively, concerning the CNN and the retrieval methods.
IEEE journal of biomedical and health informatics
"2021-03-19T00:00:00"
[ "SaeedMohagheghi", "MehdiAlizadeh", "Seyed MahdiSafavi", "Amir HForuzan", "Yen-WeiChen" ]
10.1109/JBHI.2021.3067333
Identification of Images of COVID-19 from Chest X-rays Using Deep Learning: Comparing COGNEX VisionPro Deep Learning 1.0™ Software with Open Source Convolutional Neural Networks.
The novel Coronavirus, COVID-19, pandemic is being considered the most crucial health calamity of the century. Many organizations have come together during this crisis and created various Deep Learning models for the effective diagnosis of COVID-19 from chest radiography images. For example, the University of Waterloo, along with Darwin AI, a start-up spin-off of this department, has designed the Deep Learning model 'COVID-Net' and created a dataset called 'COVIDx' consisting of 13,975 images across 13,870 patient cases. In this study, COGNEX's Deep Learning Software, VisionPro Deep Learning™, is used to classify these chest X-rays from the COVIDx dataset. The results are compared with the results of COVID-Net and various other state-of-the-art Deep Learning models from the open-source community. Deep Learning tools are often referred to as black boxes because humans cannot interpret how or why a model is classifying an image into a particular class. This problem is addressed by testing VisionPro Deep Learning with two settings: first, by selecting the entire image as the Region of Interest (ROI); and second, by segmenting the lungs in the first step and then doing the classification step on the segmented lungs only, instead of using the entire image. VisionPro Deep Learning results: on the entire image as the ROI it achieves an overall F score of 94.0%, and on the segmented lungs, it gets an
SN computer science
"2021-03-16T00:00:00"
[ "ArjunSarkar", "JoergVandenhirtz", "JozsefNagy", "DavidBacsa", "MitchellRiley" ]
10.1007/s42979-021-00496-w 10.1016/S0140-6736(20)30183-5 10.1001/jama.2020.1585 10.1148/ryct.2020200034 10.1056/nejmoa2002032 10.1183/13993003.00607-2020 10.1016/j.virol.2007.09.045 10.1093/infdis/jis455 10.1097/RLI.0000000000000670 10.1016/j.clinimag.2020.04.001 10.1038/s41598-020-76550-z 10.1109/ACCESS.2020.3010287 10.1145/3065386 10.1007/s10916-020-01562-1 10.1007/s10489-020-01943-6 10.1007/s13246-020-00888-x 10.1016/j.cmpb.2020.105581 10.1007/s10489-020-01867-1 10.1007/s42979-020-00209-9
Quantitative CT imaging and advanced visualization methods: potential application in novel coronavirus disease 2019 (COVID-19) pneumonia.
Increasingly, quantitative lung computed tomography (qCT)-derived metrics are providing novel insights into chronic inflammatory lung diseases, including chronic obstructive pulmonary disease, asthma, interstitial lung disease, and more. Metrics related to parenchymal, airway, and vascular anatomy together with various measures associated with lung function including regional parenchymal mechanics, air trapping associated with functional small airways disease, and dual-energy derived measures of perfused blood volume are offering the ability to characterize disease phenotypes associated with the chronic inflammatory pulmonary diseases. With the emergence of COVID-19, together with its widely varying degrees of severity, its rapid progression in some cases, and the potential for lengthy post-COVID-19 morbidity, there is a new role in applying well-established qCT-based metrics. Based on the utility of qCT tools in other lung diseases, previously validated supervised classical machine learning methods, and emerging unsupervised machine learning and deep-learning approaches, we are now able to provide desperately needed insight into the acute and the chronic phases of this inflammatory lung disease. The potential areas in which qCT imaging can be beneficial include improved accuracy of diagnosis, identification of clinically distinct phenotypes, improvement of disease prognosis, stratification of care, and early objective evaluation of intervention response. There is also a potential role for qCT in evaluating an increasing population of post-COVID-19 lung parenchymal changes such as fibrosis. In this work, we discuss the basis of various lung qCT methods, using case-examples to highlight their potential application as a tool for the exploration and characterization of COVID-19, and offer scanning protocols to serve as templates for imaging the lung such that these established qCT analyses have the best chance at yielding the much needed new insights.
BJR open
"2021-03-16T00:00:00"
[ "Prashant Nagpal", "Junfeng Guo", "Kyung Min Shin", "Jae-Kwang Lim", "Ki Beom Kim", "Alejandro P Comellas", "David W Kaczka", "Samuel Peterson", "Chang Hyun Lee", "Eric A Hoffman" ]
10.1259/bjro.20200043
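Many of the qCT parenchymal metrics discussed above reduce to simple voxel statistics. A minimal sketch in Python (illustrative only; the function name and toy HU values are ours, not from the paper): the low-attenuation fraction at a Hounsfield-unit threshold, which underlies common emphysema (-950 HU, inspiratory) and air-trapping (-856 HU, expiratory) measures.

```python
def low_attenuation_fraction(hu_values, threshold_hu=-950):
    """Fraction of lung voxels at or below a Hounsfield-unit threshold.

    A -950 HU threshold on inspiratory CT is a common emphysema proxy;
    -856 HU on expiratory CT is often used for air trapping.
    """
    if not hu_values:
        raise ValueError("no lung voxels supplied")
    below = sum(1 for v in hu_values if v <= threshold_hu)
    return below / len(hu_values)

# toy voxel sample: 2 of 8 voxels at or below -950 HU
voxels = [-980, -960, -940, -900, -870, -820, -700, -500]
print(low_attenuation_fraction(voxels))        # 0.25
print(low_attenuation_fraction(voxels, -856))  # 0.625
```

In practice such metrics are computed over a segmented lung mask of a full CT volume; the list here stands in for that mask's voxels.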
Deep metric learning-based image retrieval system for chest radiograph and its clinical applications in COVID-19.
In recent years, deep learning-based image analysis methods have been widely applied in computer-aided detection, diagnosis and prognosis, and have shown their value during the public health crisis of the novel coronavirus disease 2019 (COVID-19) pandemic. The chest radiograph (CXR) has played a crucial role in COVID-19 patient triaging, diagnosing and monitoring, particularly in the United States. Considering the mixed and unspecific signals in CXR, an image retrieval model of CXR that provides both similar images and associated clinical information can be more clinically meaningful than a direct image diagnostic model. In this work, we develop a novel CXR image retrieval model based on deep metric learning. Unlike traditional diagnostic models, which aim at learning the direct mapping from images to labels, the proposed model aims at learning an optimized embedding space of images, where images with the same labels and similar contents are pulled together. The proposed model utilizes a multi-similarity loss with a hard-mining sampling strategy and an attention mechanism to learn the optimized embedding space, and provides similar images, visualizations of disease-related attention maps and useful clinical information to assist clinical decisions. The model is trained and validated on an international multi-site COVID-19 dataset collected from 3 different sources. Experimental results of COVID-19 image retrieval and diagnosis tasks show that the proposed model can serve as a robust solution for CXR analysis and patient management for COVID-19. The model is also tested for transferability on a different clinical decision support task for COVID-19, where the pre-trained model is applied to extract image features from a new dataset without any further training. The extracted features are then combined with COVID-19 patients' vitals, lab tests and medical histories to predict the possibility of airway intubation within 72 hours, which is strongly associated with patient prognosis and is crucial for patient care and hospital resource planning. These results demonstrate that our deep metric learning-based image retrieval model is highly effective in CXR retrieval, diagnosis and prognosis, and thus has great clinical value for the treatment and management of COVID-19 patients.
Medical image analysis
"2021-03-13T00:00:00"
[ "Aoxiao Zhong", "Xiang Li", "Dufan Wu", "Hui Ren", "Kyungsang Kim", "Younggon Kim", "Varun Buch", "Nir Neumark", "Bernardo Bizzo", "Won Young Tak", "Soo Young Park", "Yu Rim Lee", "Min Kyu Kang", "Jung Gil Park", "Byung Seok Kim", "Woo Jin Chung", "Ning Guo", "Ittai Dayan", "Mannudeep K Kalra", "Quanzheng Li" ]
10.1016/j.media.2021.101993
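The retrieval step the abstract describes, ranking gallery images by similarity to a query in the learned embedding space, can be sketched as follows (a toy cosine-similarity ranker in plain Python; the real model learns its embeddings with the multi-similarity loss, which is not reproduced here, and all names and vectors are illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def retrieve(query, gallery, k=2):
    """Indices of the k gallery embeddings most similar to the query."""
    ranked = sorted(range(len(gallery)),
                    key=lambda i: cosine(query, gallery[i]),
                    reverse=True)
    return ranked[:k]

# toy 2-D embeddings: items 0 and 1 lie near the query, item 2 does not
gallery = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(retrieve([1.0, 0.05], gallery, k=2))  # [0, 1]
```

The retrieved indices would then be mapped back to the stored images and their associated clinical records.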
Automatic deep learning-based pleural effusion classification in lung ultrasound images for respiratory pathology diagnosis.
Lung ultrasound (LUS) imaging as a point-of-care diagnostic tool for lung pathologies has been proven superior to X-ray and comparable to CT, enabling earlier and more accurate diagnosis in real time at the patient's bedside. The main limitation to widespread use is its dependence on operator training and experience. COVID-19 lung ultrasound findings predominantly reflect a pneumonitis pattern, with pleural effusion being infrequent. However, pleural effusion is easy to detect and quantify; it was therefore selected as the subject of this study, which aims to develop an automated system for the interpretation of LUS of pleural effusion. A LUS dataset collected at the Royal Melbourne Hospital consisted of 623 videos containing 99,209 2D ultrasound images of 70 patients, acquired using a phased-array transducer. A standardized protocol was followed that involved scanning six anatomical regions providing complete coverage of the lungs for diagnosis of respiratory pathology. This protocol, combined with a deep learning algorithm using a Spatial Transformer Network, provides a basis for automatic pathology classification at the image level. In this work, the deep learning model was trained using supervised and weakly supervised approaches, which used frame- and video-based ground-truth labels, respectively. The reference was expert clinician image interpretation. Both approaches show comparable accuracy scores on the test set of 92.4% and 91.1%, respectively, which are not statistically significantly different. However, the video-based labelling approach requires significantly less ground-truth labelling effort from clinical experts.
Physica medica : PM : an international journal devoted to the applications of physics to medicine and biology : official journal of the Italian Association of Biomedical Physics (AIFB)
"2021-03-12T00:00:00"
[ "Chung-Han Tsai", "Jeroen van der Burgt", "Damjan Vukovic", "Nancy Kaur", "Libertario Demi", "David Canty", "Andrew Wang", "Alistair Royse", "Colin Royse", "Kavi Haji", "Jason Dowling", "Girija Chetty", "Davide Fontanarosa" ]
10.1016/j.ejmp.2021.02.023
Deep Learning Enables Accurate Diagnosis of Novel Coronavirus (COVID-19) With CT Images.
A novel coronavirus (COVID-19) recently emerged as an acute respiratory syndrome and has caused a pneumonia outbreak worldwide. As COVID-19 continues to spread rapidly across the world, computed tomography (CT) has become essential for fast diagnosis. Thus, it is urgent to develop an accurate computer-aided method to assist clinicians in identifying COVID-19-infected patients from CT images. Here, we collected chest CT scans of 88 patients diagnosed with COVID-19 from hospitals in two provinces of China, 100 patients infected with bacterial pneumonia, and 86 healthy persons for comparison and modeling. Based on these data, a deep learning-based CT diagnosis system was developed to identify patients with COVID-19. The experimental results showed that our model could accurately discriminate COVID-19 patients from bacterial-pneumonia patients with an AUC of 0.95, recall (sensitivity) of 0.96, and precision of 0.79. When integrating three types of CT images, our model achieved a recall of 0.93 with a precision of 0.86 for discriminating COVID-19 patients from others. Moreover, our model could extract the main lesion features, especially ground-glass opacity (GGO), which are visually helpful for assisted diagnosis by doctors. An online server is available for diagnosis with CT images (http://biomed.nscc-gz.cn/model.php). Source code and datasets are available at our GitHub repository (https://github.com/SY575/COVID19-CT).
IEEE/ACM transactions on computational biology and bioinformatics
"2021-03-12T00:00:00"
[ "Ying Song", "Shuangjia Zheng", "Liang Li", "Xiang Zhang", "Xiaodong Zhang", "Ziwang Huang", "Jianwen Chen", "Ruixuan Wang", "Huiying Zhao", "Yutian Chong", "Jun Shen", "Yunfei Zha", "Yuedong Yang" ]
10.1109/TCBB.2021.3065361
Prediction of Patient Management in COVID-19 Using Deep Learning-Based Fully Automated Extraction of Cardiothoracic CT Metrics and Laboratory Findings.
To extract pulmonary and cardiovascular metrics from chest CTs of patients with coronavirus disease 2019 (COVID-19) using a fully automated deep learning-based approach and assess their potential to predict patient management. All initial chest CTs of patients who tested positive for severe acute respiratory syndrome coronavirus 2 at our emergency department between March 25 and April 25, 2020, were identified (n = 120). Three patient management groups were defined: group 1 (outpatient), group 2 (general ward), and group 3 (intensive care unit [ICU]). Multiple pulmonary and cardiovascular metrics were extracted from the chest CT images using deep learning. Additionally, six laboratory findings indicating inflammation and cellular damage were considered. Differences in CT metrics, laboratory findings, and demographics between the patient management groups were assessed. The potential of these parameters to predict patients' needs for intensive care (yes/no) was analyzed using logistic regression and receiver operating characteristic curves. Internal and external validity were assessed using 109 independent chest CT scans. While demographic parameters alone (sex and age) were not sufficient to predict ICU management status, both CT metrics alone (including both pulmonary and cardiovascular metrics; area under the curve [AUC] = 0.88; 95% confidence interval [CI] = 0.79-0.97) and laboratory findings alone (C-reactive protein, lactate dehydrogenase, white blood cell count, and albumin; AUC = 0.86; 95% CI = 0.77-0.94) were good classifiers. Excellent performance was achieved by a combination of demographic parameters, CT metrics, and laboratory findings (AUC = 0.91; 95% CI = 0.85-0.98). Application of a model that combined both pulmonary CT metrics and demographic parameters on a dataset from another hospital indicated its external validity (AUC = 0.77; 95% CI = 0.66-0.88). 
Chest CT of patients with COVID-19 contains valuable information that can be accessed using automated image analysis. These metrics are useful for the prediction of patient management.
Korean journal of radiology
"2021-03-10T00:00:00"
[ "Thomas Weikert", "Saikiran Rapaka", "Sasa Grbic", "Thomas Re", "Shikha Chaganti", "David J Winkel", "Constantin Anastasopoulos", "Tilo Niemann", "Benedikt J Wiggli", "Jens Bremerich", "Raphael Twerenbold", "Gregor Sommer", "Dorin Comaniciu", "Alexander W Sauter" ]
10.3348/kjr.2020.0994
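The ICU-need classifier described above is a logistic regression over CT metrics, laboratory findings, and demographics. A self-contained sketch with plain gradient descent (the two toy features and their values are our own invention for illustration, not the study's cohort or its fitted coefficients):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression (weights + bias)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_proba(w, b, xi):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

# toy features per patient: [normalized lung-opacity burden, normalized CRP]
X = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.7]]
y = [0, 0, 1, 1]  # 1 = needed intensive care
w, b = fit_logistic(X, y)
print(predict_proba(w, b, [0.85, 0.8]) > 0.5)  # True
```

The study additionally reports AUCs with confidence intervals for each feature group; that evaluation layer sits on top of predicted probabilities like these.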
Novel coronavirus (COVID-19) diagnosis using computer vision and artificial intelligence techniques: a review.
The universal transmission of the COVID-19 (coronavirus) pandemic creates an immediate need for the whole human population to commit to the fight against it. Human health-care resources are limited in the face of this abrupt outbreak. In this situation, inventive automation such as computer vision (machine learning, deep learning, artificial intelligence) combined with medical imaging (computed tomography, X-ray) has emerged as an encouraging solution against COVID-19. In recent months, various researchers have applied different image-processing techniques. In this paper, a major review of image acquisition, segmentation, diagnosis, avoidance, and management is presented. An analytical comparison of the various algorithms proposed by researchers for coronavirus has been carried out. Challenges and motivations for future research on coronavirus are also indicated. The clinical impact and use of computer vision and deep learning are discussed, and we hope that dermatologists may gain a better understanding of these areas from this study.
Multimedia tools and applications
"2021-03-10T00:00:00"
[ "Anuja Bhargava", "Atul Bansal" ]
10.1007/s11042-021-10714-5
An integrated autoencoder-based hybrid CNN-LSTM model for COVID-19 severity prediction from lung ultrasound.
The COVID-19 pandemic has become one of the biggest threats to the global healthcare system, creating an unprecedented condition worldwide. The necessity of rapid diagnosis calls for alternative methods to predict the condition of the patient, for which disease severity estimation on the basis of lung ultrasound (LUS) can be a safe, radiation-free, flexible, and favorable option. In this paper, a frame-based 4-score disease severity prediction architecture is proposed with the integration of deep convolutional and recurrent neural networks to consider both spatial and temporal features of the LUS frames. The proposed convolutional neural network (CNN) architecture implements an autoencoder network and separable convolutional branches fused with a modified DenseNet-201 network to build a robust, noise-resistant classification model. A five-fold cross-validation scheme is performed to affirm the efficacy of the proposed network. In-depth result analysis shows a promising average improvement of 7-12% in classification performance when Long Short-Term Memory (LSTM) layers are introduced after the proposed CNN architecture, which is approximately 17% more than the traditional DenseNet architecture alone. From an extensive analysis, it is found that the proposed end-to-end scheme is very effective in detecting COVID-19 severity scores from LUS images.
Computers in biology and medicine
"2021-03-09T00:00:00"
[ "Ankan Ghosh Dastider", "Farhan Sadik", "Shaikh Anowarul Fattah" ]
10.1016/j.compbiomed.2021.104296
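Since the architecture above predicts a 0-3 severity score per frame, some aggregation rule is needed to report a video- or exam-level score. A small sketch (both aggregation modes shown are illustrative assumptions, not necessarily the authors' choice):

```python
from collections import Counter

def video_severity(frame_scores, mode="majority"):
    """Aggregate per-frame 0-3 severity scores into a video-level score."""
    if not frame_scores:
        raise ValueError("empty score list")
    if mode == "majority":
        # most frequent frame score wins
        return Counter(frame_scores).most_common(1)[0][0]
    if mode == "max":
        # conservative option: the worst frame determines the video score
        return max(frame_scores)
    raise ValueError(f"unknown mode: {mode}")

print(video_severity([1, 2, 2, 2, 3]))          # 2
print(video_severity([1, 2, 2, 2, 3], "max"))   # 3
```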
COVID-19 classification using deep feature concatenation technique.
Detecting COVID-19 from medical images is a challenging task that has engaged scientists around the world. COVID-19 started in China in 2019 and is still spreading now. Chest X-ray and computed tomography (CT) scans are the most important imaging techniques for diagnosing COVID-19. Researchers are looking for effective solutions and fast treatment methods for this epidemic. To reduce the need for medical experts, fast and accurate automated detection techniques are introduced. Deep learning convolutional neural network (DL-CNN) technologies are showing remarkable results in detecting cases of COVID-19. In this paper, a deep feature concatenation (DFC) mechanism is utilized in two different ways. In the first, DFC links deep features extracted from X-ray and CT scans using a simple proposed CNN. In the second, DFC combines features extracted from either X-ray or CT scans using the proposed CNN architecture and two modern pre-trained CNNs: ResNet and GoogleNet. The DFC mechanism is applied to form a definitive classification descriptor. The proposed CNN architecture consists of only three deep layers, keeping computation time low. For each image type, the performance of the proposed CNN is studied using different optimization algorithms and different values for the maximum number of epochs, the learning rate (LR), and the mini-batch (M-B) size. Experiments demonstrate the superiority of the proposed approach over other modern and state-of-the-art methodologies in terms of accuracy, precision, recall, and F-score.
Journal of ambient intelligence and humanized computing
"2021-03-09T00:00:00"
[ "Waleed Saad", "Wafaa A Shalaby", "Mona Shokair", "Fathi Abd El-Samie", "Moawad Dessouky", "Essam Abdellatef" ]
10.1007/s12652-021-02967-7
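The DFC mechanism itself amounts to concatenating per-network feature vectors into one descriptor; a minimal sketch (the per-vector L2 normalization is our own assumption for scale balancing — the paper may concatenate raw features):

```python
import math

def deep_feature_concat(*feature_vectors):
    """Concatenate L2-normalized feature vectors into a single descriptor."""
    out = []
    for vec in feature_vectors:
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0  # avoid divide-by-zero
        out.extend(v / norm for v in vec)
    return out

cnn_feats = [3.0, 4.0]          # e.g. from the proposed shallow CNN
resnet_feats = [0.0, 1.0, 0.0]  # e.g. from a pre-trained backbone
print(deep_feature_concat(cnn_feats, resnet_feats))  # [0.6, 0.8, 0.0, 1.0, 0.0]
```

The fused descriptor would then be passed to the final classifier in place of either source's features alone.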
Deep Learning-Driven Automated Detection of COVID-19 from Radiography Images: a Comparative Analysis.
The COVID-19 pandemic has wreaked havoc on the whole world, taking over half a million lives and capsizing the world economy in unprecedented magnitudes. With the world scampering for a possible vaccine, early detection and containment are the only redress. Existing high-accuracy diagnostic technologies like RT-PCR are expensive and sophisticated, requiring skilled individuals for specimen collection and screening, resulting in lower outreach. So, methods excluding direct human intervention are much sought after, and artificial intelligence-driven automated diagnosis, especially with radiography images, has captured researchers' interest. This survey provides a detailed inspection of deep learning-based automated COVID-19 detection work done to date, a comparison of the available datasets, methodological challenges such as imbalanced datasets along with probable solutions using different preprocessing methods, and the scope for future exploration in this arena. We also benchmarked the performance of 315 deep models in diagnosing COVID-19, normal, and pneumonia classes from X-ray images of a custom dataset created from four others. The dataset is publicly available at https://github.com/rgbnihal2/COVID-19-X-ray-Dataset. Our results show that the DenseNet201 model with a quadratic SVM classifier performs best (accuracy: 98.16%, sensitivity: 98.93%, specificity: 98.77%) and that similar architectures also maintain high accuracies. This shows that even though radiography images might not be conclusive for radiologists, they can be for deep learning algorithms in detecting COVID-19. We hope this extensive review will provide a comprehensive guideline for researchers in this field.
Cognitive computation
"2021-03-09T00:00:00"
[ "Sejuti Rahman", "Sujan Sarker", "Md Abdullah Al Miraj", "Ragib Amin Nihal", "A K M Nadimul Haque", "Abdullah Al Noman" ]
10.1007/s12559-020-09779-5
DeepAlign, a 3D alignment method based on regionalized deep learning for Cryo-EM.
Cryo-electron microscopy (Cryo-EM) is currently one of the main tools to reveal the structural information of biological specimens at high resolution. Despite the great development of the techniques involved in solving biological structures with Cryo-EM in recent years, the reconstructed 3D maps can have lower resolution due to errors made while processing the information acquired by the microscope. One of the main problems comes from the 3D alignment step, which is an error-prone part of the reconstruction workflow due to the very low signal-to-noise ratio (SNR) common in Cryo-EM imaging. In fact, as we show in this work, it is not unusual to find a disagreement in the alignment parameters in approximately 20-40% of the processed images when the outputs of different alignment algorithms are compared. In this work, we present a novel method to align sets of single-particle images in 3D space, called DeepAlign. Our proposal is based on deep neural networks, which have been used successfully in many image classification problems. Specifically, we design several deep neural networks on a regionalized basis to classify the particle images into sub-regions and then refine the 3D alignment parameters only inside that sub-region. We show that this method results in accurately aligned images, improving the Fourier shell correlation (FSC) resolution obtained with other state-of-the-art methods while decreasing computational time.
Journal of structural biology
"2021-03-07T00:00:00"
[ "A Jiménez-Moreno", "D Střelák", "J Filipovič", "J M Carazo", "C O S Sorzano" ]
10.1016/j.jsb.2021.107712
Development of a convolutional neural network to differentiate among the etiology of similar appearing pathological B lines on lung ultrasound: a deep learning study.
Lung ultrasound (LUS) is a portable, low-cost respiratory imaging tool but is challenged by user dependence and lack of diagnostic specificity. It is unknown whether the advantages of LUS implementation could be paired with deep learning (DL) techniques to match or exceed human-level diagnostic specificity among similar-appearing, pathological LUS images. A convolutional neural network (CNN) was trained on LUS images with B lines of different aetiologies. CNN diagnostic performance, as validated using a 10% data holdback set, was compared with surveyed LUS-competent physicians. Data were collected at two tertiary Canadian hospitals: 612 LUS videos (121,381 frames) of B lines from 243 distinct patients with either (1) COVID-19 (COVID), (2) non-COVID acute respiratory distress syndrome (NCOVID) or (3) hydrostatic pulmonary edema (HPE). The trained CNN performance on the independent dataset showed an ability to discriminate between COVID (area under the receiver operating characteristic curve (AUC) 1.0), NCOVID (AUC 0.934) and HPE (AUC 1.0) pathologies. This was significantly better than physician ability (AUCs of 0.697, 0.704 and 0.967 for the COVID, NCOVID and HPE classes, respectively), p<0.01. A DL model can distinguish similar-appearing LUS pathology, including COVID-19, that cannot be distinguished by humans. The performance gap between humans and the model suggests that subvisible biomarkers within ultrasound images could exist, and multicentre research is merited.
BMJ open
"2021-03-07T00:00:00"
[ "Robert Arntfield", "Blake VanBerlo", "Thamer Alaifan", "Nathan Phelps", "Matthew White", "Rushil Chaudhary", "Jordan Ho", "Derek Wu" ]
10.1136/bmjopen-2020-045120
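The AUC values this record reports can be computed directly from classifier scores via the Mann-Whitney formulation: the probability that a randomly chosen positive outscores a randomly chosen negative, with ties counting half. A small sketch (toy scores, not the study's data):

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as P(positive score > negative score), ties counting 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# one positive (0.4) loses to one negative (0.7): AUC = 8/9
print(auc_from_scores([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))
```

This O(n*m) pairwise form is fine for illustration; production metrics libraries use a rank-based O(n log n) equivalent.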
COVID-19 Detection from Chest X-ray Images Using Feature Fusion and Deep Learning.
Currently, COVID-19 is considered to be the most dangerous and deadly disease for the human body, caused by the novel coronavirus. In December 2019, the coronavirus began to spread rapidly around the world, thought to have originated in Wuhan, China, and is responsible for a large number of deaths. Earlier detection of COVID-19 through accurate diagnosis, particularly for cases with no obvious symptoms, may decrease the patient death rate. Chest X-ray images are primarily used for the diagnosis of this disease. This research proposes a machine vision approach to detect COVID-19 from chest X-ray images. The features extracted from X-ray images by the histogram of oriented gradients (HOG) and a convolutional neural network (CNN) were fused to develop the classification model, trained with a CNN (VGGNet). A modified anisotropic diffusion filtering (MADF) technique was employed for better edge preservation and reduced noise in the images. A watershed segmentation algorithm was used to mark the significant fracture region in the input X-ray images. The testing stage considered generalized data for performance evaluation of the model. Cross-validation analysis revealed that a 5-fold strategy could successfully mitigate the overfitting problem. The proposed feature fusion using the deep learning technique delivered satisfactory performance in identifying COVID-19 compared to immediately relevant works, with a testing accuracy of 99.49%, specificity of 95.7% and sensitivity of 93.65%. When compared to other classification techniques, such as ANN, KNN, and SVM, the CNN technique used in this study showed better classification performance. K-fold cross-validation demonstrated that the proposed feature fusion technique (98.36%) provided higher accuracy than the individual feature extraction methods, such as HOG (87.34%) or CNN (93.64%).
Sensors (Basel, Switzerland)
"2021-03-07T00:00:00"
[ "Nur-A-Alam", "Mominul Ahsan", "Md Abdul Based", "Julfikar Haider", "Marcin Kowalski" ]
10.3390/s21041480
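The HOG half of the proposed feature fusion accumulates gradient magnitudes into orientation bins. A simplified single-cell sketch in plain Python (real HOG descriptors add a cell/block grid and block normalization, omitted here; the toy image is ours):

```python
import math

def orientation_histogram(img, n_bins=9):
    """Unsigned gradient-orientation histogram (a simplified single HOG cell).

    img: 2D list of grayscale values; central differences on interior pixels.
    """
    hist = [0.0] * n_bins
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[min(int(ang / (180.0 / n_bins)), n_bins - 1)] += mag
    return hist

# a vertical edge produces purely horizontal gradients: all energy in bin 0
img = [[0, 0, 10, 10]] * 4
hist = orientation_histogram(img)
print(hist.index(max(hist)))  # 0
```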
A Deep Learning-Based Camera Approach for Vital Sign Monitoring Using Thermography Images for ICU Patients.
Infrared thermography for camera-based skin temperature measurement is increasingly used in medical practice, e.g., to detect fevers and infections, such as recently in the COVID-19 pandemic. This contactless method is a promising technology to continuously monitor the vital signs of patients in clinical environments. In this study, we investigated both skin temperature trend measurement and the extraction of respiration-related chest movements to determine the respiratory rate using low-cost hardware in combination with advanced algorithms. In addition, the frequency of medical examinations or visits to the patients was extracted. We implemented a deep learning-based algorithm for real-time vital sign extraction from thermography images. A clinical trial was conducted to record data from patients on an intensive care unit. The YOLOv4-Tiny object detector was applied to extract image regions containing vital signs (head and chest). The infrared frames were manually labeled for evaluation. Validation was performed on a hold-out test dataset of 6 patients and revealed good detector performance (0.75 intersection over union, 0.94 mean average precision). An optical flow algorithm was used to extract the respiratory rate from the chest region. The results show a mean absolute error of 2.69 bpm. We observed a computational performance of 47 fps on an NVIDIA Jetson Xavier NX module for YOLOv4-Tiny, which proves real-time capability on an embedded GPU system. In conclusion, the proposed method can perform real-time vital sign extraction on a low-cost system-on-module and may thus be a useful method for future contactless vital sign measurements.
Sensors (Basel, Switzerland)
"2021-03-07T00:00:00"
[ "Simon Lyra", "Leon Mayer", "Liyang Ou", "David Chen", "Paddy Timms", "Andrew Tay", "Peter Y Chan", "Bergita Ganse", "Steffen Leonhardt", "Christoph Hoog Antink" ]
10.3390/s21041495
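Once chest motion has been extracted (the study uses optical flow over the detected chest region, not reproduced here), estimating the respiratory rate amounts to finding the dominant frequency of a 1-D motion signal. A naive-DFT sketch on a synthetic 0.25 Hz breathing signal (all names and values are illustrative):

```python
import math

def respiratory_rate_bpm(signal, fps):
    """Dominant frequency (breaths/min) of a motion signal via a naive DFT."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]           # remove DC offset
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):               # positive frequencies below Nyquist
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fps / n * 60.0

fps = 10.0
sig = [math.sin(2 * math.pi * 0.25 * t / fps) for t in range(200)]  # 0.25 Hz
print(respiratory_rate_bpm(sig, fps))  # 15.0
```

Real pipelines would band-limit the search to physiological rates and use an FFT; the brute-force DFT keeps the sketch dependency-free.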
Does non-COVID-19 lung lesion help? investigating transferability in COVID-19 CT image segmentation.
Coronavirus disease 2019 (COVID-19) is a highly contagious disease spreading all around the world. Deep learning has been adopted as an effective technique to aid COVID-19 detection and segmentation from computed tomography (CT) images. The major challenge lies in the inadequate public COVID-19 datasets. Recently, transfer learning has become a widely used technique that leverages the knowledge gained while solving one problem and applies it to a different but related problem. However, it remains unclear whether various non-COVID-19 lung lesions could contribute to segmenting COVID-19 infection areas and how to better conduct this transfer procedure. This paper provides a way to understand the transferability of non-COVID-19 lung lesions and a better strategy to train a robust deep learning model for COVID-19 infection segmentation. Based on a publicly available COVID-19 CT dataset and three public non-COVID-19 datasets, we evaluate four transfer learning methods using 3D U-Net as a standard encoder-decoder method. i) We introduce a multi-task learning method to obtain a multi-lesion pre-trained model for COVID-19 infection. ii) We propose and compare four transfer learning strategies with various performance gains and training time costs. Our proposed Hybrid-encoder Learning strategy introduces a Dedicated-encoder and an Adapted-encoder to extract COVID-19 infection features and general lung lesion features, respectively. An attention-based Selective Fusion unit is designed for dynamic feature selection and aggregation. Experiments show that, trained with limited data, the proposed Hybrid-encoder strategy based on the multi-lesion pre-trained model achieves a mean DSC, NSD, Sensitivity, F1-score, Accuracy and MCC of 0.704, 0.735, 0.682, 0.707, 0.994 and 0.716, respectively, with better generalization and a lower over-fitting risk for segmenting COVID-19 infection.
The results reveal the benefits of transferring knowledge from non-COVID-19 lung lesions and show that learning from multiple lung lesion datasets can extract more general features, leading to accurate and robust pre-trained models. We further show the capability of the encoder to learn feature representations of lung lesions, which improves segmentation accuracy and facilitates training convergence. In addition, our proposed Hybrid-encoder learning method incorporates transferred lung lesion features from non-COVID-19 datasets effectively and achieves significant improvement. These findings offer new insights into transfer learning for COVID-19 CT image segmentation, which can also be generalized to other medical tasks.
Computer methods and programs in biomedicine
"2021-03-05T00:00:00"
[ "Yixin Wang", "Yao Zhang", "Yang Liu", "Jiang Tian", "Cheng Zhong", "Zhongchao Shi", "Yang Zhang", "Zhiqiang He" ]
10.1016/j.cmpb.2021.106004
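The DSC values this record reports are Dice similarity coefficients between predicted and reference segmentation masks. A minimal sketch on flattened binary masks (the toy masks are ours; real evaluation runs over full 3D volumes):

```python
def dice_score(pred, target):
    """Dice similarity coefficient for binary masks given as flat 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    # convention: two empty masks agree perfectly
    return 1.0 if denom == 0 else 2.0 * inter / denom

pred   = [1, 1, 1, 0, 0, 0]
target = [0, 1, 1, 1, 0, 0]
print(dice_score(pred, target))  # 2*2/(3+3) = 0.666...
```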