Record schema: title (string), abstract (string), journal (string), date (string), authors (sequence), doi (string).
Non-contrast CT synthesis using patch-based cycle-consistent generative adversarial network (Cycle-GAN) for radiomics and deep learning in the era of COVID-19.
Handcrafted and deep learning (DL) radiomics are popular techniques used to develop computed tomography (CT) imaging-based artificial intelligence models for COVID-19 research. However, contrast heterogeneity in real-world datasets may impair model performance, and contrast-homogeneous datasets present a potential solution. We developed a 3D patch-based cycle-consistent generative adversarial network (cycle-GAN) to synthesize non-contrast images from contrast CTs as a data-homogenization tool, using a multi-centre dataset of 2,078 scans from 1,650 patients with COVID-19. Few studies have previously evaluated GAN-generated images with handcrafted radiomics, DL and human assessment tasks; we evaluated the performance of our cycle-GAN with all three approaches. In a modified Turing test, human experts attempted to distinguish synthetic from acquired images; a false-positive rate of 67% and a Fleiss' kappa of 0.06 attest to the photorealism of the synthetic images. However, the performance of machine learning classifiers built on radiomic features decreased when synthetic images were used, and a marked percentage difference was noted in feature values between pre- and post-GAN non-contrast images. A deterioration in performance with synthetic images was likewise observed for DL classification. Our results show that whilst GANs can produce images sufficient to pass human assessment, caution is advised before GAN-synthesized images are used in medical imaging applications.
Scientific reports
"2023-06-30T00:00:00"
[ "Reza Kalantar", "Sumeet Hindocha", "Benjamin Hunter", "Bhupinder Sharma", "Nasir Khan", "Dow-Mu Koh", "Merina Ahmed", "Eric O Aboagye", "Richard W Lee", "Matthew D Blackledge" ]
10.1038/s41598-023-36712-1
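The modified Turing test above reports a Fleiss' kappa of 0.06, i.e. near-chance agreement among the experts. Fleiss' kappa can be computed from a matrix of per-item category counts; a minimal pure-Python sketch (illustrative, not the authors' code):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a matrix of ratings.

    ratings[i][c] = number of raters who assigned item i to category c.
    Every item must be rated by the same number of raters.
    """
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])

    # Proportion of all assignments made to each category.
    p_cat = [sum(row[c] for row in ratings) / (n_items * n_raters)
             for c in range(n_cats)]

    # Per-item agreement: fraction of rater pairs that agree on the item.
    p_item = [(sum(x * x for x in row) - n_raters) / (n_raters * (n_raters - 1))
              for row in ratings]

    p_bar = sum(p_item) / n_items        # observed agreement
    p_e = sum(p * p for p in p_cat)      # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement yields kappa = 1, and values near 0 (as in the study) indicate the raters could not reliably tell synthetic from acquired images.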
Explainable COVID-19 Detection Based on Chest X-rays Using an End-to-End RegNet Architecture.
COVID-19, which is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is one of the worst pandemics in recent history. The identification of patients suspected of being infected with COVID-19 is becoming crucial to reduce its spread. We aimed to validate and test a deep learning model to detect COVID-19 based on chest X-rays. The recent deep convolutional neural network (CNN) RegNetX032 was adapted for detecting COVID-19 from chest X-ray (CXR) images using reverse transcription polymerase chain reaction (RT-PCR) as a reference. The model was customized and trained on five datasets containing more than 15,000 CXR images (including 4,148 COVID-19-positive cases) and then tested on 321 images (150 COVID-19-positive) from Montfort Hospital. Twenty percent of the data from the five datasets were used as validation data for hyperparameter optimization. Each CXR image was processed by the model to detect COVID-19. Multiple binary classifications were proposed: COVID-19 vs. normal, COVID-19 + pneumonia vs. normal, and pneumonia vs. normal. The performance results were based on the area under the curve (AUC), sensitivity, and specificity. In addition, an explainability model was developed that demonstrated the high performance and high degree of generalization of the proposed model in detecting and highlighting the signs of the disease. The fine-tuned RegNetX032 model achieved an overall accuracy score of 96.0%, with an AUC score of 99.1%. The model showed a superior sensitivity of 98.0% in detecting signs from CXR images of COVID-19 patients, and a specificity of 93.0% in detecting healthy CXR images. A second scenario compared COVID-19 + pneumonia vs. normal (healthy X-ray) patients. The model achieved an overall score of 99.1% (AUC) with a sensitivity of 96.0% and specificity of 93.0% on the Montfort dataset.
For the validation set, the model achieved an average accuracy of 98.6%, an AUC score of 98.0%, a sensitivity of 98.0%, and a specificity of 96.0% for detection (COVID-19 patients vs. healthy patients). The second scenario compared COVID-19 + pneumonia vs. normal patients. The model achieved an overall score of 98.8% (AUC) with a sensitivity of 97.0% and a specificity of 96.0%. This robust deep learning model demonstrated excellent performance in detecting COVID-19 from chest X-rays. It could be used to automate the detection of COVID-19 and improve decision making for patient triage and isolation in hospital settings, and could also serve as a complementary aid for radiologists and clinicians in differential diagnosis and decision making.
Viruses
"2023-06-28T00:00:00"
[ "Mohamed Chetoui", "Moulay A Akhloufi", "El Mostafa Bouattane", "Joseph Abdulnour", "Stephane Roux", "Chantal D'Aoust Bernard" ]
10.3390/v15061327
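The sensitivity and specificity figures reported above come directly from the binary confusion matrix (with 1 = COVID-19-positive). A minimal sketch of that computation, not the authors' code:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (recall on positives) and specificity (recall on
    negatives) for binary labels, where 1 = COVID-19-positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

A high sensitivity (98.0% in the paper) means few infected patients are missed; a high specificity (93.0%) means few healthy patients are flagged.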
COVID-19 Severity Prediction from Chest X-ray Images Using an Anatomy-Aware Deep Learning Model.
The COVID-19 pandemic has been adversely affecting the patient management systems in hospitals around the world. Radiological imaging, especially chest x-ray and lung Computed Tomography (CT) scans, plays a vital role in the severity analysis of hospitalized COVID-19 patients. However, with an increasing number of patients and a lack of skilled radiologists, automated assessment of COVID-19 severity using medical image analysis has become increasingly important. Chest x-ray (CXR) imaging plays a significant role in assessing the severity of pneumonia, especially in low-resource hospitals, and is the most frequently used diagnostic imaging in the world. Previous methods that automatically predict the severity of COVID-19 pneumonia mainly focus on feature pooling from pre-trained CXR models without explicitly considering the underlying human anatomical attributes. This paper proposes an anatomy-aware (AA) deep learning model that learns the generic features from x-ray images considering the underlying anatomical information. Utilizing a pre-trained model and lung segmentation masks, the model generates a feature vector including disease-level features and lung involvement scores. We have used four different open-source datasets, along with an in-house annotated test set for training and evaluation of the proposed method. The proposed method improves the geographical extent score by 11% in terms of mean squared error (MSE) while preserving the benchmark result in lung opacity score. The results demonstrate the effectiveness of the proposed AA model in COVID-19 severity prediction from chest X-ray images. The algorithm can be used in low-resource setting hospitals for COVID-19 severity prediction, especially where there is a lack of skilled radiologists.
Journal of digital imaging
"2023-06-28T00:00:00"
[ "Nusrat Binta Nizam", "Sadi Mohammad Siddiquee", "Mahbuba Shirin", "Mohammed Imamul Hassan Bhuiyan", "Taufiq Hasan" ]
10.1007/s10278-023-00861-6
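The anatomy-aware model above combines disease-level features with lung involvement scores derived from segmentation masks. The sketch below is only an illustration of that idea under stated assumptions: `anatomy_aware_vector`, its inputs, and the involvement definition (mean opacity inside the lung mask) are hypothetical names and simplifications, not the paper's actual pipeline.

```python
import numpy as np

def anatomy_aware_vector(feature_map, opacity_map, lung_mask):
    """Illustrative sketch: concatenate globally pooled disease-level
    features (H, W, C) with a lung-involvement score computed only
    inside the lung mask. Names and definitions are assumptions."""
    pooled = feature_map.mean(axis=(0, 1))            # (C,) disease features
    inside = lung_mask.astype(bool)
    involvement = opacity_map[inside].mean() if inside.any() else 0.0
    return np.concatenate([pooled, [involvement]])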
Deep learning-based technique for lesions segmentation in CT scan images for COVID-19 prediction.
Since 2019, COVID-19 has caused significant damage and has become a serious health issue worldwide. The number of infected and confirmed cases is increasing day by day, and to this day many hospitals and countries around the world are not equipped well enough to treat these cases and stop the pandemic's evolution. Chest X-ray (radiography) images and chest CT images are the most effective imaging techniques for analyzing and diagnosing COVID-19-related problems. Deep learning-based techniques have recently shown good performance in computer vision and healthcare fields. In this work, we propose a new deep learning-based application for COVID-19 lesion segmentation and analysis. The proposed system is based on the context aggregation neural network, which consists of three main modules: the context fuse model (CFM), the attention mix module (AMM) and a residual convolutional module (RCM). The developed system can detect the two main COVID-19-related lesion types in CT images: ground-glass opacity and consolidation areas. Generally, these lesions occur in both common pneumonia and COVID-19 cases. Training and testing experiments were conducted using the COVIDx-CT dataset. The developed system demonstrated competitive results compared to the state of the art, outperforming other works with an accuracy of over 96.23%.
Multimedia tools and applications
"2023-06-26T00:00:00"
[ "Mouna Afif", "Riadh Ayachi", "Yahia Said", "Mohamed Atri" ]
10.1007/s11042-023-14941-w
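Segmentation of ground-glass opacity and consolidation masks, as in the work above, is commonly scored with the Dice overlap coefficient. The abstract itself reports accuracy, so using Dice here is an illustrative assumption rather than the paper's metric:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice overlap between two binary masks, e.g. predicted vs
    annotated ground-glass-opacity regions. 1 = perfect overlap."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

Unlike pixel accuracy, Dice is insensitive to the large background region, which matters because COVID-19 lesions usually occupy a small fraction of a CT slice.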
MediNet: transfer learning approach with MediNet medical visual database.
The rapid development of machine learning has increased interest in the use of deep learning methods in medical research. Deep learning in the medical field is used for disease detection and classification problems in the clinical decision-making process. Large amounts of labeled data are often required to train deep neural networks; however, in the medical field, the lack of a sufficient number of images in datasets and the difficulties encountered during data collection are among the main problems. In this study, we propose MediNet, a new 10-class visual dataset consisting of Röntgen (X-ray), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound, and Histopathological images: calcaneal normal, calcaneal tumor, colon benign, colon adenocarcinoma, brain normal, brain tumor, breast benign, breast malignant, chest normal, and chest pneumonia. The AlexNet, VGG19-BN, Inception V3, DenseNet 121, ResNet 101, EfficientNet B0, Nested-LSTM + CNN, and proposed RdiNet deep learning algorithms are used for pre-training and classification in the transfer learning application. Transfer learning aims to apply previously learned knowledge to a new task. The algorithms were trained with the MediNet dataset, and the models obtained from these algorithms, namely feature vectors, were recorded. The pre-trained models were then used for classification studies on chest X-ray, diabetic retinopathy, and COVID-19 datasets with the transfer learning technique. In performance measurement, the InceptionV3 model achieved an accuracy of 94.84% in the traditional classification study on the Chest X-Ray Images dataset, which increased to 98.71% after the transfer learning technique was applied. On the COVID-19 dataset, the classification success of the DenseNet121 model before pre-training was 88%, while the performance after transfer with MediNet was 92%.
On the diabetic retinopathy dataset, the classification success of the Nested-LSTM + CNN model before pre-training was 79.35%, rising to 81.52% after transfer with MediNet. Comparison of the results obtained from the experimental studies shows that the proposed method produces more successful results.
Multimedia tools and applications
"2023-06-26T00:00:00"
[ "Hatice Catal Reis", "Veysel Turk", "Kourosh Khoshelham", "Serhat Kaya" ]
10.1007/s11042-023-14831-1
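The transfer-learning recipe described above (pre-train on MediNet, then reuse the learned representation on a new task) can be reduced to its core: keep the backbone frozen and train only a small classification head on the extracted feature vectors. A minimal sketch under that assumption, with the backbone replaced by precomputed features and a logistic-regression head trained by gradient descent (not the paper's code):

```python
import numpy as np

def train_linear_head(features, labels, lr=0.1, epochs=200):
    """Transfer-learning sketch: `features` are assumed to come from a
    frozen pre-trained backbone; only this logistic-regression head is
    trained on the new (e.g. COVID-19) task."""
    labels = np.asarray(labels, dtype=float)
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid
        grad = p - labels                              # dL/dz for cross-entropy
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b
```

Because only the head's few parameters are updated, this works with far fewer labeled images than training the full network, which is exactly the motivation the abstract gives for MediNet.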
COVID-19 prediction based on hybrid Inception V3 with VGG16 using chest X-ray images.
The coronavirus first appeared in the city of Wuhan, China, in December 2019. It belongs to the Coronaviridae family, which can infect both animals and humans. Coronavirus disease 2019 (COVID-19) is typically diagnosed by serology, genetic real-time reverse transcription polymerase chain reaction (RT-PCR), and antigen testing. These testing methods have limitations such as limited sensitivity, high cost, and long turnaround time, so it is necessary to develop an automatic detection system for COVID-19 prediction. Chest X-ray is a lower-cost procedure compared with chest computed tomography (CT). Deep learning is a highly fruitful branch of machine learning, providing useful tools for learning from and screening large numbers of normal and COVID-19 chest X-ray images. There are many deep learning methods for prediction, but they have limitations such as overfitting, misclassification, and false predictions for poor-quality chest X-rays. To overcome these limitations, a novel hybrid model called "Inception V3 with VGG16 (Visual Geometry Group)" is proposed for the prediction of COVID-19 using chest X-rays. It is a combination of two deep learning models, Inception V3 and VGG16 (IV3-VGG). To build the hybrid model, we collected 243 images from the COVID-19 Radiography Database; of these, 121 are COVID-19-positive and 122 are normal images. The hybrid model is divided into two modules, namely pre-processing and IV3-VGG. In the dataset, images with different sizes and different color intensities are identified and pre-processed. The second module, IV3-VGG, consists of four blocks: the first block is a VGG-16 network, blocks 2 and 3 are Inception V3 networks, and the final block 4 consists of four layers, namely average pooling, dropout, fully connected, and softmax layers.
The experimental results show that the IV3-VGG model achieves the highest accuracy of 98% compared to the existing five prominent deep learning models such as Inception V3, VGG16, ResNet50, DenseNet121, and MobileNet.
Multimedia tools and applications
"2023-06-26T00:00:00"
[ "K Srinivas", "R Gagana Sri", "K Pravallika", "K Nishitha", "Subba Rao Polamuri" ]
10.1007/s11042-023-15903-y
Deep efficient-nets with transfer learning assisted detection of COVID-19 using chest X-ray radiology imaging.
The coronavirus disease (COVID-19) pandemic could be considered one of the most devastating of the twenty-first century. Effective and rapid screening of infected patients can reduce the mortality and even the contagion rate, and chest X-ray radiology can serve as one of the effective screening techniques for COVID-19. In this paper, we propose an advanced deep learning approach for automatic and effective COVID-19 screening from chest X-ray (CXR) images. Despite the success of state-of-the-art deep learning-based models for COVID-19 detection, they may suffer from several problems, such as heavy memory and computational requirements, overfitting, and high variance. To alleviate these issues, we apply transfer learning to the Efficient-Net models and then fine-tune the whole network to select the optimal hyperparameters. Furthermore, in the preprocessing step, we apply an intensity-normalization method followed by data augmentation techniques to address the imbalance between the dataset's classes. The proposed approach performed well in detecting patients affected by COVID-19, achieving accuracy rates of 99.0% and 98% on the training and testing datasets, respectively. A comparative study on a publicly available dataset against recently published deep-learning-based architectures attests to the proposed approach's performance.
Multimedia tools and applications
"2023-06-26T00:00:00"
[ "Hiba Mzoughi", "Ines Njeh", "Mohamed Ben Slima", "Ahmed Ben Hamida" ]
10.1007/s11042-023-15097-3
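The preprocessing step above applies an intensity-normalization method before augmentation. One common choice (an assumption here; the abstract does not name the exact method) is per-image z-score normalization, which puts CXRs from different scanners and exposure settings on a common intensity scale:

```python
import numpy as np

def zscore_normalize(image, eps=1e-8):
    """Per-image intensity normalization to zero mean and unit variance,
    so images from heterogeneous acquisition settings are comparable."""
    image = image.astype(np.float64)
    return (image - image.mean()) / (image.std() + eps)
```

The small `eps` guards against division by zero for constant (e.g. blank) images.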
Bio-medical imaging (X-ray, CT, ultrasound, ECG), genome sequences applications of deep neural network and machine learning in diagnosis, detection, classification, and segmentation of COVID-19: a Meta-analysis & systematic review.
This review investigates how Deep Machine Learning (DML) has dealt with the COVID-19 epidemic and provides recommendations for future COVID-19 research. Although vaccines for this epidemic have been developed, DL methods have proven to be a valuable asset in radiologists' arsenals for the automated assessment of COVID-19. This detailed review discusses the techniques and applications developed for COVID-19 findings using DL systems. It also provides insights into notable datasets used to train neural networks, data partitioning, and various performance-measurement metrics. The PRISMA taxonomy has been formed based on pretrained (45 systems) and hybrid/custom (17 systems) models with radiography modalities. A total of 62 systems are selected from the studied articles, covering X-ray (32), CT (19), ultrasound (7), ECG (2), and genome-sequence (2) modalities. We begin by assessing the present state of DL and conclude with its significant limitations, which include lack of interpretability, limited generalization, learning from incompletely labeled data, and data privacy. Moreover, DML can be utilized to detect and classify COVID-19 and to distinguish it from other COPD illnesses. The proposed literature review has found many DL-based systems to fight against COVID-19. We expect this article will help speed up DL research on COVID-19 for medical practitioners, radiology technicians, and data engineers.
Multimedia tools and applications
"2023-06-26T00:00:00"
[ "Yogesh H Bhosale", "K Sridhar Patnaik" ]
10.1007/s11042-023-15029-1
Deep Learning Based COVID-19 Detection via Hard Voting Ensemble Method.
Healthcare systems throughout the world are under a great deal of strain because of the continuing COVID-19 epidemic, making early and precise diagnosis critical for limiting the virus's propagation and treating patients efficiently. The utilization of medical imaging methods like X-rays, which can offer valuable insights into the virus's presence in the lungs, can help to speed up the diagnosis procedure. We present a unique ensemble approach to identify COVID-19 using X-ray pictures (X-ray-PIC) in this paper. The suggested approach, based on hard voting, combines the confidence scores of three classic deep learning models: CNN, VGG16, and DenseNet. We also apply transfer learning to enhance performance on small medical image datasets. Experiments indicate that the suggested strategy outperforms current techniques with 97% accuracy, 96% precision, 100% recall, and a 98% F1-score. These results demonstrate the effectiveness of using ensemble approaches and transfer learning for COVID-19 diagnosis from X-ray-PIC, which could greatly aid early detection and reduce the burden on global health systems.
Wireless personal communications
"2023-06-26T00:00:00"
[ "Asaad Qasim Shareef", "Sefer Kurnaz" ]
10.1007/s11277-023-10485-2
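Hard voting, the ensemble rule named above, takes the majority class across the member models' predicted labels. A minimal sketch (illustrative; the paper's exact tie-breaking rule is not stated, so first-listed-model order is an assumption here):

```python
from collections import Counter

def hard_vote(predictions):
    """Majority (hard) vote over per-model class predictions for one
    image. Ties are broken in favour of the earliest-listed model."""
    counts = Counter(predictions)
    top = max(counts.values())
    for p in predictions:          # preserve model order for determinism
        if counts[p] == top:
            return p
```

With three members (CNN, VGG16, DenseNet), any class predicted by at least two models wins, which is why hard voting can correct a single model's mistake.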
ResNetFed: Federated Deep Learning Architecture for Privacy-Preserving Pneumonia Detection from COVID-19 Chest Radiographs.
Personal health data is subject to privacy regulations, making it challenging to apply centralized data-driven methods in healthcare, where personalized training data is frequently used. Federated Learning (FL) promises to provide a decentralized solution to this problem. In FL, siloed data is used for model training to ensure data privacy. In this paper, we investigate the viability of the federated approach using the detection of COVID-19 pneumonia as a use case. 1411 individual chest radiographs, sourced from the public data repository COVIDx8, are used. The dataset contains radiographs of 753 normal lung findings and 658 COVID-19-related pneumonias. We partition the data unevenly across five separate data silos in order to reflect a typical FL scenario. For the binary image classification analysis of these radiographs, we propose
Journal of healthcare informatics research
"2023-06-26T00:00:00"
[ "Pascal Riedel", "Reinhold von Schwerin", "Daniel Schaudt", "Alexander Hafner", "Christian Späte" ]
10.1007/s41666-023-00132-7
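The federated setting above keeps the radiographs inside their silos and exchanges only model parameters. The standard aggregation step is federated averaging (FedAvg): the server averages each silo's weights, weighted by its local sample count. A minimal sketch of that step, not the ResNetFed implementation:

```python
import numpy as np

def fedavg(silo_weights, silo_sizes):
    """Federated-averaging sketch. silo_weights[s] is silo s's list of
    per-layer weight arrays; silo_sizes[s] is its number of local
    radiographs. Only weights, never patient data, reach the server."""
    total = sum(silo_sizes)
    n_layers = len(silo_weights[0])
    return [
        sum(n * w[k] for n, w in zip(silo_sizes, silo_weights)) / total
        for k in range(n_layers)
    ]
```

The size weighting matters precisely because the paper partitions the data unevenly: larger silos contribute proportionally more to the global model.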
Pathological changes or technical artefacts? The problem of the heterogenous databases in COVID-19 CXR image analysis.
When the COVID-19 pandemic commenced in 2020, scientists assisted medical specialists with diagnostic algorithm development. One scientific research area related to COVID-19 diagnosis was medical imaging and its potential to support molecular tests. Unfortunately, several systems reported high accuracy in development but did not fare well in clinical application. The reason was poor generalization, a long-standing issue in AI development. Researchers found many causes of this issue and decided to refer to them as confounders, meaning a set of artefacts and methodological errors associated with the method. We aim to contribute to this effort by highlighting an undiscussed confounder related to image resolution. 20,216 chest X-ray images (CXR) from worldwide centres were analyzed. The CXRs were bijectively projected into the 2D domain by performing Uniform Manifold Approximation and Projection (UMAP) embedding on the radiomic features (rUMAP) or CNN-based neural features (nUMAP) from the penultimate layer of the pre-trained classification neural network. An additional 44,339 thorax CXRs were used for validation. The comprehensive analysis of the multimodality of the density distribution in the rUMAP/nUMAP domains and its relation to the original image properties was used to identify the main confounders. nUMAP revealed a hidden bias of neural networks towards the image resolution, which the regular up-sampling procedure cannot compensate for. The issue appears regardless of the network architecture and is not observed in a high-resolution dataset. The impact of the resolution heterogeneity can be partially diminished by applying advanced deep-learning-based super-resolution networks. rUMAP and nUMAP are great tools for image homogeneity analysis and bias discovery, as demonstrated by applying them to COVID-19 image data. Nonetheless, nUMAP could be applied to any type of data for which a deep neural network could be constructed.
Advanced image super-resolution solutions are needed to reduce the impact of the resolution diversity on the classification network decision.
Computer methods and programs in biomedicine
"2023-06-26T00:00:00"
[ "Marek Socha", "Wojciech Prażuch", "Aleksandra Suwalska", "Paweł Foszner", "Joanna Tobiasz", "Jerzy Jaroszewicz", "Katarzyna Gruszczynska", "Magdalena Sliwinska", "Mateusz Nowak", "Barbara Gizycka", "Gabriela Zapolska", "Tadeusz Popiela", "Grzegorz Przybylski", "Piotr Fiedor", "Malgorzata Pawlowska", "Robert Flisiak", "Krzysztof Simon", "Jerzy Walecki", "Andrzej Cieszanowski", "Edyta Szurowska", "Michal Marczyk", "Joanna Polanska" ]
10.1016/j.cmpb.2023.107684
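The paper's finding that "the regular up-sampling procedure cannot compensate" for low resolution is easy to see with the simplest up-sampler: nearest-neighbour interpolation only duplicates existing pixels, so the enlarged image contains no intensity values that were not already present. An illustrative sketch (not the paper's code):

```python
import numpy as np

def nn_upsample(image, factor):
    """Nearest-neighbour up-sampling: each pixel is repeated
    factor x factor times, so the set of intensity values in the
    image is unchanged - no new information is created."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)
```

This is why the authors turn to learned super-resolution networks, which hallucinate plausible high-frequency content instead of merely duplicating pixels, to partially diminish the resolution bias.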
LayNet - A multi-layer architecture to handle imbalance in medical imaging data.
In an imbalanced dataset, a machine learning classifier using traditional imbalance-handling methods may achieve good accuracy, but in highly imbalanced datasets, it may over-predict the majority class and ignore the minority class. In the medical domain, failing to correctly estimate the minority class might lead to a false negative, which is concerning in cases of life-threatening illnesses and infectious diseases like COVID-19. Currently, classification in deep learning typically has a single-layered architecture in which one neural network is employed. This paper proposes a multilayer design entitled LayNet to address this issue. LayNet aims to lessen the class imbalance by dividing the classes among layers and achieving a balanced class distribution at each layer. To ensure that all the classes are classified, minor classes are combined to form a single new 'hybrid' class at higher layers. The final layer has no hybrid class and only singleton (distinct) classes. Each layer of the architecture includes a separate model that determines whether an input belongs to one class or the hybrid class. If it falls into the hybrid class, it advances to the following layer, where it is further categorized within the hybrid class. The method for dividing the classes into the various architectural levels is also introduced in this paper. The Ocular Disease Intelligent Recognition Dataset, Covid-19 Radiography Dataset, and Retinal OCT Dataset are used to evaluate this methodology. The LayNet architecture performs better on these datasets when the results of the traditional single-layer architecture and the proposed multilayered architecture are compared.
Computers in biology and medicine
"2023-06-25T00:00:00"
[ "JayJani", "JayDoshi", "IshitaKheria", "KarishniMehta", "ChetashriBhadane", "RuhinaKarani" ]
10.1016/j.compbiomed.2023.107179
Automatic diagnosis of COVID-19 from CT images using CycleGAN and transfer learning.
The outbreak of the coronavirus disease (COVID-19) has changed the lives of most people on Earth. Given the high prevalence of this disease, its correct diagnosis, so that patients can be quarantined, is of the utmost importance in fighting this pandemic. Among the various modalities used for diagnosis, medical imaging, especially computed tomography (CT) imaging, has been the focus of many previous studies due to its accuracy and availability. In addition, automation of diagnostic methods can be of great help to physicians. In this paper, a method based on pre-trained deep neural networks is presented, which, by taking advantage of a cycle-consistent generative adversarial network (CycleGAN) model for data augmentation, has reached state-of-the-art performance for the task at hand, i.e., 99.60% accuracy. Also, in order to evaluate the method, a dataset containing 3163 images from 189 patients has been collected and labeled by physicians. Unlike prior datasets, normal data have been collected from people suspected of having COVID-19 rather than from datasets of other diseases, and this database is made publicly available. Moreover, the method's reliability is further evaluated by calibration metrics, and its decisions are interpreted by Grad-CAM, which also highlights suspicious regions as an additional output, making the method trustworthy and explainable.
Applied soft computing
"2023-06-22T00:00:00"
[ "NavidGhassemi", "AfshinShoeibi", "MarjaneKhodatars", "JonathanHeras", "AlirezaRahimi", "AssefZare", "Yu-DongZhang", "Ram BilasPachori", "J ManuelGorriz" ]
10.1016/j.asoc.2023.110511 10.1109/ACCESS.2020.3001973 10.1007/s00264-020-04609-7 10.1016/j.jcv.2020.104438 10.1038/s41598-020-80363-5 10.1016/j.dsx.2020.05.008 10.22038/IJMP.2016.8453 10.1016/j.jiph.2020.06.028 10.1056/NEJMoa1306742 10.1016/j.molliq.2020.114706 10.1016/j.chaos.2020.110059 10.1128/JCM.02959-20 10.1016/j.clinimag.2021.01.019 10.1016/j.bspc.2021.102622 10.2196/23811 10.1109/RBME.2020.2990959 10.1007/s10916-020-01582-x 10.1080/14737159.2020.1757437 10.1001/jama.2020.2783 10.1148/radiol.2020200432 10.1016/j.compbiomed.2020.103792 10.1007/s10140-020-01886-y 10.1016/j.bspc.2021.103182 10.1016/j.compbiomed.2021.104454 10.1016/j.bspc.2020.102365 10.1016/j.compbiomed.2020.104181 10.5152/dir.2020.20205 10.1016/j.chaos.2020.110495 10.1038/s41467-020-20657-4 10.1016/j.bspc.2021.103076 10.1007/s00330-021-07715-1 10.1109/ACCESS.2020.3005510 10.1109/ACCESS.2021.3058537 10.1016/j.matpr.2020.06.245 10.1109/BIBM49941.2020.9313252 10.1016/j.chaos.2020.109947 10.1109/RBME.2020.2987975 10.1117/12.2582162 10.1109/CVPR.2016.90 10.1016/j.neucom.2020.05.078 10.1016/j.compbiomed.2021.104697 10.1016/j.bspc.2021.103417 10.1007/978-3-030-55258-9_17 10.1016/j.neucom.2020.07.144 10.1016/j.media.2020.101836 10.1109/TMI.2020.2995508 10.1007/s00521-020-05437-x 10.1007/s12559-020-09785-7 10.1016/j.irbm.2020.05.003 10.1016/j.compbiomed.2020.104037 10.1080/09720502.2020.1857905 10.1016/j.patrec.2020.10.001 10.1016/j.asoc.2020.106885 10.1007/s10489-020-02149-6 10.1007/s00259-020-04929-1 10.1016/j.irbm.2021.01.004 10.1016/j.inffus.2020.10.004 10.1007/s00330-021-07715-1 10.3389/fmed.2020.608525 10.1007/s10489-020-01826-w 10.1007/s00330-020-06956-w 10.1007/s12539-020-00408-1 10.1016/j.compbiomed.2020.103795 10.1109/ACCESS.2020.3005510 10.1007/978-3-030-55258-9_5 10.4236/jbise.2020.137014 10.32604/cmes.2020.011920 10.1109/ISCC50000.2020.9219726 10.1016/j.bspc.2019.101678 10.3390/computers10010006 10.1155/2021/4832864 10.1016/j.eswa.2020.113788 10.1109/IV48863.2021.9575841 
10.1145/3377325.3377519 10.1109/ICCV.2017.74 10.1016/j.patrec.2021.11.020
To segment or not to segment: COVID-19 detection for chest X-rays.
Artificial intelligence (AI) has been integrated into most technologies we use. One of the most promising applications of AI is medical imaging. Research demonstrates that AI has improved the performance of most medical imaging analysis systems. Consequently, AI has become a fundamental element of the state of the art with improved outcomes across a variety of medical imaging applications. Moreover, it is believed that computer vision (CV) algorithms are highly effective for image analysis. Recent advances in CV facilitate the recognition of patterns in medical images. In this manner, we investigate CV segmentation techniques for COVID-19 analysis. We use different segmentation techniques, such as k-means, U-net, and flood fill, to extract the lung region from CXRs. Afterwards, we compare the effectiveness of these three segmentation approaches when applied to CXRs. Then, we use machine learning (ML) and deep learning (DL) models to identify COVID-19 lesions in both healthy and pathological lung X-rays. We evaluate our ML and DL findings in the context of CV techniques. Our results indicate that the segmentation-related CV techniques do not exhibit comparable performance to DL and ML techniques. The best-performing AI algorithm yields an accuracy range of 0.92-0.94, whereas the addition of CV algorithms leads to a reduction in accuracy to approximately the range of 0.81-0.88. In addition, we test the performance of DL models under real-world noise, such as salt and pepper noise, which negatively impacts the overall performance.
Informatics in medicine unlocked
"2023-06-22T00:00:00"
[ "SaraAl Hajj Ibrahim", "KhalilEl-Khatib" ]
10.1016/j.imu.2023.101280
CT medical image segmentation algorithm based on deep learning technology.
To address blurred edges, uneven background distribution, and heavy noise interference in medical image segmentation, we propose a medical image segmentation algorithm based on deep neural network technology, which adopts a U-Net-like backbone structure comprising two parts: encoding and decoding. First, the images are passed through the encoder path, with residual and convolutional structures, for image feature extraction. We added an attention mechanism module to the network skip connections to address the problems of redundant network channel dimensions and low spatial perception of complex lesions. Finally, the medical image segmentation results are obtained using the decoder path with residual and convolutional structures. To verify the validity of the model, we conducted a comparative experimental analysis; the results show that the DICE and IOU of the proposed model are 0.7826 and 0.9683 on DRIVE, 0.8904 and 0.8069 on ISIC2018, and 0.9462 and 0.9537 on the COVID-19 CT dataset, respectively. The segmentation accuracy is effectively improved for medical images with complex shapes and adhesions between lesions and normal tissues.
Mathematical biosciences and engineering : MBE
"2023-06-16T00:00:00"
[ "TongpingShen", "FangliangHuang", "XusongZhang" ]
10.3934/mbe.2023485
COV-MobNets: a mobile networks ensemble model for diagnosis of COVID-19 based on chest X-ray images.
The medical profession is facing an excessive workload, which has led to the development of various Computer-Aided Diagnosis (CAD) systems as well as Mobile-Aid Diagnosis (MAD) systems. These technologies enhance the speed and accuracy of diagnoses, particularly in areas with limited resources or remote regions during the pandemic. The primary purpose of this research is to predict and diagnose COVID-19 infection from chest X-ray images by developing a mobile-friendly deep learning framework, which has the potential for deployment on portable devices such as mobile phones or tablets, especially in situations where the workload of radiology specialists may be high. Moreover, this could improve the accuracy and transparency of population screening to assist radiologists during the pandemic. In this study, the Mobile Networks ensemble model called COV-MobNets is proposed to classify positive COVID-19 X-ray images from negative ones and can serve an assistive role in diagnosing COVID-19. The proposed model is an ensemble combining two lightweight and mobile-friendly models: MobileViT, based on the transformer structure, and MobileNetV3, based on a Convolutional Neural Network. Hence, COV-MobNets can extract the features of chest X-ray images in two different ways to achieve better and more accurate results. In addition, data augmentation techniques were applied to the dataset to avoid overfitting during the training process. The COVIDx-CXR-3 benchmark dataset was used for training and evaluation. The classification accuracy of the improved MobileViT and MobileNetV3 models on the test set reached 92.5% and 97%, respectively, while the accuracy of the proposed model (COV-MobNets) reached 97.75%. The sensitivity and specificity of the proposed model reached 98.5% and 97%, respectively. Experimental comparison shows the results are more accurate and balanced than those of other methods.
The proposed method can distinguish between positive and negative COVID-19 cases more accurately and quickly. It also demonstrates that utilizing two automatic feature extractors with different structures within an overall COVID-19 diagnosis framework can lead to improved performance, enhanced accuracy, and better generalization to new or unseen data. As a result, the proposed framework can be used as an effective method for computer-aided and mobile-aided diagnosis of COVID-19. The code is publicly available for open access at https://github.com/MAmirEshraghi/COV-MobNets .
BMC medical imaging
"2023-06-16T00:00:00"
[ "Mohammad AmirEshraghi", "AhmadAyatollahi", "Shahriar BaradaranShokouhi" ]
10.1186/s12880-023-01039-w 10.3390/app10165683 10.7717/PEERJ-CS.364 10.1016/j.patrec.2020.09.010 10.1038/s41598-020-76550-z 10.1109/ACCESS.2020.2994762 10.48550/arxiv.2010.11929 10.32604/CMC.2022.031147 10.48550/arxiv.2206.03671 10.3390/e23111383 10.48550/arxiv.1704.04861 10.48550/arxiv.2110.02178 10.3390/electronics12010223
A genetic programming-based convolutional deep learning algorithm for identifying COVID-19 cases via X-ray images.
Evolutionary algorithms have been successfully employed to find the best structure for many learning algorithms including neural networks. Due to their flexibility and promising results, Convolutional Neural Networks (CNNs) have found their application in many image processing applications. The structure of CNNs greatly affects the performance of these algorithms both in terms of accuracy and computational cost, thus, finding the best architecture for these networks is a crucial task before they are employed. In this paper, we develop a genetic programming approach for the optimization of CNN structure in diagnosing COVID-19 cases via X-ray images. A graph representation for CNN architecture is proposed and evolutionary operators including crossover and mutation are specifically designed for the proposed representation. The proposed architecture of CNNs is defined by two sets of parameters, one is the skeleton which determines the arrangement of the convolutional and pooling operators and their connections and one is the numerical parameters of the operators which determine the properties of these operators like filter size and kernel size. The proposed algorithm in this paper optimizes the skeleton and the numerical parameters of the CNN architectures in a co-evolutionary scheme. The proposed algorithm is used to identify covid-19 cases via X-ray images.
Artificial intelligence in medicine
"2023-06-15T00:00:00"
[ "Mohammad Hassan TayaraniNajaran" ]
10.1016/j.artmed.2023.102571 10.1109/TIP.2015.2475625 10.1109/TEVC.2011.2163638
Artificial Intelligence-assisted quantification of COVID-19 pneumonia burden from computed tomography improves prediction of adverse outcomes over visual scoring systems.
We aimed to evaluate the effectiveness of utilizing artificial intelligence (AI) to quantify the extent of pneumonia from chest computed tomography (CT) scans, and to determine its ability to predict clinical deterioration or mortality in patients admitted to the hospital with COVID-19, in comparison to semi-quantitative visual scoring systems. A deep-learning algorithm was utilized to quantify the pneumonia burden, while semi-quantitative pneumonia severity scores were estimated through visual assessment. The primary outcome was clinical deterioration, a composite endpoint including admission to the intensive care unit, need for invasive mechanical ventilation, or vasopressor therapy, as well as in-hospital death. The final population comprised 743 patients (mean age 65 ± 17 years, 55% men), of whom 175 (23.5%) experienced clinical deterioration or death. The area under the receiver operating characteristic curve (AUC) for predicting the primary outcome was significantly higher for AI-assisted quantitative pneumonia burden (0.739). Utilizing AI-assisted quantification of pneumonia burden from chest CT scans offers a more accurate prediction of clinical deterioration in patients with COVID-19 compared to semi-quantitative severity scores, while requiring only a fraction of the analysis time. Quantitative pneumonia burden assessed using AI demonstrated higher performance for predicting clinical deterioration compared to current semi-quantitative scoring systems. Such an AI system has the potential to be applied for image-based triage of COVID-19 patients in clinical practice.
The British journal of radiology
"2023-06-13T00:00:00"
[ "KajetanGrodecki", "AdityaKillekar", "JuditSimon", "AndrewLin", "SebastienCadet", "PriscillaMcElhinney", "CatoChan", "Michelle CWilliams", "Barry DPressman", "PeterJulien", "DebiaoLi", "PeterChen", "NicolaGaibazzi", "UditThakur", "ElisabettaMancini", "CeciliaAgalbato", "JiroMunechika", "HidenariMatsumoto", "RobertoMene", "GianfrancoParati", "FrancoCernigliaro", "NiteshNerlekar", "CamillaTorlasco", "GianlucaPontone", "PalMaurovich-Horvat", "Piotr JSlomka", "DaminiDey" ]
10.1259/bjr.20220180
A transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics.
During the diagnostic process, clinicians leverage multimodal information, such as the chief complaint, medical images and laboratory test results. Deep-learning models for aiding diagnosis have yet to meet this requirement of leveraging multimodal information. Here we report a transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner. Rather than learning modality-specific features, the model leverages embedding layers to convert images and unstructured and structured text into visual tokens and text tokens, and uses bidirectional blocks with intramodal and intermodal attention to learn holistic representations of radiographs, the unstructured chief complaint and clinical history, and structured clinical information such as laboratory test results and patient demographic information. The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary disease (by 12% and 9%, respectively) and in the prediction of adverse clinical outcomes in patients with COVID-19 (by 29% and 7%, respectively). Unified multimodal transformer-based models may help streamline the triaging of patients and facilitate the clinical decision-making process.
Nature biomedical engineering
"2023-06-13T00:00:00"
[ "Hong-YuZhou", "YizhouYu", "ChengdiWang", "ShuZhang", "YuanxuGao", "JiaPan", "JunShao", "GuangmingLu", "KangZhang", "WeiminLi" ]
10.1038/s41551-023-01045-x 10.1038/s41591-018-0307-0 10.1038/s41591-018-0335-9 10.1038/s41568-021-00408-3 10.1093/pcmedi/pbaa017 10.1111/ijd.12330 10.3390/cancers14194823 10.1038/s41746-020-00341-z 10.3389/fimmu.2022.828560 10.1016/j.cell.2020.04.045 10.1016/j.cell.2018.02.010 10.1038/s41591-021-01614-0 10.1038/nature14539 10.1016/j.neunet.2014.09.003 10.1038/s41551-021-00704-1 10.1038/s42256-021-00425-9 10.1038/s41746-020-0273-z 10.1038/s41746-022-00648-z 10.1038/s41591-020-0931-3 10.1148/radiol.2019182716 10.1038/s41551-021-00745-6 10.1038/s41746-021-00446-z 10.1148/radiol.2019182622 10.1164/ajrccm.163.5.2101039 10.1038/s41598-020-62922-y 10.1002/mef2.38 10.1002/mef2.43 10.1038/s41586-023-05881-4 10.1007/s00330-020-07044-9
Remora Namib Beetle Optimization Enabled Deep Learning for Severity of COVID-19 Lung Infection Identification and Classification Using CT Images.
Coronavirus disease 2019 (COVID-19) has seen a crucial outburst for both females and males worldwide. Automatic lung infection detection from medical imaging modalities provides high potential for improving the treatment of patients to tackle COVID-19 disease. COVID-19 detection from lung CT images is a rapid way of diagnosing patients. However, identifying the occurrence of infectious tissues and segmenting them from CT images poses several challenges. Therefore, efficient techniques termed Remora Namib Beetle Optimization_Deep Quantum Neural Network (RNBO_DQNN) and RNBO_Deep Neuro Fuzzy Network (RNBO_DNFN) are introduced for the identification as well as classification of COVID-19 lung infection. Here, the pre-processing of lung CT images is performed utilizing an adaptive Wiener filter, whereas lung lobe segmentation is performed employing the Pyramid Scene Parsing Network (PSP-Net). Afterwards, features are extracted for the classification phase. In the first level of classification, DQNN is utilized, tuned by RNBO. Furthermore, RNBO is designed by merging the Remora Optimization Algorithm (ROA) and Namib Beetle Optimization (NBO). If a classified output is COVID-19, then second-level classification is executed using DNFN for further classification. Additionally, DNFN is also trained by employing the newly proposed RNBO. The devised RNBO_DNFN achieved a maximum testing accuracy of 89.4%, with TNR and TPR of 89.5% and 87.5%, respectively.
Sensors (Basel, Switzerland)
"2023-06-10T00:00:00"
[ "AmgothuShanthi", "SrinivasKoppu" ]
10.3390/s23115316 10.1148/radiol.2020200343 10.1148/ryct.2020200034 10.1186/s12938-020-00831-x 10.1001/jama.2020.1097 10.1016/j.media.2021.102205 10.1056/NEJMoa2001017 10.1016/S1473-3099(20)30120-1 10.1109/ACCESS.2020.3027738 10.1155/2021/5544742 10.1109/TMI.2020.2995965 10.1109/ACCESS.2020.2994762 10.1007/s00330-021-07715-1 10.1007/s42979-021-00874-4 10.1109/TMI.2020.3000314 10.1016/j.image.2022.116835 10.1016/j.eng.2020.04.010 10.1101/2020.03.12.20027185 10.1016/j.eswa.2021.115665 10.1002/cpe.6524 10.3390/electronics11244137 10.1109/MCE.2022.3211455 10.1109/TMI.2020.2996645 10.3390/electronics9101634
A Review Paper about Deep Learning for Medical Image Analysis.
Medical imaging refers to the process of obtaining images of internal organs for therapeutic purposes such as discovering or studying diseases. The primary objective of medical image analysis is to improve the efficacy of clinical research and treatment options. Deep learning has revamped medical image analysis, yielding excellent results in image processing tasks such as registration, segmentation, feature extraction, and classification. The prime motivations for this are the availability of computational resources and the resurgence of deep convolutional neural networks. Deep learning techniques are good at observing hidden patterns in images and supporting clinicians in achieving diagnostic perfection. It has proven to be the most effective method for organ segmentation, cancer detection, disease categorization, and computer-assisted diagnosis. Many deep learning approaches have been published to analyze medical images for various diagnostic purposes. In this paper, we review the work exploiting current state-of-the-art deep learning approaches in medical image processing. We begin the survey by providing a synopsis of research works in medical imaging based on convolutional neural networks. Second, we discuss popular pretrained models and general adversarial networks that aid in improving convolutional networks' performance. Finally, to ease direct evaluation, we compile the performance metrics of deep learning models focusing on COVID-19 detection and child bone age prediction.
Computational and mathematical methods in medicine
"2023-06-07T00:00:00"
[ "BagherSistaninejhad", "HabibRasi", "ParisaNayeri" ]
10.1155/2023/7091301 10.1002/mp.13764 10.1109/MPUL.2011.942929 10.1016/j.neucom.2022.04.065 10.1016/j.patcog.2018.05.014 10.3390/su13031224 10.1109/TMI.2016.2528162 10.1016/j.artmed.2020.101938 10.1016/j.ejmp.2021.05.003 10.5772/intechopen.69792 10.3390/app10061999 10.1109/TPAMI.2016.2572683 10.1016/j.neucom.2020.10.031 10.1007/978-3-319-24574-4_28 10.1007/s13369-020-05309-5 10.1016/j.cmpb.2020.105876 10.1186/s12880-021-00728-8 10.1109/TPAMI.2018.2844175 10.1016/j.cmpb.2021.106141 10.1007/s13244-018-0639-9 10.1109/5.726791 10.1145/3065386 10.1109/ACCESS.2021.3131741 10.1007/s10278-020-00371-9 10.3390/s21175704 10.3390/s20164373 10.1016/j.eswa.2020.113274 10.24018/ejece.2021.5.1.268 10.3390/sym12111787 10.1145/3422622 10.1016/j.clinimag.2020.10.014 10.1155/2021/9956983 10.1155/2021/5536903 10.1016/j.bspc.2021.102901 10.1016/j.mehy.2020.109684 10.3390/app10020559 10.3390/electronics9071066 10.1109/ACCESS.2021.3079204 10.1007/978-3-030-73689-7_52 10.1155/2021/6296811 10.1016/j.aej.2021.03.048 10.1007/s13534-020-00168-3 10.3390/diagnostics11112147 10.3390/s20205736 10.1016/j.cmpb.2021.106018 10.3390/biomedicines10020223 10.1007/s13369-020-04480-z 10.1016/j.compbiomed.2020.103884 10.1109/TPAMI.2017.2699184 10.1038/s41598-020-76550-z 10.1148/radiol.2018180736 10.1007/s42600-021-00151-6 10.1109/ACCESS.2020.2994762 10.1109/TMI.2020.2993291 10.3390/s21041480 10.1155/2021/5528441 10.1109/TCBB.2020.3009859 10.1109/ACCESS.2020.3016780 10.1155/2020/8460493 10.3390/app10207233 10.1007/s11548-020-02266-0 10.1007/s11042-021-10935-8
Deep learning framework for rapid and accurate respiratory COVID-19 prediction using chest X-ray images.
COVID-19 is a contagious disease that affects the human respiratory system. Infected individuals may develop serious illnesses, and complications may result in death. Using medical images to detect COVID-19 from essentially identical thoracic anomalies is challenging because it is time-consuming, laborious, and prone to human error. This study proposes an end-to-end deep-learning framework based on deep feature concatenation and a Multi-head Self-attention network. Feature concatenation involves fine-tuning the pre-trained backbone models of DenseNet, VGG-16, and InceptionV3, which are trained on a large-scale ImageNet, whereas a Multi-head Self-attention network is adopted for performance gain. End-to-end training and evaluation procedures are conducted using the COVID-19_Radiography_Dataset for binary and multi-classification scenarios. The proposed model achieved overall accuracies (96.33% and 98.67%) and F1_scores (92.68% and 98.67%) for multi and binary classification scenarios, respectively. In addition, this study highlights the difference in accuracy (98.0% vs. 96.33%) and F_1 score (97.34% vs. 95.10%) when compared with feature concatenation against the highest individual model performance. Furthermore, a virtual representation of the saliency maps of the employed attention mechanism focusing on the abnormal regions is presented using explainable artificial intelligence (XAI) technology. The proposed framework provided better COVID-19 prediction results outperforming other recent deep learning models using the same dataset.
Journal of King Saud University. Computer and information sciences
"2023-06-05T00:00:00"
[ "Chiagoziem CUkwuoma", "DongshengCai", "Md Belal BinHeyat", "OlusolaBamisile", "HumphreyAdun", "ZaidAl-Huda", "Mugahed AAl-Antari" ]
10.1016/j.jksuci.2023.101596
POLCOVID: a multicenter multiclass chest X-ray database (Poland, 2020-2021).
The outbreak of the SARS-CoV-2 pandemic has pushed healthcare systems worldwide to their limits, resulting in increased waiting times for diagnosis and required medical assistance. With chest radiographs (CXR) being one of the most common COVID-19 diagnosis methods, many artificial intelligence tools for image-based COVID-19 detection have been developed, often trained on a small number of images from COVID-19-positive patients. Thus, the need for high-quality and well-annotated CXR image databases has increased. This paper introduces the POLCOVID dataset, containing chest X-ray (CXR) images of patients with COVID-19 or other-type pneumonia, and healthy individuals, gathered from 15 Polish hospitals. The original radiographs are accompanied by preprocessed images limited to the lung area and the corresponding lung masks obtained with a segmentation model. Moreover, manually created lung masks are provided for a part of the POLCOVID dataset and for four other publicly available CXR image collections. The POLCOVID dataset can help in pneumonia or COVID-19 diagnosis, while the set of matched images and lung masks may serve for the development of lung segmentation solutions.
Scientific data
"2023-06-03T00:00:00"
[ "AleksandraSuwalska", "JoannaTobiasz", "WojciechPrazuch", "MarekSocha", "PawelFoszner", "DamianPiotrowski", "KatarzynaGruszczynska", "MagdalenaSliwinska", "JerzyWalecki", "TadeuszPopiela", "GrzegorzPrzybylski", "MateuszNowak", "PiotrFiedor", "MalgorzataPawlowska", "RobertFlisiak", "KrzysztofSimon", "GabrielaZapolska", "BarbaraGizycka", "EdytaSzurowska", "NoneNone", "MichalMarczyk", "AndrzejCieszanowski", "JoannaPolanska" ]
10.1038/s41597-023-02229-5 10.1038/s41591-021-01381-y 10.1038/s41579-020-00461-z 10.1136/bmj.m2426 10.1148/radiol.2020201160 10.1038/s41598-020-76550-z 10.1016/j.media.2020.101794 10.1016/j.eswa.2020.114054 10.1016/j.media.2021.102225 10.1016/j.cell.2018.02.010 10.1109/ACCESS.2020.3010287 10.1016/j.media.2021.102216 10.1109/TNB.2017.2676725 10.7303/syn50877085 10.1002/cem.1123
Harnessing Machine Learning in Early COVID-19 Detection and Prognosis: A Comprehensive Systematic Review.
During the early phase of the COVID-19 pandemic, reverse transcriptase-polymerase chain reaction (RT-PCR) testing faced limitations, prompting the exploration of machine learning (ML) alternatives for diagnosis and prognosis. Providing a comprehensive appraisal of such decision support systems and their use in COVID-19 management can aid the medical community in making informed decisions during the risk assessment of their patients, especially in low-resource settings. Therefore, the objective of this study was to systematically review the studies that predicted the diagnosis of COVID-19 or the severity of the disease using ML. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA), we conducted a literature search of MEDLINE (OVID), Scopus, EMBASE, and IEEE Xplore from January 1 to June 31, 2020. The outcomes were COVID-19 diagnosis or prognostic measures such as death, need for mechanical ventilation, admission, and acute respiratory distress syndrome. We included peer-reviewed observational studies, clinical trials, research letters, case series, and reports. We extracted data about the study's country, setting, sample size, data source, dataset, diagnostic or prognostic outcomes, prediction measures, type of ML model, and measures of diagnostic accuracy. Bias was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). This study was registered in the International Prospective Register of Systematic Reviews (PROSPERO), with the number CRD42020197109. A total of 66 records were included for data extraction. Forty-three (64%) studies used secondary data. The majority of studies were from Chinese authors (30%). Most of the literature (79%) relied on chest imaging for prediction, while the remainder used various laboratory indicators, including hematological, biochemical, and immunological markers. Thirteen studies explored predicting COVID-19 severity, while the rest predicted diagnosis.
Seventy percent of the articles used deep learning models, while 30% used traditional ML algorithms. Most studies reported high sensitivity, specificity, and accuracy for the ML models (exceeding 90%). The overall concern about the risk of bias was "unclear" in 56% of the studies. This was mainly due to concerns about selection bias. ML may help identify COVID-19 patients in the early phase of the pandemic, particularly in the context of chest imaging. Although these studies reflect that these ML models exhibit high accuracy, the novelty of these models and the biases in dataset selection make using them as a replacement for the clinicians' cognitive decision-making questionable. Continued research is needed to enhance the robustness and reliability of ML systems in COVID-19 diagnosis and prognosis.
Cureus
"2023-06-02T00:00:00"
[ "RufaidahDabbagh", "AmrJamal", "Jakir HossainBhuiyan Masud", "Maher ATiti", "Yasser SAmer", "AfnanKhayat", "Taha SAlhazmi", "LayalHneiny", "Fatmah ABaothman", "MetabAlkubeyyer", "Samina AKhan", "Mohamad-HaniTemsah" ]
10.7759/cureus.38373
A multimodal AI-based non-invasive COVID-19 grading framework powered by deep learning, manta ray, and fuzzy inference system from multimedia vital signs.
The COVID-19 pandemic has presented unprecedented challenges to healthcare systems worldwide. One of the key challenges in controlling and managing the pandemic is accurate and rapid diagnosis of COVID-19 cases. Traditional diagnostic methods such as RT-PCR tests are time-consuming and require specialized equipment and trained personnel. Computer-aided diagnosis systems and artificial intelligence (AI) have emerged as promising tools for developing cost-effective and accurate diagnostic approaches. Most studies in this area have focused on diagnosing COVID-19 based on a single modality, such as chest X-rays or cough sounds. However, relying on a single modality may not accurately detect the virus, especially in its early stages. In this research, we propose a non-invasive diagnostic framework consisting of four cascaded layers that work together to accurately detect COVID-19 in patients. The first layer of the framework performs basic diagnostics such as patient temperature, blood oxygen level, and breathing profile, providing initial insights into the patient's condition. The second layer analyzes the coughing profile, while the third layer evaluates chest imaging data such as X-ray and CT scans. Finally, the fourth layer utilizes a fuzzy logic inference system based on the previous three layers to generate a reliable and accurate diagnosis. To evaluate the effectiveness of the proposed framework, we used two datasets: the Cough Dataset and the COVID-19 Radiography Database. The experimental results demonstrate that the proposed framework is effective and trustworthy in terms of accuracy, precision, sensitivity, specificity, F1-score, and balanced accuracy. The audio-based classification achieved an accuracy of 96.55%, while the CXR-based classification achieved an accuracy of 98.55%. The proposed framework has the potential to significantly improve the accuracy and speed of COVID-19 diagnosis, allowing for more effective control and management of the pandemic. 
Furthermore, the framework's non-invasive nature makes it a more attractive option for patients, reducing the risk of infection and discomfort associated with traditional diagnostic methods.
Heliyon
"2023-05-30T00:00:00"
[ "Saleh AteeqAlmutairi" ]
10.1016/j.heliyon.2023.e16552
Fusion-Extracted Features by Deep Networks for Improved COVID-19 Classification with Chest X-ray Radiography.
Convolutional neural networks (CNNs) have shown promise in accurately diagnosing coronavirus disease 2019 (COVID-19) and bacterial pneumonia using chest X-ray images. However, determining the optimal feature extraction approach is challenging. This study investigates the use of fusion-extracted features from deep networks to improve the accuracy of COVID-19 and bacterial pneumonia classification with chest X-ray radiography. A Fusion CNN method was developed using five different deep learning models, after transfer learning, to extract image features (Fusion CNN). The combined features were used to build a support vector machine (SVM) classifier with an RBF kernel. The performance of the model was evaluated using accuracy, Kappa values, recall rate, and precision scores. The Fusion CNN model achieved an accuracy and Kappa value of 0.994 and 0.991, with precision scores for the normal, COVID-19, and bacterial groups of 0.991, 0.998, and 0.994, respectively. The results indicate that the Fusion CNN models with the SVM classifier provided reliable and accurate classification performance, with Kappa values no less than 0.990. The Fusion CNN approach could be a possible solution to further enhance accuracy. Therefore, the study demonstrates the potential of deep learning and fusion-extracted features for accurate COVID-19 and bacterial pneumonia classification with chest X-ray radiography.
Healthcare (Basel, Switzerland)
"2023-05-27T00:00:00"
[ "Kuo-HsuanLin", "Nan-HanLu", "TakahideOkamoto", "Yung-HuiHuang", "Kuo-YingLiu", "AkariMatsushima", "Che-ChengChang", "Tai-BeenChen" ]
10.3390/healthcare11101367
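As an illustrative aside (not the authors' code), the feature-fusion step described in the Fusion CNN abstract above amounts to concatenating per-backbone feature vectors before fitting a downstream classifier; all array shapes below are invented:

```python
import numpy as np

def fuse_features(feature_blocks):
    """Concatenate per-model feature arrays along the feature axis.

    feature_blocks: list of (n_samples, n_features_i) arrays, one per
    pretrained CNN backbone. The fused array feeds a downstream
    classifier (the paper uses an SVM with an RBF kernel).
    """
    return np.concatenate(feature_blocks, axis=1)

# Hypothetical pooled features from three backbones for 4 images
rng = np.random.default_rng(0)
f_a = rng.normal(size=(4, 512))
f_b = rng.normal(size=(4, 2048))
f_c = rng.normal(size=(4, 1024))

fused = fuse_features([f_a, f_b, f_c])  # shape (4, 3584)
```

In practice each block would come from a global-pooling layer of a transfer-learned network rather than random numbers.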
A Novel Deep Learning-Based Classification Framework for COVID-19 Assisted with Weighted Average Ensemble Modeling.
COVID-19 is an infectious disease caused by the deadly virus SARS-CoV-2 that affects the lungs of the patient. Different symptoms, including fever, muscle pain and respiratory syndrome, can be identified in COVID-19-affected patients. The disease needs to be diagnosed in a timely manner, otherwise the lung infection can turn into a severe form and the patient's life may be in danger. In this work, an ensemble deep learning-based technique is proposed for COVID-19 detection that can classify the disease with high accuracy, efficiency, and reliability. A weighted average ensemble (WAE) prediction was performed by combining three CNN models, namely Xception, VGG19 and ResNet50V2, achieving 97.25% and 94.10% accuracy for binary and multiclass classification, respectively. To accurately detect the disease, different test methods have been proposed and developed, some of which are even being used in real-time situations. RT-PCR is one of the most successful COVID-19 detection methods, and is being used worldwide with high accuracy and sensitivity. However, complexity and time-consuming manual processes are limitations of this method. To make the detection process automated, researchers across the world have started to use deep learning to detect COVID-19 applied on medical imaging. Although most of the existing systems offer high accuracy, several limitations, including high variance, overfitting, and generalization errors, can degrade system performance. Some of the reasons behind those limitations are a lack of reliable data resources, missing preprocessing techniques, and improper model selection, which eventually create reliability issues. Reliability is an important factor for any healthcare system. Here, transfer learning with better preprocessing techniques applied on two benchmark datasets makes the work more reliable. The weighted average ensemble technique with hyperparameter tuning ensures better accuracy than using a randomly selected single CNN model.
Diagnostics (Basel, Switzerland)
"2023-05-27T00:00:00"
[ "Gouri ShankarChakraborty", "SalilBatra", "AmanSingh", "GhulamMuhammad", "Vanessa YelamosTorres", "MakulMahajan" ]
10.3390/diagnostics13101806
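As an illustrative aside (not the authors' code), the weighted average ensemble (WAE) described in the abstract above can be sketched as follows; the per-model probabilities and weights are made-up values:

```python
import numpy as np

def weighted_average_ensemble(prob_list, weights):
    """Combine per-model class-probability arrays with a weighted average.

    prob_list: list of (n_samples, n_classes) arrays, one per model.
    weights: one non-negative weight per model; normalized to sum to 1.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the result stays a probability distribution
    stacked = np.stack(prob_list)             # (n_models, n_samples, n_classes)
    return np.tensordot(w, stacked, axes=1)   # (n_samples, n_classes)

# Hypothetical softmax outputs from three models (e.g. Xception, VGG19, ResNet50V2)
p1 = np.array([[0.7, 0.3], [0.2, 0.8]])
p2 = np.array([[0.6, 0.4], [0.4, 0.6]])
p3 = np.array([[0.9, 0.1], [0.1, 0.9]])

avg = weighted_average_ensemble([p1, p2, p3], weights=[0.5, 0.2, 0.3])
pred = avg.argmax(axis=1)  # final class per sample
```

In practice the weights would themselves be tuned on a validation set (the hyperparameter tuning the abstract mentions).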
CRV-NET: Robust Intensity Recognition of Coronavirus in Lung Computerized Tomography Scan Images.
The early diagnosis of infectious diseases is demanded by digital healthcare systems. Currently, the detection of the new coronavirus disease (COVID-19) is a major clinical requirement. For COVID-19 detection, deep learning models are used in various studies, but their robustness is still limited. In recent years, deep learning models have increased in popularity in almost every area, particularly in medical image processing and analysis. The visualization of the human body's internal structure is critical in medical analysis; many imaging techniques are in use to perform this job. A computerized tomography (CT) scan is one of them, and it has been generally used for the non-invasive observation of the human body. The development of an automatic segmentation method for lung CT scans showing COVID-19 can save experts time and can reduce human error. In this article, CRV-NET is proposed for the robust detection of COVID-19 in lung CT scan images. A public dataset (the SARS-CoV-2 CT Scan dataset) is used for the experimental work and customized according to the scenario of the proposed model. The proposed modified deep-learning-based U-Net model is trained on a custom dataset with 221 training images and their ground truth, which was labeled by an expert. The proposed model is tested on 100 test images, and the results show that the model segments COVID-19 with a satisfactory level of accuracy. Moreover, the comparison of the proposed CRV-NET with different state-of-the-art convolutional neural network models (CNNs), including the U-Net model, shows better results in terms of accuracy (96.67%) and robustness (a low epoch value in detection and the smallest training data size).
Diagnostics (Basel, Switzerland)
"2023-05-27T00:00:00"
[ "UzairIqbal", "RomilImtiaz", "Abdul Khader JilaniSaudagar", "Khubaib AmjadAlam" ]
10.3390/diagnostics13101783
A Survey of COVID-19 Diagnosis Using Routine Blood Tests with the Aid of Artificial Intelligence Techniques.
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), causing a disease called COVID-19, is a class of acute respiratory syndrome that has considerably affected the global economy and healthcare system. This virus is diagnosed using a traditional technique known as the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. However, RT-PCR often produces false-negative and otherwise incorrect results. Current works indicate that COVID-19 can also be diagnosed using imaging modalities, such as CT scans and X-rays, as well as blood tests. Nevertheless, X-rays and CT scans cannot always be used for patient screening because of high costs, radiation doses, and an insufficient number of devices. Therefore, there is a requirement for a less expensive and faster diagnostic model to recognize the positive and negative cases of COVID-19. Blood tests are easily performed and cost less than RT-PCR and imaging tests. Since biochemical parameters in routine blood tests vary during COVID-19 infection, they may supply physicians with precise information about the diagnosis of COVID-19. This study reviewed some newly emerging artificial intelligence (AI)-based methods to diagnose COVID-19 using routine blood tests. We gathered information about research resources and inspected 92 articles that were carefully chosen from a variety of publishers, such as IEEE, Springer, Elsevier, and MDPI. These 92 studies are then organized into two tables, covering articles that use machine learning and deep learning models, respectively, to diagnose COVID-19 from routine blood-test datasets. In these studies, Random Forest and logistic regression are the most widely used machine learning methods for diagnosing COVID-19, and the most widely used performance metrics are accuracy, sensitivity, specificity, and AUC. Finally, we conclude by discussing and analyzing these studies. This survey can serve as a starting point for novice researchers working on COVID-19 classification.
Diagnostics (Basel, Switzerland)
"2023-05-27T00:00:00"
[ "SoheilaAbbasi Habashi", "MuratKoyuncu", "RoohallahAlizadehsani" ]
10.3390/diagnostics13101749
COVID-ConvNet: A Convolutional Neural Network Classifier for Diagnosing COVID-19 Infection.
The novel coronavirus (COVID-19) pandemic still has a significant impact on the worldwide population's health and well-being. Effective patient screening, including radiological examination employing chest radiography as one of the main screening modalities, is an important step in the battle against the disease. Indeed, the earliest studies on COVID-19 found that patients infected with COVID-19 present with characteristic anomalies in chest radiography. In this paper, we introduce COVID-ConvNet, a deep convolutional neural network (DCNN) design suitable for detecting COVID-19 symptoms from chest X-ray (CXR) scans. The proposed deep learning (DL) model was trained and evaluated using 21,165 CXR images from the COVID-19 Database, a publicly available dataset. The experimental results demonstrate that our COVID-ConvNet model has a high prediction accuracy of 97.43% and outperforms recent related works by up to 5.9% in terms of prediction accuracy.
Diagnostics (Basel, Switzerland)
"2023-05-27T00:00:00"
[ "Ibtihal A LAlablani", "Mohammed J FAlenazi" ]
10.3390/diagnostics13101675
Learning without forgetting by leveraging transfer learning for detecting COVID-19 infection from CT images.
COVID-19, a global pandemic, has killed thousands in the last three years. Pathogenic laboratory testing is the gold standard but has a high false-negative rate, making alternate diagnostic procedures necessary to fight against it. Computed Tomography (CT) scans help diagnose and monitor COVID-19, especially in severe cases. However, visual inspection of CT images takes time and effort. In this study, we employ Convolutional Neural Networks (CNNs) to detect coronavirus infection from CT images. The proposed study utilized transfer learning on three pre-trained deep CNN models, namely VGG-16, ResNet, and wide ResNet, to diagnose and detect COVID-19 infection from CT images. However, when the pre-trained models are retrained, they lose generalization capability on the data in the original datasets. The novel aspect of this work is the integration of deep CNN architectures with Learning without Forgetting (LwF) to enhance the model's generalization capabilities on both trained and new data samples. LwF makes the network use its learning capabilities in training on the new dataset while preserving the original competencies. The deep CNN models with the LwF method are evaluated on the original images and on CT scans of individuals infected with the Delta variant of the SARS-CoV-2 virus. The experimental results show that, of the three fine-tuned CNN models with the LwF method, the wide ResNet model's performance is superior and effective in classifying the original and Delta-variant datasets, with an accuracy of 93.08% and 92.32%, respectively.
Scientific reports
"2023-05-26T00:00:00"
[ "MalligaSubramanian", "Veerappampalayam EaswaramoorthySathishkumar", "JaehyukCho", "KogilavaniShanmugavadivel" ]
10.1038/s41598-023-34908-z
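As an illustrative aside (not the authors' implementation), the Learning without Forgetting objective described in the abstract above combines a cross-entropy loss on the new task with a distillation term that keeps the retrained network's softened outputs close to those recorded from the frozen original network. A minimal numpy sketch, with all logits invented:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def lwf_loss(new_logits, old_logits, labels, T=2.0, lam=1.0):
    """Learning-without-Forgetting objective (sketch).

    Cross-entropy on the new task plus a distillation term, weighted
    by lam, that penalizes drift from the original (frozen) network's
    softened outputs at temperature T.
    """
    p_new = softmax(new_logits)
    ce = -np.log(p_new[np.arange(len(labels)), labels]).mean()
    q_old = softmax(old_logits / T)  # softened targets from the frozen model
    q_new = softmax(new_logits / T)
    distill = -(q_old * np.log(q_new)).sum(axis=1).mean()
    return ce + lam * distill

logits_new = np.array([[2.0, 0.5], [0.1, 1.5]])
logits_old = np.array([[1.8, 0.7], [0.3, 1.2]])
loss = lwf_loss(logits_new, logits_old, labels=np.array([0, 1]))
```

In a real training loop both terms would be backpropagated jointly, with the old logits recorded once before fine-tuning begins.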
A Systematic Literature Review and Future Perspectives for Handling Big Data Analytics in COVID-19 Diagnosis.
In today's digital world, information is growing along with the expansion of Internet usage worldwide. As a consequence, a bulk of data is generated constantly, known as "Big Data". Big Data analytics, one of the fastest-evolving technologies of the twenty-first century, is a promising field for extracting knowledge from very large datasets while enhancing benefits and lowering costs. Due to the enormous success of big data analytics, the healthcare sector is increasingly shifting toward adopting these approaches to diagnose diseases. Due to the recent boom in medical big data and the development of computational methods, researchers and practitioners have gained the ability to mine and visualize medical big data on a larger scale. Thus, with the integration of big data analytics in the healthcare sector, precise medical data analysis is now feasible, and early sickness detection, health-status monitoring, patient treatment, and community services are now achievable. With all these improvements, the deadly disease COVID-19 is considered in this comprehensive review, with the intention of offering remedies utilizing big data analytics. The use of big data applications is vital to managing pandemic conditions, such as predicting outbreaks of COVID-19 and identifying cases and patterns of spread of COVID-19. Research is still being done on leveraging big data analytics to forecast COVID-19, but precise and early identification of the disease is still lacking due to the volume of medical records spread across dissimilar medical imaging modalities. Meanwhile, digital imaging has become essential to COVID-19 diagnosis, but the main challenge is the storage of massive volumes of data. Taking these limitations into account, a comprehensive analysis is presented in this systematic literature review (SLR) to provide a deeper understanding of big data in the field of COVID-19.
New generation computing
"2023-05-25T00:00:00"
[ "NagamaniTenali", "Gatram Rama MohanBabu" ]
10.1007/s00354-023-00211-8
Deep Convolutional Neural Networks for Detecting COVID-19 Using Medical Images: A Survey.
Coronavirus Disease 2019 (COVID-19), which is caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-COV-2), surprised the world in December 2019 and has threatened the lives of millions of people. Countries all over the world closed worship places and shops, prevented gatherings, and implemented curfews to stand against the spread of COVID-19. Deep Learning (DL) and Artificial Intelligence (AI) can play a major role in detecting and fighting this disease. Deep learning can be used to detect COVID-19 symptoms and signs from different imaging modalities, such as X-Ray, Computed Tomography (CT), and Ultrasound (US) images. This could help in identifying COVID-19 cases as a first step to curing them. In this paper, we reviewed the research studies conducted from January 2020 to September 2022 on deep learning models used in COVID-19 detection. This paper clarified the three most common imaging modalities (X-Ray, CT, and US), described the DL approaches used in this detection, and compared these approaches. This paper also provided future directions for this field in the fight against COVID-19.
New generation computing
"2023-05-25T00:00:00"
[ "RanaKhattab", "Islam RAbdelmaksoud", "SamirAbdelrazek" ]
10.1007/s00354-023-00213-6
COVID-19 Diagnosis in Computerized Tomography (CT) and X-ray Scans Using Capsule Neural Network.
This study proposes a deep-learning-based solution (named CapsNetCovid) for COVID-19 diagnosis using a capsule neural network (CapsNet). CapsNets are robust to image rotations and affine transformations, which is advantageous when processing medical imaging datasets. This study presents a performance analysis of CapsNets on standard images and their augmented variants for binary and multi-class classification. CapsNetCovid was trained and evaluated on two COVID-19 datasets of CT images and X-ray images. It was also evaluated on eight augmented datasets. The results show that the proposed model achieved classification accuracy, precision, sensitivity, and F1-score of 99.929%, 99.887%, 100%, and 99.319%, respectively, for the CT images. It also achieved a classification accuracy, precision, sensitivity, and F1-score of 94.721%, 93.864%, 92.947%, and 93.386%, respectively, for the X-ray images. This study presents a comparative analysis between CapsNetCovid, CNN, DenseNet121, and ResNet50 in terms of their ability to correctly identify randomly transformed and rotated CT and X-ray images without the use of data augmentation techniques. The analysis shows that CapsNetCovid outperforms CNN, DenseNet121, and ResNet50 when trained and evaluated on CT and X-ray images without data augmentation. We hope that this research will aid in improving decision making and diagnostic accuracy of medical professionals when diagnosing COVID-19.
Diagnostics (Basel, Switzerland)
"2023-05-16T00:00:00"
[ "Andronicus AAkinyelu", "BubacarrBah" ]
10.3390/diagnostics13081484
A Real Time Method for Distinguishing COVID-19 Utilizing 2D-CNN and Transfer Learning.
Rapid identification of COVID-19 can assist in making decisions for effective treatment and epidemic prevention. The PCR-based test is expert-dependent, is time-consuming, and has limited sensitivity. By inspecting chest X-ray (CXR) images, COVID-19, pneumonia, and other lung infections can be detected in real time. The current state-of-the-art literature suggests that deep learning (DL) is highly advantageous in automatic disease classification utilizing CXR images. The goal of this study is to develop models by employing DL models for identifying COVID-19 and other lung disorders more efficiently. For this study, a dataset of 18,564 CXR images with seven disease categories was created from multiple publicly available sources. Four DL architectures, including the proposed CNN model and the pretrained VGG-16, VGG-19, and Inception-v3 models, were applied to identify healthy and six lung diseases (fibrosis, lung opacity, viral pneumonia, bacterial pneumonia, COVID-19, and tuberculosis). Accuracy, precision, recall, F1-score, area under the curve (AUC), and testing time were used to evaluate the performance of these four models. The results demonstrated that the proposed CNN model outperformed all other DL models employed for a seven-class classification with an accuracy of 93.15% and average values for precision, recall, F1-score, and AUC of 0.9343, 0.9443, 0.9386, and 0.9939, respectively. The CNN model performed equally well when other multiclass classifications including normal and COVID-19 as the common classes were considered, yielding accuracy values of 98%, 97.49%, 97.81%, 96%, and 96.75% for two, three, four, five, and six classes, respectively. The proposed model can also identify COVID-19 with shorter training and testing times compared to other transfer learning models.
Sensors (Basel, Switzerland)
"2023-05-13T00:00:00"
[ "AbidaSultana", "MdNahiduzzaman", "Sagor ChandroBakchy", "Saleh MohammedShahriar", "Hasibul IslamPeyal", "Muhammad E HChowdhury", "AmithKhandakar", "MohamedArselene Ayari", "MominulAhsan", "JulfikarHaider" ]
10.3390/s23094458
A Lightweight AMResNet Architecture with an Attention Mechanism for Diagnosing COVID-19.
COVID-19 has become a worldwide epidemic disease and a new challenge for all mankind. The potential advantages of chest X-ray images on COVID-19 were discovered. We proposed a lightweight and effective Convolution Neural Network framework based on chest X-ray images for the diagnosis of COVID-19, named AMResNet. By introducing the channel attention mechanism and image spatial information attention mechanism, a better performance level can be achieved without increasing the number of model parameters. In the collected data sets, we achieved an average accuracy rate of more than 92%, and the sensitivity and specificity of specific disease categories were also above 90%. The convolution neural network framework can be used as a novel method for artificial intelligence to diagnose COVID-19 or other diseases based on medical images.
Current medical imaging
"2023-05-12T00:00:00"
[ "QiZhou", "Jamal AlzobairHammad Kowah", "HuijunLi", "MingqingYuan", "LiheJiang", "XuLiu" ]
10.2174/1573405620666230426121437
Deep learning approach for early prediction of COVID-19 mortality using chest X-ray and electronic health records.
An artificial-intelligence (AI) model for predicting the prognosis or mortality of coronavirus disease 2019 (COVID-19) patients will allow efficient allocation of limited medical resources. We developed an early mortality prediction ensemble model for COVID-19 using AI models with initial chest X-ray and electronic health record (EHR) data. We used convolutional neural network (CNN) models (Inception-ResNet-V2 and EfficientNet) for chest X-ray analysis and multilayer perceptron (MLP), Extreme Gradient Boosting (XGBoost), and random forest (RF) models for EHR data analysis. The Gradient-weighted Class Activation Mapping and Shapley Additive Explanations (SHAP) methods were used to determine the effects of these features on COVID-19. We developed an ensemble model (Area under the receiver operating characteristic curve of 0.8698) using a soft voting method with weight differences for CNN, XGBoost, MLP, and RF models. To resolve the data imbalance, we conducted F1-score optimization by adjusting the cutoff values to optimize the model performance (F1 score of 0.77). Our study is meaningful in that we developed an early mortality prediction model using only the initial chest X-ray and EHR data of COVID-19 patients. Early prediction of the clinical courses of patients is helpful for not only treatment but also bed management. Our results confirmed the performance improvement of the ensemble model achieved by combining AI models. Through the SHAP method, laboratory tests that indicate the factors affecting COVID-19 mortality were discovered, highlighting the importance of these tests in managing COVID-19 patients.
BMC bioinformatics
"2023-05-10T00:00:00"
[ "Seung MinBaik", "Kyung SookHong", "Dong JinPark" ]
10.1186/s12859-023-05321-0 10.1016/j.radi.2020.09.010 10.3390/diagnostics12040920 10.3390/diagnostics12040821 10.1002/jmv.27352 10.1371/journal.pone.0252384 10.1371/journal.pone.0249285 10.3389/fcvm.2021.638011 10.1109/jbhi.2020.3012383 10.1007/978-3-030-33128-3_1 10.1111/cyt.12942 10.1007/s13244-018-0639-9 10.1007/s12559-020-09751-3 10.1016/j.compbiomed.2020.103792 10.1007/s00500-020-05424-3 10.1007/s00330-021-08049-8 10.1016/s2589-7500(21)00039-x 10.3389/fpsyg.2021.651398 10.1186/s40462-021-00245-x 10.1016/j.jneumeth.2021.109098 10.1016/j.compbiomed.2022.105550 10.3389/fmed.2021.676343 10.1002/jcla.24053 10.1002/jmv.26082 10.1016/j.mayocp.2020.04.006 10.3389/fpubh.2022.857368 10.3389/fcvm.2021.697737 10.1109/tpami.2021.3083089 10.1007/s10439-018-02116-w 10.1038/s41598-021-87171-5 10.1016/j.compbiomed.2021.104829 10.3390/s21227475 10.1007/s00521-021-06177-2 10.2174/1573409914666180828105228 10.1001/jamapsychiatry.2019.3671 10.1016/j.envpol.2019.06.088
COVID-19 disease identification network based on weakly supervised feature selection.
The coronavirus disease 2019 (COVID-19) outbreak has resulted in countless infections and deaths worldwide, posing increasing challenges for the health care system. The use of artificial intelligence to assist in diagnosis not only had a high accuracy rate but also saved time and effort in the sudden outbreak phase with the lack of doctors and medical equipment. This study aimed to propose a weakly supervised COVID-19 classification network (W-COVNet). This network was divided into three main modules: weakly supervised feature selection module (W-FS), deep learning bilinear feature fusion module (DBFF) and Grad-CAM++ based network visualization module (Grad-V). The first module, W-FS, mainly removed redundant background features from computed tomography (CT) images, performed feature selection and retained core feature regions. The second module, DBFF, mainly used two symmetric networks to extract different features and thus obtain rich complementary features. The third module, Grad-V, allowed the visualization of lesions in unlabeled images. A fivefold cross-validation experiment showed an average classification accuracy of 85.3%, and a comparison with seven advanced classification models showed that our proposed network had a better performance.
Mathematical biosciences and engineering : MBE
"2023-05-10T00:00:00"
[ "JingyaoLiu", "QingheFeng", "YuMiao", "WeiHe", "WeiliShi", "ZhengangJiang" ]
10.3934/mbe.2023409
An efficient, lightweight MobileNetV2-based fine-tuned model for COVID-19 detection using chest X-ray images.
In recent years, deep learning's identification of cancer, lung disease and heart disease, among others, has contributed to its rising popularity. Deep learning has also contributed to the examination of COVID-19, which is a subject that is currently the focus of considerable scientific debate. COVID-19 detection based on chest X-ray (CXR) images primarily depends on convolutional neural network transfer learning techniques. Moreover, the majority of these methods are evaluated by using CXR data from a single source, which makes them prohibitively expensive. On a variety of datasets, current methods for COVID-19 detection may not perform as well. Moreover, most current approaches focus only on COVID-19 detection. This study introduces a rapid and lightweight MobileNetV2-based model for accurate recognition of COVID-19 based on CXR images; this is done by using machine vision algorithms that focus largely on robust and potent feature-learning capabilities. The proposed model is assessed by using a dataset obtained from various sources. In addition to COVID-19, the dataset includes bacterial and viral pneumonia. This model is capable of identifying COVID-19, as well as other lung disorders, including bacterial and viral pneumonia, among others. Experiments with each model were thoroughly analyzed. According to the findings of this investigation, MobileNetV2, with its 92% and 93% training validity and 88% precision, was the most applicable and reliable model for this diagnosis. As a result, one may infer that this study has practical value in terms of giving a reliable reference to the radiologist and theoretical significance in terms of establishing strategies for developing robust features with great presentation ability.
Mathematical biosciences and engineering : MBE
"2023-05-10T00:00:00"
[ "ShubashiniVelu" ]
10.3934/mbe.2023368
Data augmentation based semi-supervised method to improve COVID-19 CT classification.
The Coronavirus (COVID-19) outbreak of December 2019 has become a serious threat to people around the world, creating a health crisis that infected millions of lives, as well as destroying the global economy. Early detection and diagnosis are essential to prevent further transmission. The detection of COVID-19 computed tomography images is one of the important approaches to rapid diagnosis. Many different branches of deep learning methods have played an important role in this area, including transfer learning, contrastive learning, ensemble strategy, etc. However, these works require a large number of samples of expensive manual labels, so in order to save costs, scholars adopted semi-supervised learning that applies only a few labels to classify COVID-19 CT images. Nevertheless, the existing semi-supervised methods focus primarily on class imbalance and pseudo-label filtering rather than on pseudo-label generation. Accordingly, in this paper, we organized a semi-supervised classification framework based on data augmentation to classify the CT images of COVID-19. We revised the classic teacher-student framework and introduced the popular data augmentation method Mixup, which widened the distribution of high confidence to improve the accuracy of selected pseudo-labels and ultimately obtain a model with better performance. For the COVID-CT dataset, our method makes precision, F1 score, accuracy and specificity 21.04%, 12.95%, 17.13% and 38.29% higher than the average values for other methods, respectively. For the SARS-CoV-2 dataset, these increases were 8.40%, 7.59%, 9.35% and 12.80%, respectively. For the Harvard Dataverse dataset, growth was 17.64%, 18.89%, 19.81% and 20.20%, respectively. The codes are available at https://github.com/YutingBai99/COVID-19-SSL.
Mathematical biosciences and engineering : MBE
"2023-05-10T00:00:00"
[ "XiangtaoChen", "YutingBai", "PengWang", "JiaweiLuo" ]
10.3934/mbe.2023294
A deep learning-based application for COVID-19 diagnosis on CT: The Imaging COVID-19 AI initiative.
Recently, artificial intelligence (AI)-based applications for chest imaging have emerged as potential tools to assist clinicians in the diagnosis and management of patients with coronavirus disease 2019 (COVID-19). To develop a deep learning-based clinical decision support system for automatic diagnosis of COVID-19 on chest CT scans. Secondarily, to develop a complementary segmentation tool to assess the extent of lung involvement and measure disease severity. The Imaging COVID-19 AI initiative was formed to conduct a retrospective multicentre cohort study including 20 institutions from seven different European countries. Patients with suspected or known COVID-19 who underwent a chest CT were included. The dataset was split on the institution-level to allow external evaluation. Data annotation was performed by 34 radiologists/radiology residents and included quality control measures. A multi-class classification model was created using a custom 3D convolutional neural network. For the segmentation task, a UNET-like architecture with a backbone Residual Network (ResNet-34) was selected. A total of 2,802 CT scans were included (2,667 unique patients, mean [standard deviation] age = 64.6 [16.2] years, male/female ratio 1.3:1). The distribution of classes (COVID-19/Other type of pulmonary infection/No imaging signs of infection) was 1,490 (53.2%), 402 (14.3%), and 910 (32.5%), respectively. On the external test dataset, the diagnostic multiclassification model yielded high micro-average and macro-average AUC values (0.93 and 0.91, respectively). The model provided the likelihood of COVID-19 vs other cases with a sensitivity of 87% and a specificity of 94%. The segmentation performance was moderate with Dice similarity coefficient (DSC) of 0.59. An imaging analysis pipeline was developed that returned a quantitative report to the user. 
We developed a deep learning-based clinical decision support system that could become an efficient concurrent reading tool to assist clinicians, utilising a newly created European dataset including more than 2,800 CT scans.
PloS one
"2023-05-02T00:00:00"
[ "LaurensTopff", "JoséSánchez-García", "RafaelLópez-González", "Ana JiménezPastor", "Jacob JVisser", "MerelHuisman", "JulienGuiot", "Regina G HBeets-Tan", "AngelAlberich-Bayarri", "AlmudenaFuster-Matanzo", "Erik RRanschaert", "NoneNone" ]
10.1371/journal.pone.0285121 10.1186/s12941-021-00438-7 10.1038/s41576-021-00360-w 10.1016/j.talanta.2022.123409 10.1007/s15010-022-01819-6 10.1148/radiol.2020201365 10.1186/s13244-021-01096-1 10.1016/j.diii.2020.11.008 10.1148/radiol.2020200642 10.1148/radiol.2020200343 10.1259/bjr.20201039 10.1183/13993003.00398-2020 10.1183/13993003.00334-2020 10.1016/S1473-3099(20)30134-1 10.1016/S1473-3099(20)30086-4 10.1016/j.radi.2020.09.010 10.1016/j.ejrad.2020.108961 10.1016/j.ejrad.2019.108774 10.1148/radiol.2021203957 10.1016/j.ejmp.2021.06.001 10.1148/radiol.2020200370 10.3389/fmed.2021.704256 10.1148/radiol.2020201491 10.1016/j.cell.2020.04.045 10.1038/s41598-020-76282-0 10.1007/s00330-021-07715-1 10.1183/13993003.00775-2020 10.1007/s00330-020-07033-y 10.1148/ryct.2020200389 10.1007/s00330-020-07013-2 10.1148/ryct.2020200047 10.3389/fmed.2022.930055 10.1007/s11042-021-11153-y 10.1038/s41598-022-06854-9 10.1016/j.ejro.2020.100272
Contemporary Concise Review 2022: Interstitial lung disease.
Novel genetic associations for idiopathic pulmonary fibrosis (IPF) risk have been identified. Common genetic variants associated with IPF are also associated with chronic hypersensitivity pneumonitis. The characterization of underlying mechanisms, such as pathways involved in myofibroblast differentiation, may reveal targets for future treatments. Newly identified circulating biomarkers are associated with disease progression and mortality. Deep learning and machine learning may increase accuracy in the interpretation of CT scans. Novel treatments have shown benefit in phase 2 clinical trials. Hospitalization with COVID-19 is associated with residual lung abnormalities in a substantial number of patients. Inequalities exist in delivering and accessing interstitial lung disease specialist care.
Respirology (Carlton, Vic.)
"2023-05-01T00:00:00"
[ "David J FSmith", "R GisliJenkins" ]
10.1111/resp.14511
Progressive attention integration-based multi-scale efficient network for medical imaging analysis with application to COVID-19 diagnosis.
In this paper, a novel deep learning-based medical imaging analysis framework is developed, which aims to deal with the insufficient feature learning caused by the imperfect property of imaging data. Named as multi-scale efficient network (MEN), the proposed method integrates different attention mechanisms to realize sufficient extraction of both detailed features and semantic information in a progressive learning manner. In particular, a fused-attention block is designed to extract fine-grained details from the input, where the squeeze-excitation (SE) attention mechanism is applied to make the model focus on potential lesion areas. A multi-scale low information loss (MSLIL)-attention block is proposed to compensate for potential global information loss and enhance the semantic correlations among features, where the efficient channel attention (ECA) mechanism is adopted. The proposed MEN is comprehensively evaluated on two COVID-19 diagnostic tasks, and the results show that as compared with some other advanced deep learning models, the proposed method is competitive in accurate COVID-19 recognition, which yields the best accuracy of 98.68% and 98.85%, respectively, and exhibits satisfactory generalization ability as well.
Computers in biology and medicine
"2023-04-27T00:00:00"
[ "TingyiXie", "ZidongWang", "HanLi", "PeishuWu", "HuixiangHuang", "HongyiZhang", "Fuad EAlsaadi", "NianyinZeng" ]
10.1016/j.compbiomed.2023.106947 10.1080/00207721.2022.2083262
A novel CT image de-noising and fusion based deep learning network to screen for disease (COVID-19).
COVID-19, caused by SARS-CoV-2, has been declared a global pandemic by the WHO. It first appeared in China at the end of 2019 and quickly spread throughout the world. During the third wave, it became more critical. COVID-19 spread is extremely difficult to control, and a huge number of suspected cases must be screened for a cure as soon as possible. COVID-19 laboratory testing takes time and can result in significant false negatives. To combat COVID-19, reliable, accurate and fast methods are urgently needed. The commonly used Reverse Transcription Polymerase Chain Reaction has a low sensitivity of approximately 60% to 70%, and sometimes even produces negative results. Computed Tomography (CT) has been observed to be a subtle approach to detecting COVID-19, and it may be the best screening method. The scanned image's quality, which is impacted by motion-induced Poisson or Impulse noise, is vital. In order to improve the quality of the acquired image for post-segmentation, a novel Impulse and Poisson noise reduction method employing boundary division max/min intensities elimination along with an adaptive window size mechanism is proposed. In the second phase, a number of CNN techniques are explored for detecting COVID-19 from CT images and an Assessment Fusion based Model (AFM) is proposed to predict the result. The AFM combines the results from cutting-edge CNN architectures and generates a final prediction based on their choices. The empirical results demonstrate that our proposed method performs extensively and is extremely useful in actual diagnostic situations.
Scientific reports
"2023-04-24T00:00:00"
[ "Sajid UllahKhan", "ImdadUllah", "NajeebUllah", "SajidShah", "Mohammed ElAffendi", "BumshikLee" ]
10.1038/s41598-023-33614-0 10.1038/s41586-020-2008-3 10.1016/S0140-6736(20)30183-5 10.22207/JPAM.14.SPL1.40 10.1007/s12098-020-03263-6 10.1148/radiol.2020200490 10.1148/radiol.2020200527 10.1148/radiol.2020200343 10.1016/S1473-3099(20)30134-1 10.1016/j.media.2017.07.005 10.1109/ACCESS.2017.2788044 10.1146/annurev-bioeng-071516-044442 10.1038/s41591-018-0268-3 10.1016/j.compbiomed.2017.08.022 10.1109/TIP.2005.871129 10.1109/5.192071 10.1145/358198.358222 10.1109/31.83870 10.1109/83.902289 10.3390/app12147092 10.32604/cmc.2022.029134 10.1038/s41598-022-25539-x 10.1038/s41598-021-99015-3 10.3390/ijerph19042013
Quo vadis Radiomics? Bibliometric analysis of 10-year Radiomics journey.
Radiomics is the high-throughput extraction of mineable and, possibly, reproducible quantitative imaging features from medical imaging. The aim of this work is to perform an unbiased bibliometric analysis on Radiomics 10 years after the first work became available, to highlight its status, pitfalls, and growing interest. The Scopus database was used to investigate all the available English manuscripts about Radiomics. The R Bibliometrix package was used for data analysis: a cumulative analysis of document categories, authors' affiliations, country scientific collaborations, institution collaboration networks, keyword analysis, including a co-occurrence network, thematic map analysis, and a 2021 sub-analysis of trend topics was performed. A total of 5623 articles and 16,833 authors from 908 different sources have been identified. The first available document was published in March 2012, while the most recent included was released on the 31st of December 2021. China and USA were the most productive countries. Co-occurrence network analysis identified five word clusters based on the top 50 authors' keywords: Radiomics, computed tomography, radiogenomics, deep learning, tomography. Trend topics analysis for 2021 showed an increased interest in artificial intelligence (n = 286), nomogram (n = 166), hepatocellular carcinoma (n = 125), COVID-19 (n = 63), and X-ray computed (n = 60). Our work demonstrates the importance of bibliometrics in aggregating information that otherwise would not be available in a granular analysis, detecting unknown patterns in Radiomics publications, while highlighting potential developments to ensure knowledge dissemination in the field and its future real-life applications in the clinical practice. This work aims to shed light on the state of the art in radiomics, which offers numerous tangible and intangible benefits, and to encourage its integration in the contemporary clinical practice for more precise imaging analysis.
• ML-based bibliometric analysis is fundamental to detect unknown pattern of data in Radiomics publications. • A raising interest in the field, the most relevant collaborations, keywords co-occurrence network, and trending topics have been investigated. • Some pitfalls still exist, including the scarce standardization and the relative lack of homogeneity across studies.
European radiology
"2023-04-19T00:00:00"
[ "StefaniaVolpe", "FedericoMastroleo", "MarcoKrengli", "Barbara AlicjaJereczek-Fossa" ]
10.1007/s00330-023-09645-6 10.1016/j.ejca.2011.11.036 10.1038/s41598-021-01470-5 10.1007/s00330-021-08009-2 10.1038/s42003-021-02894-5 10.1080/0284186X.2021.1983207 10.3390/diagnostics12040794 10.3389/fonc.2018.00131 10.3389/fonc.2018.00294 10.1038/nrclinonc.2017.141 10.1016/j.jbusres.2021.04.070 10.1016/j.joi.2017.08.007 10.3390/su14063643 10.1038/s41598-020-69250-1 10.1148/radiol.2015151169 10.1038/ncomms5006 10.1148/radiol.2020191145 10.1002/asi.20317
Analysis of Covid-19 CT chest image classification using Dl4jMlp classifier and Multilayer Perceptron in WEKA Environment.
In recent years, various deep learning algorithms have exhibited remarkable performance in various data-rich applications, like health care, medical imaging, as well as in computer vision. Covid-19, which is a rapidly spreading virus, has affected people of all ages both socially and economically. Early detection of this virus is therefore important in order to prevent its further spread. The Covid-19 crisis has also galvanized researchers to adopt various machine learning as well as deep learning techniques in order to combat the pandemic. Lung images can be used in the diagnosis of Covid-19. In this paper, we have analysed the Covid-19 chest CT image classification efficiency using multilayer perceptron with different imaging filters, like edge histogram filter, colour histogram equalization filter, color-layout filter, and Gabor filter in the WEKA environment. The performance of CT image classification has also been compared comprehensively with the deep learning classifier Dl4jMlp. It was observed that the multilayer perceptron with edge histogram filter outperformed the other classifiers compared in this paper with 89.6% of correctly classified instances.
Current medical imaging
"2023-04-19T00:00:00"
[ "NoneSreejith S", "NoneJ Ajayan", "NoneN V Uma Reddy", "BabuDevasenapati S", "ShashankRebelli" ]
10.2174/1573405620666230417090246
CCS-GAN: COVID-19 CT Scan Generation and Classification with Very Few Positive Training Images.
We present a novel algorithm that is able to generate deep synthetic COVID-19 pneumonia CT scan slices using a very small sample of positive training images in tandem with a larger number of normal images. This generative algorithm produces images of sufficient accuracy to enable a DNN classifier to achieve high classification accuracy using as few as 10 positive training slices (from 10 positive cases), which to the best of our knowledge is one order of magnitude fewer than the next closest published work at the time of writing. Deep learning with extremely small positive training volumes is a very difficult problem and has been an important topic during the COVID-19 pandemic, because for quite some time it was difficult to obtain large volumes of COVID-19-positive images for training. Algorithms that can learn to screen for diseases using few examples are an important area of research. Furthermore, algorithms to produce deep synthetic images with smaller data volumes have the added benefit of reducing the barriers of data sharing between healthcare institutions. We present the cycle-consistent segmentation-generative adversarial network (CCS-GAN). CCS-GAN combines style transfer with pulmonary segmentation and relevant transfer learning from negative images in order to create a larger volume of synthetic positive images for the purposes of improving diagnostic classification performance. The performance of a VGG-19 classifier plus CCS-GAN was trained using a small sample of positive image slices ranging from at most 50 down to as few as 10 COVID-19-positive CT scan images. CCS-GAN achieves high accuracy with few positive images and thereby greatly reduces the barrier of acquiring large training volumes in order to train a diagnostic classifier for COVID-19.
Journal of digital imaging
"2023-04-18T00:00:00"
[ "SumeetMenon", "JayalakshmiMangalagiri", "JoshGalita", "MichaelMorris", "BabakSaboury", "YaacovYesha", "YelenaYesha", "PhuongNguyen", "AryyaGangopadhyay", "DavidChapman" ]
10.1007/s10278-023-00811-2 10.1007/s10489-020-01862-6 10.1109/CSCI51800.2020.00160 10.1016/j.eswa.2021.114848 10.1016/j.neucom.2018.09.013 10.1016/S0031-3203(02)00060-2
A lightweight CORONA-NET for COVID-19 detection in X-ray images.
Since December 2019, COVID-19 has posed the most serious threat to living beings. With the advancement of vaccination programs around the globe, the need to quickly diagnose COVID-19 with little logistics is of foremost importance. As a consequence, the fastest diagnostic option to stop COVID-19 from spreading, especially among senior patients, should be the development of an automated detection system. This study aims to provide a lightweight deep learning method that incorporates a convolutional neural network (CNN), discrete wavelet transform (DWT), and a long short-term memory (LSTM), called CORONA-NET, for diagnosing COVID-19 from chest X-ray images. In this system, deep feature extraction is performed by the CNN, the feature vector is reduced yet strengthened by the DWT, and the extracted feature is detected by the LSTM for prediction. The dataset included 3000 X-rays, 1000 of which were COVID-19 images obtained locally. Within minutes of the test, the proposed test platform's prototype can accurately detect COVID-19 patients. The proposed method achieves state-of-the-art performance in comparison with the existing deep learning methods. We hope that the suggested method will hasten clinical diagnosis and may be used for patients in remote areas where clinical labs are not easily accessible due to a lack of resources, location, or other factors.
Expert systems with applications
"2023-04-18T00:00:00"
[ "Muhammad UsmanHadi", "RizwanQureshi", "AyeshaAhmed", "NadeemIftikhar" ]
10.1016/j.eswa.2023.120023
A multicenter evaluation of a deep learning software (LungQuant) for lung parenchyma characterization in COVID-19 pneumonia.
The role of computed tomography (CT) in the diagnosis and characterization of coronavirus disease 2019 (COVID-19) pneumonia has been widely recognized. We evaluated the performance of a software for quantitative analysis of chest CT, the LungQuant system, by comparing its results with independent visual evaluations by a group of 14 clinical experts. The aim of this work is to evaluate the ability of the automated tool to extract quantitative information from lung CT, relevant for the design of a diagnosis support model. LungQuant segments both the lungs and lesions associated with COVID-19 pneumonia (ground-glass opacities and consolidations) and computes derived quantities corresponding to qualitative characteristics used to clinically assess COVID-19 lesions. The comparison was carried out on 120 publicly available CT scans of patients affected by COVID-19 pneumonia. Scans were scored for four qualitative metrics: percentage of lung involvement, type of lesion, and two disease distribution scores. We evaluated the agreement between the LungQuant output and the visual assessments through receiver operating characteristics area under the curve (AUC) analysis and by fitting a nonlinear regression model. Despite the rather large heterogeneity in the qualitative labels assigned by the clinical experts for each metric, we found good agreement on the metrics compared to the LungQuant output. The AUC values obtained for the four qualitative metrics were 0.98, 0.85, 0.90, and 0.81. Visual clinical evaluation could be complemented and supported by computer-aided quantification, whose values match the average evaluation of several independent clinical experts. We conducted a multicenter evaluation of the deep learning-based LungQuant automated software. We translated qualitative assessments into quantifiable metrics to characterize coronavirus disease 2019 (COVID-19) pneumonia lesions. 
Comparing the software output to the clinical evaluations, results were satisfactory despite heterogeneity of the clinical evaluations. An automatic quantification tool may contribute to improve the clinical workflow of COVID-19 pneumonia.
European radiology experimental
"2023-04-10T00:00:00"
[ "CamillaScapicchio", "AndreaChincarini", "ElenaBallante", "LucaBerta", "EleonoraBicci", "ChandraBortolotto", "FrancescaBrero", "Raffaella FiammaCabini", "GiuseppeCristofalo", "Salvatore ClaudioFanni", "Maria EvelinaFantacci", "SilviaFigini", "MassimoGalia", "PietroGemma", "EmanueleGrassedonio", "AlessandroLascialfari", "CristinaLenardi", "AliceLionetti", "FrancescaLizzi", "MaurizioMarrale", "MassimoMidiri", "CosimoNardi", "PiernicolaOliva", "NoemiPerillo", "IanPostuma", "LorenzoPreda", "VieriRastrelli", "FrancescoRizzetto", "NicolaSpina", "CinziaTalamonti", "AlbertoTorresin", "AngeloVanzulli", "FedericaVolpi", "EmanueleNeri", "AlessandraRetico" ]
10.1186/s41747-023-00334-z 10.1007/s00330-020-07347-x 10.21037/atm-20-3311 10.1093/rheumatology/keab615 10.1016/j.ejrad.2021.109650 10.1148/radiol.2020200527 10.1007/s11547-020-01237-4 10.1097/RLI.0000000000000689 10.1016/j.ejmp.2021.01.004 10.1007/s10278-019-00223-1 10.1016/j.patcog.2021.108071 10.1016/j.ejmp.2021.06.001 10.1016/j.ejro.2020.100272 10.1016/j.radonc.2020.09.045 10.1038/srep23376 10.1007/s10140-020-01867-1 10.21037/qims-22-175 10.1007/s11548-021-02501-2 10.1148/ryai.2020200029 10.1148/radiol.2462070712 10.7150/ijms.50568 10.1016/j.jcm.2016.02.012 10.1016/j.nicl.2019.101846 10.1007/s11547-020-01291-y 10.1016/j.jinf.2020.02.017 10.1016/j.acra.2020.03.003 10.1016/j.ejrad.2020.109209 10.1120/jacmp.v16i4.5001 10.1016/j.chest.2021.06.063
Interpretable CNN-Multilevel Attention Transformer for Rapid Recognition of Pneumonia from Chest X-Ray Images.
Chest imaging plays an essential role in diagnosing and predicting patients with COVID-19 with evidence of worsening respiratory status. Many deep learning-based approaches for pneumonia recognition have been developed to enable computer-aided diagnosis. However, the long training and inference time makes them inflexible, and the lack of interpretability reduces their credibility in clinical medical practice. This paper aims to develop a pneumonia recognition framework with interpretability, which can understand the complex relationship between lung features and related diseases in chest X-ray (CXR) images to provide high-speed analytics support for medical practice. To reduce the computational complexity to accelerate the recognition process, a novel multi-level self-attention mechanism within Transformer has been proposed to accelerate convergence and emphasize the task-related feature regions. Moreover, a practical CXR image data augmentation has been adopted to address the scarcity of medical image data problems to boost the model's performance. The effectiveness of the proposed method has been demonstrated on the classic COVID-19 recognition task using the widespread pneumonia CXR image dataset. In addition, abundant ablation experiments validate the effectiveness and necessity of all of the components of the proposed method.
IEEE journal of biomedical and health informatics
"2023-04-08T00:00:00"
[ "ShengchaoChen", "SufenRen", "GuanjunWang", "MengxingHuang", "ChenyangXue" ]
10.1109/JBHI.2023.3247949
Redefining Lobe-Wise Ground-Glass Opacity in COVID-19 Through Deep Learning and its Correlation With Biochemical Parameters.
During the COVID-19 pandemic, qRT-PCR, CT scans and biochemical parameters were studied to understand the patients' physiological changes and disease progression. There is a lack of clear understanding of the correlation of lung inflammation with the available biochemical parameters. Among the 1136 patients studied, C-reactive protein (CRP) is the most critical parameter for classifying symptomatic and asymptomatic groups. Elevated CRP is corroborated with increased D-dimer, Gamma-glutamyl-transferase (GGT), and urea levels in COVID-19 patients. To overcome the limitations of the manual chest CT scoring system, we segmented the lungs and detected ground-glass opacity (GGO) in specific lobes from 2D CT images by a 2D U-Net-based deep learning (DL) approach. Our method shows accuracy comparable to the manual method (∼80%), which is subject to the radiologist's experience. We determined a positive correlation of GGO in the right upper-middle (0.34) and lower (0.26) lobes with D-dimer. However, a modest correlation was observed with CRP, ferritin and other studied parameters. The final Dice Coefficient (or the F1 score) and Intersection-Over-Union for testing accuracy are 95.44% and 91.95%, respectively. This study can help reduce the burden and manual bias besides increasing the accuracy of GGO scoring. Further study on geographically diverse large populations may help to understand the association of the biochemical parameters and pattern of GGO in lung lobes with different SARS-CoV-2 Variants of Concern's disease pathogenesis in these populations.
IEEE journal of biomedical and health informatics
"2023-04-07T00:00:00"
[ "BudhadevBaral", "KartikMuduli", "ShwetaJakhmola", "OmkarIndari", "JatinJangir", "Ashraf HaroonRashid", "SuchitaJain", "Amrut KumarMohapatra", "ShubhransuPatro", "PreetinandaParida", "NamrataMisra", "Ambika PrasadMohanty", "Bikash RSahu", "Ajay KumarJain", "SelvakumarElangovan", "Hamendra SinghParmar", "MTanveer", "Nirmal KumarMohakud", "Hem ChandraJha" ]
10.1109/JBHI.2023.3263431
Smart Artificial Intelligence techniques using embedded band for diagnosis and combating COVID-19.
Recently, the COVID-19 virus spread rapidly, creating a major impact on human health worldwide. The disease, caused by the SARS-CoV-2 virus, was first identified in China in December 2019 and declared a worldwide pandemic by the World Health Organization on 11 March 2020. The core aim of this research is to detect the spread of the COVID-19 virus and address lung infection in patients quickly. Artificial Intelligence (AI) is a potentially powerful tool in the battle against the coronavirus epidemic. Recently, AI-based computational techniques built on deep learning methods, using Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs), have been applied to classify lung images and identify affected regions. These two algorithms are used to diagnose COVID-19 infections rapidly. The AI applications against COVID-19 are Medical Imaging for Diagnosis, Lung delineation, Lesion measurement, Non-Invasive Measurements for Disease Tracking, Patient Outcome Prediction, Molecular Scale: from Proteins to Drug Development and Societal Scale: Epidemiology and Infodemiology.
Microprocessors and microsystems
"2023-04-06T00:00:00"
[ "MAshwin", "Abdulrahman SaadAlqahtani", "AzathMubarakali" ]
10.1016/j.micpro.2023.104819 10.1109/rbme.2020.2987975 10.1186/s40537-020-00392-9
Federated Active Learning for Multicenter Collaborative Disease Diagnosis.
Current computer-aided diagnosis systems with deep learning methods play an important role in the field of medical imaging. The collaborative diagnosis of diseases by multiple medical institutions has become a popular trend. However, large-scale annotations put heavy burdens on medical experts. Furthermore, the centralized learning system has defects in privacy protection and model generalization. To meet these challenges, we propose two federated active learning methods for multicenter collaborative diagnosis of diseases: the Labeling Efficient Federated Active Learning (LEFAL) and the Training Efficient Federated Active Learning (TEFAL). The proposed LEFAL applies a task-agnostic hybrid sampling strategy considering data uncertainty and diversity simultaneously to improve data efficiency. The proposed TEFAL evaluates the client informativeness with a discriminator to improve client efficiency. On the Hyper-Kvasir dataset for gastrointestinal disease diagnosis, with only 65% of the data labeled, LEFAL achieves 95% of the segmentation performance obtained with fully labeled data. Moreover, on the CC-CCII dataset for COVID-19 diagnosis, with only 50 iterations, the accuracy and F1-score of TEFAL are 0.90 and 0.95, respectively, on the classification task. Extensive experimental results demonstrate that the proposed federated active learning methods outperform state-of-the-art methods on segmentation and classification tasks for multicenter collaborative disease diagnosis.
IEEE transactions on medical imaging
"2023-04-05T00:00:00"
[ "XingWu", "JiePei", "ChengChen", "YiminZhu", "JianjiaWang", "QuanQian", "JianZhang", "QunSun", "YikeGuo" ]
10.1109/TMI.2022.3227563
Benchmark methodological approach for the application of artificial intelligence to lung ultrasound data from COVID-19 patients: From frame to prognostic-level.
Automated ultrasound imaging assessment of the effect of CoronaVirus disease 2019 (COVID-19) on lungs has been investigated in various studies using artificial intelligence-based (AI) methods. However, an extensive analysis of state-of-the-art Convolutional Neural Network-based (CNN) models for frame-level scoring, a comparative analysis of aggregation techniques for video-level scoring, and a thorough evaluation of the capability of these methodologies to provide a clinically valuable prognostic-level score are still missing from the literature. In addition, the impact on the analysis of the posterior probability assigned by the network to the predicted frames, as well as the impact of temporal downsampling of LUS data, are topics not yet extensively investigated. This paper takes on these challenges by providing a benchmark analysis of methods from frame to prognostic level. For frame-level scoring, state-of-the-art deep learning models are evaluated, with additional analysis of the best-performing model in transfer-learning settings. A novel cross-correlation based aggregation technique is proposed for video- and exam-level scoring. Results showed that ResNet-18, when trained from scratch, outperformed the existing methods with an F1-Score of 0.659. The proposed aggregation method resulted in 59.51%, 63.29%, and 84.90% agreement with clinicians at the video, exam, and prognostic levels, respectively, thus demonstrating improved performance over the state of the art. It was also found that filtering frames based on the posterior probability has a higher impact on the LUS analysis than temporal downsampling. All of these analyses were conducted over the largest standardized and clinically validated LUS dataset from COVID-19 patients.
Ultrasonics
"2023-04-05T00:00:00"
[ "UmairKhan", "SajjadAfrakhteh", "FedericoMento", "NoreenFatima", "LauraDe Rosa", "Leonardo LucioCustode", "ZihadulAzam", "ElenaTorri", "GinoSoldati", "FrancescoTursi", "Veronica NarvenaMacioce", "AndreaSmargiassi", "RiccardoInchingolo", "TizianoPerrone", "GiovanniIacca", "LibertarioDemi" ]
10.1016/j.ultras.2023.106994 10.1038/d41586-022-00858-1
PCovNet+: A CNN-VAE anomaly detection framework with LSTM embeddings for smartwatch-based COVID-19 detection.
The world is slowly recovering from the Coronavirus disease 2019 (COVID-19) pandemic; however, humanity has experienced one of its most significant trials in modern times only to learn about its lack of preparedness in the face of a highly contagious pathogen. To better prepare the world for any new mutation of the same pathogen or the newer ones, technological development in the healthcare system is a must. Hence, in this work, PCovNet+, a deep learning framework, was proposed for smartwatches and fitness trackers to monitor the user's Resting Heart Rate (RHR) for the infection-induced anomaly. A convolutional neural network (CNN)-based variational autoencoder (VAE) architecture was used as the primary model along with a long short-term memory (LSTM) network to create latent space embeddings for the VAE. Moreover, the framework employed pre-training using normal data from healthy subjects to circumvent the data shortage problem in the personalized models. This framework was validated on a dataset of 68 COVID-19-infected subjects, resulting in anomalous RHR detection with precision, recall, F-beta, and F-1 score of 0.993, 0.534, 0.9849, and 0.6932, respectively, which is a significant improvement compared to the literature. Furthermore, the PCovNet+ framework successfully detected COVID-19 infection for 74% of the subjects (47% presymptomatic and 27% post-symptomatic detection). The results prove the usability of such a system as a secondary diagnostic tool enabling continuous health monitoring and contact tracing.
Engineering applications of artificial intelligence
"2023-04-04T00:00:00"
[ "Farhan FuadAbir", "Muhammad E HChowdhury", "Malisha IslamTapotee", "AdamMushtak", "AmithKhandakar", "SakibMahmud", "Md AnwarulHasan" ]
10.1016/j.engappai.2023.106130 10.48550/arXiv.1603.04467 10.1016/j.compbiomed.2022.105682 10.1038/s41591-021-01593-2 10.1016/j.compbiomed.2022.106070 10.1109/MPRV.2020.3021321 10.3390/jcm9103372 10.3390/biology9080182 10.1016/S2666-5247(20)30172-5 10.3390/s21175787 10.1038/s41598-022-11329-y 10.2217/pme-2018-0044 10.1056/NEJMe2009758 10.1371/journal.pone.0240123 10.1038/s41586-020-2649-2 10.48550/arXiv.1312.6114 10.3390/jimaging4020036 10.1056/NEJMoa2001316 10.1109/ICASSP40776.2020.9053558 10.48550/arXiv.2205.13607 10.1016/S2589-7500(22)00019-X 10.1117/1.JBO.25.10.102703 10.3389/fdgth.2020.00008 10.5281/zenodo.3509134
A COVID-19 medical image classification algorithm based on Transformer.
Coronavirus disease 2019 (COVID-19) is a new acute respiratory disease that has spread rapidly throughout the world. This paper proposes a novel deep learning network based on ResNet-50 merged with Transformer, named RMT-Net. On the backbone of ResNet-50, it uses Transformer to capture long-distance feature information, and adopts convolutional neural networks and depth-wise convolution to obtain local features, reducing the computational cost and accelerating the detection process. The RMT-Net includes four stage blocks to realize feature extraction with different receptive fields. In the first three stages, the global self-attention method is adopted to capture the important feature information and construct the relationship between tokens. In the fourth stage, residual blocks are used to extract detailed features. Finally, a global average pooling layer and a fully connected layer perform classification tasks. Training, verification and testing are carried out on self-built datasets. The RMT-Net model is compared with ResNet-50, VGGNet-16, i-CapsNet and MGMADS-3. The experimental results show that the RMT-Net model achieves a test accuracy of 97.65% on the X-ray image dataset and 99.12% on the CT image dataset, both higher than those of the other four models. The size of the RMT-Net model is only 38.5 M, and the detection speed for X-ray and CT images is 5.46 ms and 4.12 ms per image, respectively. It is proved that the model can detect and classify COVID-19 with higher accuracy and efficiency.
Scientific reports
"2023-04-04T00:00:00"
[ "KeyingRen", "GengHong", "XiaoyanChen", "ZichenWang" ]
10.1038/s41598-023-32462-2 10.1016/j.compbiomed.2021.105134 10.1016/j.chaos.2020.110495 10.1148/radiol.2020200343 10.1148/radiol.2020200463 10.1016/j.compbiomed.2021.105123 10.1038/s41598-021-97428-8 10.1109/TCBB.2021.3065361 10.1016/j.patcog.2020.107747 10.1016/j.bspc.2021.103371 10.1016/j.irbm.2020.05.003 10.3390/jpm12020310 10.3390/jcm11113013 10.1007/s13042-022-01676-7 10.1016/j.cmpb.2022.107141 10.1109/TMI.2020.2995965 10.1007/s40846-020-00529-4 10.1007/s12559-020-09775-9 10.1007/s10489-020-01829-7 10.1016/j.asoc.2022.108780 10.1007/s13246-020-00865-4 10.1016/j.compbiomed.2020.103792 10.1016/j.compbiomed.2022.105244 10.1016/j.compbiomed.2021.104399 10.1016/j.cmpb.2020.105581 10.1016/j.bspc.2021.102588 10.3390/tomography8020071 10.1007/s10096-020-03901-z 10.3389/frai.2021.598932 10.1148/radiol.2020200905 10.1016/j.compbiomed.2020.104037
Multi-head deep learning framework for pulmonary disease detection and severity scoring with modified progressive learning.
Chest X-rays (CXR) are the most commonly used imaging methodology in radiology to diagnose pulmonary diseases, with close to 2 billion CXRs taken every year. The recent upsurge of COVID-19 and its variants, accompanied by pneumonia and tuberculosis, can be fatal in some cases, and lives could be saved through early detection and appropriate intervention in advanced cases. Thus, CXRs can be used for automated severity grading of pulmonary diseases that can aid radiologists in making better and more informed diagnoses. In this article, we propose a single framework for disease classification and severity scoring produced by segmenting the lungs into six regions. We present a modified progressive learning technique in which the amount of augmentation at each step is capped. Our base network in the framework is first trained using modified progressive learning and can then be tweaked for new data sets. Furthermore, the segmentation task makes use of an attention map generated within and by the network itself. This attention mechanism allows us to achieve segmentation results that are on par with networks having an order of magnitude or more parameters. We also propose severity score grading for four thoracic diseases that can provide a single-digit score corresponding to the spread of opacity in different lung segments with the help of radiologists. The proposed framework is evaluated using the BRAX data set for segmentation and classification into six classes, with severity grading for a subset of the classes. On the BRAX validation data set, we achieve F1 scores of 0.924 and 0.939 without and with fine-tuning, respectively. A mean matching score of 80.8% is obtained for severity score grading, while an average area under the receiver operating characteristic curve of 0.88 is achieved for classification.
Biomedical signal processing and control
"2023-03-30T00:00:00"
[ "Asad MansoorKhan", "Muhammad UsmanAkram", "SajidNazir", "TaimurHassan", "Sajid GulKhawaja", "TatheerFatima" ]
10.1016/j.bspc.2023.104855
Computer-Aided Diagnosis of COVID-19 from Chest X-ray Images Using Hybrid-Features and Random Forest Classifier.
In recent years, a lot of attention has been paid to using radiology imaging to automatically detect COVID-19. (1) Background: There are now a number of computer-aided diagnostic schemes that help radiologists and doctors perform diagnostic COVID-19 tests quickly, accurately, and consistently. (2) Methods: Using chest X-ray images, this study proposed a cutting-edge scheme for the automatic recognition of COVID-19 and pneumonia. First, a pre-processing method based on a Gaussian filter and logarithmic operator is applied to input chest X-ray (CXR) images to improve the poor-quality images by enhancing the contrast, reducing the noise, and smoothing the image. Second, robust features are extracted from each enhanced chest X-ray image using a Convolutional Neural Network (CNN) transformer and an optimal collection of grey-level co-occurrence matrices (GLCM) that contain features such as contrast, correlation, entropy, and energy. Finally, based on the extracted features, a random forest machine learning classifier is used to classify images into three classes: COVID-19, pneumonia, or normal. The predicted output from the model is combined with Gradient-weighted Class Activation Mapping (Grad-CAM) visualisation for diagnosis. (3) Results: Our work is evaluated using public datasets with three different train-test splits (70-30%, 80-20%, and 90-10%) and achieved an average accuracy, F1 score, recall, and precision of 97%, 96%, 96%, and 96%, respectively. A comparative study shows that our proposed method outperforms existing and similar work. The proposed approach can be utilised to screen COVID-19-infected patients effectively. (4) Conclusions: A comparative study with the existing methods is also performed. For performance evaluation, metrics such as accuracy, sensitivity, and F1-measure are calculated. The performance of the proposed method is better than that of the existing methodologies, and it can thus be used for the effective diagnosis of the disease.
Healthcare (Basel, Switzerland)
"2023-03-30T00:00:00"
[ "KashifShaheed", "PiotrSzczuko", "QaisarAbbas", "AyyazHussain", "MubarakAlbathan" ]
10.3390/healthcare11060837 10.1002/jmv.25678 10.1016/S0140-6736(21)02046-8 10.1016/S0140-6736(21)02249-2 10.1148/radiol.2020200230 10.1109/TMI.2020.3040950 10.1109/TMI.2020.2993291 10.1152/physiolgenomics.00029.2020 10.3390/ijerph19042013 10.1148/radiol.2020200527 10.1016/j.jcv.2020.104384 10.1016/S1473-3099(20)30086-4 10.1038/s41598-020-76550-z 10.1016/j.asoc.2022.109319 10.1016/j.compbiomed.2022.105233 10.1016/j.compbiomed.2020.103792 10.1109/TMI.2020.2996645 10.1007/s10489-020-01826-w 10.1007/s10044-021-00984-y 10.1016/j.compbiomed.2020.103795 10.3390/ijerph20032035 10.1101/2020.03.30.20047787 10.1016/j.compbiomed.2021.104781 10.1016/j.ipm.2022.103025 10.1016/j.neucom.2022.01.055 10.1016/j.radi.2022.03.011 10.1007/s00521-020-05017-z 10.3390/diagnostics12123109 10.1109/ICCV48922.2021.00009 10.1007/978-3-030-62008-0_35 10.1016/j.ijid.2020.06.058 10.1371/journal.pone.0096385 10.7717/peerj.5518 10.1007/978-3-540-74825-0_11 10.1136/bmjopen-2018-025925 10.1016/j.chaos.2020.109944 10.1177/2472630320962002 10.1007/s11042-020-09431-2 10.1109/ACCESS.2021.3058854 10.3991/ijoe.v18i07.30807
Comparative Analysis of Clinical and CT Findings in Patients with SARS-CoV-2 Original Strain, Delta and Omicron Variants.
To compare the clinical characteristics and chest CT findings of patients infected with Omicron and Delta variants and the original strain of COVID-19. A total of 503 patients infected with the original strain (245 cases), Delta variant (90 cases), and Omicron variant (168 cases) were retrospectively analyzed. The differences in clinical severity and chest CT findings were analyzed. We also compared the infection severity of patients with different vaccination statuses and quantified pneumonia by a deep-learning approach. The rate of severe disease decreased significantly from the original strain to the Delta variant and Omicron variant (27% vs. 10% vs. 4.8%). Compared with the original strain and Delta variant, the Omicron variant had less clinical severity and less lung injury on CT scans.
Biomedicines
"2023-03-30T00:00:00"
[ "XiaoyuHan", "JingzeChen", "LuChen", "XiJia", "YanqingFan", "YutingZheng", "OsamahAlwalid", "JieLiu", "YuminLi", "NaLi", "JinGu", "JiangtaoWang", "HeshuiShi" ]
10.3390/biomedicines11030901 10.1016/S0140-6736(20)30183-5 10.1016/S0140-6736(20)30185-9 10.1016/S0140-6736(21)00370-6 10.1038/s41579-021-00573-0 10.1016/S0140-6736(20)30211-7 10.1016/S1473-3099(20)30086-4 10.1016/S0140-6736(20)30566-3 10.1148/radiol.220533 10.1148/radiol.229022 10.1038/s41467-020-18685-1 10.1109/TMI.2020.2992546 10.1148/radiol.2020200823 10.15585/mmwr.mm7101a4 10.1038/s41467-022-28089-y 10.1080/22221751.2021.2022440 10.1148/ryct.2020200152 10.1148/radiol.2462070712 10.1148/radiol.2363040958 10.1016/S2589-7500(20)30199-0 10.1148/ryct.2020200075 10.1016/S0140-6736(22)00017-4 10.1001/jama.2022.2274 10.1016/S0140-6736(22)00462-7 10.1016/S1473-3099(21)00475-8 10.1503/cmaj.211248 10.1016/j.cell.2020.04.004 10.1038/s41598-020-74497-9 10.1001/jama.2021.24315 10.1136/bmj.n3144 10.2214/AJR.21.26560 10.1016/j.jacr.2020.06.006 10.1016/S0140-6736(21)02249-2 10.1038/s41422-022-00674-2 10.3390/jpm12060955
Design and Analysis of a Deep Learning Ensemble Framework Model for the Detection of COVID-19 and Pneumonia Using Large-Scale CT Scan and X-ray Image Datasets.
Recently, various methods have been developed to identify COVID-19 cases, such as PCR testing and non-contact procedures such as chest X-rays and computed tomography (CT) scans. Deep learning (DL) and artificial intelligence (AI) are critical tools for early and accurate detection of COVID-19. This research explores the different DL techniques for identifying COVID-19 and pneumonia on medical CT and radiography images using ResNet152, VGG16, ResNet50, and DenseNet121. The ResNet framework uses CT scan images with high accuracy and precision. This research automates the selection of the optimum model architecture and training parameters. Transfer learning approaches are also employed to solve content gaps and shorten training duration. An upgraded VGG16 deep transfer learning architecture is applied to perform multi-class classification for X-ray imaging tasks. The enhanced VGG16 has been proven to recognize three types of radiographic images (normal, COVID-19, and pneumonia) with 99% accuracy. The validity and performance metrics of the proposed model were validated using publicly available X-ray and CT scan data sets. The suggested model outperforms competing approaches in diagnosing COVID-19 and pneumonia. The primary outcomes of this research result in an average F-score (95%, 97%). This research is more efficient than existing methodologies for coronavirus detection. The created model is appropriate for recognition and classification pre-training. The suggested model outperforms traditional strategies for multi-class categorization of various illnesses.
Bioengineering (Basel, Switzerland)
"2023-03-30T00:00:00"
[ "XingsiXue", "SeelammalChinnaperumal", "Ghaida MuttasharAbdulsahib", "Rajasekhar ReddyManyam", "RajaMarappan", "Sekar KidambiRaju", "Osamah IbrahimKhalaf" ]
10.3390/bioengineering10030363 10.1007/s42979-021-00823-1 10.1016/j.compbiomed.2022.105344 10.1038/s42003-020-01535-7 10.1007/s11517-020-02299-2 10.1007/s10044-021-00970-4 10.31224/osf.io/wx89s 10.1038/s41598-021-99015-3 10.1007/s40846-021-00653-9 10.3390/cmim.2021.10008 10.1007/s12530-021-09385-2 10.1109/JIOT.2021.3056185 10.1007/s10489-020-01829-7 10.1016/j.imu.2020.100412 10.1007/s10489-020-01867-1 10.1155/2021/5513679 10.1109/ACCESS.2020.3016780 10.3390/s20236985 10.1007/s10489-021-02393-4 10.48084/etasr.4613 10.11591/ijece.v11i1.pp844-850 10.1007/s10522-021-09946-7 10.21817/indjcse/2021/v12i1/211201064 10.3390/s21175813 10.3390/diagnostics11020340.2021 10.1166/jctn.2020.9439 10.14704/WEB/V19I1/WEB19071 10.1016/j.chaos.2020.110120 10.1007/s10489-020-02055-x 10.1016/j.irbm.2020.05.003 10.1007/s40747-021-00509-4 10.1109/CIMSim.2013.17 10.1109/ICCIC.2013.6724190 10.1109/ICICES.2016.7518914 10.3390/telecom4010008c 10.1007/s13369-017-2686-9 10.3390/math8030303 10.3390/math8071106 10.3390/math9020197 10.3390/bioengineering10020138 10.1155/2022/9227343 10.32604/iasc.2022.025609 10.1155/2021/5574376 10.4018/IJRQEH.289176 10.1007/s13369-021-06323-x 10.1109/ICICES.2016.7518911 10.1109/ICCIC.2018.8782425 10.1007/s41870-023-01165-2
Perceptive SARS-CoV-2 End-To-End Ultrasound Video Classification through X3D and Key-Frames Selection.
The SARS-CoV-2 pandemic challenged health systems worldwide, thus advocating for practical, quick and highly trustworthy diagnostic instruments to help medical personnel. It features a long incubation period and a high contagion rate, causing bilateral multi-focal interstitial pneumonia, generally growing into acute respiratory distress syndrome (ARDS), causing hundreds of thousands of casualties worldwide. Guidelines for first-line diagnosis of pneumonia suggest Chest X-rays (CXR) for patients exhibiting symptoms. Potential alternatives include Computed Tomography (CT) scans and Lung UltraSound (LUS). Deep learning (DL) has been helpful in diagnosis using CT scans, LUS, and CXR, whereby the former commonly yields more precise results. CXR and CT scans present several drawbacks, including high costs. Radiation-free LUS imaging requires high expertise, and physicians thus underutilise it. LUS demonstrated a strong correlation with CT scans and reliability in pneumonia detection, even in the early stages. Here, we present an LUS video-classification approach based on contemporary DL strategies, in close collaboration with the Emergency Department (ED) of Fondazione IRCCS Policlinico San Matteo in Pavia. This research addressed the detection of SARS-CoV-2 patterns, ranked according to three severity scales, using a trustworthy dataset comprising ultrasounds from linear and convex probes in 5400 clips from 450 hospitalised subjects. The main contributions of this study are related to the adoption of a standardised severity ranking scale to evaluate pneumonia. This evaluation relies on video summarisation through key-frame selection algorithms. We then designed and developed a video-classification architecture which emerged as the most promising, whereas the literature primarily concentrates on frame-pattern recognition. By using advanced techniques such as transfer learning and data augmentation, we were able to achieve an F1-Score of over 89% across all classes.
Bioengineering (Basel, Switzerland)
"2023-03-30T00:00:00"
[ "MarcoGazzoni", "MarcoLa Salvia", "EmanueleTorti", "GianmarcoSecco", "StefanoPerlini", "FrancescoLeporati" ]
10.3390/bioengineering10030282 10.1056/NEJMoa2001316 10.1186/s13000-020-01017-8 10.1016/S1473-3099(20)30086-4 10.1002/jum.15285 10.1002/jmv.25727 10.1164/ajrccm.163.7.at1010 10.4103/IJMR.IJMR_3669_20 10.1016/j.compbiomed.2021.104742 10.1186/1465-9921-15-50 10.1007/s11739-015-1297-2 10.1016/j.ajem.2013.10.003 10.3389/FRAI.2022.912022/BIBTEX 10.1016/j.imu.2021.100687 10.3390/s21165486 10.3390/jpm12101707 10.1055/s-0042-120260 10.4081/ecj.2020.9017 10.1007/978-3-319-15371-1_8 10.1136/bmjopen-2020-045120 10.1109/TMI.2020.2994459 10.1109/ACCESS.2021.3058537 10.1016/j.compbiomed.2021.104375 10.1186/s40537-019-0197-0 10.1016/J.PROCS.2015.10.021 10.3390/e18030073 10.1016/S1007-0214(05)70050-X 10.1109/ICEIC49074.2020.9051332 10.1007/978-3-030-87199-4_16/COVER 10.1109/THMS.2022.3144000 10.1109/ICCV48922.2021.00675 10.1016/j.cviu.2022.103484 10.1109/CVPR42600.2020.00028 10.1007/978-0-387-84858-7 10.1109/TUFFC.2020.3002249 10.1109/ACCESS.2020.3016780
GW-CNNDC: Gradient-weighted CNN model for diagnosing COVID-19 using radiography X-ray images.
COVID-19 is a dangerous disease that can cause death if it is not identified in the early stages. The virus was first identified in Wuhan, China. It spreads very fast compared with other viruses. Many tests exist for detecting this virus, and side effects may arise while testing for this disease. Coronavirus test kits are now scarce; COVID-19 testing units are limited and cannot be produced quickly enough, causing alarm. Thus, other diagnostic measures are needed. There are three distinct types of COVID-19 testing systems: RT-PCR, CT, and CXR. RT-PCR, the most time-consuming technique, has certain limitations, and CT scans result in exposure to radiation which may cause further disease. To overcome these limitations, the CXR technique emits comparatively less radiation, and the patient need not be close to the medical staff. COVID-19 detection from CXR images has been tested using a diversity of pre-trained deep-learning algorithms, with the best methods being fine-tuned to maximize detection accuracy. In this work, the model called GW-CNNDC is presented. The lung radiography images are segmented using the enhanced CNN model, deployed with the ResNet-50 architecture and an image size of 255*255 pixels. Afterward, the gradient-weighted model is applied, which highlights the specific regions affected by COVID-19. This framework can perform binary classification with accuracy, precision, recall, F1-score, and loss value, and the model works efficiently on large datasets in a short amount of time.
Measurement. Sensors
"2023-03-28T00:00:00"
[ "PamulaUdayaraju", "T VenkataNarayana", "Sri HarshaVemparala", "ChopparapuSrinivasarao", "BhV S R KRaju" ]
10.1016/j.measen.2023.100735 10.1007/s12098-020 10.1109/TII.2021.3057683 10.1109/TLA.2021.9451239 10.1109/JAS.2020.1003393 10.1109/TMI.2020.2994459
Lightweight deep CNN-based models for early detection of COVID-19 patients from chest X-ray images.
Hundreds of millions of people worldwide have recently been infected by the novel Coronavirus disease (COVID-19), causing significant damage to the health, economy, and welfare of the world's population. Moreover, the unprecedented number of patients with COVID-19 has placed a massive burden on healthcare centers, making timely and rapid diagnosis challenging. A crucial step in minimizing the impact of such problems is to automatically detect infected patients and place them under special care as quickly as possible. Deep learning algorithms, such as Convolutional Neural Networks (CNN), can be used to meet this need. Despite the desired results, most of the existing deep learning-based models were built on millions of parameters (weights), which are not applicable to devices with limited resources. Inspired by this fact, in this research, we developed two new lightweight CNN-based diagnostic models for the automatic and early detection of COVID-19 subjects from chest X-ray images. The first model was built for binary classification (COVID-19 and Normal), whereas the second one was built for multiclass classification (COVID-19, viral pneumonia, or normal). The proposed models were tested on a relatively large dataset of chest X-ray images, and the results showed that the accuracy rates of the 2- and 3-class-based classification models are 98.55% and 96.83%, respectively. The results also revealed that our models achieved competitive performance compared with the existing heavyweight models while significantly reducing cost and memory requirements for computing resources. These findings indicate that our models are helpful to clinicians in making insightful diagnoses of COVID-19 and are potentially easily deployable on devices with limited computational power and resources.
Expert systems with applications
"2023-03-28T00:00:00"
[ "Haval IHussein", "Abdulhakeem OMohammed", "Masoud MHassan", "Ramadhan JMstafa" ]
10.1016/j.eswa.2023.119900 10.1109/ISIEA49364.2020.9188133 10.1016/j.compbiomed.2022.105350 10.3390/computation9010003 10.3390/biology10111174 10.1109/ACCESS.2021.3054484 10.1016/j.compbiomed.2021.104672 10.3390/s21020455 10.1007/s40846-020-00529-4 10.1007/s13246-020-00865-4 10.1016/j.bspc.2021.103182 10.1016/j.compbiomed.2021.104454 10.1148/radiol.2020200230 10.1007/s13748-019-00203-0 10.1016/j.bbe.2022.07.009 10.1016/j.bbe.2022.11.003 10.1016/j.compbiomed.2021.104920 10.1007/s12648-022-02425-w 10.2139/ssrn.3833706 10.1007/978-981-19-4453-6_7 10.1007/s11042-022-12156-z 10.1016/j.imu.2022.100945 10.1016/j.ijmedinf.2020.104284 10.1016/j.bea.2022.100041 10.1016/j.chaos.2020.110495 10.1016/j.neucom.2022.01.055 10.1016/j.eswa.2022.116942 10.1016/j.compbiomed.2022.106331 10.1109/ACCAI53970.2022.9752511 10.1016/j.cmpb.2020.105581 10.1016/j.eswa.2021.115695 10.1109/TNNLS.2021.3084827 10.1007/s42600-021-00151-6 10.1016/j.media.2020.101794 10.1007/s10044-021-00984-y 10.1016/j.bspc.2020.102365 10.3390/diagnostics13010131 10.1016/j.bspc.2022.103977 10.1109/RTEICT52294.2021.9573980 10.1016/j.health.2022.100096 10.1038/s41598-020-76550-z 10.1001/jama.2020.3786 10.1007/s13244-018-0639-9 10.1007/s10489-020-01867-1
MTMC-AUR2CNet: Multi-textural multi-class attention recurrent residual convolutional neural network for COVID-19 classification using chest X-ray images.
Coronavirus disease (COVID-19) had infected over 603 million people in confirmed cases as of September 2022, and its rapid spread has raised concerns worldwide. More than 6.4 million fatalities among confirmed patients have been reported. According to reports, the COVID-19 virus causes lung damage and rapidly mutates before the patient receives any diagnosis-specific medicine. The daily increase in COVID-19 cases and the limited number of diagnostic tool kits encourage the use of deep learning (DL) models to assist health care practitioners using chest X-ray (CXR) images. The CXR is a low-radiation radiography tool available in hospitals to diagnose COVID-19 and combat this spread. We propose a Multi-Textural Multi-Class (MTMC) UNet-based Recurrent Residual Convolutional Neural Network (MTMC-UR2CNet) and MTMC-UR2CNet with attention mechanism (MTMC-AUR2CNet) for multi-class lung lobe segmentation of CXR images. The lung lobe segmentation outputs of MTMC-UR2CNet and MTMC-AUR2CNet are mapped individually with their input CXRs to generate the region of interest (ROI). The multi-textural features are extracted from the ROI of each proposed MTMC network. The extracted multi-textural features from the ROI are fused and used to train the Whale Optimization Algorithm (WOA)-based DeepCNN classifier to classify the CXR images into normal (healthy), COVID-19, viral pneumonia, and lung opacity. The experimental results show that the MTMC-AUR2CNet has superior performance in multi-class lung lobe segmentation of CXR images with an accuracy of 99.47%, followed by MTMC-UR2CNet with an accuracy of 98.39%. Also, MTMC-AUR2CNet improves the multi-textural multi-class classification accuracy of the WOA-based DeepCNN classifier to 97.60% compared to MTMC-UR2CNet.
Biomedical signal processing and control
"2023-03-28T00:00:00"
[ "AnandbabuGopatoti", "PVijayalakshmi" ]
10.1016/j.bspc.2023.104857 10.1109/TMI.2018.2806086 10.1016/j.bspc.2022.103860
PDAtt-Unet: Pyramid Dual-Decoder Attention Unet for Covid-19 infection segmentation from CT-scans.
Since the emergence of the Covid-19 pandemic in late 2019, medical imaging has been widely used to analyze this disease. Indeed, CT-scans of the lungs can help diagnose, detect, and quantify Covid-19 infection. In this paper, we address the segmentation of Covid-19 infection from CT-scans. To improve the performance of the Att-Unet architecture and maximize the use of the Attention Gate, we propose the PAtt-Unet and DAtt-Unet architectures. PAtt-Unet aims to exploit the input pyramids to preserve the spatial awareness in all of the encoder layers. On the other hand, DAtt-Unet is designed to guide the segmentation of Covid-19 infection inside the lung lobes. We also propose to combine these two architectures into a single one, which we refer to as PDAtt-Unet. To overcome the blurry segmentation of Covid-19 infection boundary pixels, we propose a hybrid loss function. The proposed architectures were tested on four datasets with two evaluation scenarios (intra and cross datasets). Experimental results showed that both PAtt-Unet and DAtt-Unet improve the performance of Att-Unet in segmenting Covid-19 infections. Moreover, the combination architecture PDAtt-Unet led to further improvement. To compare with other methods, three baseline segmentation architectures (Unet, Unet++, and Att-Unet) and three state-of-the-art architectures (InfNet, SCOATNet, and nCoVSegNet) were tested. The comparison showed the superiority of the proposed PDAtt-Unet trained with the proposed hybrid loss (PDEAtt-Unet) over all other methods. Moreover, PDEAtt-Unet is able to overcome various challenges in segmenting Covid-19 infections in four datasets and two evaluation scenarios.
Medical image analysis
"2023-03-27T00:00:00"
[ "FaresBougourzi", "CosimoDistante", "FadiDornaika", "AbdelmalikTaleb-Ahmed" ]
10.1016/j.media.2023.102797 10.1016/j.knosys.2020.106647 10.3390/s21175878 10.3390/jimaging7090189 10.1016/j.eswa.2020.113459 10.1007/s42979-021-00874-4 10.1109/TMI.2020.2996645 10.1186/s12967-021-02992-2 10.1186/s40779-020-0233-6 10.7326/M20-1495 10.3390/diagnostics11020158 10.1016/j.media.2021.102205 10.1002/mp.14676 10.1101/2020.05.20.20100362 10.1016/j.patcog.2021.108168 10.1148/radiol.2020200370 10.1148/ryai.2019180031 10.21037/qims-20-564 10.3390/s21051742 10.1109/TNNLS.2021.3126305 10.1109/TMI.2020.3000314 10.1109/TIP.2021.3058783 10.1016/j.media.2021.101992 10.1016/j.cell.2020.04.045 10.1016/j.compbiomed.2021.104526
Deep-Learning-Based Whole-Lung and Lung-Lesion Quantification Despite Inconsistent Ground Truth: Application to Computerized Tomography in SARS-CoV-2 Nonhuman Primate Models.
Animal modeling of infectious diseases such as coronavirus disease 2019 (COVID-19) is important for exploration of natural history, understanding of pathogenesis, and evaluation of countermeasures. Preclinical studies enable rigorous control of experimental conditions as well as pre-exposure baseline and longitudinal measurements, including medical imaging, that are often unavailable in the clinical research setting. Computerized tomography (CT) imaging provides important diagnostic, prognostic, and disease characterization to clinicians and clinical researchers. In that context, automated deep-learning systems for the analysis of CT imaging have been broadly proposed, but their practical utility has been limited. Manual outlining of the ground truth (i.e., lung-lesions) requires accurate distinctions between abnormal and normal tissues that often have vague boundaries and is subject to reader heterogeneity in interpretation. Indeed, this subjectivity is demonstrated as wide inconsistency in manual outlines among experts and from the same expert. The application of deep-learning data-science tools has been less well-evaluated in the preclinical setting, including in nonhuman primate (NHP) models of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection/COVID-19, in which the translation of human-derived deep-learning tools is challenging. The automated segmentation of the whole lung and lung lesions provides a potentially standardized and automated method to detect and quantify disease. We used deep-learning-based quantification of the whole lung and lung lesions on CT scans of NHPs exposed to SARS-CoV-2. We proposed a novel multi-model ensemble technique to address the inconsistency in the ground truths for deep-learning-based automated segmentation of the whole lung and lung lesions. 
Multiple models were obtained by training the convolutional neural network (CNN) on different subsets of the training data instead of having a single model using the entire training dataset. Moreover, we employed a feature pyramid network (FPN), a CNN that provides predictions at different resolution levels, enabling the network to predict objects with wide size variations. We achieved average Dice coefficients of 99.4% and 60.2% for whole-lung and lung-lesion segmentation, respectively. The proposed multi-model FPN outperformed the well-accepted methods U-Net (50.5%), V-Net (54.5%), and Inception (53.4%) for the challenging lesion-segmentation task. We show the application of segmentation outputs for longitudinal quantification of lung disease in SARS-CoV-2-exposed and mock-exposed NHPs. Deep-learning methods should be optimally characterized for and targeted specifically to preclinical research needs in terms of impact, automation, and dynamic quantification independently from purely clinical applications.
Academic radiology
"2023-03-26T00:00:00"
[ "Syed M SReza", "Winston TChu", "FatemehHomayounieh", "MaximBlain", "Fatemeh DFirouzabadi", "Pouria YAnari", "Ji HyunLee", "GabriellaWorwa", "Courtney LFinch", "Jens HKuhn", "AshkanMalayeri", "IanCrozier", "Bradford JWood", "Irwin MFeuerstein", "JeffreySolomon" ]
10.1016/j.acra.2023.02.027 10.1038/s41586-020-2787-6 10.1038/s41572-020-0147-3 10.3390/pathogens9030197 10.1002/path.4444 10.48550/arXiv.2004.1285220 10.1371/journal.pone.0084599 10.1097/HP.0000000000001280 10.1117/12.2514000 10.1117/12.2607154 10.1007/s00134-020-06033-2 10.48550/arXiv.2003.04655 10.1109/CBMS.2014.59 10.1007/978-3-030-59861-7_58
MESTrans: Multi-scale embedding spatial transformer for medical image segmentation.
Transformers profiting from global information modeling derived from the self-attention mechanism have recently achieved remarkable performance in computer vision. In this study, a novel transformer-based medical image segmentation network called the multi-scale embedding spatial transformer (MESTrans) was proposed for medical image segmentation. First, a dataset called COVID-DS36 was created from 4369 computed tomography (CT) images of 36 patients from a partner hospital, of which 18 had COVID-19 and 18 did not. Subsequently, a novel medical image segmentation network was proposed, which introduced a self-attention mechanism to improve the inherent limitation of convolutional neural networks (CNNs) and was capable of adaptively extracting discriminative information in both global and local content. Specifically, based on U-Net, a multi-scale embedding block (MEB) and multi-layer spatial attention transformer (SATrans) structure were designed, which can dynamically adjust the receptive field in accordance with the input content. The spatial relationship between multi-level and multi-scale image patches was modeled, and the global context information was captured effectively. To make the network concentrate on the salient feature region, a feature fusion module (FFM) was established, which performed global learning and soft selection between shallow and deep features, adaptively combining the encoder and decoder features. Four datasets comprising CT images, magnetic resonance (MR) images, and H&E-stained slide images were used to assess the performance of the proposed network. Experiments were performed using four different types of medical image datasets. For the COVID-DS36 dataset, our method achieved a Dice similarity coefficient (DSC) of 81.23%. For the GlaS dataset, 89.95% DSC and 82.39% intersection over union (IoU) were obtained. On the Synapse dataset, the average DSC was 77.48% and the average Hausdorff distance (HD) was 31.69 mm. 
For the I2CVB dataset, 92.3% DSC and 85.8% IoU were obtained. The experimental results demonstrate that the proposed model has an excellent generalization ability and outperforms other state-of-the-art methods. It is expected to be a potent tool to assist clinicians in auxiliary diagnosis and to promote the development of medical intelligence technology.
Computer methods and programs in biomedicine
"2023-03-26T00:00:00"
[ "YatongLiu", "YuZhu", "YingXin", "YananZhang", "DaweiYang", "TaoXu" ]
10.1016/j.cmpb.2023.107493
COVID-19 diagnosis: A comprehensive review of pre-trained deep learning models based on feature extraction algorithm.
Due to the augmented rise of COVID-19, clinical specialists are looking for fast, faultless diagnosis strategies to restrict Covid spread while attempting to lessen the computational complexity. In this way, swift diagnosis techniques for COVID-19 with high precision can offer valuable aid to clinical specialists. In practice, the RT-PCR test is an expensive and tedious COVID diagnosis technique. Medical imaging makes it feasible to diagnose COVID-19 via chest X-ray radiography, circumventing the shortcomings of RT-PCR. Through a variety of Deep Transfer-learning models, this research investigates the potential of Artificial Intelligence-based early diagnosis of COVID-19 via X-ray chest radiographs. With 10,192 normal and 3616 Covid X-ray chest radiographs, the deep transfer-learning models are optimized to improve diagnostic accuracy. The X-ray chest radiographs undergo a data augmentation phase before developing a modified dataset to train the Deep Transfer-learning models. The Deep Transfer-learning architectures are trained using the extracted features from the Feature Extraction stage. During training, the classification of X-ray chest radiographs based on feature extraction algorithm values is converted into a feature label set containing the classified image data, with a feature string value representing the number of edges detected after edge detection. The feature label set is further tested with the SVM, KNN, NN, Naive Bayes and Logistic Regression classifiers to audit the quality metrics of the proposed model. The quality metrics include accuracy, precision, F1 score, recall and AUC. The Inception-V3 dominates the six Deep Transfer-learning models, according to the assessment results, with a training accuracy of 84.79% and a loss function of 2.4%. The performance of Cubic SVM was superior to that of the other SVM classifiers, with an AUC score of 0.99, precision of 0.983, recall of 0.8977, accuracy of 95.8%, and F1 score of 0.9384.
Cosine KNN fared better than the other KNN classifiers with an AUC score of 0.95, precision of 0.974, recall of 0.777, accuracy of 90.8%, and F1 score of 0.864. Wide NN fared better than the other NN classifiers with an AUC score of 0.98, precision of 0.975, recall of 0.907, accuracy of 95.5%, and F1 score of 0.939. According to the findings, SVM classifiers topped other classifiers in terms of performance indicators like accuracy, precision, recall, F1-score, and AUC. The SVM classifiers reported better mean optimal scores compared to other classifiers. The performance assessment metrics uncover that the proposed methodology can aid in preliminary COVID diagnosis.
Results in engineering
"2023-03-23T00:00:00"
[ "Rahul GowthamPoola", "LahariPl", "Siva SankarY" ]
10.1016/j.rineng.2023.101020 10.1016/j.catena.2019.104426 10.1021/acs.molpharmaceut.7b00578 10.1109/TMI.2020.3040950 10.1016/j.irbm.2020.05.003 10.1007/s13089-009-0003-x 10.1007/s10096-020-03901-z 10.1007/s10489-020-01829-7 10.1109/ACCESS.2020.3010287 10.1080/07391102.2020.1767212
COVID-19 and pneumonia diagnosis from chest X-ray images using convolutional neural networks.
X-ray is a useful imaging modality widely utilized for diagnosing the COVID-19 virus, which has infected a high number of people all around the world. The manual examination of these X-ray images may cause problems, especially when there is a lack of medical staff. Usage of deep learning models is known to be helpful for automated diagnosis of COVID-19 from X-ray images. However, the widely used convolutional neural network architectures typically have many layers, causing them to be computationally expensive. To address these problems, this study aims to design a lightweight differential diagnosis model based on convolutional neural networks. The proposed model is designed to classify X-ray images belonging to one of four classes: healthy, COVID-19, viral pneumonia, and bacterial pneumonia. To evaluate the model performance, accuracy, precision, recall, and F1-score were calculated. The performance of the proposed model was compared with those obtained by applying transfer learning to the widely used convolutional neural network models. The results showed that the proposed model, with a low number of computational layers, outperforms the pre-trained benchmark models, achieving an accuracy value of 89.89%, while the best pre-trained model (EfficientNet-B2) achieved an accuracy of 85.7%. In conclusion, the proposed lightweight model achieved the best overall result in classifying lung diseases, allowing it to be used on devices with limited computational power. On the other hand, all the models showed poor precision on the viral pneumonia class and confusion in distinguishing it from the bacterial pneumonia class, causing a decrease in overall accuracy.
Network modeling and analysis in health informatics and bioinformatics
"2023-03-21T00:00:00"
[ "MuhabHariri", "ErcanAvşar" ]
10.1007/s13721-023-00413-6 10.31803//tg-20210422205610 10.1016/j.bspc.2020.102257 10.1109/ACCESS.2020.3010287 10.1007/s12553-022-00700-8 10.1016/j.rinp.2021.105045 10.1109/ACCESS.2021.3078241 10.1016/j.bspc.2022.103677 10.1007/s11042-022-12484-0 10.3390/app12136364 10.1109/JAS.2020.1003393 10.1016/j.compbiomed.2022.106083 10.1007/s11548-020-02305-w 10.1007/s11263-015-0816-y 10.31803/tg-20190712095507 10.1016/j.asoc.2021.107522 10.3390/ijerph182111086 10.1109/ACCESS.2019.2892795 10.1016/j.micinf.2020.05.016
Computed tomography-based COVID-19 triage through a deep neural network using mask-weighted global average pooling.
There is an urgent need to find an effective and accurate method for triaging coronavirus disease 2019 (COVID-19) patients from millions or billions of people. Therefore, this study aimed to develop a novel deep-learning approach for COVID-19 triage based on chest computed tomography (CT) images, including normal, pneumonia, and COVID-19 cases. A total of 2,809 chest CT scans (1,105 COVID-19, 854 normal, and 850 non-COVID-19 pneumonia cases) were acquired for this study and classified into the training set (n = 2,329) and test set (n = 480). A U-net-based convolutional neural network was used for lung segmentation, and a mask-weighted global average pooling (GAP) method was proposed for the deep neural network to improve the performance of classification between COVID-19 and normal or common pneumonia cases. The results for lung segmentation reached a Dice value of 96.5% on 30 independent CT scans. The mask-weighted GAP method achieved COVID-19 triage with a sensitivity of 96.5% and specificity of 87.8% using the testing dataset. The mask-weighted GAP method demonstrated 0.9% and 2% improvements in sensitivity and specificity, respectively, compared with the normal GAP. In addition, fusion images between the CT images and the highlighted area from the deep learning model using the Grad-CAM method, indicating the lesion region detected using the deep learning method, were drawn and could also be confirmed by radiologists. This study proposed a mask-weighted GAP-based deep learning method and obtained promising results for COVID-19 triage based on chest CT images. Furthermore, it can be considered a convenient tool to assist doctors in diagnosing COVID-19.
Frontiers in cellular and infection microbiology
"2023-03-21T00:00:00"
[ "Hong-TaoZhang", "Ze-YuSun", "JuanZhou", "ShenGao", "Jing-HuiDong", "YuanLiu", "XuBai", "Jin-LinMa", "MingLi", "GuangLi", "Jian-MingCai", "Fu-GengSheng" ]
10.3389/fcimb.2023.1116285 10.1148/radiol.2020200642 10.1118/1.3528204 10.1016/j.jacr.2020.03.006 10.1016/j.compbiomed.2022.106439 10.1109/ICIP42928.2021.9506119 10.1016/j.displa.2022.102150 10.1148/radiol.2021211583 10.1056/NEJMoa2002032 10.1038/s41467-020-17971-2 10.1016/j.imu.2020.100412 10.1109/EMBC.2018.8512356 10.1007/s11606-020-05762-w 10.1615/CritRevBiomedEng.2022042286 10.3390/s19092167 10.1007/s00521-022-08127-y 10.1177/00368504221135457 10.1007/s00330-020-07042-x 10.1167/tvst.12.1.22 10.1007/s11042-022-13843-7 10.1007/s00521-022-08099-z 10.1016/j.cmpb.2022.107321 10.1007/s00330-020-07022-1 10.1007/s11356-022-24853-1 10.1016/j.media.2022.102605 10.1016/j.compbiomed.2021.105127 10.1016/j.engappai.2023.105820 10.1109/EMBC.2019.8856972 10.1002/mp.16217 10.1007/s00330-020-06801-0 10.1088/1741-2552/acb089
Deep SVDD and Transfer Learning for COVID-19 Diagnosis Using CT Images.
The novel coronavirus disease (COVID-19), which appeared in Wuhan, China, is spreading rapidly worldwide. Health systems in many countries have collapsed as a result of this pandemic, and hundreds of thousands of people have died due to acute respiratory distress syndrome caused by this virus. As a result, diagnosing COVID-19 in the early stages of infection is critical in the fight against the disease because it saves the patient's life and prevents the disease from spreading. In this study, we proposed a novel approach based on transfer learning and deep support vector data description (DSVDD) to distinguish among COVID-19, non-COVID-19 pneumonia, and intact CT images. Our approach consists of three models, each of which can classify one specific category as normal and the other as anomalous. To our knowledge, this is the first study to use the one-class DSVDD and transfer learning to diagnose lung disease. For the proposed approach, we used two scenarios: one with pretrained VGG16 and one with ResNet50. The proposed models were trained using data gathered with the assistance of an expert radiologist from three internet-accessible sources in end-to-end fusion using three split data ratios. Based on training with 70%, 50%, and 30% of the data, the proposed VGG16 models achieved (0.8281, 0.9170, and 0.9294) for the F1 score, while the proposed ResNet50 models achieved (0.9109, 0.9188, and 0.9333).
Computational intelligence and neuroscience
"2023-03-18T00:00:00"
[ "Akram AAlhadad", "Reham RMostafa", "Hazem MEl-Bakry" ]
10.1155/2023/6070970 10.1056/nejmoa2001017 10.1016/s0140-6736(20)30183-5 10.32604/cmc.2022.024193 10.1016/j.ijid.2020.01.009 10.1155/2022/5681574 10.1155/2021/2158184 10.1016/j.compbiomed.2022.106156 10.1016/j.ejrad.2020.108961 10.1016/j.bspc.2021.102588 10.1016/j.media.2017.07.005 10.3390/electronics9091439 10.1016/j.cmpb.2020.105581 10.1016/j.compbiomed.2022.105233 10.1146/annurev-bioeng-071516-044442 10.1016/j.ejrad.2020.109041 10.1016/j.compbiomed.2020.103795 10.1016/j.compbiomed.2020.104037 10.1109/tip.2021.3058783 10.1371/journal.pone.0257119 10.1109/tmi.2020.2994908 10.1109/access.2020.3005510 10.1007/s00330-021-07715-1 10.1016/j.asoc.2020.106885 10.1007/s13246-020-00865-4 10.1109/tmi.2020.2996256 10.1038/s41598-021-03287-8 10.1007/s10916-022-01868-2 10.3390/s21020455
CCTCOVID: COVID-19 detection from chest X-ray images using Compact Convolutional Transformers.
COVID-19 is a novel virus that attacks the upper respiratory tract and the lungs. Its person-to-person transmissibility is considerably rapid and this has caused serious problems in approximately every facet of individuals' lives. While some infected individuals may remain completely asymptomatic, others have been frequently witnessed to have mild to severe symptoms. In addition to this, thousands of death cases around the globe indicated that detecting COVID-19 is an urgent demand in the communities. Practically, this is prominently done with the help of screening medical images such as Computed Tomography (CT) and X-ray images. However, the cumbersome clinical procedures and a large number of daily cases have imposed great challenges on medical practitioners. Deep Learning-based approaches have demonstrated a profound potential in a wide range of medical tasks. As a result, we introduce a transformer-based method for automatically detecting COVID-19 from X-ray images using Compact Convolutional Transformers (CCT). Our extensive experiments prove the efficacy of the proposed method with an accuracy of 99.22% which outperforms the previous works.
Frontiers in public health
"2023-03-17T00:00:00"
[ "AbdolrezaMarefat", "MahdiehMarefat", "JavadHassannataj Joloudari", "Mohammad AliNematollahi", "RezaLashgari" ]
10.3389/fpubh.2023.1025746 10.1007/s00405-020-06319-7 10.1007/s00415-020-10067-3 10.3390/v12040372 10.1016/j.eswa.2020.114054 10.1016/j.mlwa.2021.100134 10.1145/3065386 10.1007/s13244-018-0639-9 10.1016/j.neucom.2016.12.038 10.3390/s20020342 10.1186/s12880-022-00793-7 10.1016/j.asoc.2018.05.018 10.18653/v1/2020.emnlp-demos.6 10.18653/v1/D18-1045 10.1002/widm.1412 10.1007/s11432-018-9941-6 10.1145/3505244 10.1016/j.media.2017.07.005 10.1038/s41598-020-76550-z 10.1016/j.asoc.2020.106691 10.1007/s11517-020-02299-2 10.1016/j.imu.2020.100412 10.1007/s10044-021-00984-y 10.1007/s10489-020-01904-z 10.3390/jpm12020310 10.1109/JTEHM.2021.3134096 10.1109/ACCESS.2021.3058854 10.1007/s00264-020-04609-7 10.1016/j.compbiomed.2020.103792 10.1016/j.chaos.2020.109944 10.1007/s10489-020-01826-w 10.1016/j.bspc.2021.102622 10.1038/s41598-021-93543-8 10.1016/j.patcog.2021.108255 10.1016/j.compbiomed.2022.106483 10.5555/3295222.3295349 10.3389/fcvm.2021.760178 10.1109/TMI.2020.2995965 10.1148/radiol.2020201473 10.1016/j.chaos.2020.110120 10.1016/j.ipm.2022.103025 10.1016/j.bspc.2022.103848 10.36548/jismac.2021.2.006 10.1016/j.radi.2022.03.011 10.1016/j.bbe.2020.08.008 10.32604/cmc.2021.012955 10.1016/j.compbiomed.2020.103795 10.3390/diagnostics11101887 10.1007/s11042-022-12156-z 10.1002/cpe.6747
IRCM-Caps: An X-ray image detection method for COVID-19.
COVID-19 is ravaging the world, but traditional reverse transcription-polymerase chain reaction (RT-PCR) tests are time-consuming, have a high false-negative rate, and depend on medical equipment that may be lacking. Therefore, lung imaging screening methods have been proposed to diagnose COVID-19 owing to their fast test speed. Currently, the commonly used convolutional neural network (CNN) model requires a large amount of data, and the accuracy of the basic capsule network for multi-class classification is limited. For this reason, this paper proposes a novel model based on CNN and CapsNet. The proposed model integrates CNN and CapsNet, and an attention mechanism module and a multi-branch lightweight module are applied to enhance performance. The contrast-limited adaptive histogram equalization (CLAHE) algorithm is used to preprocess the images to enhance image contrast. The preprocessed images are input into the network for training, with ReLU as the activation function, and the parameters are adjusted to achieve the optimum. The test dataset includes 1200 X-ray images (400 COVID-19, 400 viral pneumonia, and 400 normal), and we replace the CNN with VGG16, InceptionV3, Xception, Inception-ResNet-v2, ResNet50, DenseNet121, and MobileNetV2 and integrate each with CapsNet. Compared with CapsNet, this network improves accuracy, area under the curve (AUC), precision, recall, and F1 score by 6.96%, 7.83%, 9.37%, 10.47%, and 10.38%, respectively. In the binary classification experiment, compared with CapsNet, the accuracy, AUC, precision, recall, and F1 score were increased by 5.33%, 5.34%, 2.88%, 8.00%, and 5.56%, respectively. The proposed model embeds the advantages of the traditional convolutional neural network and the capsule network and has a good classification effect on a small COVID-19 X-ray image dataset.
The clinical respiratory journal
"2023-03-17T00:00:00"
[ "ShuoQiu", "JinlinMa", "ZipingMa" ]
10.1111/crj.13599 10.1016/j.eswa.2019.01.048 10.1016/j.imu.2020.100360 10.1016/j.patrec.2020.09.010 10.1016/j.chaos.2021.110713 10.1016/j.chaos.2020.110122 10.1002/ima.22566 10.1016/j.compbiomed.2021.104399 10.1016/j.knosys.2020.105542 10.5815/ijigsp.2020.02.04 10.3389/frai.2021.598932 10.1016/j.measurement.2021.110289 10.1101/2020.03.12.20027185 10.1007/s10044-021-00984-y 10.1038/s41598-020-76550-z 10.1183/13993003.00775-2020 10.1016/j.bspc.2021.103272 10.1016/j.bspc.2022.104268 10.1016/j.bbe.2022.11.003 10.1016/j.eswa.2022.118576 10.1016/j.compeleceng.2022.108479
Implementation of deep learning artificial intelligence in vision-threatening disease screenings for an underserved community during COVID-19.
Age-related macular degeneration, diabetic retinopathy, and glaucoma are vision-threatening diseases that are leading causes of vision loss. Many studies have validated deep learning artificial intelligence for image-based diagnosis of vision-threatening diseases. Our study prospectively investigated deep learning artificial intelligence applications in student-run non-mydriatic screenings for an underserved, primarily Hispanic community during COVID-19. Five supervised student-run community screenings were held in West New York, New Jersey. Participants underwent non-mydriatic 45-degree retinal imaging by medical students. Images were uploaded to a cloud-based deep learning artificial intelligence for vision-threatening disease referral. An on-site tele-ophthalmology grader and remote clinical ophthalmologist graded images, with adjudication by a senior ophthalmologist to establish the gold standard diagnosis, which was used to assess the performance of deep learning artificial intelligence. A total of 385 eyes from 195 screening participants were included (mean age 52.43 ± 14.5 years, 40.0% female). A total of 48 participants were referred for at least one vision-threatening disease. Deep learning artificial intelligence marked 150/385 (38.9%) eyes as ungradable, compared to 10/385 (2.6%) ungradable as per the human gold standard. Deep learning artificial intelligence can increase the efficiency and accessibility of vision-threatening disease screenings, particularly in underserved communities. Deep learning artificial intelligence should be adaptable to different environments. Consideration should be given to how deep learning artificial intelligence can best be utilized in a real-world application, whether in computer-aided or autonomous diagnosis.
Journal of telemedicine and telecare
"2023-03-14T00:00:00"
[ "ArethaZhu", "PriyaTailor", "RashikaVerma", "IsisZhang", "BrianSchott", "CatherineYe", "BernardSzirth", "MiriamHabiel", "Albert SKhouri" ]
10.1177/1357633X231158832
COVID-Net USPro: An Explainable Few-Shot Deep Prototypical Network for COVID-19 Screening Using Point-of-Care Ultrasound.
As the Coronavirus Disease 2019 (COVID-19) continues to impact many aspects of life and the global healthcare systems, the adoption of rapid and effective screening methods to prevent the further spread of the virus and lessen the burden on healthcare providers is a necessity. As a cheap and widely accessible medical image modality, point-of-care ultrasound (POCUS) imaging allows radiologists to identify symptoms and assess severity through visual inspection of the chest ultrasound images. Combined with the recent advancements in computer science, applications of deep learning techniques in medical image analysis have shown promising results, demonstrating that artificial intelligence-based solutions can accelerate the diagnosis of COVID-19 and lower the burden on healthcare professionals. However, the lack of large, well annotated datasets poses a challenge in developing effective deep neural networks, especially in the case of rare diseases and new pandemics. To address this issue, we present COVID-Net USPro, an explainable few-shot deep prototypical network that is designed to detect COVID-19 cases from very few ultrasound images. Through intensive quantitative and qualitative assessments, the network not only demonstrates high performance in identifying COVID-19 positive cases, using an explainability component, but it is also shown that the network makes decisions based on the actual representative patterns of the disease. Specifically, COVID-Net USPro achieves 99.55% overall accuracy, 99.93% recall, and 99.83% precision for COVID-19-positive cases when trained with only five shots. In addition to the quantitative performance assessment, our contributing clinician with extensive experience in POCUS interpretation verified the analytic pipeline and results, ensuring that the network's decisions are based on clinically relevant image patterns integral to COVID-19 diagnosis. 
We believe that network explainability and clinical validation are integral components for the successful adoption of deep learning in the medical field. As part of the COVID-Net initiative, and to promote reproducibility and foster further innovation, the network is open-sourced and available to the public.
Sensors (Basel, Switzerland)
"2023-03-12T00:00:00"
[ "JessySong", "AshkanEbadi", "AdrianFlorea", "PengchengXi", "StéphaneTremblay", "AlexanderWong" ]
10.3390/s23052621 10.31083/j.fbl2707198 10.1002/14651858.CD013705.pub2 10.1038/s41598-021-99015-3 10.1038/s41598-020-76550-z 10.3389/fmed.2021.729287 10.3389/fmed.2020.608525 10.18653/v1/D19-1045 10.1109/ICCV.2017.74 10.3389/fmed.2021.821120 10.1016/j.compbiomed.2020.103792 10.1016/j.patrec.2020.09.010 10.1016/j.bspc.2021.102920 10.1371/journal.pone.0255886 10.1016/j.patcog.2020.107700 10.48550/ARXIV.2109.03793 10.1007/s13534-017-0021-8 10.1186/s12911-020-01332-6 10.1109/TPAMI.2019.2918284 10.1378/chest.09-0001
Deep Learning Algorithms with LIME and Similarity Distance Analysis on COVID-19 Chest X-ray Dataset.
In the last few years, many types of research have been conducted on the most harmful pandemic, COVID-19. Machine learning approaches have been applied to investigate chest X-rays of COVID-19 patients in many respects. This study focuses on the deep learning algorithm from the standpoint of feature space and similarity analysis. Firstly, we utilized Local Interpretable Model-agnostic Explanations (LIME) to justify the necessity of the region of interest (ROI) process and further prepared ROI via U-Net segmentation that masked out non-lung areas of images to prevent the classifier from being distracted by irrelevant features. The experimental results were promising, with detection performance reaching an overall accuracy of 95.5%, a sensitivity of 98.4%, a precision of 94.7%, and an F1 score of 96.5% on the COVID-19 category. Secondly, we applied similarity analysis to identify outliers and further provided an objective confidence reference specific to the similarity distance to centers or boundaries of clusters while inferring. Finally, the experimental results suggested putting more effort into enhancing the low-accuracy subspace locally, which is identified by the similarity distance to the centers. The experimental results were promising, and based on those perspectives, our approach could be more flexible to deploy dedicated classifiers specific to different subspaces instead of one rigid end-to-end black box model for all feature space.
International journal of environmental research and public health
"2023-03-12T00:00:00"
[ "Kuan-YungChen", "Hsi-ChiehLee", "Tsung-ChiehLin", "Chih-YingLee", "Zih-PingHo" ]
10.3390/ijerph20054330 10.1021/acsnano.0c02624 10.1148/radiol.2020200432 10.1148/rg.2017160130 10.1016/j.media.2017.07.005 10.1109/TMI.2016.2535302 10.3390/app10020559 10.1109/ACCESS.2020.3010287 10.1038/s41598-020-76550-z 10.1109/TII.2021.3057683 10.1016/j.engappai.2022.105151 10.1016/S0893-6080(05)80023-1
On the Implementation of a Post-Pandemic Deep Learning Algorithm Based on a Hybrid CT-Scan/X-ray Images Classification Applied to Pneumonia Categories.
The identification and characterization of lung diseases has been one of the most interesting research topics in recent years, as these diseases require accurate and rapid diagnosis. Although lung imaging techniques have many advantages for disease diagnosis, the interpretation of medical lung images has always been a major problem for physicians and radiologists due to diagnostic errors. This has encouraged the use of modern artificial intelligence techniques such as deep learning. In this paper, a deep learning architecture based on EfficientNetB7, known as the most advanced architecture among convolutional networks, has been constructed for classification of medical X-ray and CT images of lungs into three classes, namely: common pneumonia, coronavirus pneumonia and normal cases. In terms of accuracy, the proposed model is compared with recent pneumonia detection techniques. The results provided robust and consistent features to this system for pneumonia detection, with predictive accuracy according to the three classes mentioned above for both imaging modalities: radiography at 99.81% and CT at 99.88%. This work implements an accurate computer-aided system for the analysis of radiographic and CT medical images. The results of the classification are promising and will certainly improve the diagnosis and decision making of lung diseases that keep appearing over time.
Healthcare (Basel, Switzerland)
"2023-03-12T00:00:00"
[ "AbdelghaniMoussaid", "NabilaZrira", "IbtissamBenmiloud", "ZinebFarahat", "YoussefKarmoun", "YasmineBenzidia", "SoumayaMouline", "BahiaEl Abdi", "Jamal EddineBourkadi", "NabilNgote" ]
10.3390/healthcare11050662 10.1128/CMR.00028-20 10.1016/j.ajem.2022.03.036 10.1016/j.rmr.2021.11.004 10.1016/j.gie.2020.06.040 10.1371/journal.pone.0072457 10.1016/j.ophtha.2017.02.008 10.1371/journal.pone.0174944 10.1016/j.bbe.2021.06.011 10.1038/s41598-020-74539-2 10.1016/j.mehy.2020.109761 10.1109/TMI.2020.3040950 10.1371/journal.pone.0262052 10.1016/j.chemolab.2021.104256 10.1007/s42979-021-00695-5 10.1016/j.bspc.2021.103441 10.1007/s11548-021-02317-0 10.1088/1361-6560/abe838 10.3389/frai.2021.694875 10.1016/j.compbiomed.2021.104835 10.17632/fvk7h5dg2p.3
A hybrid deep learning approach for COVID-19 detection based on genomic image processing techniques.
The coronavirus disease 2019 (COVID-19) pandemic has been spreading quickly, threatening the public health system. Consequently, positive COVID-19 cases must be rapidly detected and treated. Automatic detection systems are essential for controlling the COVID-19 pandemic. Molecular techniques and medical imaging scans are among the most effective approaches for detecting COVID-19. Although these approaches are crucial for controlling the COVID-19 pandemic, they have certain limitations. This study proposes an effective hybrid approach based on genomic image processing (GIP) techniques to rapidly detect COVID-19 while avoiding the limitations of traditional detection techniques, using whole and partial genome sequences of human coronavirus (HCoV) diseases. In this work, the GIP techniques convert the genome sequences of HCoVs into genomic grayscale images using a genomic image mapping technique known as the frequency chaos game representation. Then, the pre-trained convolutional neural network AlexNet is used to extract deep features from these images using the last convolution (conv5) and second fully-connected (fc7) layers. The most significant features were obtained by removing the redundant ones using the ReliefF and least absolute shrinkage and selection operator (LASSO) algorithms. These features are then passed to two classifiers: decision trees and k-nearest neighbors (KNN). Results showed that extracting deep features from the fc7 layer, selecting the most significant features using the LASSO algorithm, and executing the classification process using the KNN classifier is the best hybrid approach. The proposed hybrid deep learning approach detected COVID-19, among other HCoV diseases, with 99.71% accuracy, 99.78% specificity, and 99.62% sensitivity.
Scientific reports
"2023-03-11T00:00:00"
[ "Muhammed SHammad", "Vidan FGhoneim", "Mai SMabrouk", "Walid IAl-Atabany" ]
10.1038/s41598-023-30941-0 10.1038/s41586-020-2008-3 10.14309/ajg.0000000000000620 10.1213/ANE.0000000000004845 10.3390/pathogens9030186 10.1038/s41591-020-0820-9 10.1016/S0140-6736(20)30251-8 10.1016/j.jinf.2020.03.041 10.1109/ACCESS.2021.3076158 10.1038/s41598-021-88807-2 10.1016/j.eswa.2020.113909 10.1002/ima.22469 10.1016/j.compbiomed.2020.103805 10.1007/s10489-020-01888-w 10.1109/JIOT.2021.3055804 10.1016/j.asoc.2020.106642 10.1016/j.asoc.2022.108780 10.1038/s41598-020-76550-z 10.1148/radiol.2020200642 10.1021/acsnano.0c02624 10.1016/j.cie.2021.107666 10.1038/s41598-020-80363-5 10.3389/fgene.2021.569120 10.1007/s11517-022-02591-3 10.1038/s41598-021-90766-7 10.1093/bib/bbaa170 10.1371/journal.pone.0232391 10.1016/j.bspc.2022.104192 10.1016/j.compbiomed.2021.104650 10.13053/rcs-148-3-9 10.1016/j.jmgm.2020.107603 10.1016/j.aej.2022.08.023 10.1093/bioinformatics/17.5.429 10.1016/j.gene.2004.10.021 10.1016/j.neucom.2020.10.068 10.1016/j.compbiomed.2017.08.001 10.1145/3065386 10.1109/TMI.2016.2535302 10.1007/s42979-021-00815-1 10.1109/ACCESS.2019.2919122 10.1016/j.neucom.2017.11.077 10.1109/ACCESS.2021.3053759 10.1016/j.jbi.2018.07.014 10.1111/j.1467-9868.2011.00771.x 10.1111/j.1467-9868.2007.00577.x 10.1016/j.ipm.2009.03.002
A hybrid CNN and ensemble model for COVID-19 lung infection detection on chest CT scans.
COVID-19 is highly infectious and causes acute respiratory disease. Machine learning (ML) and deep learning (DL) models are vital in detecting disease from computerized chest tomography (CT) scans. The DL models outperformed the ML models. For COVID-19 detection from CT scan images, DL models are used as end-to-end models. Thus, the performance of the model is evaluated for the quality of the extracted feature and classification accuracy. There are four contributions included in this work. First, this research is motivated by studying the quality of the extracted feature from the DL by feeding these extracted features to an ML model. In other words, we proposed comparing the end-to-end DL model performance against the approach of using DL for feature extraction and ML for the classification of COVID-19 CT scan images. Second, we proposed studying the effect of fusing extracted features from image descriptors, e.g., Scale-Invariant Feature Transform (SIFT), with extracted features from DL models. Third, we proposed a new Convolutional Neural Network (CNN) to be trained from scratch and then compared to deep transfer learning on the same classification problem. Finally, we studied the performance gap between classic ML models and ensemble learning models. The proposed framework is evaluated using a CT dataset, where the obtained results are evaluated using five different metrics. The obtained results revealed that using the proposed CNN model is better than using the well-known DL model for the purpose of feature extraction. Moreover, using a DL model for feature extraction and an ML model for the classification task achieved better results in comparison to using an end-to-end DL model for detecting COVID-19 CT scan images. Of note, the accuracy rate of the former method improved by using ensemble learning models instead of the classic ML models. The proposed method achieved the best accuracy rate of 99.39%.
PloS one
"2023-03-10T00:00:00"
[ "Ahmed AAkl", "Khalid MHosny", "Mostafa MFouda", "AhmadSalah" ]
10.1371/journal.pone.0282608 10.1016/j.compbiomed.2020.103795 10.1097/RLI.0000000000000700 10.1371/journal.pone.0236621 10.1007/s13246-020-00865-4 10.1016/j.cmpb.2020.105532 10.1038/nature14539 10.1109/JPROC.2020.3004555 10.1371/journal.pone.0235187 10.1371/journal.pone.0250688 10.2196/19569 10.1007/s10489-020-02055-x 10.1007/s00500-020-05275-y 10.1109/ACCESS.2020.3016780 10.1016/j.eswa.2021.116377 10.1007/s10723-022-09615-0 10.1002/ima.22706 10.1109/TMI.2017.2712367 10.1016/j.aquaeng.2020.102117 10.1023/B:VISI.0000029664.99615.94 10.7717/peerj.10086 10.1016/j.patrec.2020.10.001 10.1007/s10489-020-01826-w 10.1007/s00521-020-05437-x
Artificial intelligence for assistance of radiology residents in chest CT evaluation for COVID-19 pneumonia: a comparative diagnostic accuracy study.
In hospitals, it is crucial to rule out coronavirus disease 2019 (COVID-19) timely and reliably. Artificial intelligence (AI) provides sufficient accuracy to identify chest computed tomography (CT) scans with signs of COVID-19. To compare the diagnostic accuracy of radiologists with different levels of experience with and without assistance of AI in CT evaluation for COVID-19 pneumonia and to develop an optimized diagnostic pathway. The retrospective, single-center, comparative case-control study included 160 consecutive participants who had undergone chest CT scan between March 2020 and May 2021 without or with confirmed diagnosis of COVID-19 pneumonia in a ratio of 1:3. Index tests were chest CT evaluation by five radiological senior residents, five junior residents, and an AI software. Based on the diagnostic accuracy in every group and on comparison of groups, a sequential CT assessment pathway was developed. Areas under receiver operating curves were 0.95 (95% confidence interval [CI]=0.88-0.99), 0.96 (95% CI=0.92-1.0), 0.77 (95% CI=0.68-0.86), and 0.95 (95% CI=0.9-1.0) for junior residents, senior residents, AI, and sequential CT assessment, respectively. Proportions of false negatives were 9%, 3%, 17%, and 2%, respectively. With the developed diagnostic pathway, junior residents evaluated all CT scans with the support of AI. Senior residents were only required as second readers in 26% (41/160) of the CT scans. AI can support junior residents with chest CT evaluation for COVID-19 and reduce the workload of senior residents. A review of selected CT scans by senior residents is mandatory.
Acta radiologica (Stockholm, Sweden : 1987)
"2023-03-10T00:00:00"
[ "LucjaMlynska", "AmerMalouhi", "MajaIngwersen", "FelixGüttler", "StephanieGräger", "UlfTeichgräber" ]
10.1177/02841851231162085
MCSC-Net: COVID-19 detection using deep-Q-neural network classification with RFNN-based hybrid whale optimization.
COVID-19 is the most dangerous virus, and its accurate diagnosis saves lives and slows its spread. However, COVID-19 diagnosis takes time and requires trained professionals. Therefore, developing a deep learning (DL) model on low-radiation imaging modalities like chest X-rays (CXRs) is needed. The existing DL models failed to diagnose COVID-19 and other lung diseases accurately. This study implements a multi-class CXR segmentation and classification network (MCSC-Net) to detect COVID-19 using CXR images. Initially, a hybrid median bilateral filter (HMBF) is applied to CXR images to reduce image noise and enhance the COVID-19-infected regions. Then, a skip connection-based residual network-50 (SC-ResNet50) is used to segment (localize) COVID-19 regions. The features from CXRs are further extracted using a robust feature neural network (RFNN). Since the initial features contain joint COVID-19, normal, pneumonia bacterial, and viral properties, the conventional methods fail to separate the class of each disease-based feature. To extract the distinct features of each class, RFNN includes a disease-specific feature separate attention mechanism (DSFSAM). Furthermore, the hunting nature of the hybrid whale optimization algorithm (HWOA) is used to select the best features in each class. Finally, the deep-Q-neural network (DQNN) classifies CXRs into multiple disease classes. The proposed MCSC-Net shows enhanced accuracy of 99.09% for 2-class, 99.16% for 3-class, and 99.25% for 4-class classification of CXR images compared to other state-of-the-art approaches. The proposed MCSC-Net enables multi-class segmentation and classification tasks to be conducted on CXR images with high accuracy. Thus, together with gold-standard clinical and laboratory tests, this new method is promising to be used in future clinical practice to evaluate patients.
Journal of X-ray science and technology
"2023-03-07T00:00:00"
[ "GerardDeepak", "MMadiajagan", "SanjeevKulkarni", "Ahmed NajatAhmed", "AnandbabuGopatoti", "VeeraswamyAmmisetty" ]
10.3233/XST-221360
Robust framework for COVID-19 identification from a multicenter dataset of chest CT scans.
The main objective of this study is to develop a robust deep learning-based framework to distinguish COVID-19, Community-Acquired Pneumonia (CAP), and Normal cases based on volumetric chest CT scans, which are acquired in different imaging centers using different scanners and technical settings. We demonstrated that while our proposed model is trained on a relatively small dataset acquired from only one imaging center using a specific scanning protocol, it performs well on heterogeneous test sets obtained by multiple scanners using different technical parameters. We also showed that the model can be updated via an unsupervised approach to cope with the data shift between the train and test sets and enhance the robustness of the model upon receiving a new external dataset from a different center. More specifically, we extracted the subset of the test images for which the model generated a confident prediction and used the extracted subset along with the training set to retrain and update the benchmark model (the model trained on the initial train set). Finally, we adopted an ensemble architecture to aggregate the predictions from multiple versions of the model. For initial training and development purposes, an in-house dataset of 171 COVID-19, 60 CAP, and 76 Normal cases was used, which contained volumetric CT scans acquired from one imaging center using a single scanning protocol and standard radiation dose. To evaluate the model, we collected four different test sets retrospectively to investigate the effects of the shifts in the data characteristics on the model's performance. Among the test cases, there were CT scans with similar characteristics as the train set as well as noisy low-dose and ultra-low-dose CT scans. In addition, some test CT scans were obtained from patients with a history of cardiovascular diseases or surgeries. This dataset is referred to as the "SPGC-COVID" dataset. The entire test dataset used in this study contains 51 COVID-19, 28 CAP, and 51 Normal cases. Experimental results indicate that our proposed framework performs well on all test sets, achieving a total accuracy of 96.15% (95%CI: [91.25-98.74]), COVID-19 sensitivity of 96.08% (95%CI: [86.54-99.5]), CAP sensitivity of 92.86% (95%CI: [76.50-99.19]), and Normal sensitivity of 98.04% (95%CI: [89.55-99.95]), where the confidence intervals are obtained using a significance level of 0.05. The obtained AUC values (one class vs others) are 0.993 (95%CI: [0.977-1]), 0.989 (95%CI: [0.962-1]), and 0.990 (95%CI: [0.971-1]) for the COVID-19, CAP, and Normal classes, respectively. The experimental results also demonstrate the capability of the proposed unsupervised enhancement approach in improving the performance and robustness of the model when being evaluated on varied external test sets.
PloS one
"2023-03-03T00:00:00"
[ "SadafKhademi", "ShahinHeidarian", "ParnianAfshar", "NastaranEnshaei", "FarnooshNaderkhani", "Moezedin JavadRafiee", "AnastasiaOikonomou", "AkbarShafiee", "FaranakBabaki Fard", "Konstantinos NPlataniotis", "ArashMohammadi" ]
10.1371/journal.pone.0282121 10.1148/radiol.2020200432 10.1109/MSP.2021.3090674 10.1016/j.cell.2020.04.045 10.1148/radiol.2019190928 10.1038/srep34921 10.1016/j.numecd.2020.04.013 10.3389/frai.2021.598932 10.1016/j.imu.2022.100945 10.1038/s41597-021-00900-3 10.1038/s41598-022-08796-8 10.1007/s00330-010-1990-5 10.1016/j.patcog.2021.107942 10.1109/LSP.2020.3034858 10.1007/s42058-020-00034-2 10.1002/widm.1353 10.1016/j.media.2021.102062 10.1109/ACCESS.2021.3084358 10.2307/2685469 10.1007/BF02295996 10.1109/TMI.2020.3009029 10.1109/TMI.2020.2971258 10.1186/s13244-021-01096-1 10.1016/j.chest.2021.04.004 10.25259/JCIS_138_2020 10.7189/jogh.10.010347 10.1016/j.jhin.2020.03.001 10.3389/fonc.2020.556334 10.1016/j.chest.2020.04.003 10.1016/j.tmaid.2020.101627 10.1007/s15010-020-01467-8 10.1007/s00330-020-06809-6
Deep Learning Solution for Quantification of Fluorescence Particles on a Membrane.
The detection and quantification of severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) virus particles in ambient waters using a membrane-based in-gel loop-mediated isothermal amplification (mgLAMP) method can play an important role in large-scale environmental surveillance for early warning of potential outbreaks. However, counting particles or cells in fluorescence microscopy is an expensive, time-consuming, and tedious task that only highly trained technicians and researchers can perform. Although such objects are generally easy to identify, manually annotating cells is occasionally prone to fatigue errors and arbitrariness due to the operator's interpretation of borderline cases. In this research, we proposed a method to detect and quantify multiscale and shape variant SARS-CoV-2 fluorescent cells generated using a portable (
Sensors (Basel, Switzerland)
"2023-03-01T00:00:00"
[ "Abdellah ZakariaSellam", "AzeddineBenlamoudi", "Clément AntoineCid", "LeopoldDobelle", "AminaSlama", "YassinEl Hillali", "AbdelmalikTaleb-Ahmed" ]
10.3390/s23041794 10.1038/s41586-021-04188-6 10.1021/acs.est.0c02388 10.1038/s41587-020-0684-z 10.1016/j.watres.2020.116404 10.1016/j.envint.2020.105689 10.1016/j.scitotenv.2021.149618 10.1128/jcm.02446-21 10.1021/acs.est.1c04623 10.1109/MSP.2012.2204190 10.1109/TSMCB.2012.2228639 10.1016/0031-3203(95)00067-4 10.1109/TBME.2009.2035102 10.3390/s22103760 10.1016/j.neunet.2014.09.003 10.1155/2018/7068349 10.1109/ICCV.2015.169 10.1109/MSP.2009.934181 10.1007/978-3-319-10578-9_23 10.1109/TPAMI.2016.2577031 10.1109/TPAMI.2019.2956516
MonkeyNet: A robust deep convolutional neural network for monkeypox disease detection and classification.
The monkeypox virus poses a new pandemic threat while we are still recovering from COVID-19. Despite the fact that monkeypox is not as lethal and contagious as COVID-19, new patient cases are recorded every day. If preparations are not made, a global pandemic is likely. Deep learning (DL) techniques are now showing promise in medical imaging for identifying what diseases a person has. Images of monkeypox virus-infected human skin regions can be used to diagnose monkeypox early, since such images carry information about the disease. However, there is still no reliable monkeypox database available to the public that can be used to train and test DL models. As a result, it is essential to collect images of monkeypox patients. The "MSID" dataset, short for "Monkeypox Skin Images Dataset", which was developed for this research, is free to use and can be downloaded from the Mendeley Data database by anyone who wants to use it. DL models can be built and used with more confidence using the images in this dataset. These images come from a variety of open-source and online sources and can be used for research purposes without any restrictions. Furthermore, we proposed and evaluated a modified DenseNet-201 deep learning-based CNN model named MonkeyNet. Using the original and augmented datasets, this study suggested a deep convolutional neural network that was able to correctly identify monkeypox disease with an accuracy of 93.19% and 98.91%, respectively. This implementation also shows Grad-CAM, which indicates the level of the model's effectiveness and identifies the infected regions in each class image, which will help clinicians. The proposed model will also help doctors make accurate early diagnoses of monkeypox disease and protect against the spread of the disease.
Neural networks : the official journal of the International Neural Network Society
"2023-02-28T00:00:00"
[ "DiponkorBala", "Md ShamimHossain", "Mohammad AlamgirHossain", "Md IbrahimAbdullah", "Md MizanurRahman", "BalachandranManavalan", "NaijieGu", "Mohammad SIslam", "ZhangjinHuang" ]
10.1016/j.neunet.2023.02.022 10.17632/r9bfpnvyxr.3
Conv-CapsNet: capsule based network for COVID-19 detection through X-Ray scans.
Coronavirus is a virus that spread rapidly worldwide and was eventually declared a pandemic. The rapid spread made it essential to detect Coronavirus-infected people to control further spread. Recent studies show that radiological images such as X-rays and CT scans provide essential information for detecting infection using deep learning models. This paper proposes a shallow architecture based on capsule networks with convolutional layers to detect COVID-19-infected persons. The proposed method combines the ability of the capsule network to understand spatial information with convolutional layers for efficient feature extraction. Due to the model's shallow architecture, it has 23
Multimedia tools and applications
"2023-02-28T00:00:00"
[ "PulkitSharma", "RhythmArya", "RichaVerma", "BinduVerma" ]
10.1007/s11042-023-14353-w 10.1016/j.patrec.2020.09.010 10.1007/s13246-020-00865-4 10.1007/s42979-020-00383-w 10.1016/j.cmpb.2022.106833 10.1109/TGRS.2021.3090410 10.1016/j.chemosphere.2021.132569 10.1007/s42979-021-00881-5 10.1016/j.eswa.2020.113909 10.1109/ACCESS.2020.3010287 10.1007/s11547-020-01232-9 10.1016/j.imu.2020.100297 10.1016/j.chaos.2020.110495 10.1016/j.imu.2020.100412 10.1109/ACCESS.2021.3058537 10.1007/s42979-020-00335-4 10.1016/j.eswa.2020.114054 10.1016/j.asoc.2020.106744 10.1007/s10140-020-01808-y 10.1016/j.slast.2021.10.011 10.1007/s42979-020-00216-w 10.1016/j.chaos.2020.110245 10.1016/j.imu.2020.100360 10.1007/s42979-021-00774-7 10.1016/j.imu.2020.100505 10.1016/j.aej.2021.01.011 10.1016/j.chaos.2020.110122 10.1016/j.mehy.2020.109761 10.1109/JSTSP.2019.2902305 10.1007/s13244-018-0639-9 10.1038/s41746-020-00372-6
Detection of COVID-19 Case from Chest CT Images Using Deformable Deep Convolutional Neural Network.
The infectious coronavirus disease (COVID-19) has become a great threat to global human health. Timely and rapid detection of COVID-19 cases is very crucial to control its spread through isolation measures as well as for proper treatment. Though the real-time reverse transcription-polymerase chain reaction (RT-PCR) test is a widely used technique for COVID-19 infection, recent research suggests chest computed tomography (CT)-based screening as an effective substitute in cases of time and availability limitations of RT-PCR. In consequence, deep learning-based COVID-19 detection from chest CT images is gaining momentum. Furthermore, visual analysis of data has enhanced the opportunities of maximizing the prediction performance in this big data and deep learning realm. In this article, we have proposed two separate deformable deep networks, converted from the conventional convolutional neural network (CNN) and the state-of-the-art ResNet-50, to detect COVID-19 cases from chest CT images. The impact of the deformable concept has been observed through comparative performance analysis between the designed deformable and normal models, and it is found that the deformable models show better prediction results than their normal form. Furthermore, the proposed deformable ResNet-50 model shows better performance than the proposed deformable CNN model. The gradient class activation mapping (Grad-CAM) technique has been used to visualize and check the targeted regions' localization effort at the final convolutional layer and has been found excellent. A total of 2481 chest CT images have been used to evaluate the performance of the proposed models with a train-valid-test data splitting ratio of 80:10:10 in random fashion. The proposed deformable ResNet-50 model achieved training accuracy of 99.5% and test accuracy of 97.6% with specificity of 98.5% and sensitivity of 96.5%, which are satisfactory compared with related works. The comprehensive discussion demonstrates that the proposed deformable ResNet-50 model-based COVID-19 detection technique can be useful for clinical applications.
Journal of healthcare engineering
"2023-02-28T00:00:00"
[ "MdFoysal", "A B M AowladHossain", "AbdulsalamYassine", "M ShamimHossain" ]
10.1155/2023/4301745 10.1016/j.ijid.2020.02.050 10.1007/s10489-020-01943-6 10.1016/j.patcog.2020.107700 10.1007/s11042-020-09894-3 10.1007/s00521-020-05437-x 10.1007/s00330-020-07044-9 10.1016/j.compbiomed.2020.104037 10.1007/s10096-020-03901-z 10.3389/fmed.2020.608525 10.1109/tmi.2020.2994908 10.1109/access.2020.3018498 10.1007/s11548-020-02286-w 10.3390/app11157004 10.1109/INCET51464.2021.9456387 10.1109/jbhi.2020.3019505 10.1007/s42979-021-00782-7 10.1007/s00330-021-07715-1 10.1109/CITS49457.2020.9232574 10.1016/j.inffus.2021.02.013 10.1159/000509223 10.1109/mnet.011.2000458 10.1016/j.scs.2020.102582 10.1155/2022/2950699 10.1109/tnse.2020.3026637 10.1109/jiot.2020.3033129 10.1109/jiot.2020.3047662 10.1109/jiot.2020.3013710 10.1109/jiot.2017.2772959 10.1016/j.eswa.2019.112821 10.1148/radiol.2020200642 10.1007/s11547-020-01237-4 10.1007/s11263-019-01228-7
Multi-scale Triplet Hashing for Medical Image Retrieval.
For the medical image retrieval task, deep hashing algorithms are widely applied to large-scale datasets for auxiliary diagnosis due to the retrieval-efficiency advantage of hash codes. Most of these focus on feature learning, while neglecting the discriminative areas of medical images and the hierarchical similarity of deep features and hash codes. In this paper, we tackle these dilemmas with a new Multi-scale Triplet Hashing (MTH) algorithm, which can leverage multi-scale information, convolutional self-attention and hierarchical similarity to learn effective hash codes simultaneously. The MTH algorithm first designs a multi-scale DenseBlock module to learn multi-scale information of medical images. Meanwhile, a convolutional self-attention mechanism is developed to perform information interaction in the channel domain, which can capture the discriminative areas of medical images effectively. On top of the two paths, a novel loss function is proposed to not only conserve the category-level information of deep features and the semantic information of hash codes in the learning process, but also capture the hierarchical similarity of deep features and hash codes. Extensive experiments on the Curated X-ray Dataset, Skin Cancer MNIST Dataset and COVID-19 Radiography Dataset illustrate that the MTH algorithm can further enhance medical image retrieval compared to other state-of-the-art medical image retrieval algorithms.
Computers in biology and medicine
"2023-02-25T00:00:00"
[ "YaxiongChen", "YiboTang", "JinghaoHuang", "ShengwuXiong" ]
10.1016/j.compbiomed.2023.106633
Deep learning attention-guided radiomics for COVID-19 chest radiograph classification.
Accurate assessment of coronavirus disease 2019 (COVID-19) lung involvement through chest radiographs plays an important role in effective management of the infection. This study aims to develop a two-step feature-merging method to integrate image features from deep learning and radiomics to differentiate COVID-19, non-COVID-19 pneumonia and normal chest radiographs (CXR). In this study, a deformable convolutional neural network (deformable CNN) was developed and used as a feature extractor to obtain 1,024-dimensional deep learning latent representation (DLR) features. Then 1,069-dimensional radiomics features were extracted from the region of interest (ROI) guided by the deformable CNN's attention. The two feature sets were concatenated to generate a merged feature set for classification. For comparative experiments, the same process was applied to the DLR-only feature set to verify the effectiveness of feature concatenation. Using the merged feature set resulted in an overall average accuracy of 91.0% for three-class classification, representing a statistically significant improvement of 0.6% compared to the DLR-only classification. The recall and precision of classification into the COVID-19 class were 0.926 and 0.976, respectively. The feature-merging method was shown to significantly improve the classification performance as compared to using only deep learning features, regardless of the choice of classifier (P value <0.0001). The F1-scores of the three classes (normal, non-COVID-19 pneumonia, COVID-19) were 0.892, 0.890, and 0.950, respectively. A two-step COVID-19 classification framework integrating information from both DLR and radiomics features (guided by a deep learning attention mechanism) has been developed. The proposed feature-merging method has been shown to improve the performance of chest radiograph classification as compared to the case of using only deep learning features.
Quantitative imaging in medicine and surgery
"2023-02-24T00:00:00"
[ "DongrongYang", "GeRen", "RuiyanNi", "Yu-HuaHuang", "Ngo Fung DanielLam", "HongfeiSun", "Shiu Bun NelsonWan", "Man Fung EstherWong", "King KwongChan", "Hoi Ching HaileyTsang", "LuXu", "Tak ChiuWu", "Feng-Ming SpringKong", "Yì Xiáng JWáng", "JingQin", "Lawrence Wing ChiChan", "MichaelYing", "JingCai" ]
10.21037/qims-22-531 10.1186/s13244-021-01096-1 10.1148/radiol.2021204522 10.1148/radiol.2020203173 10.21037/qims-20-771 10.1007/s13755-021-00146-8 10.1111/exsy.12749 10.1080/00325481.2021.2021741 10.1016/j.radi.2022.03.011 10.1016/j.eswa.2022.117410 10.21037/qims-21-791 10.1016/j.asoc.2021.108190 10.1016/j.bspc.2021.103182 10.1007/s10489-020-01829-7 10.1016/j.compbiomed.2020.104181 10.3390/diagnostics12020267 10.1016/j.chaos.2020.110495 10.1016/j.bspc.2021.103286 10.3389/fonc.2019.01050 10.1088/1361-6560/aae56a 10.1177/20552076221092543 10.1016/j.ejrad.2021.109673 10.1109/TNNLS.2021.3119071 10.1016/j.compbiomed.2021.104304 10.1007/s10278-021-00421-w 10.3233/XST-200831 10.3390/diagnostics11101812 10.1016/j.compbiomed.2021.104665 10.1002/mp.15582 10.1109/CVPR.2017.243 10.1109/CVPR.2017.243 10.1109/WACV.2018.00097 10.1109/WACV.2018.00097 10.1016/j.compbiomed.2021.104319 10.1016/j.compbiomed.2021.104319 10.1109/CiSt49399.2021.9357250 10.1109/CiSt49399.2021.9357250 10.1109/ICCV.2017.89 10.1109/ICCV.2017.89 10.1007/s11263-015-0816-y 10.1158/0008-5472.CAN-17-0339 10.1038/srep11044 10.1038/srep46349 10.1002/mp.13891 10.1007/s13278-021-00731-5 10.21037/qims-20-1230
DPDH-CapNet: A Novel Lightweight Capsule Network with Non-routing for COVID-19 Diagnosis Using X-ray Images.
COVID-19 has claimed millions of lives since its outbreak in December 2019, and the damage continues, so it is urgent to develop new technologies to aid its diagnosis. However, the state-of-the-art deep learning methods often rely on large-scale labeled data, limiting their clinical application in COVID-19 identification. Recently, capsule networks have achieved highly competitive performance for COVID-19 detection, but they require expensive routing computation or traditional matrix multiplication to deal with the capsule dimensional entanglement. A more lightweight capsule network is developed to effectively address these problems, namely DPDH-CapNet, which aims to enhance the technology of automated diagnosis for COVID-19 chest X-ray images. It adopts depthwise convolution (D), point convolution (P), and dilated convolution (D) to construct a new feature extractor, thus successfully capturing the local and global dependencies of COVID-19 pathological features. Simultaneously, it constructs the classification layer by homogeneous (H) vector capsules with an adaptive, non-iterative, and non-routing mechanism. We conduct experiments on two publicly available combined datasets, including normal, pneumonia, and COVID-19 images. With a limited number of samples, the parameters of the proposed model are reduced by 9x compared to the state-of-the-art capsule network. Moreover, our model has faster convergence speed and better generalization, and its accuracy, precision, recall, and F-measure are improved to 97.99%, 98.05%, 98.02%, and 98.03%, respectively. In addition, experimental results demonstrate that, contrary to the transfer learning method, the proposed model does not require pre-training and a large number of training samples.
Journal of digital imaging
"2023-02-23T00:00:00"
[ "JianjunYuan", "FujunWu", "YuxiLi", "JinyiLi", "GuojunHuang", "QuanyongHuang" ]
10.1007/s10278-023-00791-3 10.48550/arXiv.2002.04764
Diagnosis of COVID-19 from Multimodal Imaging Data Using Optimized Deep Learning Techniques.
COVID-19 had a global impact, claiming many lives and disrupting healthcare systems even in many developed countries. Various mutations of the severe acute respiratory syndrome coronavirus-2 continue to be an impediment to early detection of this disease, which is vital for social well-being. The deep learning paradigm has been widely applied to investigate multimodal medical image data such as chest X-rays and CT scan images to aid in early detection and decision making about disease containment and treatment. Any method for reliable and accurate screening of COVID-19 infection would be beneficial for rapid detection as well as for reducing direct virus exposure in healthcare professionals. Convolutional neural networks (CNN) have previously proven to be quite successful in the classification of medical images. A CNN is used in this study to suggest a deep learning classification method for detecting COVID-19 from chest X-ray images and CT scans. Samples from the Kaggle repository were collected to analyse model performance. Deep learning-based CNN models such as VGG-19, ResNet-50, Inception v3 and Xception are optimized and compared by evaluating their accuracy after pre-processing the data. Because X-ray is a less expensive process than CT scan, chest X-ray images are considered to have a significant impact on COVID-19 screening. According to this work, chest X-rays outperform CT scans in terms of detection accuracy. The fine-tuned VGG-19 model detected COVID-19 with high accuracy: up to 94.17% for chest X-rays and 93% for CT scans. This work thereby concludes that VGG-19 is the best-suited model for detecting COVID-19 and that chest X-rays yield better accuracy than CT scans for this model.
SN computer science
"2023-02-23T00:00:00"
[ "S EzhilMukhi", "R ThanujaVarshini", "S Eliza FemiSherley" ]
10.1007/s42979-022-01653-5 10.1007/s42600-021-00151-6 10.1016/j.bspc.2020.102365 10.1007/s10489-020-01902-1 10.3390/math8060890 10.3390/jcm9051547 10.1101/2020.05.05.20085902 10.1101/2020.03.19.20038950 10.32604/csse.2022.021438 10.1016/j.imu.2022.101059 10.1109/JBHI.2021.3132157
A qualitative analysis of radiography students' reflective essays regarding their experience of clinical placement during the COVID-19 pandemic.
The COVID-19 pandemic significantly impacted healthcare services and clinical placement for healthcare students. There is a paucity of qualitative research into radiography students' experiences of clinical placement during the pandemic. Students in stages three and four of a 4-year BSc Radiography degree in Ireland wrote reflective essays regarding their experience of clinical placement during the COVID-19 healthcare crisis. Permission was granted by 108 radiography students and recent graduates for their reflections to be analysed as part of this study. A thematic approach to data analysis was used, allowing themes to emerge from the reflective essays. Two researchers independently coded each reflective essay using the Braun and Clarke model. Four themes were highlighted: 1) Challenges associated with undertaking clinical placement during the pandemic, such as reduced patient throughput and PPE-related communication barriers; 2) Benefits of clinical placement during the pandemic, in terms of personal and professional development and completing degree requirements to graduate without delay; 3) Emotional impact; and 4) Supporting students in clinical practice. Students recognised their resilience and felt proud of their contribution during this healthcare crisis but feared transmitting COVID-19 to family. Educational and emotional support provided by tutors, clinical staff and the university was deemed essential by students during this placement. Despite the pressure hospitals were under during the pandemic, students had positive clinical placement experiences and perceived these experiences to have contributed to their professional and personal growth. This study supports the argument for clinical placements to continue throughout healthcare crisis periods, albeit with additional learning and emotional support in place.
Clinical placement experiences during the pandemic prompted a deep sense of pride amongst radiography students in their profession and contributed to the development of professional identity.
Radiography (London, England : 1995)
"2023-02-23T00:00:00"
[ "MO'Connor", "ALunney", "DKearney", "SMurphy" ]
10.1016/j.radi.2023.01.022
Artificial intelligence based approach for categorization of COVID-19 ECG images in presence of other cardiovascular disorders.
Coronavirus disease (COVID-19) is caused by the SARS-CoV-2 virus, which was first identified in the latter half of 2019 and subsequently evolved into a pandemic. It is an RNA virus that can infect both humans and animals. If the disease is not identified at an early stage, infection and mortality rates increase with time. Diagnosing the virus as soon as possible could contain and avoid a serious COVID-19 outbreak; a timely and reliable approach for COVID-19 identification has therefore become important in order to prevent the disease from spreading rapidly. Many methods suggested in recent times for the detection of COVID-19 have various flaws, so fresh investigations are required to increase diagnostic performance. In this article, an approach for automatically diagnosing COVID-19 using ECG images and deep learning architectures such as Visual Geometry Group (VGG) and AlexNet is proposed. The proposed method is able to classify between COVID-19, myocardial infarction, normal sinus rhythm, and other abnormal heartbeats using the Lead-II ECG image only. The efficacy of the proposed technique is validated using a publicly available ECG image database. We achieved an accuracy of 77.42% using the AlexNet model and 75% accuracy with the help of the VGG19 model.
Biomedical physics & engineering express
"2023-02-23T00:00:00"
[ "M KrishnaChaitanya", "Lakhan DevSharma", "JagdeepRahul", "DikshaSharma", "AmarjitRoy" ]
10.1088/2057-1976/acbd53
Deep Learning With Chest Radiographs for Making Prognoses in Patients With COVID-19: Retrospective Cohort Study.
An artificial intelligence (AI) model using chest radiography (CXR) may provide good performance in making prognoses for COVID-19. We aimed to develop and validate a prediction model using CXR based on an AI model and clinical variables to predict clinical outcomes in patients with COVID-19. This retrospective longitudinal study included patients hospitalized for COVID-19 at multiple COVID-19 medical centers between February 2020 and October 2020. Patients at Boramae Medical Center were randomly classified into training, validation, and internal testing sets (at a ratio of 8:1:1, respectively). An AI model using initial CXR images as input, a logistic regression model using clinical information, and a combined model using the output of the AI model (as CXR score) and clinical information were developed and trained to predict hospital length of stay (LOS) ≤2 weeks, need for oxygen supplementation, and acute respiratory distress syndrome (ARDS). The models were externally validated in the Korean Imaging Cohort of COVID-19 data set for discrimination and calibration. The AI model using CXR and the logistic regression model using clinical variables were suboptimal to predict hospital LOS ≤2 weeks or the need for oxygen supplementation but performed acceptably in the prediction of ARDS (AI model area under the curve [AUC] 0.782, 95% CI 0.720-0.845; logistic regression model AUC 0.878, 95% CI 0.838-0.919). The combined model performed better in predicting the need for oxygen supplementation (AUC 0.704, 95% CI 0.646-0.762) and ARDS (AUC 0.890, 95% CI 0.853-0.928) compared to the CXR score alone. Both the AI and combined models showed good calibration for predicting ARDS (P=.079 and P=.859). The combined prediction model, comprising the CXR score and clinical information, was externally validated as having acceptable performance in predicting severe illness and excellent performance in predicting ARDS in patients with COVID-19.
Journal of medical Internet research
"2023-02-17T00:00:00"
[ "Hyun WooLee", "Hyun JunYang", "HyungjinKim", "Ue-HwanKim", "Dong HyunKim", "Soon HoYoon", "Soo-YounHam", "Bo DaNam", "Kum JuChae", "DabeeLee", "Jin YoungYoo", "So HyeonBak", "Jin YoungKim", "Jin HwanKim", "Ki BeomKim", "Jung ImJung", "Jae-KwangLim", "Jong EunLee", "Myung JinChung", "Young KyungLee", "Young SeonKim", "Sang MinLee", "WoocheolKwon", "Chang MinPark", "Yun-HyeonKim", "Yeon JooJeong", "Kwang NamJin", "Jin MoGoo" ]
10.2196/42717 10.1136/thoraxjnl-2020-216425 10.1136/bmj.m1328 10.3389/fmed.2021.704256 10.2196/30157 10.21037/atm.2020.02.71 10.1183/13993003.04188-2020 10.1002/emp2.12205 10.1038/s41746-021-00546-w 10.1038/s41746-021-00546-w 10.1186/s12879-022-07617-7 10.1186/s12879-022-07617-7 10.1016/S2589-7500(21)00039-X 10.1136/bmj.g7594 10.3346/jkms.2020.35.e413 10.1097/JTO.0b013e3181ec173d 10.6339/jds.2005.03(3).206 10.1001/jamanetworkopen.2019.0204 10.1016/j.jbi.2017.10.008 10.1007/s11547-020-01232-9 10.1007/s00330-020-07504-2 10.1371/journal.pone.0245518 10.2214/AJR.20.24801 10.7759/cureus.9448 10.1038/s41598-021-86853-4 10.1038/s41598-021-86853-4 10.3904/kjim.2020.329 10.1038/s41467-020-18786-x 10.1038/s41467-020-18786-x 10.1056/NEJMcp2009575 10.1186/s13613-020-00650-2 10.7861/clinmed.2020-0214 10.4046/trd.2021.0009 10.1080/17476348.2020.1804365 10.1016/S2213-2600(21)00105-3 10.1056/NEJMoa2021436 10.1056/NEJMoa2007764
LDANet: Automatic lung parenchyma segmentation from CT images.
Automatic segmentation of the lung parenchyma from computed tomography (CT) images is helpful for the subsequent diagnosis and treatment of patients. In this paper, based on a deep learning algorithm, a lung dense attention network (LDANet) is proposed with two mechanisms: residual spatial attention (RSA) and gated channel attention (GCA). RSA is utilized to weight the spatial information of the lung parenchyma and suppress feature activation in irrelevant regions, while the weights of each channel are adaptively calibrated using GCA to implicitly predict potential key features. Then, a dual attention guidance module (DAGM) is designed to maximize the integration of the advantages of both mechanisms. In addition, LDANet introduces a lightweight dense block (LDB) that reuses feature information and a positioned transpose block (PTB) that realizes accurate positioning and gradually restores the image resolution until the predicted segmentation map is generated. Experiments are conducted on two public datasets, LIDC-IDRI and COVID-19 CT Segmentation, on which LDANet achieves Dice similarity coefficient values of 0.98430 and 0.98319, respectively, outperforming a state-of-the-art lung segmentation model. Additionally, the effectiveness of the main components of LDANet is demonstrated through ablation experiments.
Computers in biology and medicine
"2023-02-16T00:00:00"
[ "YingChen", "LongfengFeng", "ChengZheng", "TaohuiZhou", "LanLiu", "PengfeiLiu", "YiChen" ]
10.1016/j.compbiomed.2023.106659
Classifying COVID-19 Patients From Chest X-ray Images Using Hybrid Machine Learning Techniques: Development and Evaluation.
The COVID-19 pandemic has raised global concern, with moderate to severe cases displaying lung inflammation and respiratory failure. Chest x-ray (CXR) imaging is crucial for diagnosis and is usually interpreted by experienced medical specialists. Machine learning has been applied with acceptable accuracy, but computational efficiency has received less attention. We introduced a novel hybrid machine learning model to accurately classify COVID-19, non-COVID-19, and healthy patients from CXR images with reduced computational time and promising results. Our proposed model was thoroughly evaluated and compared with existing models. A retrospective study was conducted to analyze 5 public data sets containing 4200 CXR images using machine learning techniques including decision trees, support vector machines, and neural networks. The images were preprocessed to undergo image segmentation, enhancement, and feature extraction. The best performing machine learning technique was selected and combined into a multilayer hybrid classification model for COVID-19 (MLHC-COVID-19). The model consisted of 2 layers. The first layer was designed to differentiate healthy individuals from infected patients, while the second layer aimed to classify COVID-19 and non-COVID-19 patients. The MLHC-COVID-19 model was trained and evaluated on unseen COVID-19 CXR images, achieving reasonably high accuracy and F measures of 0.962 and 0.962, respectively. These results show the effectiveness of the MLHC-COVID-19 in classifying COVID-19 CXR images, with improved accuracy and a reduction in interpretation time. The model was also embedded into a web-based MLHC-COVID-19 computer-aided diagnosis system, which was made publicly available. The study found that the MLHC-COVID-19 model effectively differentiated CXR images of COVID-19 patients from those of healthy and non-COVID-19 individuals. It outperformed other state-of-the-art deep learning techniques and showed promising results. 
These results suggest that the MLHC-COVID-19 model could have been instrumental in early detection and diagnosis of COVID-19 patients, thus playing a significant role in controlling and managing the pandemic. Although the pandemic has slowed down, this model can be adapted and utilized for future similar situations. The model was also integrated into a publicly accessible web-based computer-aided diagnosis system.
JMIR formative research
"2023-02-14T00:00:00"
[ "ThanakornPhumkuea", "ThakerngWongsirichot", "KasikritDamkliang", "AsmaNavasakulpong" ]
10.2196/42324 10.1016/j.clim.2020.108427 10.1038/s41564-020-0695-z 10.1007/s12098-020-03263-6 10.1056/NEJMoa2001316 10.1016/j.jmii.2020.05.001 10.1016/S0140-6736(20)30211-7 10.1001/jama.2020.1585 10.1016/j.jaci.2020.04.029 10.1001/jama.2020.2783 10.1016/j.acra.2020.04.016 10.1002/jmv.26830 10.22037/aaem.v9i1.993 10.1001/jama.2020.3786 10.1016/j.ijid.2020.03.017 10.1056/nejmoa2030340 10.1016/j.patrec.2020.09.010 10.1038/s41598-020-76550-z 10.1038/s41598-020-76550-z 10.1109/access.2020.3010287 10.1038/s41746-020-00372-6 10.1038/s41746-020-00372-6 10.1371/journal.pone.0250688 10.1371/journal.pone.0250688 10.3390/a14060183 10.1016/j.cmpb.2020.105581 10.1016/j.mehy.2020.109761 10.1016/j.media.2020.101794 10.1007/s13755-020-00135-3 10.1371/journal.pone.0256630 10.1371/journal.pone.0256630 10.1371/journal.pone.0247839 10.1371/journal.pone.0247839 10.1016/j.eswa.2020.114054 10.1371/journal.pone.0242535 10.1371/journal.pone.0242535 10.1155/2021/8890226 10.1155/2021/8890226 10.1371/journal.pone.0029740 10.1371/journal.pone.0029740 10.1117/1.3115362 10.26671/ijirg.2019.6.8.101 10.1007/s10916-019-1376-4 10.7763/IJIMT.2013.V4.426 10.4015/S1016237218500412 10.1016/j.procs.2017.08.021 10.38094/JASTT20165 10.1016/j.compedu.2019.04.001 10.1016/j.procs.2019.02.085 10.14569/IJACSA.2016.070203 10.1007/s11227-018-2469-4 10.1155/2018/9385947 10.1109/MOCAST.2019.8741677 10.5121/ijdkp.2015.5201 10.1016/j.knosys.2011.06.013 10.3389/fninf.2014.00014 10.1613/jair.953 10.1016/j.ins.2019.10.048
CNN-RNN Network Integration for the Diagnosis of COVID-19 Using Chest X-ray and CT Images.
The 2019 coronavirus disease (COVID-19) has rapidly spread across the globe. It is crucial to identify positive cases as rapidly as humanly possible to provide appropriate treatment for patients and prevent the pandemic from spreading further. Both chest X-ray and computed tomography (CT) images are capable of accurately diagnosing COVID-19. To distinguish lung illnesses (i.e., COVID-19 and pneumonia) from normal cases using chest X-ray and CT images, we combined convolutional neural network (CNN) and recurrent neural network (RNN) models by replacing the fully connected layers of the CNN with a version of an RNN. In this framework, the attributes of CNNs were utilized to extract features and those of RNNs to calculate dependencies and classification based on the extracted features. The CNN models VGG19, ResNet152V2, and DenseNet121 were combined with the long short-term memory (LSTM) and gated recurrent unit (GRU) RNN models, which are convenient to develop because these networks are all available as features on many platforms. The proposed method is evaluated using a large dataset totaling 16,210 X-ray and CT images (5252 COVID-19 images, 6154 pneumonia images, and 4804 normal images) taken from several databases, which had various image sizes, brightness levels, and viewing angles. Their image quality was enhanced via normalization, gamma correction, and contrast-limited adaptive histogram equalization. The ResNet152V2 with GRU model was the best architecture, with an accuracy of 93.37%, an F1 score of 93.54%, a precision of 93.73%, and a recall of 93.47%. From the experimental results, the proposed method is highly effective in distinguishing lung diseases. Furthermore, both CT and X-ray images can be used as input for classification, allowing for the rapid and easy detection of COVID-19.
Sensors (Basel, Switzerland)
"2023-02-12T00:00:00"
[ "IsoonKanjanasurat", "KasiTenghongsakul", "BoonchanaPurahong", "AttasitLasakul" ]
10.3390/s23031356 10.1515/labmed-2020-0135 10.1016/j.bios.2020.112830 10.1016/j.eswa.2022.117275 10.1007/s42452-021-04427-5 10.3390/app12157953 10.1109/ICEAST55249.2022.9826319 10.1109/TMI.2020.3040950 10.1016/j.imu.2020.100412 10.1016/j.compbiomed.2021.104319 10.1016/j.chemolab.2022.104695 10.1016/j.ejrad.2020.109041 10.1016/j.eng.2020.04.010 10.1007/s10489-020-01831-z 10.1016/j.asoc.2021.107918 10.48550/arXiv.2006.11988 10.1109/ACCESS.2020.3010287 10.1016/j.cell.2020.04.045 10.1016/j.cell.2018.02.010 10.1109/ICACCI.2014.6968381 10.1007/BF03178082 10.48550/arXiv.1409.1556 10.1145/3065386 10.1007/978-3-319-46493-0_38 10.1109/CVPR.2017.243 10.1162/neco.1997.9.8.1735 10.48550/arXiv.1406.1078 10.48550/arxiv.1207.0580 10.1021/ci0342472 10.1140/epjs/s11734-022-00647-x 10.1016/j.compbiomed.2020.103792 10.1080/07391102.2020.1767212 10.1016/j.imu.2020.100360 10.48550/arXiv.2201.09952 10.1159/000521658 10.1145/3551647 10.1051/matecconf/201927702001
COVID-19 Classification on Chest X-ray Images Using Deep Learning Methods.
Since December 2019, the coronavirus disease has significantly affected millions of people. Given the effect this disease has on the pulmonary systems of humans, there is a need for chest radiographic imaging (CXR) for monitoring the disease and preventing further deaths. Several studies have shown that deep learning models can achieve promising results for COVID-19 diagnosis from the CXR perspective. In this study, five deep learning models were analyzed and evaluated with the aim of identifying COVID-19 from chest X-ray images. The scope of this study is to highlight the significance and potential of individual deep learning models on COVID-19 CXR images. More specifically, we utilized ResNet50, ResNet101, DenseNet121, DenseNet169 and InceptionV3 using transfer learning. All models were trained and validated on the largest publicly available repository for COVID-19 CXR images. Furthermore, they were evaluated on unknown data that was not used for training or validation, authenticating their performance and clarifying their usage in a medical scenario. All models achieved satisfactory performance, where ResNet101 was the superior model, achieving 96% precision, recall and accuracy. Our outcomes show the potential of deep learning models on COVID-19 medical imaging, offering a promising way toward a deeper understanding of COVID-19.
International journal of environmental research and public health
"2023-02-12T00:00:00"
[ "MariosConstantinou", "ThemisExarchos", "Aristidis GVrahatis", "PanagiotisVlamos" ]
10.3390/ijerph20032035 10.15167/2421-4248/jpmh2020.61.3.1530 10.1080/14737159.2020.1757437 10.1101/2022.02.11.22270873 10.1148/radiol.2020200642 10.2214/AJR.20.23034 10.1016/S0140-6736(20)30183-5 10.1056/NEJMra072149 10.1109/RBME.2020.2987975 10.1109/TIM.2021.3128703 10.1109/TMI.2022.3219286 10.1016/j.apacoust.2020.107279 10.1016/j.measurement.2018.05.033 10.21608/mjeer.2019.76962 10.1038/nature21056 10.1038/s41598-019-48995-4 10.1109/ACCESS.2020.3010287 10.1007/s00330-021-08050-1 10.1016/j.compbiomed.2020.103805 10.1016/j.compbiomed.2020.103792 10.1007/s13246-020-00865-4 10.1038/s41598-020-76550-z 10.1109/ACCESS.2020.2994762 10.1016/j.compbiomed.2021.105002 10.1007/s11042-022-12156-z 10.1016/j.media.2020.101797 10.1109/TMI.2013.2284099 10.1109/TMI.2013.2290491 10.1007/s13755-021-00146-8 10.1088/1361-6560/ac34b2
A novel approach for detection of COVID-19 and Pneumonia using only binary classification from chest CT-scans.
The novel Coronavirus, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), spread all over the world, causing a dramatic shift in circumstances that resulted in a massive pandemic, affecting the world's well-being and stability. It is an RNA virus that can infect both humans and animals. Diagnosing the virus as soon as possible could contain and avoid a serious COVID-19 outbreak. Current diagnostic methods such as the Reverse Transcription-Polymerase Chain Reaction (RT-PCR) and serology tests are time-consuming, expensive, and require a well-equipped laboratory for analysis, making them restrictive and inaccessible to everyone. Deep learning has grown in popularity in recent years, and it now plays a crucial role in image classification, which also involves medical imaging. Using chest CT scans, this study explores the automation of differentiating COVID-19-contaminated individuals from healthy individuals. Convolutional Neural Networks (CNNs) can be trained to detect patterns in computed tomography (CT) scans. Hence, different CNN models were used in the current study to identify variations in chest CT scans, with accuracies ranging from 91% to 98%. These architectures are built using the multiclass classification method. This study also proposes a new approach for classifying CT images that combines two binary classifiers working together, achieving 98.38% accuracy. The performances of all of these architectures are compared using different classification metrics.
Neuroscience informatics
"2023-02-07T00:00:00"
[ "SanskarHasija", "PeddaputhaAkash", "MagantiBhargav Hemanth", "AnkitKumar", "SanjeevSharma" ]
10.1016/j.neuri.2022.100069 10.1016/j.compbiomed.2020.103795 10.1016/j.chaos.2020.110120 10.1109/ACCESS.2020.3010287 10.3389/fmed.2020.608525 10.3389/fmed.2020.608525 10.1109/ACCESS.2020.3001973 10.1148/radiol.2017162326 10.1101/2020.04.12.20062661 10.1007/s00330-020-06731-x 10.1016/j.chaos.2020.109944 10.1007/s11263-019-01228-7 10.1101/2020.07.11.20151332 10.1016/j.compbiomed.2020.103805 10.1016/j.asoc.2020.106897 10.1148/ryct.2020200067 10.1016/j.eng.2020.04.010 10.1007/s13244-018-0639-9
Classification of COVID-19 from community-acquired pneumonia: Boosting the performance with capsule network and maximum intensity projection image of CT scans.
The coronavirus disease 2019 (COVID-19) and community-acquired pneumonia (CAP) present a high degree of similarity in chest computed tomography (CT) images. Therefore, a procedure for accurately and automatically distinguishing between them is crucial. A deep learning method for distinguishing COVID-19 from CAP is developed using maximum intensity projection (MIP) images from CT scans. LinkNet is employed for lung segmentation of chest CT images. MIP images are produced by superposing the maximum gray of intrapulmonary CT values. The MIP images are input into a capsule network for patient-level prediction and diagnosis of COVID-19. The network is trained using 333 CT scans (168 COVID-19/165 CAP) and validated on three external datasets containing 3581 CT scans (2110 COVID-19/1471 CAP). LinkNet achieves the highest Dice coefficient of 0.983 for lung segmentation. For the classification of COVID-19 and CAP, the capsule network with the DenseNet-121 feature extractor outperforms ResNet-50 and Inception-V3, achieving an accuracy of 0.970 on the training dataset. Without MIP or the capsule network, the accuracy decreases to 0.857 and 0.818, respectively. Accuracy scores of 0.961, 0.997, and 0.949 are achieved on the external validation datasets. The proposed method has higher or comparable sensitivity compared with ten state-of-the-art methods. The proposed method illustrates the feasibility of applying MIP images from CT scans to distinguish COVID-19 from CAP using capsule networks. MIP images provide conspicuous benefits when exploiting deep learning to detect COVID-19 lesions from CT scans and the capsule network improves COVID-19 diagnosis.
Computers in biology and medicine
"2023-02-05T00:00:00"
[ "YananWu", "QianqianQi", "ShouliangQi", "LimingYang", "HanlinWang", "HuiYu", "JianpengLi", "GangWang", "PingZhang", "ZhenyuLiang", "RongchangChen" ]
10.1016/j.compbiomed.2023.106567
PneuNet: deep learning for COVID-19 pneumonia diagnosis on chest X-ray image analysis using Vision Transformer.
A long-standing challenge in pneumonia diagnosis is recognizing the pathological lung texture, especially the ground-glass appearance pathological texture. One main difficulty lies in precisely extracting and recognizing the pathological features. Patients, especially those with mild symptoms, show very little difference in lung texture, so neither conventional computer vision methods nor convolutional neural networks perform well on pneumonia diagnosis based on chest X-ray (CXR) images. Meanwhile, the Coronavirus Disease 2019 (COVID-19) pandemic continues wreaking havoc around the world, where quick and accurate diagnosis backed by CXR images is in high demand. Rather than simply recognizing the patterns, extracting feature maps from the original CXR image is what we need in the classification process. Thus, we propose a Vision Transformer (ViT)-based model called PneuNet to make an accurate diagnosis backed by channel-based attention through X-ray images of the lung, where multi-head attention is applied on channel patches rather than feature patches. The techniques presented in this paper are oriented toward the medical application of deep neural networks and ViT. Extensive experimental results show that our method can reach 94.96% accuracy in the three-category classification problem on the test set, which outperforms previous deep learning models.
Medical & biological engineering & computing
"2023-02-01T00:00:00"
[ "TianmuWang", "ZhenguoNie", "RuijingWang", "QingfengXu", "HongshiHuang", "HandingXu", "FuguiXie", "Xin-JunLiu" ]
10.1007/s11517-022-02746-2 10.2471/BLT.07.048769 10.1016/S1473-3099(20)30120-1 10.1148/radiol.2020200432 10.1148/ryct.2020200034 10.1016/S0140-6736(20)30183-5 10.1148/radiol.2020200343 10.1109/TMI.2018.2881415 10.1038/nature14539 10.1109/TPAMI.2013.50 10.1016/j.inffus.2020.11.005 10.1016/j.aej.2021.01.011 10.1007/s13246-020-00865-4 10.1016/j.compbiomed.2020.103792 10.1016/j.cmpb.2020.105581 10.1162/neco.1997.9.8.1735 10.1016/j.slast.2021.10.011 10.1007/s10489-020-02055-x