title | authors | abstract | doi | cleaned_title | cleaned_abstract | key_phrases
---|---|---|---|---|---|---
Deep learning-enabled segmentation of ambiguous bioimages with deepflash2 | [
"Matthias Griebel",
"Dennis Segebarth",
"Nikolai Stein",
"Nina Schukraft",
"Philip Tovote",
"Robert Blum",
"Christoph M. Flath"
] | Bioimages frequently exhibit low signal-to-noise ratios due to experimental conditions, specimen characteristics, and imaging trade-offs. Reliable segmentation of such ambiguous images is difficult and laborious. Here we introduce deepflash2, a deep learning-enabled segmentation tool for bioimage analysis. The tool addresses typical challenges that may arise during the training, evaluation, and application of deep learning models on ambiguous data. The tool’s training and evaluation pipeline uses multiple expert annotations and deep model ensembles to achieve accurate results. The application pipeline supports various use-cases for expert annotations and includes a quality assurance mechanism in the form of uncertainty measures. Benchmarked against other tools, deepflash2 offers both high predictive accuracy and efficient computational resource usage. The tool is built upon established deep learning libraries and enables sharing of trained model ensembles with the research community. deepflash2 aims to simplify the integration of deep learning into bioimage analysis projects while improving accuracy and reliability. | 10.1038/s41467-023-36960-9 | deep learning-enabled segmentation of ambiguous bioimages with deepflash2 | bioimages frequently exhibit low signal-to-noise ratios due to experimental conditions, specimen characteristics, and imaging trade-offs. reliable segmentation of such ambiguous images is difficult and laborious. here we introduce deepflash2, a deep learning-enabled segmentation tool for bioimage analysis. the tool addresses typical challenges that may arise during the training, evaluation, and application of deep learning models on ambiguous data. the tool’s training and evaluation pipeline uses multiple expert annotations and deep model ensembles to achieve accurate results. the application pipeline supports various use-cases for expert annotations and includes a quality assurance mechanism in the form of uncertainty measures. 
benchmarked against other tools, deepflash2 offers both high predictive accuracy and efficient computational resource usage. the tool is built upon established deep learning libraries and enables sharing of trained model ensembles with the research community. deepflash2 aims to simplify the integration of deep learning into bioimage analysis projects while improving accuracy and reliability. | [
"bioimages",
"noise",
"experimental conditions",
"specimen characteristics",
"imaging trade-offs",
"reliable segmentation",
"such ambiguous images",
"we",
"deepflash2",
"a deep learning-enabled segmentation tool",
"bioimage analysis",
"the tool",
"typical challenges",
"that",
"the training, evaluation",
"application",
"deep learning models",
"ambiguous data",
"the tool’s training and evaluation pipeline",
"multiple expert annotations",
"deep model ensembles",
"accurate results",
"the application pipeline",
"various use-cases",
"expert annotations",
"a quality assurance mechanism",
"the form",
"uncertainty measures",
"other tools",
"deepflash2",
"both high predictive accuracy",
"efficient computational resource usage",
"the tool",
"established deep learning libraries",
"sharing",
"trained model ensembles",
"the research community",
"deepflash2",
"the integration",
"deep learning",
"bioimage analysis projects",
"accuracy",
"reliability",
"deepflash2",
"deepflash2",
"deepflash2"
] |
Automatic segmentation of inconstant fractured fragments for tibia/fibula from CT images using deep learning | [
"Hyeonjoo Kim",
"Young Dae Jeon",
"Ki Bong Park",
"Hayeong Cha",
"Moo-Sub Kim",
"Juyeon You",
"Se-Won Lee",
"Seung-Han Shin",
"Yang-Guk Chung",
"Sung Bin Kang",
"Won Seuk Jang",
"Do-Kun Yoon"
] | Orthopaedic surgeons need to correctly identify bone fragments using 2D/3D CT images before trauma surgery. Advances in deep learning technology provide good insights into trauma surgery over manual diagnosis. This study demonstrates the application of the DeepLab v3+ -based deep learning model for the automatic segmentation of fragments of the fractured tibia and fibula from CT images and the results of the evaluation of the performance of the automatic segmentation. The deep learning model, which was trained using over 11 million images, showed good performance with a global accuracy of 98.92%, a weighted intersection over the union of 0.9841, and a mean boundary F1 score of 0.8921. Moreover, deep learning performed 5–8 times faster than the experts’ recognition performed manually, which is comparatively inefficient, with almost the same significance. This study will play an important role in preoperative surgical planning for trauma surgery with convenience and speed. | 10.1038/s41598-023-47706-4 | automatic segmentation of inconstant fractured fragments for tibia/fibula from ct images using deep learning | orthopaedic surgeons need to correctly identify bone fragments using 2d/3d ct images before trauma surgery. advances in deep learning technology provide good insights into trauma surgery over manual diagnosis. this study demonstrates the application of the deeplab v3+ -based deep learning model for the automatic segmentation of fragments of the fractured tibia and fibula from ct images and the results of the evaluation of the performance of the automatic segmentation. the deep learning model, which was trained using over 11 million images, showed good performance with a global accuracy of 98.92%, a weighted intersection over the union of 0.9841, and a mean boundary f1 score of 0.8921. moreover, deep learning performed 5–8 times faster than the experts’ recognition performed manually, which is comparatively inefficient, with almost the same significance. 
this study will play an important role in preoperative surgical planning for trauma surgery with convenience and speed. | [
"orthopaedic surgeons",
"bone fragments",
"2d/3d ct images",
"trauma surgery",
"advances",
"deep learning technology",
"good insights",
"trauma surgery",
"manual diagnosis",
"this study",
"the application",
"the deeplab v3",
"-based deep learning model",
"the automatic segmentation",
"fragments",
"the fractured tibia",
"ct images",
"the results",
"the evaluation",
"the performance",
"the automatic segmentation",
"the deep learning model",
"which",
"over 11 million images",
"good performance",
"a global accuracy",
"98.92%",
"a weighted intersection",
"the union",
"a mean boundary f1 score",
"deep learning",
"the experts’ recognition",
"which",
"almost the same significance",
"this study",
"an important role",
"preoperative surgical planning",
"trauma surgery",
"convenience",
"speed",
"2d/3d",
"over 11 million",
"98.92%",
"0.9841",
"0.8921",
"5–8"
] |
Smart city urban planning using an evolutionary deep learning model | [
"Mansoor Alghamdi"
] | Following the evolution of big data collection, storage, and manipulation techniques, deep learning has drawn the attention of numerous recent studies proposing solutions for smart cities. These solutions were focusing especially on energy consumption, pollution levels, public services, and traffic management issues. Predicting urban evolution and planning is another recent concern for smart cities. In this context, this paper introduces a hybrid model that incorporates evolutionary optimization algorithms, such as Teaching–learning-based optimization (TLBO), into the functioning process of neural deep learning models, such as recurrent neural network (RNN) networks. According to the achieved simulations, deep learning enhanced by evolutionary optimizers can be an effective and promising method for predicting urban evolution of future smart cities. | 10.1007/s00500-023-08219-4 | smart city urban planning using an evolutionary deep learning model | following the evolution of big data collection, storage, and manipulation techniques, deep learning has drawn the attention of numerous recent studies proposing solutions for smart cities. these solutions were focusing especially on energy consumption, pollution levels, public services, and traffic management issues. predicting urban evolution and planning is another recent concern for smart cities. in this context, this paper introduces a hybrid model that incorporates evolutionary optimization algorithms, such as teaching–learning-based optimization (tlbo), into the functioning process of neural deep learning models, such as recurrent neural network (rnn) networks. according to the achieved simulations, deep learning enhanced by evolutionary optimizers can be an effective and promising method for predicting urban evolution of future smart cities. | [
"the evolution",
"big data collection",
"storage",
"manipulation techniques",
"deep learning",
"the attention",
"numerous recent studies",
"solutions",
"smart cities",
"these solutions",
"energy consumption",
"pollution levels",
"public services",
"traffic management issues",
"urban evolution",
"planning",
"another recent concern",
"smart cities",
"this context",
"this paper",
"a hybrid model",
"that",
"evolutionary optimization algorithms",
"teaching",
"learning-based optimization",
"tlbo",
"the functioning process",
"neural deep learning models",
"recurrent neural network (rnn) networks",
"the achieved simulations",
"deep learning",
"evolutionary optimizers",
"an effective and promising method",
"urban evolution",
"future smart cities"
] |
Deep Multi-task Learning for Animal Chest Circumference Estimation from Monocular Images | [
"Hongtao Zhang",
"Dongbing Gu"
] | The applications of deep learning algorithms with images to various scenarios have attracted significant research attention. However, application scenarios in animal breeding managements are still limited. In this paper we propose a new deep learning framework to estimate the chest circumference of domestic animals from images. This parameter is a key metric for breeding and monitoring the quality of animal in animal husbandry. We design a set of feature extraction methods based on a multi-task learning framework to address the challenging issues in the main estimation task. The multiple tasks in our proposed framework include object segmentation, keypoint estimation, and depth estimation of cow from monocular images. The domain-specific features extracted from these tasks improve upon our main estimation task. In addition, we also attempt to reduce unnecessary computations during the framework design to reduce the cost of subsequent practical implementation of the developed system. Our proposed framework is tested on our own collected dataset to evaluate its performance. | 10.1007/s12559-024-10250-y | deep multi-task learning for animal chest circumference estimation from monocular images | the applications of deep learning algorithms with images to various scenarios have attracted significant research attention. however, application scenarios in animal breeding managements are still limited. in this paper we propose a new deep learning framework to estimate the chest circumference of domestic animals from images. this parameter is a key metric for breeding and monitoring the quality of animal in animal husbandry. we design a set of feature extraction methods based on a multi-task learning framework to address the challenging issues in the main estimation task. the multiple tasks in our proposed framework include object segmentation, keypoint estimation, and depth estimation of cow from monocular images. 
the domain-specific features extracted from these tasks improve upon our main estimation task. in addition, we also attempt to reduce unnecessary computations during the framework design to reduce the cost of subsequent practical implementation of the developed system. our proposed framework is tested on our own collected dataset to evaluate its performance. | [
"the applications",
"deep learning algorithms",
"images",
"various scenarios",
"significant research attention",
"application scenarios",
"animal breeding managements",
"this paper",
"we",
"a new deep learning framework",
"the chest circumference",
"domestic animals",
"images",
"this parameter",
"a key metric",
"the quality",
"animal",
"animal husbandry",
"we",
"a set",
"feature extraction methods",
"a multi-task learning framework",
"the challenging issues",
"the main estimation task",
"the multiple tasks",
"our proposed framework",
"object segmentation",
"keypoint estimation",
"depth estimation",
"cow",
"monocular images",
"the domain-specific features",
"these tasks",
"our main estimation task",
"addition",
"we",
"unnecessary computations",
"the framework design",
"the cost",
"subsequent practical implementation",
"the developed system",
"our proposed framework",
"dataset",
"its performance"
] |
Recent advances in deep learning models: a systematic literature review | [
"Ruchika Malhotra",
"Priya Singh"
] | In recent years, deep learning has evolved as a rapidly growing and stimulating field of machine learning and has redefined state-of-the-art performances in a variety of applications. There are multiple deep learning models that have distinct architectures and capabilities. Up to the present, a large number of novel variants of these baseline deep learning models is proposed to address the shortcomings of the existing baseline models. This paper provides a comprehensive review of one hundred seven novel variants of six baseline deep learning models viz. Convolutional Neural Network, Recurrent Neural Network, Long Short Term Memory, Generative Adversarial Network, Autoencoder and Transformer Neural Network. The current review thoroughly examines the novel variants of each of the six baseline models to identify the advancements adopted by them to address one or more limitations of the respective baseline model. It is achieved by critically reviewing the novel variants based on their improved approach. It further provides the merits and demerits of incorporating the advancements in novel variants compared to the baseline deep learning model. Additionally, it reports the domain, datasets and performance measures exploited by the novel variants to make an overall judgment in terms of the improvements. This is because the performance of the deep learning models are subject to the application domain, type of datasets and may also vary on different performance measures. The critical findings of the review would facilitate the researchers and practitioners with the most recent progressions and advancements in the baseline deep learning models and guide them in selecting an appropriate novel variant of the baseline to solve deep learning based tasks in a similar setting. 
| 10.1007/s11042-023-15295-z | recent advances in deep learning models: a systematic literature review | in recent years, deep learning has evolved as a rapidly growing and stimulating field of machine learning and has redefined state-of-the-art performances in a variety of applications. there are multiple deep learning models that have distinct architectures and capabilities. up to the present, a large number of novel variants of these baseline deep learning models is proposed to address the shortcomings of the existing baseline models. this paper provides a comprehensive review of one hundred seven novel variants of six baseline deep learning models viz. convolutional neural network, recurrent neural network, long short term memory, generative adversarial network, autoencoder and transformer neural network. the current review thoroughly examines the novel variants of each of the six baseline models to identify the advancements adopted by them to address one or more limitations of the respective baseline model. it is achieved by critically reviewing the novel variants based on their improved approach. it further provides the merits and demerits of incorporating the advancements in novel variants compared to the baseline deep learning model. additionally, it reports the domain, datasets and performance measures exploited by the novel variants to make an overall judgment in terms of the improvements. this is because the performance of the deep learning models are subject to the application domain, type of datasets and may also vary on different performance measures. the critical findings of the review would facilitate the researchers and practitioners with the most recent progressions and advancements in the baseline deep learning models and guide them in selecting an appropriate novel variant of the baseline to solve deep learning based tasks in a similar setting. | [
"recent years",
"deep learning",
"a rapidly growing and stimulating field",
"machine learning",
"the-art",
"a variety",
"applications",
"multiple deep learning models",
"that",
"distinct architectures",
"capabilities",
"the present",
"a large number",
"novel variants",
"these baseline deep learning models",
"the shortcomings",
"the existing baseline models",
"this paper",
"a comprehensive review",
"one hundred seven novel variants",
"six baseline deep learning models",
"convolutional neural network",
"recurrent neural network",
"long short term memory",
"generative adversarial network",
"autoencoder",
"transformer neural network",
"the current review",
"the novel variants",
"each",
"the six baseline models",
"the advancements",
"them",
"one or more limitations",
"the respective baseline model",
"it",
"the novel variants",
"their improved approach",
"it",
"the merits",
"demerits",
"the advancements",
"novel variants",
"the baseline deep learning model",
"it",
"the domain",
"datasets",
"performance measures",
"the novel variants",
"an overall judgment",
"terms",
"the improvements",
"this",
"the performance",
"the deep learning models",
"the application domain",
"type",
"datasets",
"different performance measures",
"the critical findings",
"the review",
"the researchers",
"practitioners",
"the most recent progressions",
"advancements",
"the baseline deep learning models",
"them",
"an appropriate novel variant",
"the baseline",
"deep learning based tasks",
"a similar setting",
"recent years",
"one hundred seven",
"six",
"six",
"one"
] |
Deep Learning Based Alzheimer Disease Diagnosis: A Comprehensive Review | [
"S. Suganyadevi",
"A. Shiny Pershiya",
"K. Balasamy",
"V. Seethalakshmi",
"Saroj Bala",
"Kumud Arora"
] | Dementia encompasses a range of cognitive disorders, with Alzheimer’s Disease being the utmost widespread and devastating. AD gradually erodes memory and daily functioning through the progressive deterioration of brain cells. It poses a significant global health challenge, necessitating early identification and intervention. Detecting AD at its onset holds immense potential to predict future health outcomes for individuals. By harnessing the power of artificial intelligence and leveraging MRI scans, we could utilize advanced technology to not only classify AD patients but also predict the likelihood of them developing this life-altering condition. This paper delves into the latest advancements in Deep Learning techniques and their functions in image analysis in medical field. Its primary goals are to elucidate the intricacies of medical image processing and to elucidate and implement key findings and recommendations from recent research. | 10.1007/s42979-024-02743-2 | deep learning based alzheimer disease diagnosis: a comprehensive review | dementia encompasses a range of cognitive disorders, with alzheimer’s disease being the utmost widespread and devastating. ad gradually erodes memory and daily functioning through the progressive deterioration of brain cells. it poses a significant global health challenge, necessitating early identification and intervention. detecting ad at its onset holds immense potential to predict future health outcomes for individuals. by harnessing the power of artificial intelligence and leveraging mri scans, we could utilize advanced technology to not only classify ad patients but also predict the likelihood of them developing this life-altering condition. this paper delves into the latest advancements in deep learning techniques and their functions in image analysis in medical field. 
its primary goals are to elucidate the intricacies of medical image processing and to elucidate and implement key findings and recommendations from recent research. | [
"dementia",
"a range",
"cognitive disorders",
"alzheimer’s disease",
"ad",
"memory",
"the progressive deterioration",
"brain cells",
"it",
"a significant global health challenge",
"early identification",
"intervention",
"ad",
"its onset",
"immense potential",
"future health outcomes",
"individuals",
"the power",
"artificial intelligence",
"mri scans",
"we",
"advanced technology",
"ad patients",
"the likelihood",
"them",
"this life-altering condition",
"this paper",
"the latest advancements",
"deep learning techniques",
"their functions",
"image analysis",
"medical field",
"its primary goals",
"the intricacies",
"medical image processing",
"key findings",
"recommendations",
"recent research",
"daily"
] |
Deep Learning Models for Diagnosis of Schizophrenia Using EEG Signals: Emerging Trends, Challenges, and Prospects | [
"Rakesh Ranjan",
"Bikash Chandra Sahana",
"Ashish Kumar Bhandari"
] | Schizophrenia (ScZ) is a chronic neuropsychiatric disorder characterized by disruptions in cognitive, perceptual, social, emotional, and behavioral functions. In the traditional approach, the diagnosis of ScZ primarily relies on the subject’s response and the psychiatrist’s experience, making it highly subjective, prejudiced, and time-consuming. In recent medical research, incorporating deep learning (DL) into the diagnostic process improves performance by reducing inter-observer variation and providing qualitative and quantitative support for clinical decisions. Compared with other modalities, such as magnetic resonance images (MRI) or computed tomography (CT) scans, electroencephalogram (EEG) signals give better insights into the underlying neural mechanisms and brain biomarkers of ScZ. Deep learning models show promising results but the utilization of EEG signals as an effective biomarker for ScZ is still under research. Numerous deep learning models have recently been developed for automated ScZ diagnosis with EEG signals exclusively, yet a comprehensive assessment of these approaches still does not exist in the literature. To fill this gap, we comprehensively review the current advancements in deep learning-based schizophrenia diagnosis using EEG signals. This review is intended to provide systematic details of prominent components: deep learning models, ScZ EEG datasets, data preprocessing approaches, input data formulations for DL, chronological DL methodology advancement in ScZ diagnosis, and design trends of DL architecture. Finally, few challenges in both clinical and technical aspects that create hindrances in achieving the full potential of DL models in EEG-based ScZ diagnosis are expounded along with future outlooks. 
| 10.1007/s11831-023-10047-6 | deep learning models for diagnosis of schizophrenia using eeg signals: emerging trends, challenges, and prospects | schizophrenia (scz) is a chronic neuropsychiatric disorder characterized by disruptions in cognitive, perceptual, social, emotional, and behavioral functions. in the traditional approach, the diagnosis of scz primarily relies on the subject’s response and the psychiatrist’s experience, making it highly subjective, prejudiced, and time-consuming. in recent medical research, incorporating deep learning (dl) into the diagnostic process improves performance by reducing inter-observer variation and providing qualitative and quantitative support for clinical decisions. compared with other modalities, such as magnetic resonance images (mri) or computed tomography (ct) scans, electroencephalogram (eeg) signals give better insights into the underlying neural mechanisms and brain biomarkers of scz. deep learning models show promising results but the utilization of eeg signals as an effective biomarker for scz is still under research. numerous deep learning models have recently been developed for automated scz diagnosis with eeg signals exclusively, yet a comprehensive assessment of these approaches still does not exist in the literature. to fill this gap, we comprehensively review the current advancements in deep learning-based schizophrenia diagnosis using eeg signals. this review is intended to provide systematic details of prominent components: deep learning models, scz eeg datasets, data preprocessing approaches, input data formulations for dl, chronological dl methodology advancement in scz diagnosis, and design trends of dl architecture. finally, few challenges in both clinical and technical aspects that create hindrances in achieving the full potential of dl models in eeg-based scz diagnosis are expounded along with future outlooks. | [
"schizophrenia",
"scz",
"a chronic neuropsychiatric disorder",
"disruptions",
"cognitive, perceptual, social, emotional, and behavioral functions",
"the traditional approach",
"the diagnosis",
"scz",
"the subject’s response",
"the psychiatrist’s experience",
"it",
"recent medical research",
"deep learning",
"dl",
"the diagnostic process",
"performance",
"inter-observer variation",
"qualitative and quantitative support",
"clinical decisions",
"other modalities",
"magnetic resonance images",
"mri",
"tomography (ct) scans",
"electroencephalogram (eeg) signals",
"better insights",
"the underlying neural mechanisms",
"brain biomarkers",
"scz",
"deep learning models",
"promising results",
"the utilization",
"eeg signals",
"an effective biomarker",
"scz",
"research",
"numerous deep learning models",
"automated scz diagnosis",
"eeg signals",
"a comprehensive assessment",
"these approaches",
"the literature",
"this gap",
"we",
"the current advancements",
"deep learning-based schizophrenia diagnosis",
"eeg signals",
"this review",
"systematic details",
"prominent components",
"deep learning models",
"scz eeg datasets",
"data",
"approaches",
"input data formulations",
"dl, chronological dl methodology advancement",
"scz diagnosis",
"design trends",
"dl architecture",
"few challenges",
"both clinical and technical aspects",
"that",
"hindrances",
"the full potential",
"dl models",
"eeg-based scz diagnosis",
"future outlooks"
] |
GMPP-NN: a deep learning architecture for graph molecular property prediction | [
"Outhman Abbassi",
"Soumia Ziti",
"Meryam Belhiah",
"Souad Najoua Lagmiri",
"Yassine Zaoui Seghroucheni"
] | The pharmacy industry is highly focused on drug discovery and development for the identification and optimization of potential drug candidates. One of the key aspects of this process is the prediction of various molecular properties that justify their potential effectiveness in treating specific diseases. Recently, graph neural networks have gained significant attention, primarily due to their strong suitability for predicting complex relationships that exist between atoms and other molecular structures. GNNs require significant depth to capture global features and to allow the network to iteratively aggregate and propagate information across the entire graph structure. In this research study, we present a deep learning architecture known as a graph molecular property prediction neural network, which combines MPNN feature extraction with a multilayer perceptron classifier. The deep learning architecture was evaluated on four benchmark datasets, and its performance was compared to the SMILES transformer, fingerprint to vector, deeper graph convolutional networks, geometry-enhanced molecular, and atom-bond transformer-based message-passing neural network. The results showed that the architecture outperformed the other models using the receiver operating characteristic area under the curve metric. These findings offer an exciting opportunity to enhance and improve molecular property prediction in drug discovery and development. | 10.1007/s42452-024-05944-9 | gmpp-nn: a deep learning architecture for graph molecular property prediction | the pharmacy industry is highly focused on drug discovery and development for the identification and optimization of potential drug candidates. one of the key aspects of this process is the prediction of various molecular properties that justify their potential effectiveness in treating specific diseases.
recently, graph neural networks have gained significant attention, primarily due to their strong suitability for predicting complex relationships that exist between atoms and other molecular structures. gnns require significant depth to capture global features and to allow the network to iteratively aggregate and propagate information across the entire graph structure. in this research study, we present a deep learning architecture known as a graph molecular property prediction neural network, which combines mpnn feature extraction with a multilayer perceptron classifier. the deep learning architecture was evaluated on four benchmark datasets, and its performance was compared to the smiles transformer, fingerprint to vector, deeper graph convolutional networks, geometry-enhanced molecular, and atom-bond transformer-based message-passing neural network. the results showed that the architecture outperformed the other models using the receiver operating characteristic area under the curve metric. these findings offer an exciting opportunity to enhance and improve molecular property prediction in drug discovery and development. | [
"the pharmacy industry",
"drug discovery",
"development",
"the identification",
"optimization",
"potential drug candidates",
"the key aspects",
"this process",
"the prediction",
"various molecular properties",
"that",
"their potential effectiveness",
"specific diseases",
"graph neural networks",
"significant attention",
"their strong suitability",
"complex relationships",
"that",
"atoms",
"other molecular structures",
"gnns",
"significant depth",
"global features",
"the network",
"information",
"the entire graph structure",
"this research study",
"we",
"a deep learning architecture",
"a graph molecular property prediction neural network",
"which",
"mpnn feature extraction",
"a multilayer perceptron classifier",
"the deep learning architecture",
"four benchmark datasets",
"its performance",
"the smiles transformer",
"vector",
"deeper graph convolutional networks",
"atom-bond transformer-based message-passing neural network",
"the results",
"the architecture",
"the other models",
"the receiver operating characteristic area",
"the curve metric",
"these findings",
"an exciting opportunity",
"molecular property prediction",
"drug discovery",
"development",
"one",
"four"
] |
White-box inference attack: compromising the security of deep learning-based COVID-19 diagnosis systems | [
"Burhan Ul Haque Sheikh",
"Aasim Zafar"
] | The COVID-19 pandemic has necessitated the exploration of innovative diagnostic approaches, including the utilization of machine learning (ML) and deep learning (DL) technologies. However, recent findings shed light on the susceptibility of deep learning-based models to adversarial attacks, leading to erroneous predictions. This study investigates the vulnerability of a deep COVID-19 diagnosis model to the Fast Gradient Sign Method (FGSM) adversarial attack. Leveraging transfer learning of EfficientNet-B2 on a publicly available dataset, a deep learning-based COVID-19 diagnosis model is developed, achieving an impressive average accuracy of 94.56% on clean test data. However, when subjected to an untargeted FGSM attack with varying epsilon values, the model’s accuracy is severely compromised, plummeting to 21.72% at epsilon 0.008. Notably, the attack successfully misclassifies adversarial COVID-19 images as normal with 100% confidence. This study underscores the critical need for further research and development to address these vulnerabilities and ensure the reliability and accuracy of deep learning models in the diagnosis of COVID-19 patients. | 10.1007/s41870-023-01538-7 | white-box inference attack: compromising the security of deep learning-based covid-19 diagnosis systems | the covid-19 pandemic has necessitated the exploration of innovative diagnostic approaches, including the utilization of machine learning (ml) and deep learning (dl) technologies. however, recent findings shed light on the susceptibility of deep learning-based models to adversarial attacks, leading to erroneous predictions. this study investigates the vulnerability of a deep covid-19 diagnosis model to the fast gradient sign method (fgsm) adversarial attack. leveraging transfer learning of efficientnet-b2 on a publicly available dataset, a deep learning-based covid-19 diagnosis model is developed, achieving an impressive average accuracy of 94.56% on clean test data. 
however, when subjected to an untargeted fgsm attack with varying epsilon values, the model’s accuracy is severely compromised, plummeting to 21.72% at epsilon 0.008. notably, the attack successfully misclassifies adversarial covid-19 images as normal with 100% confidence. this study underscores the critical need for further research and development to address these vulnerabilities and ensure the reliability and accuracy of deep learning models in the diagnosis of covid-19 patients. | [
"the exploration",
"innovative diagnostic approaches",
"the utilization",
"machine learning",
"ml",
"deep learning",
"(dl) technologies",
"recent findings",
"light",
"the susceptibility",
"deep learning-based models",
"adversarial attacks",
"erroneous predictions",
"this study",
"the vulnerability",
"a deep covid-19 diagnosis model",
"the fast gradient sign method",
"fgsm) adversarial attack",
"transfer learning",
"efficientnet-b2",
"a publicly available dataset",
"a deep learning-based covid-19 diagnosis model",
"an impressive average accuracy",
"94.56%",
"clean test data",
"an untargeted fgsm attack",
"varying epsilon values",
"the model’s accuracy",
"21.72%",
"epsilon",
"the attack",
"adversarial covid-19 images",
"100% confidence",
"this study",
"the critical need",
"further research",
"development",
"these vulnerabilities",
"the reliability",
"accuracy",
"deep learning models",
"the diagnosis",
"covid-19 patients",
"covid-19",
"covid-19",
"covid-19",
"94.56%",
"21.72%",
"0.008",
"covid-19",
"100%",
"covid-19"
] |
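The untargeted FGSM rule used in the study above perturbs the input by epsilon in the sign of the input gradient of the loss. A minimal numpy sketch on a hypothetical logistic "model" (the paper's actual model is EfficientNet-B2; the weights, input, and epsilon below are invented for illustration):

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Untargeted FGSM step for a toy logistic-regression 'model'.

    For binary cross-entropy, the gradient of the loss w.r.t. the
    input x is (sigmoid(w.x + b) - y) * w; the attack moves x by
    eps in the sign of that gradient.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model confidence for class 1
    grad_x = (p - y) * w                    # dL/dx for BCE loss
    return x + eps * np.sign(grad_x)

# toy example: a clean point correctly classified as class 1
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.4, 0.1])        # w.x = 0.7 -> p ~ 0.67 -> predicted 1
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.5)

p_clean = 1.0 / (1.0 + np.exp(-(x @ w + b)))
p_adv = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
```

The perturbation is bounded by eps in the max norm, yet it flips the toy prediction, mirroring the accuracy collapse the study reports at larger epsilon values.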
Detection of vulnerabilities in blockchain smart contracts using deep learning | [
"Namya Aankur Gupta",
"Mansi Bansal",
"Seema Sharma",
"Deepti Mehrotra",
"Misha Kakkar"
] | Blockchain helps to give a sense of security as there is only one history of transactions visible to all the involved parties. Smart contracts enable users to manage significant asset amounts of finances on the blockchain without the involvement of any intermediaries. The conditions and checks that have been written in smart contract and executed to the application cannot be changed again. However, these unique features pose some other risks to the smart contract. Smart contracts have several flaws in its programmable language and methods of execution, despite being a developing technology. To build smart contracts and implement numerous complicated business logics, high-level languages are used by the developers to code smart contracts. Thus, blockchain smart contract is the most important element of any decentralized application, posing the risk for it to be attacked. So, the presence of vulnerabilities are to be taken care of on a priority basis. It is important for detection of vulnerabilities in a smart contract and only then implement and connect it with applications to ensure security of funds. The motive of the paper is to discuss how deep learning may be utilized to deliver bug-free secure smart contracts. Objective of the paper is to detect three kinds of vulnerabilities- reentrancy, timestamp and infinite loop. A deep learning model has been created for detection of smart contract vulnerabilities using graph neural networks. The performance of this model has been compared to the present automated tools and other independent methods. It has been shown that this model has greater accuracy than other methods while comparing the prediction of smart contract vulnerabilities in existing models. | 10.1007/s11276-024-03755-9 | detection of vulnerabilities in blockchain smart contracts using deep learning | blockchain helps to give a sense of security as there is only one history of transactions visible to all the involved parties. 
smart contracts enable users to manage significant asset amounts of finances on the blockchain without the involvement of any intermediaries. the conditions and checks that have been written in smart contract and executed to the application cannot be changed again. however, these unique features pose some other risks to the smart contract. smart contracts have several flaws in their programmable language and methods of execution, despite being a developing technology. to build smart contracts and implement numerous complicated business logics, high-level languages are used by the developers to code smart contracts. thus, blockchain smart contract is the most important element of any decentralized application, posing the risk for it to be attacked. so, the presence of vulnerabilities is to be taken care of on a priority basis. it is important to detect vulnerabilities in a smart contract and only then to implement and connect it with applications to ensure security of funds. the motive of the paper is to discuss how deep learning may be utilized to deliver bug-free secure smart contracts. the objective of the paper is to detect three kinds of vulnerabilities: reentrancy, timestamp, and infinite loop. a deep learning model has been created for detection of smart contract vulnerabilities using graph neural networks. the performance of this model has been compared to the present automated tools and other independent methods. it has been shown that this model has greater accuracy than other methods while comparing the prediction of smart contract vulnerabilities in existing models. | [
"blockchain",
"a sense",
"security",
"only one history",
"transactions",
"all the involved parties",
"smart contracts",
"users",
"significant asset amounts",
"finances",
"the blockchain",
"the involvement",
"any intermediaries",
"the conditions",
"checks",
"that",
"smart contract",
"the application",
"these unique features",
"some other risks",
"the smart contract",
"smart contracts",
"several flaws",
"its programmable language",
"methods",
"execution",
"a developing technology",
"smart contracts",
"numerous complicated business logics",
"high-level languages",
"the developers",
"smart contracts",
"blockchain smart contract",
"the most important element",
"any decentralized application",
"the risk",
"it",
"the presence",
"vulnerabilities",
"care",
"a priority basis",
"it",
"detection",
"vulnerabilities",
"a smart contract",
"it",
"applications",
"security",
"funds",
"the motive",
"the paper",
"how deep learning",
"bug-free secure smart contracts",
"objective",
"the paper",
"three kinds",
"vulnerabilities- reentrancy",
"timestamp",
"infinite loop",
"a deep learning model",
"detection",
"smart contract vulnerabilities",
"graph neural networks",
"the performance",
"this model",
"the present automated tools",
"other independent methods",
"it",
"this model",
"greater accuracy",
"other methods",
"the prediction",
"smart contract vulnerabilities",
"existing models",
"three"
] |
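The detector above scores contracts with graph neural networks. The core operation such models repeat is a neighbourhood-aggregation layer; a minimal mean-aggregation sketch follows, with an invented 3-node "contract graph" and made-up node features standing in for real opcode or AST features:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: average each node's neighbourhood
    (including itself via a self-loop) and apply a shared linear map
    followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    H_agg = (A_hat @ H) / deg             # mean over neighbours
    return np.maximum(0.0, H_agg @ W)

# tiny 3-node graph (0-1-2 chain) with hypothetical per-node features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.eye(2)                             # identity map keeps the toy traceable
H1 = gcn_layer(A, H, W)
graph_embedding = H1.mean(axis=0)         # pooled vector a classifier head would score
```

A real vulnerability classifier would stack several such layers with learned `W` matrices and feed the pooled embedding to a classification head; this sketch only shows the message-passing step.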
Integrating Deep Learning and Reinforcement Learning for Enhanced Financial Risk Forecasting in Supply Chain Management | [
"Yuanfei Cui",
"Fengtong Yao"
] | In today’s dynamic business landscape, the integration of supply chain management and financial risk forecasting is imperative for sustained success. This research paper introduces a groundbreaking approach that seamlessly merges deep autoencoder (DAE) models with reinforcement learning (RL) techniques to enhance financial risk forecasting within the realm of supply chain management. The primary objective of this research is to optimize financial decision-making processes by extracting key feature representations from financial data and leveraging RL for decision optimization. To achieve this, the paper presents the PSO-SDAE model, a novel and sophisticated approach to financial risk forecasting. By incorporating advanced noise reduction features and optimization algorithms, the PSO-SDAE model significantly enhances the accuracy and reliability of financial risk predictions. Notably, the PSO-SDAE model goes beyond traditional forecasting methods by addressing the need for real-time decision-making in the rapidly evolving landscape of financial risk management. This is achieved through the utilization of a distributed RL algorithm, which expedites the processing of supply chain data while maintaining both efficiency and accuracy. The results of our study showcase the exceptional precision of the PSO-SDAE model in predicting financial risks, underscoring its efficacy for proactive risk management within supply chain operations. Moreover, the augmented processing speed of the model enables real-time analysis and decision-making — a critical capability in today’s fast-paced business environment. | 10.1007/s13132-024-01946-5 | integrating deep learning and reinforcement learning for enhanced financial risk forecasting in supply chain management | in today’s dynamic business landscape, the integration of supply chain management and financial risk forecasting is imperative for sustained success. 
this research paper introduces a groundbreaking approach that seamlessly merges deep autoencoder (dae) models with reinforcement learning (rl) techniques to enhance financial risk forecasting within the realm of supply chain management. the primary objective of this research is to optimize financial decision-making processes by extracting key feature representations from financial data and leveraging rl for decision optimization. to achieve this, the paper presents the pso-sdae model, a novel and sophisticated approach to financial risk forecasting. by incorporating advanced noise reduction features and optimization algorithms, the pso-sdae model significantly enhances the accuracy and reliability of financial risk predictions. notably, the pso-sdae model goes beyond traditional forecasting methods by addressing the need for real-time decision-making in the rapidly evolving landscape of financial risk management. this is achieved through the utilization of a distributed rl algorithm, which expedites the processing of supply chain data while maintaining both efficiency and accuracy. the results of our study showcase the exceptional precision of the pso-sdae model in predicting financial risks, underscoring its efficacy for proactive risk management within supply chain operations. moreover, the augmented processing speed of the model enables real-time analysis and decision-making — a critical capability in today’s fast-paced business environment. | [
"today’s dynamic business landscape",
"the integration",
"supply chain management",
"financial risk forecasting",
"sustained success",
"this research paper",
"a groundbreaking approach",
"that",
"deep autoencoder (dae) models",
"reinforcement learning",
"(rl) techniques",
"financial risk forecasting",
"the realm",
"supply chain management",
"the primary objective",
"this research",
"financial decision-making processes",
"key feature representations",
"financial data",
"rl",
"decision optimization",
"this",
"the paper",
"the pso-sdae model",
"a novel and sophisticated approach",
"financial risk forecasting",
"advanced noise reduction features",
"optimization algorithms",
"the pso-sdae model",
"the accuracy",
"reliability",
"financial risk predictions",
"the pso-sdae model",
"traditional forecasting methods",
"the need",
"real-time decision-making",
"the rapidly evolving landscape",
"financial risk management",
"this",
"the utilization",
"a distributed rl algorithm",
"which",
"the processing",
"supply chain data",
"both efficiency",
"accuracy",
"the results",
"our study showcase",
"the exceptional precision",
"the pso-sdae model",
"financial risks",
"its efficacy",
"proactive risk management",
"supply chain operations",
"the augmented processing speed",
"the model",
"real-time analysis",
"decision-making",
"a critical capability",
"today’s fast-paced business environment",
"today",
"dae",
"today"
] |
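The abstract names a PSO-SDAE model; the PSO half (particle swarm optimisation, commonly used to tune autoencoder hyperparameters or weights) can be sketched on its own. The objective below is a stand-in quadratic, not the paper's reconstruction loss, and all constants are conventional defaults rather than values from the study:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimisation: each particle is pulled
    toward its own best-seen position and the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# toy objective standing in for the autoencoder's error surface
best, best_val = pso_minimize(lambda p: np.sum((p - 1.0) ** 2), dim=3)
```

In a PSO-SDAE-style setup the vector being optimised would encode the stacked denoising autoencoder's configuration, with `f` returning its validation error.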
ZS-DML: Zero-Shot Deep Metric Learning approach for plant leaf disease classification | [
"Davood Zabihzadeh",
"Mina Masoudifar"
] | Automatic plant disease detection plays an important role in food security. Deep learning methods are able to detect precisely various types of plant diseases but at the expense of using huge amounts of resources (processors and data). Therefore, employing few-shot or zero-shot learning methods is unavoidable. Deep Metric Learning (DML) is a widely used technique for few/zero shot learning. Existing DML methods extract features from the last hidden layer of a pre-trained deep network, which increases the dependence of the specific features on the observed classes. In this paper, the general discriminative feature learning method is used to learn general features of plant leaves. Moreover, a proxy-based loss is utilized that learns the embedding without sampling phase while having a higher convergence rate. The network is trained on the Plant Village dataset where the images are split into 32 and 6 classes as source and target, respectively. The knowledge learned from the source domain is transferred to the target in a zero-shot setting. A few samples of the target domain are presented to the network as a gallery. The network is then evaluated on the target domain. The experimental results show that by presenting few or even only one sample of new classes to the network without fine-tuning step, our method can achieve a classification accuracy of 99%/80.64% for few/one image(s) per class. | 10.1007/s11042-023-17136-5 | zs-dml: zero-shot deep metric learning approach for plant leaf disease classification | automatic plant disease detection plays an important role in food security. deep learning methods are able to detect precisely various types of plant diseases but at the expense of using huge amounts of resources (processors and data). therefore, employing few-shot or zero-shot learning methods is unavoidable. deep metric learning (dml) is a widely used technique for few/zero shot learning. 
existing dml methods extract features from the last hidden layer of a pre-trained deep network, which increases the dependence of the specific features on the observed classes. in this paper, the general discriminative feature learning method is used to learn general features of plant leaves. moreover, a proxy-based loss is utilized that learns the embedding without sampling phase while having a higher convergence rate. the network is trained on the plant village dataset where the images are split into 32 and 6 classes as source and target, respectively. the knowledge learned from the source domain is transferred to the target in a zero-shot setting. a few samples of the target domain are presented to the network as a gallery. the network is then evaluated on the target domain. the experimental results show that by presenting few or even only one sample of new classes to the network without fine-tuning step, our method can achieve a classification accuracy of 99%/80.64% for few/one image(s) per class. | [
"automatic plant disease detection",
"an important role",
"food security",
"deep learning methods",
"precisely various types",
"plant diseases",
"the expense",
"huge amounts",
"resources",
"processors",
"data",
"few-shot or zero-shot learning methods",
"deep metric learning",
"dml",
"a widely used technique",
"few/zero shot learning",
"existing dml methods",
"features",
"the last hidden layer",
"a pre-trained deep network",
"which",
"the dependence",
"the specific features",
"the observed classes",
"this paper",
"the general discriminative feature learning method",
"general features",
"plant leaves",
"a proxy-based loss",
"that",
"the embedding",
"sampling phase",
"a higher convergence rate",
"the network",
"the plant village dataset",
"the images",
"32 and 6 classes",
"source",
"target",
"the knowledge",
"the source domain",
"the target",
"a zero-shot setting",
"a few samples",
"the target domain",
"the network",
"a gallery",
"the network",
"the target domain",
"the experimental results",
"few or even only one sample",
"new classes",
"the network",
"fine-tuning step",
"our method",
"a classification accuracy",
"99%/80.64%",
"few/one image(s",
"class",
"zero",
"32",
"6",
"zero",
"only one",
"99%/80.64%",
"one"
] |
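A proxy-based loss that "learns the embedding without a sampling phase", as described above, typically scores each embedding against one proxy per class instead of mining pairs or triplets (Proxy-NCA being the classic form). A numpy sketch under that assumption, with toy proxies and embeddings:

```python
import numpy as np

def proxy_nca_loss(z, y, proxies):
    """Proxy-NCA-style loss: pull each embedding toward its class
    proxy and push it away from all other proxies; no pair/triplet
    sampling is needed."""
    # squared Euclidean distance from each embedding to every proxy
    d = ((z[:, None, :] - proxies[None, :, :]) ** 2).sum(-1)
    logits = -d                                  # nearer proxy -> larger logit
    logz = np.log(np.exp(logits).sum(axis=1))    # softmax normaliser per sample
    return np.mean(-(logits[np.arange(len(y)), y] - logz))

proxies = np.array([[1.0, 0.0], [0.0, 1.0]])     # one proxy per class
z_good = np.array([[0.9, 0.1], [0.1, 0.9]])      # embeddings near their own proxy
z_bad = np.array([[0.1, 0.9], [0.9, 0.1]])       # embeddings near the wrong proxy
loss_good = proxy_nca_loss(z_good, np.array([0, 1]), proxies)
loss_bad = proxy_nca_loss(z_bad, np.array([0, 1]), proxies)
```

Because proxies are learnable parameters, gradients flow to both the network and the proxies, which is what gives proxy losses their faster convergence relative to sampled pair-based losses.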
Deep Learning Approaches for Automatic Quality Assurance of Magnetic Resonance Images Using ACR Phantom | [
"Tarraf Torfeh",
"Souha Aouadi",
"SA Yoganathan",
"Satheesh Paloor",
"Rabih Hammoud",
"Noora Al-Hammadi"
] | BackgroundIn recent years, there has been a growing trend towards utilizing Artificial Intelligence (AI) and machine learning techniques in medical imaging, including for the purpose of automating quality assurance. In this research, we aimed to develop and evaluate various deep learning-based approaches for automatic quality assurance of Magnetic Resonance (MR) images using the American College of Radiology (ACR) standards.MethodsThe study involved the development, optimization, and testing of custom convolutional neural network (CNN) models. Additionally, popular pre-trained models such as VGG16, VGG19, ResNet50, InceptionV3, EfficientNetB0, and EfficientNetB5 were trained and tested. The use of pre-trained models, particularly those trained on the ImageNet dataset, for transfer learning was also explored. Two-class classification models were employed for assessing spatial resolution and geometric distortion, while an approach classifying the image into 10 classes representing the number of visible spokes was used for the low contrast.ResultsOur results showed that deep learning-based methods can be effectively used for MR image quality assurance and can improve the performance of these models. The low contrast test was one of the most challenging tests within the ACR phantom.ConclusionsOverall, for geometric distortion and spatial resolution, all of the deep learning models tested produced prediction accuracy of 80% or higher. The study also revealed that training the models from scratch performed slightly better compared to transfer learning. For the low contrast, our investigation emphasized the adaptability and potential of deep learning models. The custom CNN models excelled in predicting the number of visible spokes, achieving commendable accuracy, recall, precision, and F1 scores. 
| 10.1186/s12880-023-01157-5 | deep learning approaches for automatic quality assurance of magnetic resonance images using acr phantom | backgroundin recent years, there has been a growing trend towards utilizing artificial intelligence (ai) and machine learning techniques in medical imaging, including for the purpose of automating quality assurance. in this research, we aimed to develop and evaluate various deep learning-based approaches for automatic quality assurance of magnetic resonance (mr) images using the american college of radiology (acr) standards.methodsthe study involved the development, optimization, and testing of custom convolutional neural network (cnn) models. additionally, popular pre-trained models such as vgg16, vgg19, resnet50, inceptionv3, efficientnetb0, and efficientnetb5 were trained and tested. the use of pre-trained models, particularly those trained on the imagenet dataset, for transfer learning was also explored. two-class classification models were employed for assessing spatial resolution and geometric distortion, while an approach classifying the image into 10 classes representing the number of visible spokes was used for the low contrast.resultsour results showed that deep learning-based methods can be effectively used for mr image quality assurance and can improve the performance of these models. the low contrast test was one of the most challenging tests within the acr phantom.conclusionsoverall, for geometric distortion and spatial resolution, all of the deep learning models tested produced prediction accuracy of 80% or higher. the study also revealed that training the models from scratch performed slightly better compared to transfer learning. for the low contrast, our investigation emphasized the adaptability and potential of deep learning models. the custom cnn models excelled in predicting the number of visible spokes, achieving commendable accuracy, recall, precision, and f1 scores. | [
"a growing trend",
"artificial intelligence",
"ai",
"techniques",
"medical imaging",
"the purpose",
"quality assurance",
"this research",
"we",
"various deep learning-based approaches",
"automatic quality assurance",
"magnetic resonance",
"(mr) images",
"the american college",
"radiology",
"standards.methodsthe study",
"the development",
"optimization",
"testing",
"custom convolutional neural network (cnn) models",
"popular pre-trained models",
"vgg16",
"vgg19",
"resnet50",
"inceptionv3",
"efficientnetb0",
"efficientnetb5",
"the use",
"pre-trained models",
"particularly those",
"the imagenet dataset",
"transfer learning",
"two-class classification models",
"spatial resolution",
"geometric distortion",
"an approach",
"the image",
"10 classes",
"the number",
"visible spokes",
"the low contrast.resultsour results",
"deep learning-based methods",
"mr image quality assurance",
"the performance",
"these models",
"the low contrast test",
"the most challenging tests",
"the acr phantom.conclusionsoverall",
"geometric distortion",
"spatial resolution",
"all",
"the deep learning models",
"produced prediction accuracy",
"80%",
"the study",
"the models",
"scratch",
"learning",
"the low contrast",
"our investigation",
"the adaptability",
"potential",
"deep learning models",
"the custom cnn models",
"the number",
"visible spokes",
"commendable accuracy",
"recall",
"precision",
"f1 scores",
"backgroundin recent years",
"american",
"cnn",
"resnet50",
"inceptionv3",
"efficientnetb0",
"efficientnetb5",
"two",
"10",
"80%",
"cnn"
] |
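The accuracy, recall, precision, and F1 scores quoted for the phantom tests can be computed from raw label lists as below. The 10-class "visible spokes" labels are invented for illustration:

```python
def classification_report(y_true, y_pred, n_classes):
    """Per-class precision, recall, and F1 from raw label lists,
    plus overall accuracy."""
    report = {}
    for c in range(n_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        report[c] = {"precision": prec, "recall": rec, "f1": f1}
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return report, acc

# toy 10-class predictions for the number of visible low-contrast spokes
y_true = [0, 1, 2, 2, 9]
y_pred = [0, 2, 2, 2, 9]
report, acc = classification_report(y_true, y_pred, n_classes=10)
```

Reporting per-class figures matters here because the low-contrast test is the hardest: overall accuracy can stay high while individual spoke counts are systematically confused.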
Depression clinical detection model based on social media: a federated deep learning approach | [
"Yang Liu"
] | Depression can significantly impact people’s mental health, and recent research shows that social media can provide decision-making support for health-care professionals and serve as supplementary information for understanding patients’ health status. Deep learning models are also able to assess an individual’s likelihood of experiencing depression. However, data availability on social media is often limited due to privacy concerns, even though deep learning models benefit from having more data to analyze. To address this issue, this study proposes a methodological framework system for clinical decision support that uses federated deep learning (FDL) to identify individuals experiencing depression and provide intervention decisions for clinicians. The proposed framework involves evaluation of datasets from three social media platforms, and the experimental results demonstrate that our method achieves state-of-the-art results. The study aims to provide a clinical decision support system with evolvable features that can deliver precise solutions and assist health-care professionals in medical diagnosis. The proposed framework that incorporates social media data and deep learning models can provide valuable insights into patients’ health status, support personalized treatment decisions, and adapt to changing health-care needs. | 10.1007/s11227-023-05754-7 | depression clinical detection model based on social media: a federated deep learning approach | depression can significantly impact people’s mental health, and recent research shows that social media can provide decision-making support for health-care professionals and serve as supplementary information for understanding patients’ health status. deep learning models are also able to assess an individual’s likelihood of experiencing depression. however, data availability on social media is often limited due to privacy concerns, even though deep learning models benefit from having more data to analyze. 
to address this issue, this study proposes a methodological framework system for clinical decision support that uses federated deep learning (fdl) to identify individuals experiencing depression and provide intervention decisions for clinicians. the proposed framework involves evaluation of datasets from three social media platforms, and the experimental results demonstrate that our method achieves state-of-the-art results. the study aims to provide a clinical decision support system with evolvable features that can deliver precise solutions and assist health-care professionals in medical diagnosis. the proposed framework that incorporates social media data and deep learning models can provide valuable insights into patients’ health status, support personalized treatment decisions, and adapt to changing health-care needs. | [
"depression",
"people’s mental health",
"recent research",
"social media",
"decision-making support",
"health-care professionals",
"supplementary information",
"patients’ health status",
"deep learning models",
"an individual’s likelihood",
"depression",
"data availability",
"social media",
"privacy concerns",
"deep learning models",
"more data",
"this issue",
"this study",
"a methodological framework system",
"clinical decision support",
"that",
"deep learning",
"individuals",
"depression",
"intervention decisions",
"clinicians",
"the proposed framework",
"evaluation",
"datasets",
"three social media platforms",
"the experimental results",
"our method",
"the-art",
"the study",
"a clinical decision support system",
"evolvable features",
"that",
"precise solutions",
"health-care professionals",
"medical diagnosis",
"the proposed framework",
"that",
"social media data",
"deep learning models",
"valuable insights",
"patients’ health status",
"personalized treatment decisions",
"health-care needs",
"clinicians",
"three"
] |
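The federated step behind such a framework is usually FedAvg-style aggregation: each platform trains locally on its own posts, and only model weights (never raw user data) reach the server, which averages them weighted by local dataset size. A sketch with illustrative client weights and sizes:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client model updates weighted by
    local dataset size; raw (social media) data never leaves clients."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# one weight tensor from each of three platforms' local models
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]          # local dataset sizes (hypothetical)
global_w = fed_avg(clients, sizes)
```

The server would broadcast `global_w` back to the clients for the next local training round; the privacy benefit is exactly that only these aggregates cross platform boundaries.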
Detecting schizophrenia with 3D structural brain MRI using deep learning | [
"Junhao Zhang",
"Vishwanatha M. Rao",
"Ye Tian",
"Yanting Yang",
"Nicolas Acosta",
"Zihan Wan",
"Pin-Yu Lee",
"Chloe Zhang",
"Lawrence S. Kegeles",
"Scott A. Small",
"Jia Guo"
] | Schizophrenia is a chronic neuropsychiatric disorder that causes distinct structural alterations within the brain. We hypothesize that deep learning applied to a structural neuroimaging dataset could detect disease-related alteration and improve classification and diagnostic accuracy. We tested this hypothesis using a single, widely available, and conventional T1-weighted MRI scan, from which we extracted the 3D whole-brain structure using standard post-processing methods. A deep learning model was then developed, optimized, and evaluated on three open datasets with T1-weighted MRI scans of patients with schizophrenia. Our proposed model outperformed the benchmark model, which was also trained with structural MR images using a 3D CNN architecture. Our model is capable of almost perfectly (area under the ROC curve = 0.987) distinguishing schizophrenia patients from healthy controls on unseen structural MRI scans. Regional analysis localized subcortical regions and ventricles as the most predictive brain regions. Subcortical structures serve a pivotal role in cognitive, affective, and social functions in humans, and structural abnormalities of these regions have been associated with schizophrenia. Our finding corroborates that schizophrenia is associated with widespread alterations in subcortical brain structure and the subcortical structural information provides prominent features in diagnostic classification. Together, these results further demonstrate the potential of deep learning to improve schizophrenia diagnosis and identify its structural neuroimaging signatures from a single, standard T1-weighted brain MRI. | 10.1038/s41598-023-41359-z | detecting schizophrenia with 3d structural brain mri using deep learning | schizophrenia is a chronic neuropsychiatric disorder that causes distinct structural alterations within the brain. 
we hypothesize that deep learning applied to a structural neuroimaging dataset could detect disease-related alteration and improve classification and diagnostic accuracy. we tested this hypothesis using a single, widely available, and conventional t1-weighted mri scan, from which we extracted the 3d whole-brain structure using standard post-processing methods. a deep learning model was then developed, optimized, and evaluated on three open datasets with t1-weighted mri scans of patients with schizophrenia. our proposed model outperformed the benchmark model, which was also trained with structural mr images using a 3d cnn architecture. our model is capable of almost perfectly (area under the roc curve = 0.987) distinguishing schizophrenia patients from healthy controls on unseen structural mri scans. regional analysis localized subcortical regions and ventricles as the most predictive brain regions. subcortical structures serve a pivotal role in cognitive, affective, and social functions in humans, and structural abnormalities of these regions have been associated with schizophrenia. our finding corroborates that schizophrenia is associated with widespread alterations in subcortical brain structure and the subcortical structural information provides prominent features in diagnostic classification. together, these results further demonstrate the potential of deep learning to improve schizophrenia diagnosis and identify its structural neuroimaging signatures from a single, standard t1-weighted brain mri. | [
"schizophrenia",
"a chronic neuropsychiatric disorder",
"that",
"distinct structural alterations",
"the brain",
"we",
"deep learning",
"a structural neuroimaging dataset",
"disease-related alteration",
"classification and diagnostic accuracy",
"we",
"this hypothesis",
"conventional t1-weighted mri scan",
"which",
"we",
"the 3d whole-brain structure",
"standard post-processing methods",
"a deep learning model",
"three open datasets",
"t1-weighted mri scans",
"patients",
"schizophrenia",
"our proposed model",
"the benchmark model",
"which",
"structural mr images",
"a 3d cnn architecture",
"our model",
"almost perfectly (area",
"the roc curve",
"schizophrenia patients",
"healthy controls",
"unseen structural mri scans",
"regional analysis",
"subcortical regions",
"ventricles",
"the most predictive brain regions",
"subcortical structures",
"a pivotal role",
"cognitive, affective, and social functions",
"humans",
"structural abnormalities",
"these regions",
"schizophrenia",
"our finding",
"schizophrenia",
"widespread alterations",
"subcortical brain structure",
"the subcortical structural information",
"prominent features",
"diagnostic classification",
"these results",
"the potential",
"deep learning",
"schizophrenia diagnosis",
"its structural neuroimaging signatures",
"a single, standard t1-weighted brain mri",
"schizophrenia",
"3d",
"three",
"3d",
"cnn",
"roc",
"0.987",
"schizophrenia"
] |
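The reported area under the ROC curve (0.987) has a direct probabilistic reading: the chance that a randomly chosen patient's score exceeds a randomly chosen control's, with ties counted half. A small implementation of that rank statistic, on invented scores:

```python
def roc_auc(y_true, scores):
    """AUC as the probability that a random positive outscores a
    random negative (ties count as half a win)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy model scores: patients (label 1) vs healthy controls (label 0)
auc = roc_auc([1, 1, 1, 0, 0], [0.9, 0.8, 0.4, 0.5, 0.1])
```

Unlike accuracy, this statistic is threshold-free, which is why it is the standard summary for a diagnostic classifier's separability.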
Application of simultaneous uncertainty quantification and segmentation for oropharyngeal cancer use-case with Bayesian deep learning | [
"Jaakko Sahlsten",
"Joel Jaskari",
"Kareem A. Wahid",
"Sara Ahmed",
"Enrico Glerean",
"Renjie He",
"Benjamin H. Kann",
"Antti Mäkitie",
"Clifton D. Fuller",
"Mohamed A. Naser",
"Kimmo Kaski"
] | BackgroundRadiotherapy is a core treatment modality for oropharyngeal cancer (OPC), where the primary gross tumor volume (GTVp) is manually segmented with high interobserver variability. This calls for reliable and trustworthy automated tools in clinician workflow. Therefore, accurate uncertainty quantification and its downstream utilization is critical.MethodsHere we propose uncertainty-aware deep learning for OPC GTVp segmentation, and illustrate the utility of uncertainty in multiple applications. We examine two Bayesian deep learning (BDL) models and eight uncertainty measures, and utilize a large multi-institute dataset of 292 PET/CT scans to systematically analyze our approach.ResultsWe show that our uncertainty-based approach accurately predicts the quality of the deep learning segmentation in 86.6% of cases, identifies low performance cases for semi-automated correction, and visualizes regions of the scans where the segmentations likely fail.ConclusionsOur BDL-based analysis provides a first-step towards more widespread implementation of uncertainty quantification in OPC GTVp segmentation. | 10.1038/s43856-024-00528-5 | application of simultaneous uncertainty quantification and segmentation for oropharyngeal cancer use-case with bayesian deep learning | backgroundradiotherapy is a core treatment modality for oropharyngeal cancer (opc), where the primary gross tumor volume (gtvp) is manually segmented with high interobserver variability. this calls for reliable and trustworthy automated tools in clinician workflow. therefore, accurate uncertainty quantification and its downstream utilization is critical.methodshere we propose uncertainty-aware deep learning for opc gtvp segmentation, and illustrate the utility of uncertainty in multiple applications. 
we examine two bayesian deep learning (bdl) models and eight uncertainty measures, and utilize a large multi-institute dataset of 292 pet/ct scans to systematically analyze our approach.resultswe show that our uncertainty-based approach accurately predicts the quality of the deep learning segmentation in 86.6% of cases, identifies low performance cases for semi-automated correction, and visualizes regions of the scans where the segmentations likely fail.conclusionsour bdl-based analysis provides a first-step towards more widespread implementation of uncertainty quantification in opc gtvp segmentation. | [
"backgroundradiotherapy",
"a core treatment modality",
"oropharyngeal cancer",
"the primary gross tumor volume",
"gtvp",
"high interobserver variability",
"this",
"reliable and trustworthy automated tools",
"clinician workflow",
"accurate uncertainty quantification",
"its downstream utilization",
"we",
"uncertainty-aware deep learning",
"opc",
"gtvp segmentation",
"the utility",
"uncertainty",
"multiple applications",
"we",
"two bayesian deep learning",
"bdl) models",
"eight uncertainty measures",
"a large multi-institute dataset",
"292 pet/ct",
"our approach.resultswe show",
"our uncertainty-based approach",
"the quality",
"the deep learning segmentation",
"86.6%",
"cases",
"low performance cases",
"semi-automated correction",
"regions",
"the scans",
"the segmentations",
"bdl-based analysis",
"a first-step",
"more widespread implementation",
"uncertainty quantification",
"gtvp segmentation",
"bayesian",
"eight",
"292",
"86.6%",
"fail.conclusionsour bdl",
"first"
] |
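The record above mentions eight uncertainty measures for Bayesian segmentation models but does not spell them out. One standard choice is the predictive entropy of the mean prediction over stochastic forward passes (e.g. MC dropout samples). A minimal pure-Python sketch, illustrative only and not the paper's implementation:

```python
import math

def predictive_entropy(mc_probs):
    """Predictive entropy of a binary segmentation voxel.

    mc_probs: foreground probabilities from T stochastic forward
    passes of a Bayesian model. Returns entropy in nats:
    0 = certain, ln(2) ~ 0.693 = maximally uncertain.
    """
    p = sum(mc_probs) / len(mc_probs)   # mean prediction
    if p in (0.0, 1.0):                 # entropy is 0 at the extremes
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

# A voxel where the samples agree is low-uncertainty ...
confident = predictive_entropy([0.95, 0.97, 0.94, 0.96])
# ... a voxel where they disagree flags a region that likely fails.
ambiguous = predictive_entropy([0.1, 0.9, 0.2, 0.8])
```

Thresholding such a per-voxel map is one way a pipeline can surface low-performance cases for semi-automated correction, as the abstract describes.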
Insights from EEG analysis of evoked memory recalls using deep learning for emotion charting | [
"Muhammad Najam Dar",
"Muhammad Usman Akram",
"Ahmad Rauf Subhani",
"Sajid Gul Khawaja",
"Constantino Carlos Reyes-Aldasoro",
"Sarah Gul"
] | Affect recognition in a real-world, less constrained environment is the principal prerequisite of the industrial-level usefulness of this technology. Monitoring the psychological profile using smart, wearable electroencephalogram (EEG) sensors during daily activities without external stimuli, such as memory-induced emotions, is a challenging research gap in emotion recognition. This paper proposed a deep learning framework for improved memory-induced emotion recognition leveraging a combination of 1D-CNN and LSTM as feature extractors integrated with an Extreme Learning Machine (ELM) classifier. The proposed deep learning architecture, combined with the EEG preprocessing, such as the removal of the average baseline signal from each sample and extraction of EEG rhythms (delta, theta, alpha, beta, and gamma), aims to capture repetitive and continuous patterns for memory-induced emotion recognition, underexplored with deep learning techniques. This work has analyzed EEG signals using a wearable, ultra-mobile sports cap while recalling autobiographical emotional memories evoked by affect-denoting words, with self-annotation on the scale of valence and arousal. With extensive experimentation using the same dataset, the proposed framework empirically outperforms existing techniques for the emerging area of memory-induced emotion recognition with an accuracy of 65.6%. The EEG rhythms analysis, such as delta, theta, alpha, beta, and gamma, achieved 65.5%, 52.1%, 65.1%, 64.6%, and 65.0% accuracies for classification with four quadrants of valence and arousal. These results underscore the significant advancement achieved by our proposed method for the real-world environment of memory-induced emotion recognition. 
| 10.1038/s41598-024-61832-7 | insights from eeg analysis of evoked memory recalls using deep learning for emotion charting | affect recognition in a real-world, less constrained environment is the principal prerequisite of the industrial-level usefulness of this technology. monitoring the psychological profile using smart, wearable electroencephalogram (eeg) sensors during daily activities without external stimuli, such as memory-induced emotions, is a challenging research gap in emotion recognition. this paper proposed a deep learning framework for improved memory-induced emotion recognition leveraging a combination of 1d-cnn and lstm as feature extractors integrated with an extreme learning machine (elm) classifier. the proposed deep learning architecture, combined with the eeg preprocessing, such as the removal of the average baseline signal from each sample and extraction of eeg rhythms (delta, theta, alpha, beta, and gamma), aims to capture repetitive and continuous patterns for memory-induced emotion recognition, underexplored with deep learning techniques. this work has analyzed eeg signals using a wearable, ultra-mobile sports cap while recalling autobiographical emotional memories evoked by affect-denoting words, with self-annotation on the scale of valence and arousal. with extensive experimentation using the same dataset, the proposed framework empirically outperforms existing techniques for the emerging area of memory-induced emotion recognition with an accuracy of 65.6%. the eeg rhythms analysis, such as delta, theta, alpha, beta, and gamma, achieved 65.5%, 52.1%, 65.1%, 64.6%, and 65.0% accuracies for classification with four quadrants of valence and arousal. these results underscore the significant advancement achieved by our proposed method for the real-world environment of memory-induced emotion recognition. | [
"recognition",
"a real-world, less constrained environment",
"the principal prerequisite",
"the industrial-level usefulness",
"this technology",
"the psychological profile",
"smart, wearable electroencephalogram (eeg) sensors",
"daily activities",
"external stimuli",
"memory-induced emotions",
"a challenging research gap",
"emotion recognition",
"this paper",
"a deep learning framework",
"improved memory-induced emotion recognition",
"a combination",
"1d-cnn",
"lstm",
"feature extractors",
"an extreme learning machine",
"(elm) classifier",
"the proposed deep learning architecture",
"the eeg preprocessing",
"the removal",
"the average baseline signal",
"each sample",
"extraction",
"eeg rhythms",
"delta",
"theta",
"alpha",
"beta",
"gamma",
"repetitive and continuous patterns",
"memory-induced emotion recognition",
"deep learning techniques",
"this work",
"eeg signals",
"a wearable, ultra-mobile sports cap",
"autobiographical emotional memories",
"affect-denoting words",
"self-annotation",
"the scale",
"valence",
"arousal",
"extensive experimentation",
"the same dataset",
"the proposed framework",
"existing techniques",
"the emerging area",
"memory-induced emotion recognition",
"an accuracy",
"65.6%",
"the eeg rhythms analysis",
"delta",
"theta",
"alpha",
"beta",
"gamma",
"65.5%",
"52.1%",
"65.1%",
"64.6%",
"65.0% accuracies",
"classification",
"four quadrants",
"valence",
"arousal",
"these results",
"the significant advancement",
"our proposed method",
"the real-world environment",
"memory-induced emotion recognition",
"daily",
"1d",
"65.6%",
"65.5%",
"52.1%",
"65.1%",
"64.6%",
"65.0%",
"four"
] |
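The preprocessing described above (subtracting the average baseline signal from each sample before extracting the delta/theta/alpha/beta/gamma rhythms) is simple to sketch. The band limits below are conventional values, assumed here since the abstract does not list the paper's exact cut-offs:

```python
# Conventional EEG rhythm bands in Hz (typical limits; assumed,
# not taken from the paper).
EEG_BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}

def remove_baseline(sample, baseline):
    """Subtract the mean of a pre-stimulus baseline segment from
    every value of one EEG channel's sample."""
    mean_b = sum(baseline) / len(baseline)
    return [v - mean_b for v in sample]

# Drift of +10 uV in the baseline is removed from the whole sample:
corrected = remove_baseline([12.0, 13.0, 11.0],
                            baseline=[10.0, 10.0, 10.0])
```

Band-pass filtering each corrected sample with the limits in `EEG_BANDS` would then yield the five rhythm signals fed to the 1D-CNN/LSTM feature extractor.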
Pneumonia detection based on RSNA dataset and anchor-free deep learning detector | [
"Linghua Wu",
"Jing Zhang",
"Yilin Wang",
"Rong Ding",
"Yueqin Cao",
"Guiqin Liu",
"Changsheng Liufu",
"Baowei Xie",
"Shanping Kang",
"Rui Liu",
"Wenle Li",
"Furen Guan"
] | Pneumonia is a highly lethal disease, and research on its treatment and early screening tools has received extensive attention from researchers. Due to the maturity and cost reduction of chest X-ray technology, and with the development of artificial intelligence technology, pneumonia identification based on deep learning and chest X-ray has attracted attention from all over the world. Although the feature extraction capability of deep learning is strong, existing deep learning object detection frameworks are based on pre-defined anchors, which require a lot of tuning and experience to guarantee their excellent results in the face of new applications or data. To avoid the influence of anchor settings in pneumonia detection, this paper proposes an anchor-free object detection framework and RSNA dataset based on pneumonia detection. First, a data enhancement scheme is used to preprocess the chest X-ray images; second, an anchor-free object detection framework is used for pneumonia detection, which contains a feature pyramid, two-branch detection head, and focal loss. The average precision of 51.5 obtained by Intersection over Union (IoU) calculation shows that the pneumonia detection results obtained in this paper can surpass the existing classical object detection framework, providing an idea for future research and exploration. | 10.1038/s41598-024-52156-7 | pneumonia detection based on rsna dataset and anchor-free deep learning detector | pneumonia is a highly lethal disease, and research on its treatment and early screening tools has received extensive attention from researchers. due to the maturity and cost reduction of chest x-ray technology, and with the development of artificial intelligence technology, pneumonia identification based on deep learning and chest x-ray has attracted attention from all over the world. 
although the feature extraction capability of deep learning is strong, existing deep learning object detection frameworks are based on pre-defined anchors, which require a lot of tuning and experience to guarantee their excellent results in the face of new applications or data. to avoid the influence of anchor settings in pneumonia detection, this paper proposes an anchor-free object detection framework and rsna dataset based on pneumonia detection. first, a data enhancement scheme is used to preprocess the chest x-ray images; second, an anchor-free object detection framework is used for pneumonia detection, which contains a feature pyramid, two-branch detection head, and focal loss. the average precision of 51.5 obtained by intersection over union (iou) calculation shows that the pneumonia detection results obtained in this paper can surpass the existing classical object detection framework, providing an idea for future research and exploration. | [
"pneumonia",
"a highly lethal disease",
"research",
"its treatment",
"early screening tools",
"extensive attention",
"researchers",
"the maturity and cost reduction",
"chest x-ray technology",
"the development",
"artificial intelligence technology",
"pneumonia identification",
"deep learning",
"chest x",
"-",
"ray",
"attention",
"the world",
"the feature extraction capability",
"deep learning",
"existing deep learning object detection frameworks",
"pre-defined anchors",
"which",
"a lot",
"tuning",
"their excellent results",
"the face",
"new applications",
"data",
"the influence",
"anchor settings",
"pneumonia detection",
"this paper",
"an anchor-free object detection framework",
"rsna dataset",
"pneumonia detection",
"a data enhancement scheme",
"the chest x-ray images",
"an anchor-free object detection framework",
"pneumonia detection",
"which",
"a feature pyramid",
"two-branch detection head",
"focal loss",
"the average precision",
"intersection",
"union (iou) calculation",
"the pneumonia detection results",
"this paper",
"the existing classical object detection framework",
"an idea",
"future research",
"exploration",
"first",
"second",
"two",
"51.5"
] |
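The average precision reported above is computed from Intersection over Union (IoU) between predicted and ground-truth boxes. A minimal sketch of the IoU part (corner-coordinate boxes; the AP thresholding step is omitted):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted opacity box overlapping half of the ground truth:
score = iou((0, 0, 10, 10), (5, 0, 15, 10))  # -> 1/3
```

A detection counts as a true positive when its IoU with a ground-truth box exceeds a chosen threshold; averaging precision over thresholds or recall levels gives the AP figure quoted in the record.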
Comparative Study for Optimized Deep Learning-Based Road Accidents Severity Prediction Models | [
"Hussam Hijazi",
"Karim Sattar",
"Hassan M. Al-Ahmadi",
"Sami El-Ferik"
] | Road traffic accidents remain a major cause of fatalities and injuries worldwide. Effective classification of accident type and severity is crucial for prompt post-accident protocols and the development of comprehensive road safety policies. This study explores the application of deep learning techniques for predicting crash injury severity in the Eastern Province of Saudi Arabia. Five deep learning models were trained and evaluated, including various variants of feedforward multilayer perceptron, a back-propagated artificial neural network (ANN), an ANN with radial basis function (RPF), and tabular data learning network (TabNet). The models were optimized using Bayesian optimization (BO) and employed the synthetic minority oversampling technique (SMOTE) for oversampling the training dataset. While SMOTE enhanced balanced accuracy for ANN with RBF and TabNet, it compromised precision and increased recall. The results indicated that oversampling techniques did not consistently improve model performance. Additionally, significant features were identified using least absolute shrinkage and selection operator (LASSO) regularization, feature importance, and permutation importance. The results indicated that oversampling techniques did not consistently improve model performance. While SMOTE enhanced balanced accuracy for ANN with RBF and TabNet, it compromised precision and increased recall. The study's findings emphasize the consistent significance of the 'Number of Injuries Major' feature as a vital predictor in deep learning models, regardless of the selection techniques employed. These results shed light on the pivotal role played by the count of individuals with major injuries in influencing the severity of crash injuries, highlighting its potential relevance in shaping road safety policy development. 
| 10.1007/s13369-023-08510-4 | comparative study for optimized deep learning-based road accidents severity prediction models | road traffic accidents remain a major cause of fatalities and injuries worldwide. effective classification of accident type and severity is crucial for prompt post-accident protocols and the development of comprehensive road safety policies. this study explores the application of deep learning techniques for predicting crash injury severity in the eastern province of saudi arabia. five deep learning models were trained and evaluated, including various variants of feedforward multilayer perceptron, a back-propagated artificial neural network (ann), an ann with radial basis function (rpf), and tabular data learning network (tabnet). the models were optimized using bayesian optimization (bo) and employed the synthetic minority oversampling technique (smote) for oversampling the training dataset. while smote enhanced balanced accuracy for ann with rbf and tabnet, it compromised precision and increased recall. the results indicated that oversampling techniques did not consistently improve model performance. additionally, significant features were identified using least absolute shrinkage and selection operator (lasso) regularization, feature importance, and permutation importance. the results indicated that oversampling techniques did not consistently improve model performance. while smote enhanced balanced accuracy for ann with rbf and tabnet, it compromised precision and increased recall. the study's findings emphasize the consistent significance of the 'number of injuries major' feature as a vital predictor in deep learning models, regardless of the selection techniques employed. these results shed light on the pivotal role played by the count of individuals with major injuries in influencing the severity of crash injuries, highlighting its potential relevance in shaping road safety policy development. | [
"road traffic accidents",
"a major cause",
"fatalities",
"injuries",
"effective classification",
"accident type",
"severity",
"prompt post-accident protocols",
"the development",
"comprehensive road safety policies",
"this study",
"the application",
"deep learning techniques",
"crash injury severity",
"the eastern province",
"saudi arabia",
"five deep learning models",
"various variants",
"feedforward multilayer perceptron",
"a back-propagated artificial neural network",
"ann",
"radial basis function",
"rpf",
"data learning network",
"tabnet",
"the models",
"bayesian optimization",
"bo",
"the synthetic minority oversampling technique",
"smote",
"the training dataset",
"smote",
"balanced accuracy",
"ann",
"rbf",
"tabnet",
"it",
"precision",
"increased recall",
"the results",
"techniques",
"model performance",
"significant features",
"lasso",
"feature importance",
"permutation importance",
"the results",
"techniques",
"model performance",
"smote",
"balanced accuracy",
"ann",
"rbf",
"tabnet",
"it",
"precision",
"increased recall",
"the study's findings",
"the consistent significance",
"the 'number",
"injuries major' feature",
"a vital predictor",
"deep learning models",
"the selection techniques",
"these results",
"light",
"the pivotal role",
"the count",
"individuals",
"major injuries",
"the severity",
"crash injuries",
"its potential relevance",
"road safety policy development",
"saudi arabia",
"five"
] |
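The SMOTE oversampling used above generates synthetic minority-class samples by interpolating between a point and one of its nearest neighbours. A simplified pure-Python sketch of that core step (real SMOTE works per class on the full feature matrix; names and parameters here are illustrative):

```python
import random

def smote_samples(minority, k, n_new, seed=0):
    """Generate n_new synthetic minority-class points by linear
    interpolation between a random point and one of its k nearest
    neighbours (plain Euclidean distance)."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: dist(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + gap * (ni - xi)
                               for xi, ni in zip(x, nb)))
    return synthetic

pts = smote_samples([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], k=2, n_new=5)
```

Because each synthetic point is a convex combination of two real minority points, the technique densifies the minority region rather than duplicating samples, which is why it can raise balanced accuracy while shifting the precision/recall trade-off, as the study observed.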
A Resource-Efficient Deep Learning Approach to Visual-Based Cattle Geographic Origin Prediction | [
"Camellia Ray",
"Sambit Bakshi",
"Pankaj Kumar Sa",
"Ganapati Panda"
] | Customized healthcare for cattle health monitoring is essential, which aims to optimize individual animal health, thereby enhancing productivity, minimizing illness-related risks, and improving overall welfare. Tailoring healthcare practices to individual requirements guarantees that individual animals receive proper attention and intervention, resulting in better health outcomes and sustainable cattle farming practices. In this regard, the manuscript proposes a visual cues-based region prediction methodology to design a customized cattle healthcare system. The proposed automated AI healthcare system uses resource-efficient deep learning-inspired architecture for computer vision applications like performing region-wise classification. The classification mechanism can be used further to identify a cattle and the regions it belongs. Extensive experimentation has been conducted on a redesigned image dataset to identify the best-suited deep-learning framework to perform region classification for livestock, such as cattle. MobileNetV2 outperforms the considered state-of-the-art frameworks by achieving an accuracy of 93% in identifying the regions of the cattle. | 10.1007/s11036-024-02350-8 | a resource-efficient deep learning approach to visual-based cattle geographic origin prediction | customized healthcare for cattle health monitoring is essential, which aims to optimize individual animal health, thereby enhancing productivity, minimizing illness-related risks, and improving overall welfare. tailoring healthcare practices to individual requirements guarantees that individual animals receive proper attention and intervention, resulting in better health outcomes and sustainable cattle farming practices. in this regard, the manuscript proposes a visual cues-based region prediction methodology to design a customized cattle healthcare system. 
the proposed automated ai healthcare system uses resource-efficient deep learning-inspired architecture for computer vision applications like performing region-wise classification. the classification mechanism can be used further to identify a cattle and the regions it belongs. extensive experimentation has been conducted on a redesigned image dataset to identify the best-suited deep-learning framework to perform region classification for livestock, such as cattle. mobilenetv2 outperforms the considered state-of-the-art frameworks by achieving an accuracy of 93% in identifying the regions of the cattle. | [
"customized healthcare",
"cattle health monitoring",
"which",
"individual animal health",
"productivity",
"illness-related risks",
"overall welfare",
"healthcare practices",
"individual requirements",
"individual animals",
"proper attention",
"intervention",
"better health outcomes",
"sustainable cattle farming practices",
"this regard",
"the manuscript",
"a visual cues-based region prediction methodology",
"a customized cattle healthcare system",
"the proposed automated ai healthcare system",
"resource-efficient deep learning-inspired architecture",
"computer vision applications",
"region-wise classification",
"the classification mechanism",
"a cattle",
"the regions",
"it",
"extensive experimentation",
"a redesigned image dataset",
"the best-suited deep-learning framework",
"region classification",
"livestock",
"cattle",
"mobilenetv2",
"the-art",
"an accuracy",
"93%",
"the regions",
"the cattle",
"tailoring healthcare",
"mobilenetv2",
"93%"
] |
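The resource efficiency attributed to MobileNetV2 above comes largely from replacing standard convolutions with depthwise-separable ones. The parameter-count arithmetic below illustrates the saving (illustrative bookkeeping, not the paper's code):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, followed by a
    1x1 pointwise convolution that mixes channels."""
    return k * k * c_in + c_in * c_out

# A typical 3x3 layer mapping 32 -> 64 channels:
full = standard_conv_params(3, 32, 64)    # 18432 weights
light = separable_conv_params(3, 32, 64)  # 2336 weights
```

The roughly 8x reduction per layer is what makes such backbones attractive for on-farm or edge deployment of the region classifier.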
Hybrid deep learning framework for weather forecast with rainfall prediction using weather bigdata analytics | [
"C. Lalitha",
"D. Ravindran"
] | The volume and complexity of weather data, along with missing values and high correlation between collected variables, make it challenging to develop efficient deep learning frameworks that can handle data with more features. This leads to a lack of accurate and predictable weather forecasts. To develop a hybrid deep learning framework for weather forecast with rainfall prediction using weather big data analytics to ensure high detection rates. A modified planet optimization (MPO) algorithm is used for data preprocessing to remove unwanted artifacts. An improved Tuna optimization (ITO) algorithm is presented to select optimal features to avoid data dimensionality issues. A hybrid memory-augmented artificial neural network (MA-ANN) classifier is developed to improve weather early forecast detection rates. The proposed framework is validated against standard benchmark datasets such as weather underground and climate forecast system reanalysis (CFSR). The simulation results are compared with other existing state-of-the-art frameworks based on error measures (RMSE, MAPE, BIAS, R) and quality measures (accuracy, sensitivity, specificity, precision, F1-measure).The MA-ANN classifier accuracy obtained 97.65% for wunderground.com Delhi and 98.88% for Tamilnadu. The hybrid deep learning framework with rainfall prediction using weather big data analytics has shown promising results for accurate and predictable weather forecasts. The proposed framework outperforms other existing state-of-the-art frameworks, and the MA-ANN classifier has improved weather early forecast detection rates. The study demonstrates the potential of utilizing big data techniques in weather forecasting and highlights the importance of developing efficient deep learning frameworks to handle complex and high-dimensional weather data. 
| 10.1007/s11042-023-17801-9 | hybrid deep learning framework for weather forecast with rainfall prediction using weather bigdata analytics | the volume and complexity of weather data, along with missing values and high correlation between collected variables, make it challenging to develop efficient deep learning frameworks that can handle data with more features. this leads to a lack of accurate and predictable weather forecasts. to develop a hybrid deep learning framework for weather forecast with rainfall prediction using weather big data analytics to ensure high detection rates. a modified planet optimization (mpo) algorithm is used for data preprocessing to remove unwanted artifacts. an improved tuna optimization (ito) algorithm is presented to select optimal features to avoid data dimensionality issues. a hybrid memory-augmented artificial neural network (ma-ann) classifier is developed to improve weather early forecast detection rates. the proposed framework is validated against standard benchmark datasets such as weather underground and climate forecast system reanalysis (cfsr). the simulation results are compared with other existing state-of-the-art frameworks based on error measures (rmse, mape, bias, r) and quality measures (accuracy, sensitivity, specificity, precision, f1-measure).the ma-ann classifier accuracy obtained 97.65% for wunderground.com delhi and 98.88% for tamilnadu. the hybrid deep learning framework with rainfall prediction using weather big data analytics has shown promising results for accurate and predictable weather forecasts. the proposed framework outperforms other existing state-of-the-art frameworks, and the ma-ann classifier has improved weather early forecast detection rates. the study demonstrates the potential of utilizing big data techniques in weather forecasting and highlights the importance of developing efficient deep learning frameworks to handle complex and high-dimensional weather data. | [
"the volume",
"complexity",
"weather data",
"missing values",
"high correlation",
"collected variables",
"it",
"efficient deep learning frameworks",
"that",
"data",
"more features",
"this",
"a lack",
"accurate and predictable weather forecasts",
"a hybrid deep learning framework",
"weather forecast",
"rainfall prediction",
"weather big data analytics",
"high detection rates",
"a modified planet optimization",
"mpo",
"algorithm",
"data",
"unwanted artifacts",
"an improved tuna optimization",
"ito",
"algorithm",
"optimal features",
"data dimensionality issues",
"a hybrid memory-augmented artificial neural network",
"ma-ann) classifier",
"weather early forecast detection rates",
"the proposed framework",
"standard benchmark datasets",
"weather",
"climate forecast system reanalysis",
"cfsr",
"the simulation results",
"the-art",
"error measures",
"accuracy",
"sensitivity",
"specificity",
"precision",
"f1-measure).the ma-ann classifier accuracy",
"97.65%",
"98.88%",
"tamilnadu",
"the hybrid deep learning framework",
"rainfall prediction",
"weather big data analytics",
"promising results",
"accurate and predictable weather forecasts",
"the proposed framework",
"the-art",
"the ma-ann classifier",
"weather early forecast detection rates",
"the study",
"the potential",
"big data techniques",
"weather forecasting",
"the importance",
"efficient deep learning frameworks",
"complex and high-dimensional weather data",
"rmse",
"97.65%",
"wunderground.com delhi",
"98.88%"
] |
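Two of the error measures used in the evaluation above, RMSE and MAPE, can be stated directly. A minimal sketch (standard definitions; the record does not give the paper's exact formulas):

```python
import math

def rmse(actual, forecast):
    """Root-mean-square error of a forecast series."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast))
                     / len(actual))

def mape(actual, forecast):
    """Mean absolute percentage error, in percent
    (undefined when an actual value is 0)."""
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)

actual = [10.0, 20.0, 40.0]    # e.g. observed rainfall
forecast = [12.0, 18.0, 40.0]  # model output
err_rmse = rmse(actual, forecast)
err_mape = mape(actual, forecast)
```

RMSE penalizes large misses quadratically, while MAPE scales each miss by the observed magnitude, so the two can rank competing forecast models differently.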
Deep learning-based question answering: a survey | [
"Heba Abdel-Nabi",
"Arafat Awajan",
"Mostafa Z. Ali"
] | Question Answering is a crucial natural language processing task. This field of research has attracted a sudden amount of interest lately due mainly to the integration of the deep learning models in the Question Answering Systems which consequently power up many advancements and improvements. This survey aims to explore and shed light upon the recent and most powerful deep learning-based Question Answering Systems and classify them based on the deep learning model used, stating the details of the used word representation, datasets, and evaluation metrics. It aims to highlight and discuss the currently used models and give insights that direct future research to enhance this increasingly growing field. | 10.1007/s10115-022-01783-5 | deep learning-based question answering: a survey | question answering is a crucial natural language processing task. this field of research has attracted a sudden amount of interest lately due mainly to the integration of the deep learning models in the question answering systems which consequently power up many advancements and improvements. this survey aims to explore and shed light upon the recent and most powerful deep learning-based question answering systems and classify them based on the deep learning model used, stating the details of the used word representation, datasets, and evaluation metrics. it aims to highlight and discuss the currently used models and give insights that direct future research to enhance this increasingly growing field. | [
"a crucial natural language processing task",
"this field",
"research",
"a sudden amount",
"interest",
"the integration",
"the deep learning models",
"the question",
"systems",
"which",
"many advancements",
"improvements",
"this survey",
"light",
"the recent and most powerful deep learning-based question",
"systems",
"them",
"the deep learning model",
"the details",
"the used word representation",
"datasets",
"evaluation metrics",
"it",
"the currently used models",
"insights",
"that",
"future research",
"this increasingly growing field"
] |
Deep Learning Architecture for Computer Vision-based Structural Defect Detection | [
"Ruoyu Yang",
"Shubhendu Kumar Singh",
"Mostafa Tavakkoli",
"M. Amin Karami",
"Rahul Rai"
] | Structural health monitoring (SHM) refers to the implementation of a damage detection strategy for structures. Fault occurrence in these structural systems during the operation is inevitable. Efficient, fast, and precise health monitoring methods are required to proactively perform the necessary repairs and maintenance on time before it is too late. The current structural health monitoring methods involve physically attached sensors or non-contact vision-based vibration measurements. However, these methods have significant drawbacks due to the low spatial resolution, weight influence on the lightweight structure, and time/labor consumption. Recently, computer-vison-based deep learning methods like convolutional neural network (CNN) and fully convolutional neural network (FCN) have been applied for defect detection and localization, which address the aforementioned problems and obtain high accuracy. This paper proposes a novel hybrid deep learning architecture comprising CNN and temporal convolutional networks (CNN-TCN) for the computer vision-based defect detection task. Various beam samples, consisting of five different materials and various structural defects, were used to evaluate the proposed deep learning algorithms’ performance. The proposed deep learning methods treat each pixel of the video frame like a sensor to extract valuable features for defect detection. Through empirical results, we demonstrate that this ’pixel-sensor’ approach is more efficient and accurate and can achieve a better defect detection performance on different beam samples compared with the current state-of-the-art approaches, including CNN-long short-term memory (LSTM), CNN-bidirectional long short-term memory (BiLSTM), multi-scale CNN-LSTM, and CNN-gated recurrent unit(GRU) methods. 
| 10.1007/s10489-023-04654-w | deep learning architecture for computer vision-based structural defect detection | structural health monitoring (shm) refers to the implementation of a damage detection strategy for structures. fault occurrence in these structural systems during the operation is inevitable. efficient, fast, and precise health monitoring methods are required to proactively perform the necessary repairs and maintenance on time before it is too late. the current structural health monitoring methods involve physically attached sensors or non-contact vision-based vibration measurements. however, these methods have significant drawbacks due to the low spatial resolution, weight influence on the lightweight structure, and time/labor consumption. recently, computer-vison-based deep learning methods like convolutional neural network (cnn) and fully convolutional neural network (fcn) have been applied for defect detection and localization, which address the aforementioned problems and obtain high accuracy. this paper proposes a novel hybrid deep learning architecture comprising cnn and temporal convolutional networks (cnn-tcn) for the computer vision-based defect detection task. various beam samples, consisting of five different materials and various structural defects, were used to evaluate the proposed deep learning algorithms’ performance. the proposed deep learning methods treat each pixel of the video frame like a sensor to extract valuable features for defect detection. through empirical results, we demonstrate that this ’pixel-sensor’ approach is more efficient and accurate and can achieve a better defect detection performance on different beam samples compared with the current state-of-the-art approaches, including cnn-long short-term memory (lstm), cnn-bidirectional long short-term memory (bilstm), multi-scale cnn-lstm, and cnn-gated recurrent unit(gru) methods. | [
"structural health monitoring",
"shm",
"the implementation",
"a damage detection strategy",
"structures",
"fault occurrence",
"these structural systems",
"the operation",
"precise health monitoring methods",
"the necessary repairs",
"maintenance",
"time",
"it",
"the current structural health monitoring methods",
"physically attached sensors",
"non-contact vision-based vibration measurements",
"these methods",
"significant drawbacks",
"the low spatial resolution",
"weight influence",
"the lightweight structure",
"time/labor consumption",
"computer-vison-based deep learning methods",
"convolutional neural network",
"cnn",
"fully convolutional neural network",
"fcn",
"defect detection",
"localization",
"which",
"the aforementioned problems",
"high accuracy",
"this paper",
"a novel hybrid deep learning architecture",
"cnn",
"temporal convolutional networks",
"cnn-tcn",
"the computer vision-based defect detection task",
"various beam samples",
"five different materials",
"various structural defects",
"the proposed deep learning algorithms’ performance",
"the proposed deep learning methods",
"each pixel",
"the video frame",
"a sensor",
"valuable features",
"defect detection",
"empirical results",
"we",
"this ’pixel-sensor’ approach",
"a better defect detection performance",
"different beam samples",
"the-art",
"cnn-long short-term memory",
"lstm",
"cnn-bidirectional long short-term memory",
"bilstm",
"multi-scale cnn-lstm",
"cnn-gated recurrent unit(gru) methods",
"cnn",
"cnn",
"cnn-tcn",
"five",
"cnn",
"cnn",
"cnn",
"cnn"
] |
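The row above hinges on temporal convolutional networks (TCNs), whose defining operation is a causal, dilated 1-D convolution: output at time t depends only on inputs at t, t−d, t−2d, … and never on the future. A minimal pure-Python sketch of that operation (the function name and zero-padding convention are my own, not from the paper):

```python
def causal_dilated_conv1d(x, kernel, dilation=1):
    """Causal dilated 1-D convolution: out[t] sums kernel[i] * x[t - i*dilation],
    skipping taps that fall before the start of the sequence (zero padding).
    kernel[0] multiplies the current input, so no future leakage occurs."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            j = t - i * dilation  # look back i * dilation steps
            if j >= 0:            # implicit zero padding of the past
                acc += w * x[j]
        out.append(acc)
    return out
```

Stacking such layers with dilations 1, 2, 4, … gives a TCN an exponentially growing receptive field, which is what makes the "pixel-sensor" time series tractable with few layers.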
DCLGM: Fusion Recommendation Model Based on LightGBM and Deep Learning | [
"Bin Zhao",
"Bin Li",
"Jiqun Zhang",
"Wei Cao",
"Yilong Gao"
] | The recommendation system can mine valuable information according to user preferences, so it is widely used in various industries. However, the performance of recommendation systems is generally affected by the problem of data sparsity, and LightGBM can alleviate the impact caused by data sparsity to a certain extent. To this end, this paper proposes a fusion recommendation model based on the LightGBM and deep learning—CLGM model. The model is composed of LighGBM, cross network and deep neural network. First, the features in the dataset are fused and extracted through LightGBM, and the feature with the highest classification accuracy is selected as the input of the neural network layer; Then, using the cross network and the deep neural network, the linear cross combination feature relationship and nonlinear correlation relationship between high-order features are respectively obtained; finally, the results obtained by the pre-order network are linearly weighted and combined to obtain the final recommendation result. In this paper, AUC and Logloss are used as evaluation indicators to verify the model on the public dataset Criteo and dataset Avazu. The simulation experiment results show that, compared with the four typical recommendation models, the recommendation effect of this model is better. | 10.1007/s11063-024-11504-4 | dclgm: fusion recommendation model based on lightgbm and deep learning | the recommendation system can mine valuable information according to user preferences, so it is widely used in various industries. however, the performance of recommendation systems is generally affected by the problem of data sparsity, and lightgbm can alleviate the impact caused by data sparsity to a certain extent. to this end, this paper proposes a fusion recommendation model based on the lightgbm and deep learning—clgm model. the model is composed of lighgbm, cross network and deep neural network. 
first, the features in the dataset are fused and extracted through lightgbm, and the feature with the highest classification accuracy is selected as the input of the neural network layer; then, using the cross network and the deep neural network, the linear cross combination feature relationship and nonlinear correlation relationship between high-order features are respectively obtained; finally, the results obtained by the pre-order network are linearly weighted and combined to obtain the final recommendation result. in this paper, auc and logloss are used as evaluation indicators to verify the model on the public dataset criteo and dataset avazu. the simulation experiment results show that, compared with the four typical recommendation models, the recommendation effect of this model is better. | [
"the recommendation system",
"valuable information",
"user preferences",
"it",
"various industries",
"the performance",
"recommendation systems",
"the problem",
"data sparsity",
"lightgbm",
"the impact",
"data sparsity",
"a certain extent",
"this end",
"this paper",
"a fusion recommendation model",
"the lightgbm",
"deep learning",
"clgm model",
"the model",
"lighgbm",
"cross network",
"deep neural network",
"the features",
"the dataset",
"lightgbm",
"the feature",
"the highest classification accuracy",
"the input",
"the neural network layer",
"the cross network",
"the deep neural network",
"the linear cross combination feature relationship",
"nonlinear correlation relationship",
"high-order features",
"the results",
"the pre-order network",
"the final recommendation result",
"this paper",
"auc",
"logloss",
"evaluation indicators",
"the model",
"the public dataset criteo",
"dataset avazu",
"the simulation experiment results",
"the four typical recommendation models",
"the recommendation effect",
"this model",
"first",
"four"
] |
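The CLGM model above pairs a deep network with a cross network for explicit feature interactions. The paper's exact layer may differ, but the standard Deep & Cross Network cross layer it evokes computes x_{l+1} = x_0 * (w · x_l) + b + x_l element-wise, which can be sketched in a few lines:

```python
def cross_layer(x0, xl, w, b):
    """One DCN-style cross layer: x_{l+1} = x0 * (w . xl) + b + xl.
    Each application raises the maximum degree of explicit feature
    crosses by one, while the residual (+ xl) preserves lower orders."""
    s = sum(wi * xi for wi, xi in zip(w, xl))  # scalar projection w . xl
    return [x0i * s + bi + xli for x0i, bi, xli in zip(x0, b, xl)]
```

Iterating `cross_layer` with the original input as `x0` yields the "linear cross combination feature relationships" the abstract refers to; the parallel deep network handles the nonlinear ones.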
Automated assembly quality inspection by deep learning with 2D and 3D synthetic CAD data | [
"Xiaomeng Zhu",
"Pär Mårtensson",
"Lars Hanson",
"Mårten Björkman",
"Atsuto Maki"
] | In the manufacturing industry, automatic quality inspections can lead to improved product quality and productivity. Deep learning-based computer vision technologies, with their superior performance in many applications, can be a possible solution for automatic quality inspections. However, collecting a large amount of annotated training data for deep learning is expensive and time-consuming, especially for processes involving various products and human activities such as assembly. To address this challenge, we propose a method for automated assembly quality inspection using synthetic data generated from computer-aided design (CAD) models. The method involves two steps: automatic data generation and model implementation. In the first step, we generate synthetic data in two formats: two-dimensional (2D) images and three-dimensional (3D) point clouds. In the second step, we apply different state-of-the-art deep learning approaches to the data for quality inspection, including unsupervised domain adaptation, i.e., a method of adapting models across different data distributions, and transfer learning, which transfers knowledge between related tasks. We evaluate the methods in a case study of pedal car front-wheel assembly quality inspection to identify the possible optimal approach for assembly quality inspection. Our results show that the method using Transfer Learning on 2D synthetic images achieves superior performance compared with others. Specifically, it attained 95% accuracy through fine-tuning with only five annotated real images per class. With promising results, our method may be suggested for other similar quality inspection use cases. By utilizing synthetic CAD data, our method reduces the need for manual data collection and annotation. Furthermore, our method performs well on test data with different backgrounds, making it suitable for different manufacturing environments. 
| 10.1007/s10845-024-02375-6 | automated assembly quality inspection by deep learning with 2d and 3d synthetic cad data | in the manufacturing industry, automatic quality inspections can lead to improved product quality and productivity. deep learning-based computer vision technologies, with their superior performance in many applications, can be a possible solution for automatic quality inspections. however, collecting a large amount of annotated training data for deep learning is expensive and time-consuming, especially for processes involving various products and human activities such as assembly. to address this challenge, we propose a method for automated assembly quality inspection using synthetic data generated from computer-aided design (cad) models. the method involves two steps: automatic data generation and model implementation. in the first step, we generate synthetic data in two formats: two-dimensional (2d) images and three-dimensional (3d) point clouds. in the second step, we apply different state-of-the-art deep learning approaches to the data for quality inspection, including unsupervised domain adaptation, i.e., a method of adapting models across different data distributions, and transfer learning, which transfers knowledge between related tasks. we evaluate the methods in a case study of pedal car front-wheel assembly quality inspection to identify the possible optimal approach for assembly quality inspection. our results show that the method using transfer learning on 2d synthetic images achieves superior performance compared with others. specifically, it attained 95% accuracy through fine-tuning with only five annotated real images per class. with promising results, our method may be suggested for other similar quality inspection use cases. by utilizing synthetic cad data, our method reduces the need for manual data collection and annotation. 
furthermore, our method performs well on test data with different backgrounds, making it suitable for different manufacturing environments. | [
"the manufacturing industry",
"automatic quality inspections",
"improved product quality",
"productivity",
"deep learning-based computer vision technologies",
"their superior performance",
"many applications",
"a possible solution",
"automatic quality inspections",
"a large amount",
"annotated training data",
"deep learning",
"processes",
"various products",
"human activities",
"assembly",
"this challenge",
"we",
"a method",
"automated assembly quality inspection",
"synthetic data",
"computer-aided design (cad) models",
"the method",
"two steps",
"automatic data generation and model implementation",
"the first step",
"we",
"synthetic data",
"two formats",
"two-dimensional (2d) images",
"three-dimensional (3d",
"the second step",
"we",
"the-art",
"the data",
"quality inspection",
"unsupervised domain adaptation",
"i.e., a method",
"adapting models",
"different data distributions",
"transfer learning",
"which",
"knowledge",
"related tasks",
"we",
"the methods",
"a case study",
"pedal car front-wheel assembly quality inspection",
"the possible optimal approach",
"assembly quality inspection",
"our results",
"the method",
"transfer learning",
"2d synthetic images",
"superior performance",
"others",
"it",
"95% accuracy",
"fine-tuning",
"only five annotated real images",
"class",
"promising results",
"our method",
"other similar quality inspection use cases",
"synthetic cad data",
"our method",
"the need",
"manual data collection",
"annotation",
"our method",
"test data",
"different backgrounds",
"it",
"different manufacturing environments",
"two",
"first",
"two",
"two",
"2d",
"three",
"3d",
"second",
"2d",
"95%",
"only five"
] |
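The fine-tuning result above (95% accuracy from five real images per class) rests on the core idea of transfer learning: keep a pretrained feature extractor frozen and adapt only a small head on the scarce target data. A toy sketch of that split, with a perceptron standing in for the trainable head (all names and the training rule are illustrative assumptions, not the paper's setup):

```python
def train_head(features, labels, lr=0.1, epochs=50):
    """Train only a linear head on frozen features (transfer learning):
    the 'backbone' that produced the features is fixed; only the last
    layer adapts. labels are in {-1, +1}; perceptron updates on mistakes."""
    d = len(features[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified (or on the boundary)
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    """Sign of the linear head's score."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

Because only `w` and `b` are learned, a handful of labeled examples per class can suffice, which is exactly why synthetic-CAD pretraining plus tiny real fine-tuning sets is attractive.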
Domain knowledge enhanced deep learning for electrocardiogram arrhythmia classification | [
"Jie Sun \n (孙洁)"
] | Deep learning provides an effective way for automatic classification of cardiac arrhythmias, but in clinical decision-making, pure data-driven methods working as black-boxes may lead to unsatisfactory results. A promising solution is combining domain knowledge with deep learning. This paper develops a flexible and extensible framework for integrating domain knowledge with a deep neural network. The model consists of a deep neural network to capture the statistical pattern between input data and the ground-truth label, and a knowledge module to guarantee consistency with the domain knowledge. These two components are trained interactively to bring the best of both worlds. The experiments show that the domain knowledge is valuable in refining the neural network prediction and thus improves accuracy. | 10.1631/FITEE.2100519 | domain knowledge enhanced deep learning for electrocardiogram arrhythmia classification | deep learning provides an effective way for automatic classification of cardiac arrhythmias, but in clinical decision-making, pure data-driven methods working as black-boxes may lead to unsatisfactory results. a promising solution is combining domain knowledge with deep learning. this paper develops a flexible and extensible framework for integrating domain knowledge with a deep neural network. the model consists of a deep neural network to capture the statistical pattern between input data and the ground-truth label, and a knowledge module to guarantee consistency with the domain knowledge. these two components are trained interactively to bring the best of both worlds. the experiments show that the domain knowledge is valuable in refining the neural network prediction and thus improves accuracy. | [
"deep learning",
"an effective way",
"automatic classification",
"cardiac arrhythmias",
"clinical decision-making",
", pure data-driven methods",
"black-boxes",
"unsatisfactory results",
"a promising solution",
"domain knowledge",
"deep learning",
"this paper",
"a flexible and extensible framework",
"domain knowledge",
"a deep neural network",
"the model",
"a deep neural network",
"the statistical pattern",
"input data",
"the ground-truth label",
"a knowledge module",
"consistency",
"the domain knowledge",
"these two components",
"both worlds",
"the experiments",
"the domain knowledge",
"the neural network prediction",
"accuracy",
"two"
] |
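The ECG row above couples a neural network with a knowledge module that enforces consistency with domain rules. The paper trains the two interactively; as a much simpler inference-time illustration of the same idea, one can re-weight the network's class probabilities by rule-consistency scores and renormalize (function name, exponential weighting, and scores are my own assumptions):

```python
import math

def apply_domain_rules(probs, rule_scores, weight=1.0):
    """Fuse network class probabilities with domain-knowledge scores:
    classes whose rule score is low are down-weighted via exp(weight * r),
    then the result is renormalized to a probability distribution."""
    fused = [p * math.exp(weight * r) for p, r in zip(probs, rule_scores)]
    z = sum(fused)
    return [f / z for f in fused]
```

With neutral scores the network's prediction passes through unchanged; a strong rule in favor of one class can flip an uncertain prediction, which is the qualitative behavior the abstract credits with refining accuracy.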
Development of Deep Learning Color Recognition Model for Color Measurement Processes | [
"Sanghun Lee",
"Ki-Sub Kim",
"Jeong Won Kang"
] | We present a deep learning color recognition model for the color measurement process in the paint industry. Currently, spectrophotometers are primarily used for color measurements owing to their accuracy. The measurement method involves manually injecting the sample into a spectrophotometer. Our proposed method uses a webcam with a deep learning model on the stand of a spectrophotometer. Deep learning models are widely used for image and color detection. In this study, the “you only look once (YOLO)” algorithm is applied for real-time detection of color samples. Upon training various sample images using YOLO, the model could detect the sample area in real time using a webcam. An open source computer vision (OpenCV) library was used for the color recognition model, and the detected RGB color value was converted to the international commission on illumination color space (CIELAB) value, which is primarily used in the color measuring process. However, because of the mirror-like reflection of light from a surface with specular reflection, it is difficult to implement the color value using a camera. To address this problem, we compare several specular removal methods and propose the most suitable model for the color recognition model of color samples. The accuracy of the proposed model was verified by comparing the colors of various samples. Our proposed approach can easily detect samples and color values, which can contribute significantly to automatically calculating the exact amount of coloring required for the target color. | 10.1007/s42835-024-01791-1 | development of deep learning color recognition model for color measurement processes | we present a deep learning color recognition model for the color measurement process in the paint industry. currently, spectrophotometers are primarily used for color measurements owing to their accuracy. the measurement method involves manually injecting the sample into a spectrophotometer. 
our proposed method uses a webcam with a deep learning model on the stand of a spectrophotometer. deep learning models are widely used for image and color detection. in this study, the “you only look once (yolo)” algorithm is applied for real-time detection of color samples. upon training various sample images using yolo, the model could detect the sample area in real time using a webcam. an open source computer vision (opencv) library was used for the color recognition model, and the detected rgb color value was converted to the international commission on illumination color space (cielab) value, which is primarily used in the color measuring process. however, because of the mirror-like reflection of light from a surface with specular reflection, it is difficult to implement the color value using a camera. to address this problem, we compare several specular removal methods and propose the most suitable model for the color recognition model of color samples. the accuracy of the proposed model was verified by comparing the colors of various samples. our proposed approach can easily detect samples and color values, which can contribute significantly to automatically calculating the exact amount of coloring required for the target color. | [
"we",
"a deep learning color recognition model",
"the color measurement process",
"the paint industry",
"spectrophotometers",
"color measurements",
"their accuracy",
"the measurement method",
"the sample",
"a spectrophotometer",
"our proposed method",
"a webcam",
"a deep learning model",
"the stand",
"a spectrophotometer",
"deep learning models",
"image",
"color detection",
"this study",
"you",
"algorithm",
"real-time detection",
"color samples",
"various sample images",
"yolo",
"the model",
"the sample area",
"real time",
"a webcam",
"an open source computer vision",
"(opencv) library",
"the color recognition model",
"the detected rgb color value",
"the international commission",
"illumination color space",
"(cielab) value",
"which",
"the color measuring process",
"the mirror-like reflection",
"light",
"a surface",
"specular reflection",
"it",
"the color value",
"a camera",
"this problem",
"we",
"several specular removal methods",
"the most suitable model",
"the color recognition model",
"color samples",
"the accuracy",
"the proposed model",
"the colors",
"various samples",
"our proposed approach",
"samples",
"color values",
"which",
"the exact amount",
"the target color",
"deep"
] |
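The color-recognition row above converts detected RGB values to CIELAB, the space used in color measurement. That conversion is standardized: undo the sRGB gamma, map linear RGB to XYZ with the sRGB/D65 matrix, normalize by the D65 white point, and apply the Lab transfer function. A self-contained sketch (assuming 8-bit sRGB input and the D65 reference white; the paper's OpenCV pipeline may use slightly different constants):

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65 reference white)."""
    def lin(c):  # inverse sRGB gamma, IEC 61966-2-1
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB matrix, D65)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    # normalize by the D65 white point
    xn, yn, zn = x / 0.95047, y / 1.0, z / 1.08883

    def f(t):  # CIELAB transfer function with linear toe
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(xn), f(yn), f(zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Sanity checks: pure white maps to L* ≈ 100 with a* ≈ b* ≈ 0, and pure black to (0, 0, 0), which is why Lab is convenient for perceptual color differences.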
Deep learning nomogram for predicting neoadjuvant chemotherapy response in locally advanced gastric cancer patients | [
"Jingjing Zhang",
"Qiang Zhang",
"Bo Zhao",
"Gaofeng Shi"
] | PurposeDeveloped and validated a deep learning radiomics nomogram using multi-phase contrast-enhanced computed tomography (CECT) images to predict neoadjuvant chemotherapy (NAC) response in locally advanced gastric cancer (LAGC) patients.MethodsThis multi-center study retrospectively included 322 patients diagnosed with gastric cancer from January 2013 to June 2023 at two hospitals. Handcrafted radiomics technique and the EfficientNet V2 neural network were applied to arterial, portal venous, and delayed phase CT images to extract two-dimensional handcrafted and deep learning features. A nomogram model was built by integrating the handcrafted signature, the deep learning signature, with clinical features. Discriminative ability was assessed using the receiver operating characteristics (ROC) curve and the precision-recall (P-R) curve. Model fitting was evaluated using calibration curves, and clinical utility was assessed through decision curve analysis (DCA).ResultsThe nomogram exhibited excellent performance. The area under the ROC curve (AUC) was 0.848 [95% confidence interval (CI), 0.793–0.893)], 0.802 (95% CI 0.688–0.889), and 0.751 (95% CI 0.652–0.833) for the training, internal validation, and external validation sets, respectively. The AUCs of the P-R curves were 0.838 (95% CI 0.756–0.895), 0.541 (95% CI 0.329–0.740), and 0.556 (95% CI 0.376–0.722) for the corresponding sets. The nomogram outperformed the clinical model and handcrafted signature across all sets (all P < 0.05). 
The nomogram model demonstrated good calibration and provided greater net benefit within the relevant threshold range compared to other models.ConclusionThis study created a deep learning nomogram using CECT images and clinical data to predict NAC response in LAGC patients undergoing surgical resection, offering personalized treatment insights.Graphical abstract | 10.1007/s00261-024-04331-7 | deep learning nomogram for predicting neoadjuvant chemotherapy response in locally advanced gastric cancer patients | purposedeveloped and validated a deep learning radiomics nomogram using multi-phase contrast-enhanced computed tomography (cect) images to predict neoadjuvant chemotherapy (nac) response in locally advanced gastric cancer (lagc) patients.methodsthis multi-center study retrospectively included 322 patients diagnosed with gastric cancer from january 2013 to june 2023 at two hospitals. handcrafted radiomics technique and the efficientnet v2 neural network were applied to arterial, portal venous, and delayed phase ct images to extract two-dimensional handcrafted and deep learning features. a nomogram model was built by integrating the handcrafted signature, the deep learning signature, with clinical features. discriminative ability was assessed using the receiver operating characteristics (roc) curve and the precision-recall (p-r) curve. model fitting was evaluated using calibration curves, and clinical utility was assessed through decision curve analysis (dca).resultsthe nomogram exhibited excellent performance. the area under the roc curve (auc) was 0.848 [95% confidence interval (ci), 0.793–0.893)], 0.802 (95% ci 0.688–0.889), and 0.751 (95% ci 0.652–0.833) for the training, internal validation, and external validation sets, respectively. the aucs of the p-r curves were 0.838 (95% ci 0.756–0.895), 0.541 (95% ci 0.329–0.740), and 0.556 (95% ci 0.376–0.722) for the corresponding sets. 
the nomogram outperformed the clinical model and handcrafted signature across all sets (all p < 0.05). the nomogram model demonstrated good calibration and provided greater net benefit within the relevant threshold range compared to other models.conclusionthis study created a deep learning nomogram using cect images and clinical data to predict nac response in lagc patients undergoing surgical resection, offering personalized treatment insights.graphical abstract | [
"a deep learning radiomics nomogram",
"multi-phase contrast-enhanced computed tomography",
"cect",
"neoadjuvant chemotherapy (nac) response",
"locally advanced gastric cancer",
"lagc",
"patients.methodsthis multi-center study",
"322 patients",
"gastric cancer",
"january",
"june",
"two hospitals",
"handcrafted radiomics technique",
"the efficientnet v2 neural network",
"arterial, portal venous, and delayed phase ct images",
"two-dimensional handcrafted and deep learning features",
"a nomogram model",
"the handcrafted signature",
"the deep learning signature",
"clinical features",
"discriminative ability",
"the receiver operating characteristics",
"(roc) curve",
"the precision-recall (p-r) curve",
"model fitting",
"calibration curves",
"clinical utility",
"decision curve analysis",
"dca).resultsthe nomogram",
"excellent performance",
"the area",
"the roc curve",
"auc",
"[95% confidence interval",
"ci",
"0.793–0.893",
"0.802 (95%",
"ci 0.688–0.889",
"(95%",
"ci",
"the training",
"internal validation",
"external validation sets",
"the aucs",
"the p-r curves",
"0.756–0.895",
"(95%",
"ci 0.329–0.740",
"ci 0.376–0.722",
"the corresponding sets",
"the nomogram",
"the clinical model",
"signature",
"all sets",
"all p",
"the nomogram model",
"good calibration",
"greater net benefit",
"the relevant threshold range",
"other models.conclusionthis study",
"a deep learning nomogram",
"cect images",
"clinical data",
"nac response",
"lagc patients",
"surgical resection",
"personalized treatment insights.graphical abstract",
"322",
"january 2013 to june 2023",
"two",
"two",
"roc",
"roc",
"0.848",
"95%",
"0.802",
"95%",
"0.688–0.889",
"0.751",
"95%",
"0.652–0.833",
"0.838",
"95%",
"0.541",
"95%",
"0.329–0.740",
"0.556",
"95%"
] |
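The nomogram row above reports discrimination as AUC of the ROC curve. AUC has a rank-based interpretation (the Mann-Whitney U statistic): the probability that a randomly chosen positive case is scored above a randomly chosen negative one, with ties counting half. A minimal sketch of that computation:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs where the positive is
    scored higher; tied scores contribute 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form is O(P·N) and fine for illustration; production code sorts once and uses ranks. It also makes clear why AUC 0.848 means the model ranks a random responder above a random non-responder about 85% of the time.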
Towards a universal mechanism for successful deep learning | [
"Yuval Meir",
"Yarden Tzach",
"Shiri Hodassman",
"Ofek Tevet",
"Ido Kanter"
] | Recently, the underlying mechanism for successful deep learning (DL) was presented based on a quantitative method that measures the quality of a single filter in each layer of a DL model, particularly VGG-16 trained on CIFAR-10. This method exemplifies that each filter identifies small clusters of possible output labels, with additional noise selected as labels outside the clusters. This feature is progressively sharpened with each layer, resulting in an enhanced signal-to-noise ratio (SNR), which leads to an increase in the accuracy of the DL network. In this study, this mechanism is verified for VGG-16 and EfficientNet-B0 trained on the CIFAR-100 and ImageNet datasets, and the main results are as follows. First, the accuracy and SNR progressively increase with the layers. Second, for a given deep architecture, the maximal error rate increases approximately linearly with the number of output labels. Third, similar trends were obtained for dataset labels in the range [3, 1000], thus supporting the universality of this mechanism. Understanding the performance of a single filter and its dominating features paves the way to highly dilute the deep architecture without affecting its overall accuracy, and this can be achieved by applying the filter’s cluster connections (AFCC). | 10.1038/s41598-024-56609-x | towards a universal mechanism for successful deep learning | recently, the underlying mechanism for successful deep learning (dl) was presented based on a quantitative method that measures the quality of a single filter in each layer of a dl model, particularly vgg-16 trained on cifar-10. this method exemplifies that each filter identifies small clusters of possible output labels, with additional noise selected as labels outside the clusters. this feature is progressively sharpened with each layer, resulting in an enhanced signal-to-noise ratio (snr), which leads to an increase in the accuracy of the dl network. 
in this study, this mechanism is verified for vgg-16 and efficientnet-b0 trained on the cifar-100 and imagenet datasets, and the main results are as follows. first, the accuracy and snr progressively increase with the layers. second, for a given deep architecture, the maximal error rate increases approximately linearly with the number of output labels. third, similar trends were obtained for dataset labels in the range [3, 1000], thus supporting the universality of this mechanism. understanding the performance of a single filter and its dominating features paves the way to highly dilute the deep architecture without affecting its overall accuracy, and this can be achieved by applying the filter’s cluster connections (afcc). | [
"the underlying mechanism",
"successful deep learning",
"dl",
"a quantitative method",
"that",
"the quality",
"a single filter",
"each layer",
"a dl model",
"particularly vgg-16",
"cifar-10",
"this method",
"each filter",
"small clusters",
"possible output labels",
"additional noise",
"labels",
"the clusters",
"this feature",
"each layer",
"noise",
"snr",
"which",
"an increase",
"the accuracy",
"the dl network",
"this study",
"this mechanism",
"vgg-16",
"efficientnet-b0",
"the cifar-100",
"imagenet",
"datasets",
"the main results",
"the accuracy",
"the layers",
"a given deep architecture",
"the maximal error rate",
"the number",
"output labels",
"similar trends",
"dataset labels",
"the range",
"the universality",
"this mechanism",
"the performance",
"a single filter",
"its dominating features",
"the way",
"the deep architecture",
"its overall accuracy",
"this",
"the filter’s cluster connections",
"afcc",
"cifar-10",
"vgg-16",
"first",
"second",
"third",
"3",
"1000"
] |
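The row above describes measuring a single filter's quality by how sharply its activation concentrates on a small cluster of output labels, with activity outside the cluster treated as noise. As a crude stand-in for that idea (this is my own simplified metric, not the paper's quantitative method): take each class's mean activation for one filter, call the top-k classes the cluster, and form a signal-to-noise ratio.

```python
def filter_snr(class_means, cluster_size=2):
    """Toy per-filter signal-to-noise ratio: 'signal' is the mean
    activation over the cluster_size most-activated labels, 'noise'
    the mean over the remaining labels. Requires more classes than
    cluster_size so the noise set is non-empty."""
    ranked = sorted(class_means, reverse=True)
    signal = sum(ranked[:cluster_size]) / cluster_size
    rest = ranked[cluster_size:]
    noise = sum(rest) / len(rest)
    return signal / noise
```

Under this picture, the paper's finding that SNR grows with depth corresponds to this ratio increasing layer by layer as filters specialize on fewer labels.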
White-box inference attack: compromising the security of deep learning-based COVID-19 diagnosis systems | [
"Burhan Ul Haque Sheikh",
"Aasim Zafar"
] | The COVID-19 pandemic has necessitated the exploration of innovative diagnostic approaches, including the utilization of machine learning (ML) and deep learning (DL) technologies. However, recent findings shed light on the susceptibility of deep learning-based models to adversarial attacks, leading to erroneous predictions. This study investigates the vulnerability of a deep COVID-19 diagnosis model to the Fast Gradient Sign Method (FGSM) adversarial attack. Leveraging transfer learning of EfficientNet-B2 on a publicly available dataset, a deep learning-based COVID-19 diagnosis model is developed, achieving an impressive average accuracy of 94.56% on clean test data. However, when subjected to an untargeted FGSM attack with varying epsilon values, the model’s accuracy is severely compromised, plummeting to 21.72% at epsilon 0.008. Notably, the attack successfully misclassifies adversarial COVID-19 images as normal with 100% confidence. This study underscores the critical need for further research and development to address these vulnerabilities and ensure the reliability and accuracy of deep learning models in the diagnosis of COVID-19 patients. | 10.1007/s41870-023-01538-7 | white-box inference attack: compromising the security of deep learning-based covid-19 diagnosis systems | the covid-19 pandemic has necessitated the exploration of innovative diagnostic approaches, including the utilization of machine learning (ml) and deep learning (dl) technologies. however, recent findings shed light on the susceptibility of deep learning-based models to adversarial attacks, leading to erroneous predictions. this study investigates the vulnerability of a deep covid-19 diagnosis model to the fast gradient sign method (fgsm) adversarial attack. leveraging transfer learning of efficientnet-b2 on a publicly available dataset, a deep learning-based covid-19 diagnosis model is developed, achieving an impressive average accuracy of 94.56% on clean test data. 
however, when subjected to an untargeted fgsm attack with varying epsilon values, the model’s accuracy is severely compromised, plummeting to 21.72% at epsilon 0.008. notably, the attack successfully misclassifies adversarial covid-19 images as normal with 100% confidence. this study underscores the critical need for further research and development to address these vulnerabilities and ensure the reliability and accuracy of deep learning models in the diagnosis of covid-19 patients. | [
"the exploration",
"innovative diagnostic approaches",
"the utilization",
"machine learning",
"ml",
"deep learning",
"(dl) technologies",
"recent findings",
"light",
"the susceptibility",
"deep learning-based models",
"adversarial attacks",
"erroneous predictions",
"this study",
"the vulnerability",
"a deep covid-19 diagnosis model",
"the fast gradient sign method",
"fgsm) adversarial attack",
"transfer learning",
"efficientnet-b2",
"a publicly available dataset",
"a deep learning-based covid-19 diagnosis model",
"an impressive average accuracy",
"94.56%",
"clean test data",
"an untargeted fgsm attack",
"varying epsilon values",
"the model’s accuracy",
"21.72%",
"epsilon",
"the attack",
"adversarial covid-19 images",
"100% confidence",
"this study",
"the critical need",
"further research",
"development",
"these vulnerabilities",
"the reliability",
"accuracy",
"deep learning models",
"the diagnosis",
"covid-19 patients",
"covid-19",
"covid-19",
"covid-19",
"94.56%",
"21.72%",
"0.008",
"covid-19",
"100%",
"covid-19"
] |
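The attack in the row above is the Fast Gradient Sign Method: x_adv = x + ε · sign(∇_x L). The paper applies it to a deep EfficientNet; the same attack is easiest to see on a one-layer logistic model, where the input gradient of the cross-entropy loss is analytic, ∇_x L = (p − y) · w (this reduction to a linear model is my illustration, not the paper's setup):

```python
import math

def fgsm_perturb(x, y, w, b, eps):
    """FGSM on a logistic model p = sigmoid(w.x + b):
    x_adv = x + eps * sign(dL/dx), with dL/dx = (p - y) * w
    for the cross-entropy loss and label y in {0, 1}."""
    p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
    grad = [(p - y) * wi for wi in w]

    def sign(g):
        return (g > 0) - (g < 0)

    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

Even here the attack's economy is visible: every coordinate moves by exactly ε in the loss-increasing direction, and a confidently correct input can be pushed across the decision boundary in one step, mirroring the accuracy collapse the abstract reports at small ε.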
Detection of vulnerabilities in blockchain smart contracts using deep learning | [
"Namya Aankur Gupta",
"Mansi Bansal",
"Seema Sharma",
"Deepti Mehrotra",
"Misha Kakkar"
] | Blockchain helps to give a sense of security as there is only one history of transactions visible to all the involved parties. Smart contracts enable users to manage significant asset amounts of finances on the blockchain without the involvement of any intermediaries. The conditions and checks that have been written in smart contract and executed to the application cannot be changed again. However, these unique features pose some other risks to the smart contract. Smart contracts have several flaws in its programmable language and methods of execution, despite being a developing technology. To build smart contracts and implement numerous complicated business logics, high-level languages are used by the developers to code smart contracts. Thus, blockchain smart contract is the most important element of any decentralized application, posing the risk for it to be attacked. So, the presence of vulnerabilities are to be taken care of on a priority basis. It is important for detection of vulnerabilities in a smart contract and only then implement and connect it with applications to ensure security of funds. The motive of the paper is to discuss how deep learning may be utilized to deliver bug-free secure smart contracts. Objective of the paper is to detect three kinds of vulnerabilities- reentrancy, timestamp and infinite loop. A deep learning model has been created for detection of smart contract vulnerabilities using graph neural networks. The performance of this model has been compared to the present automated tools and other independent methods. It has been shown that this model has greater accuracy than other methods while comparing the prediction of smart contract vulnerabilities in existing models. | 10.1007/s11276-024-03755-9 | detection of vulnerabilities in blockchain smart contracts using deep learning | blockchain helps to give a sense of security as there is only one history of transactions visible to all the involved parties. 
smart contracts enable users to manage significant asset amounts of finances on the blockchain without the involvement of any intermediaries. the conditions and checks that have been written in smart contract and executed to the application cannot be changed again. however, these unique features pose some other risks to the smart contract. smart contracts have several flaws in its programmable language and methods of execution, despite being a developing technology. to build smart contracts and implement numerous complicated business logics, high-level languages are used by the developers to code smart contracts. thus, blockchain smart contract is the most important element of any decentralized application, posing the risk for it to be attacked. so, the presence of vulnerabilities are to be taken care of on a priority basis. it is important for detection of vulnerabilities in a smart contract and only then implement and connect it with applications to ensure security of funds. the motive of the paper is to discuss how deep learning may be utilized to deliver bug-free secure smart contracts. objective of the paper is to detect three kinds of vulnerabilities- reentrancy, timestamp and infinite loop. a deep learning model has been created for detection of smart contract vulnerabilities using graph neural networks. the performance of this model has been compared to the present automated tools and other independent methods. it has been shown that this model has greater accuracy than other methods while comparing the prediction of smart contract vulnerabilities in existing models. | [
"blockchain",
"a sense",
"security",
"only one history",
"transactions",
"all the involved parties",
"smart contracts",
"users",
"significant asset amounts",
"finances",
"the blockchain",
"the involvement",
"any intermediaries",
"the conditions",
"checks",
"that",
"smart contract",
"the application",
"these unique features",
"some other risks",
"the smart contract",
"smart contracts",
"several flaws",
"its programmable language",
"methods",
"execution",
"a developing technology",
"smart contracts",
"numerous complicated business logics",
"high-level languages",
"the developers",
"smart contracts",
"blockchain smart contract",
"the most important element",
"any decentralized application",
"the risk",
"it",
"the presence",
"vulnerabilities",
"care",
"a priority basis",
"it",
"detection",
"vulnerabilities",
"a smart contract",
"it",
"applications",
"security",
"funds",
"the motive",
"the paper",
"how deep learning",
"bug-free secure smart contracts",
"objective",
"the paper",
"three kinds",
"vulnerabilities- reentrancy",
"timestamp",
"infinite loop",
"a deep learning model",
"detection",
"smart contract vulnerabilities",
"graph neural networks",
"the performance",
"this model",
"the present automated tools",
"other independent methods",
"it",
"this model",
"greater accuracy",
"other methods",
"the prediction",
"smart contract vulnerabilities",
"existing models",
"three"
] |
Integrating Deep Learning and Reinforcement Learning for Enhanced Financial Risk Forecasting in Supply Chain Management | [
"Yuanfei Cui",
"Fengtong Yao"
] | In today’s dynamic business landscape, the integration of supply chain management and financial risk forecasting is imperative for sustained success. This research paper introduces a groundbreaking approach that seamlessly merges deep autoencoder (DAE) models with reinforcement learning (RL) techniques to enhance financial risk forecasting within the realm of supply chain management. The primary objective of this research is to optimize financial decision-making processes by extracting key feature representations from financial data and leveraging RL for decision optimization. To achieve this, the paper presents the PSO-SDAE model, a novel and sophisticated approach to financial risk forecasting. By incorporating advanced noise reduction features and optimization algorithms, the PSO-SDAE model significantly enhances the accuracy and reliability of financial risk predictions. Notably, the PSO-SDAE model goes beyond traditional forecasting methods by addressing the need for real-time decision-making in the rapidly evolving landscape of financial risk management. This is achieved through the utilization of a distributed RL algorithm, which expedites the processing of supply chain data while maintaining both efficiency and accuracy. The results of our study showcase the exceptional precision of the PSO-SDAE model in predicting financial risks, underscoring its efficacy for proactive risk management within supply chain operations. Moreover, the augmented processing speed of the model enables real-time analysis and decision-making — a critical capability in today’s fast-paced business environment. | 10.1007/s13132-024-01946-5 | integrating deep learning and reinforcement learning for enhanced financial risk forecasting in supply chain management | in today’s dynamic business landscape, the integration of supply chain management and financial risk forecasting is imperative for sustained success. 
this research paper introduces a groundbreaking approach that seamlessly merges deep autoencoder (dae) models with reinforcement learning (rl) techniques to enhance financial risk forecasting within the realm of supply chain management. the primary objective of this research is to optimize financial decision-making processes by extracting key feature representations from financial data and leveraging rl for decision optimization. to achieve this, the paper presents the pso-sdae model, a novel and sophisticated approach to financial risk forecasting. by incorporating advanced noise reduction features and optimization algorithms, the pso-sdae model significantly enhances the accuracy and reliability of financial risk predictions. notably, the pso-sdae model goes beyond traditional forecasting methods by addressing the need for real-time decision-making in the rapidly evolving landscape of financial risk management. this is achieved through the utilization of a distributed rl algorithm, which expedites the processing of supply chain data while maintaining both efficiency and accuracy. the results of our study showcase the exceptional precision of the pso-sdae model in predicting financial risks, underscoring its efficacy for proactive risk management within supply chain operations. moreover, the augmented processing speed of the model enables real-time analysis and decision-making — a critical capability in today’s fast-paced business environment. | [
"today’s dynamic business landscape",
"the integration",
"supply chain management",
"financial risk forecasting",
"sustained success",
"this research paper",
"a groundbreaking approach",
"that",
"deep autoencoder (dae) models",
"reinforcement learning",
"(rl) techniques",
"financial risk forecasting",
"the realm",
"supply chain management",
"the primary objective",
"this research",
"financial decision-making processes",
"key feature representations",
"financial data",
"rl",
"decision optimization",
"this",
"the paper",
"the pso-sdae model",
"a novel and sophisticated approach",
"financial risk forecasting",
"advanced noise reduction features",
"optimization algorithms",
"the pso-sdae model",
"the accuracy",
"reliability",
"financial risk predictions",
"the pso-sdae model",
"traditional forecasting methods",
"the need",
"real-time decision-making",
"the rapidly evolving landscape",
"financial risk management",
"this",
"the utilization",
"a distributed rl algorithm",
"which",
"the processing",
"supply chain data",
"both efficiency",
"accuracy",
"the results",
"our study showcase",
"the exceptional precision",
"the pso-sdae model",
"financial risks",
"its efficacy",
"proactive risk management",
"supply chain operations",
"the augmented processing speed",
"the model",
"real-time analysis",
"decision-making",
"a critical capability",
"today’s fast-paced business environment",
"today",
"dae",
"today"
] |
ZS-DML: Zero-Shot Deep Metric Learning approach for plant leaf disease classification | [
"Davood Zabihzadeh",
"Mina Masoudifar"
] | Automatic plant disease detection plays an important role in food security. Deep learning methods are able to detect precisely various types of plant diseases but at the expense of using huge amounts of resources (processors and data). Therefore, employing few-shot or zero-shot learning methods is unavoidable. Deep Metric Learning (DML) is a widely used technique for few/zero shot learning. Existing DML methods extract features from the last hidden layer of a pre-trained deep network, which increases the dependence of the specific features on the observed classes. In this paper, the general discriminative feature learning method is used to learn general features of plant leaves. Moreover, a proxy-based loss is utilized that learns the embedding without sampling phase while having a higher convergence rate. The network is trained on the Plant Village dataset where the images are split into 32 and 6 classes as source and target, respectively. The knowledge learned from the source domain is transferred to the target in a zero-shot setting. A few samples of the target domain are presented to the network as a gallery. The network is then evaluated on the target domain. The experimental results show that by presenting few or even only one sample of new classes to the network without fine-tuning step, our method can achieve a classification accuracy of 99%/80.64% for few/one image(s) per class. | 10.1007/s11042-023-17136-5 | zs-dml: zero-shot deep metric learning approach for plant leaf disease classification | automatic plant disease detection plays an important role in food security. deep learning methods are able to detect precisely various types of plant diseases but at the expense of using huge amounts of resources (processors and data). therefore, employing few-shot or zero-shot learning methods is unavoidable. deep metric learning (dml) is a widely used technique for few/zero shot learning. 
existing dml methods extract features from the last hidden layer of a pre-trained deep network, which increases the dependence of the specific features on the observed classes. in this paper, the general discriminative feature learning method is used to learn general features of plant leaves. moreover, a proxy-based loss is utilized that learns the embedding without sampling phase while having a higher convergence rate. the network is trained on the plant village dataset where the images are split into 32 and 6 classes as source and target, respectively. the knowledge learned from the source domain is transferred to the target in a zero-shot setting. a few samples of the target domain are presented to the network as a gallery. the network is then evaluated on the target domain. the experimental results show that by presenting few or even only one sample of new classes to the network without fine-tuning step, our method can achieve a classification accuracy of 99%/80.64% for few/one image(s) per class. | [
"automatic plant disease detection",
"an important role",
"food security",
"deep learning methods",
"precisely various types",
"plant diseases",
"the expense",
"huge amounts",
"resources",
"processors",
"data",
"few-shot or zero-shot learning methods",
"deep metric learning",
"dml",
"a widely used technique",
"few/zero shot learning",
"existing dml methods",
"features",
"the last hidden layer",
"a pre-trained deep network",
"which",
"the dependence",
"the specific features",
"the observed classes",
"this paper",
"the general discriminative feature learning method",
"general features",
"plant leaves",
"a proxy-based loss",
"that",
"the embedding",
"sampling phase",
"a higher convergence rate",
"the network",
"the plant village dataset",
"the images",
"32 and 6 classes",
"source",
"target",
"the knowledge",
"the source domain",
"the target",
"a zero-shot setting",
"a few samples",
"the target domain",
"the network",
"a gallery",
"the network",
"the target domain",
"the experimental results",
"few or even only one sample",
"new classes",
"the network",
"fine-tuning step",
"our method",
"a classification accuracy",
"99%/80.64%",
"few/one image(s",
"class",
"zero",
"32",
"6",
"zero",
"only one",
"99%/80.64%",
"one"
] |
Deep Learning Approaches for Automatic Quality Assurance of Magnetic Resonance Images Using ACR Phantom | [
"Tarraf Torfeh",
"Souha Aouadi",
"SA Yoganathan",
"Satheesh Paloor",
"Rabih Hammoud",
"Noora Al-Hammadi"
] | BackgroundIn recent years, there has been a growing trend towards utilizing Artificial Intelligence (AI) and machine learning techniques in medical imaging, including for the purpose of automating quality assurance. In this research, we aimed to develop and evaluate various deep learning-based approaches for automatic quality assurance of Magnetic Resonance (MR) images using the American College of Radiology (ACR) standards.MethodsThe study involved the development, optimization, and testing of custom convolutional neural network (CNN) models. Additionally, popular pre-trained models such as VGG16, VGG19, ResNet50, InceptionV3, EfficientNetB0, and EfficientNetB5 were trained and tested. The use of pre-trained models, particularly those trained on the ImageNet dataset, for transfer learning was also explored. Two-class classification models were employed for assessing spatial resolution and geometric distortion, while an approach classifying the image into 10 classes representing the number of visible spokes was used for the low contrast.ResultsOur results showed that deep learning-based methods can be effectively used for MR image quality assurance and can improve the performance of these models. The low contrast test was one of the most challenging tests within the ACR phantom.ConclusionsOverall, for geometric distortion and spatial resolution, all of the deep learning models tested produced prediction accuracy of 80% or higher. The study also revealed that training the models from scratch performed slightly better compared to transfer learning. For the low contrast, our investigation emphasized the adaptability and potential of deep learning models. The custom CNN models excelled in predicting the number of visible spokes, achieving commendable accuracy, recall, precision, and F1 scores. 
| 10.1186/s12880-023-01157-5 | deep learning approaches for automatic quality assurance of magnetic resonance images using acr phantom | backgroundin recent years, there has been a growing trend towards utilizing artificial intelligence (ai) and machine learning techniques in medical imaging, including for the purpose of automating quality assurance. in this research, we aimed to develop and evaluate various deep learning-based approaches for automatic quality assurance of magnetic resonance (mr) images using the american college of radiology (acr) standards.methodsthe study involved the development, optimization, and testing of custom convolutional neural network (cnn) models. additionally, popular pre-trained models such as vgg16, vgg19, resnet50, inceptionv3, efficientnetb0, and efficientnetb5 were trained and tested. the use of pre-trained models, particularly those trained on the imagenet dataset, for transfer learning was also explored. two-class classification models were employed for assessing spatial resolution and geometric distortion, while an approach classifying the image into 10 classes representing the number of visible spokes was used for the low contrast.resultsour results showed that deep learning-based methods can be effectively used for mr image quality assurance and can improve the performance of these models. the low contrast test was one of the most challenging tests within the acr phantom.conclusionsoverall, for geometric distortion and spatial resolution, all of the deep learning models tested produced prediction accuracy of 80% or higher. the study also revealed that training the models from scratch performed slightly better compared to transfer learning. for the low contrast, our investigation emphasized the adaptability and potential of deep learning models. the custom cnn models excelled in predicting the number of visible spokes, achieving commendable accuracy, recall, precision, and f1 scores. | [
"a growing trend",
"artificial intelligence",
"ai",
"techniques",
"medical imaging",
"the purpose",
"quality assurance",
"this research",
"we",
"various deep learning-based approaches",
"automatic quality assurance",
"magnetic resonance",
"(mr) images",
"the american college",
"radiology",
"standards.methodsthe study",
"the development",
"optimization",
"testing",
"custom convolutional neural network (cnn) models",
"popular pre-trained models",
"vgg16",
"vgg19",
"resnet50",
"inceptionv3",
"efficientnetb0",
"efficientnetb5",
"the use",
"pre-trained models",
"particularly those",
"the imagenet dataset",
"transfer learning",
"two-class classification models",
"spatial resolution",
"geometric distortion",
"an approach",
"the image",
"10 classes",
"the number",
"visible spokes",
"the low contrast.resultsour results",
"deep learning-based methods",
"mr image quality assurance",
"the performance",
"these models",
"the low contrast test",
"the most challenging tests",
"the acr phantom.conclusionsoverall",
"geometric distortion",
"spatial resolution",
"all",
"the deep learning models",
"produced prediction accuracy",
"80%",
"the study",
"the models",
"scratch",
"learning",
"the low contrast",
"our investigation",
"the adaptability",
"potential",
"deep learning models",
"the custom cnn models",
"the number",
"visible spokes",
"commendable accuracy",
"recall",
"precision",
"f1 scores",
"backgroundin recent years",
"american",
"cnn",
"resnet50",
"inceptionv3",
"efficientnetb0",
"efficientnetb5",
"two",
"10",
"80%",
"cnn"
] |
Depression clinical detection model based on social media: a federated deep learning approach | [
"Yang Liu"
] | Depression can significantly impact people’s mental health, and recent research shows that social media can provide decision-making support for health-care professionals and serve as supplementary information for understanding patients’ health status. Deep learning models are also able to assess an individual’s likelihood of experiencing depression. However, data availability on social media is often limited due to privacy concerns, even though deep learning models benefit from having more data to analyze. To address this issue, this study proposes a methodological framework system for clinical decision support that uses federated deep learning (FDL) to identify individuals experiencing depression and provide intervention decisions for clinicians. The proposed framework involves evaluation of datasets from three social media platforms, and the experimental results demonstrate that our method achieves state-of-the-art results. The study aims to provide a clinical decision support system with evolvable features that can deliver precise solutions and assist health-care professionals in medical diagnosis. The proposed framework that incorporates social media data and deep learning models can provide valuable insights into patients’ health status, support personalized treatment decisions, and adapt to changing health-care needs. | 10.1007/s11227-023-05754-7 | depression clinical detection model based on social media: a federated deep learning approach | depression can significantly impact people’s mental health, and recent research shows that social media can provide decision-making support for health-care professionals and serve as supplementary information for understanding patients’ health status. deep learning models are also able to assess an individual’s likelihood of experiencing depression. however, data availability on social media is often limited due to privacy concerns, even though deep learning models benefit from having more data to analyze. 
to address this issue, this study proposes a methodological framework system for clinical decision support that uses federated deep learning (fdl) to identify individuals experiencing depression and provide intervention decisions for clinicians. the proposed framework involves evaluation of datasets from three social media platforms, and the experimental results demonstrate that our method achieves state-of-the-art results. the study aims to provide a clinical decision support system with evolvable features that can deliver precise solutions and assist health-care professionals in medical diagnosis. the proposed framework that incorporates social media data and deep learning models can provide valuable insights into patients’ health status, support personalized treatment decisions, and adapt to changing health-care needs. | [
"depression",
"people’s mental health",
"recent research",
"social media",
"decision-making support",
"health-care professionals",
"supplementary information",
"patients’ health status",
"deep learning models",
"an individual’s likelihood",
"depression",
"data availability",
"social media",
"privacy concerns",
"deep learning models",
"more data",
"this issue",
"this study",
"a methodological framework system",
"clinical decision support",
"that",
"deep learning",
"individuals",
"depression",
"intervention decisions",
"clinicians",
"the proposed framework",
"evaluation",
"datasets",
"three social media platforms",
"the experimental results",
"our method",
"the-art",
"the study",
"a clinical decision support system",
"evolvable features",
"that",
"precise solutions",
"health-care professionals",
"medical diagnosis",
"the proposed framework",
"that",
"social media data",
"deep learning models",
"valuable insights",
"patients’ health status",
"personalized treatment decisions",
"health-care needs",
"clinicians",
"three"
] |
Fast reconstruction of EEG signal compression sensing based on deep learning | [
"XiuLi Du",
"KuanYang Liang",
"YaNa Lv",
"ShaoMing Qiu"
] | When traditional EEG signals are collected based on the Nyquist theorem, long-time recordings of EEG signals will produce a large amount of data. At the same time, limited bandwidth, end-to-end delay, and memory space will bring great pressure on the effective transmission of data. The birth of compressed sensing alleviates this transmission pressure. However, using an iterative compressed sensing reconstruction algorithm for EEG signal reconstruction faces complex calculation problems and slow data processing speed, limiting the application of compressed sensing in EEG signal rapid monitoring systems. As such, this paper presents a non-iterative and fast algorithm for reconstructing EEG signals using compressed sensing and deep learning techniques. This algorithm uses the improved residual network model, extracts the feature information of the EEG signal by one-dimensional dilated convolution, directly learns the nonlinear mapping relationship between the measured value and the original signal, and can quickly and accurately reconstruct the EEG signal. The method proposed in this paper has been verified by simulation on the open BCI contest dataset. Overall, it is proved that the proposed method has higher reconstruction accuracy and faster reconstruction speed than the traditional CS reconstruction algorithm and the existing deep learning reconstruction algorithm. In addition, it can realize the rapid reconstruction of EEG signals. | 10.1038/s41598-024-55334-9 | fast reconstruction of eeg signal compression sensing based on deep learning | when traditional eeg signals are collected based on the nyquist theorem, long-time recordings of eeg signals will produce a large amount of data. at the same time, limited bandwidth, end-to-end delay, and memory space will bring great pressure on the effective transmission of data. the birth of compressed sensing alleviates this transmission pressure. 
however, using an iterative compressed sensing reconstruction algorithm for eeg signal reconstruction faces complex calculation problems and slow data processing speed, limiting the application of compressed sensing in eeg signal rapid monitoring systems. as such, this paper presents a non-iterative and fast algorithm for reconstructing eeg signals using compressed sensing and deep learning techniques. this algorithm uses the improved residual network model, extracts the feature information of the eeg signal by one-dimensional dilated convolution, directly learns the nonlinear mapping relationship between the measured value and the original signal, and can quickly and accurately reconstruct the eeg signal. the method proposed in this paper has been verified by simulation on the open bci contest dataset. overall, it is proved that the proposed method has higher reconstruction accuracy and faster reconstruction speed than the traditional cs reconstruction algorithm and the existing deep learning reconstruction algorithm. in addition, it can realize the rapid reconstruction of eeg signals. | [
"traditional eeg signals",
"the nyquist theorem",
"long-time recordings",
"eeg signals",
"a large amount",
"data",
"the same time",
"end",
"memory space",
"great pressure",
"the effective transmission",
"data",
"the birth",
"this transmission pressure",
"an iterative compressed sensing reconstruction algorithm",
"eeg signal reconstruction",
"complex calculation problems",
"slow data processing speed",
"the application",
"compressed sensing",
"eeg signal rapid monitoring systems",
"this paper",
"a non-iterative and fast algorithm",
"eeg signals",
"compressed sensing",
"deep learning techniques",
"this algorithm",
"the improved residual network model",
"the feature information",
"the eeg signal",
"one-dimensional dilated convolution",
"the nonlinear mapping relationship",
"the measured value",
"the original signal",
"the eeg signal",
"the method",
"this paper",
"simulation",
"the open bci contest dataset",
"it",
"the proposed method",
"higher reconstruction accuracy",
"faster reconstruction speed",
"the traditional cs reconstruction algorithm",
"the existing deep learning reconstruction algorithm",
"addition",
"it",
"the rapid reconstruction",
"eeg signals",
"one",
"bci contest"
] |
Detecting schizophrenia with 3D structural brain MRI using deep learning | [
"Junhao Zhang",
"Vishwanatha M. Rao",
"Ye Tian",
"Yanting Yang",
"Nicolas Acosta",
"Zihan Wan",
"Pin-Yu Lee",
"Chloe Zhang",
"Lawrence S. Kegeles",
"Scott A. Small",
"Jia Guo"
] | Schizophrenia is a chronic neuropsychiatric disorder that causes distinct structural alterations within the brain. We hypothesize that deep learning applied to a structural neuroimaging dataset could detect disease-related alteration and improve classification and diagnostic accuracy. We tested this hypothesis using a single, widely available, and conventional T1-weighted MRI scan, from which we extracted the 3D whole-brain structure using standard post-processing methods. A deep learning model was then developed, optimized, and evaluated on three open datasets with T1-weighted MRI scans of patients with schizophrenia. Our proposed model outperformed the benchmark model, which was also trained with structural MR images using a 3D CNN architecture. Our model is capable of almost perfectly (area under the ROC curve = 0.987) distinguishing schizophrenia patients from healthy controls on unseen structural MRI scans. Regional analysis localized subcortical regions and ventricles as the most predictive brain regions. Subcortical structures serve a pivotal role in cognitive, affective, and social functions in humans, and structural abnormalities of these regions have been associated with schizophrenia. Our finding corroborates that schizophrenia is associated with widespread alterations in subcortical brain structure and the subcortical structural information provides prominent features in diagnostic classification. Together, these results further demonstrate the potential of deep learning to improve schizophrenia diagnosis and identify its structural neuroimaging signatures from a single, standard T1-weighted brain MRI. | 10.1038/s41598-023-41359-z | detecting schizophrenia with 3d structural brain mri using deep learning | schizophrenia is a chronic neuropsychiatric disorder that causes distinct structural alterations within the brain. 
we hypothesize that deep learning applied to a structural neuroimaging dataset could detect disease-related alteration and improve classification and diagnostic accuracy. we tested this hypothesis using a single, widely available, and conventional t1-weighted mri scan, from which we extracted the 3d whole-brain structure using standard post-processing methods. a deep learning model was then developed, optimized, and evaluated on three open datasets with t1-weighted mri scans of patients with schizophrenia. our proposed model outperformed the benchmark model, which was also trained with structural mr images using a 3d cnn architecture. our model is capable of almost perfectly (area under the roc curve = 0.987) distinguishing schizophrenia patients from healthy controls on unseen structural mri scans. regional analysis localized subcortical regions and ventricles as the most predictive brain regions. subcortical structures serve a pivotal role in cognitive, affective, and social functions in humans, and structural abnormalities of these regions have been associated with schizophrenia. our finding corroborates that schizophrenia is associated with widespread alterations in subcortical brain structure and the subcortical structural information provides prominent features in diagnostic classification. together, these results further demonstrate the potential of deep learning to improve schizophrenia diagnosis and identify its structural neuroimaging signatures from a single, standard t1-weighted brain mri. | [
"schizophrenia",
"a chronic neuropsychiatric disorder",
"that",
"distinct structural alterations",
"the brain",
"we",
"deep learning",
"a structural neuroimaging dataset",
"disease-related alteration",
"classification and diagnostic accuracy",
"we",
"this hypothesis",
"conventional t1-weighted mri scan",
"which",
"we",
"the 3d whole-brain structure",
"standard post-processing methods",
"a deep learning model",
"three open datasets",
"t1-weighted mri scans",
"patients",
"schizophrenia",
"our proposed model",
"the benchmark model",
"which",
"structural mr images",
"a 3d cnn architecture",
"our model",
"almost perfectly (area",
"the roc curve",
"schizophrenia patients",
"healthy controls",
"unseen structural mri scans",
"regional analysis",
"subcortical regions",
"ventricles",
"the most predictive brain regions",
"subcortical structures",
"a pivotal role",
"cognitive, affective, and social functions",
"humans",
"structural abnormalities",
"these regions",
"schizophrenia",
"our finding",
"schizophrenia",
"widespread alterations",
"subcortical brain structure",
"the subcortical structural information",
"prominent features",
"diagnostic classification",
"these results",
"the potential",
"deep learning",
"schizophrenia diagnosis",
"its structural neuroimaging signatures",
"a single, standard t1-weighted brain mri",
"schizophrenia",
"3d",
"three",
"3d",
"cnn",
"roc",
"0.987",
"schizophrenia"
] |
An end-to-end intrusion detection system with IoT dataset using deep learning with unsupervised feature extraction | [
"Yesi Novaria Kunang",
"Siti Nurmaini",
"Deris Stiawan",
"Bhakti Yudho Suprapto"
] | The rapid growth of the Internet of things (IoT) platform has implications on security vulnerabilities that need to be resolved. This requires an intrusion detection system (IDS) to secure attacks on the platforms. In line with this, numerous machine and deep learning algorithms have been adopted to detect cyber-attacks. Real-time IoT devices transmit massive amounts of heterogeneous data, which affects the network. Traffic networks generate redundant and large amounts of data that must be reduced before processing. This study proposed a hybrid deep learning model for an IDS on the IoT platform. We used unsupervised approaches to extract data dimensions and features, then a neural network for classification. Several approaches were used to determine the effectiveness of the deep learning-based IoT IDS with two scenarios of feature extraction. The first case used autoencoder variants such as deep autoencoder (DAE), deep LSTM autoencoder (LSTM-DAE), and deep convolutional autoencoder. The second case used stacked models for feature extraction, including stacked autoencoder and deep belief network. The feature extraction output from the five models was fine-tuned to the fully connected layer using the BoT-IoT dataset. The results showed a good detection performance of almost 100% and a false positive rate (FPR) of nearly 0%. On the CSE-CIC-IDS2018 dataset, the proposed deep learning model was evaluated using a transfer learning approach with the highest detection rate of 99.17% and the lowest FPR of 0.18%. The model developed from the feature extraction process recognized attacks significantly better than the previous approach. | 10.1007/s10207-023-00807-7 | an end-to-end intrusion detection system with iot dataset using deep learning with unsupervised feature extraction | the rapid growth of the internet of things (iot) platform has implications on security vulnerabilities that need to be resolved. 
this requires an intrusion detection system (ids) to secure attacks on the platforms. in line with this, numerous machine and deep learning algorithms have been adopted to detect cyber-attacks. real-time iot devices transmit massive amounts of heterogeneous data, which affects the network. traffic networks generate redundant and large amounts of data that must be reduced before processing. this study proposed a hybrid deep learning model for an ids on the iot platform. we used unsupervised approaches to extract data dimensions and features, then a neural network for classification. several approaches were used to determine the effectiveness of the deep learning-based iot ids with two scenarios of feature extraction. the first case used autoencoder variants such as deep autoencoder (dae), deep lstm autoencoder (lstm-dae), and deep convolutional autoencoder. the second case used stacked models for feature extraction, including stacked autoencoder and deep belief network. the feature extraction output from the five models was fine-tuned to the fully connected layer using the bot-iot dataset. the results showed a good detection performance of almost 100% and a false positive rate (fpr) of nearly 0%. on the cse-cic-ids2018 dataset, the proposed deep learning model was evaluated using a transfer learning approach with the highest detection rate of 99.17% and the lowest fpr of 0.18%. the model developed from the feature extraction process recognized attacks significantly better than the previous approach. | [
"the rapid growth",
"the internet",
"things",
"(iot) platform",
"implications",
"security vulnerabilities",
"that",
"this",
"an intrusion detection system",
"ids",
"attacks",
"the platforms",
"line",
"this",
"numerous machine",
"deep learning algorithms",
"cyber-attacks",
"real-time iot devices",
"massive amounts",
"heterogeneous data",
"which",
"the network",
"traffic networks",
"redundant and large amounts",
"data",
"that",
"processing",
"this study",
"a hybrid deep learning model",
"an ids",
"the iot platform",
"we",
"unsupervised approaches",
"data dimensions",
"features",
", then a neural network",
"classification",
"several approaches",
"the effectiveness",
"the deep learning-based iot ids",
"two scenarios",
"feature extraction",
"the first case",
"autoencoder variants",
"deep autoencoder",
"dae",
"deep lstm autoencoder",
"lstm-dae",
"deep convolutional autoencoder",
"the second case",
"stacked models",
"feature extraction",
"autoencoder",
"deep belief network",
"the feature extraction output",
"the five models",
"the fully connected layer",
"the bot-iot dataset",
"the results",
"a good detection performance",
"almost 100%",
"a false positive rate",
"fpr",
"nearly 0%",
"the cse-cic-ids2018 dataset",
"the proposed deep learning model",
"a transfer learning approach",
"the highest detection rate",
"99.17%",
"the lowest fpr",
"0.18%",
"the model",
"the feature extraction process",
"attacks",
"the previous approach",
"two",
"first",
"dae",
"second",
"five",
"almost 100%",
"nearly 0%",
"99.17%",
"0.18%"
] |
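The IDS record above reports a detection rate near 100% and a false positive rate (FPR) near 0%. As a hedged illustration (toy labels, not the BoT-IoT data and not the authors' code), both metrics follow directly from the binary confusion matrix:

```python
def detection_metrics(y_true, y_pred):
    """Detection rate (recall on attacks, a.k.a. TPR) and false positive rate.

    y_true / y_pred: sequences of 0 (benign) and 1 (attack).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    detection_rate = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return detection_rate, fpr

# toy example: 4 attack flows, 4 benign flows
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
dr, fpr = detection_metrics(y_true, y_pred)  # dr = 0.75, fpr = 0.25
```

A perfect detector would give `(1.0, 0.0)`, which is what the reported 99.17% detection rate and 0.18% FPR approach.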
Application of simultaneous uncertainty quantification and segmentation for oropharyngeal cancer use-case with Bayesian deep learning | [
"Jaakko Sahlsten",
"Joel Jaskari",
"Kareem A. Wahid",
"Sara Ahmed",
"Enrico Glerean",
"Renjie He",
"Benjamin H. Kann",
"Antti Mäkitie",
"Clifton D. Fuller",
"Mohamed A. Naser",
"Kimmo Kaski"
] | Background: Radiotherapy is a core treatment modality for oropharyngeal cancer (OPC), where the primary gross tumor volume (GTVp) is manually segmented with high interobserver variability. This calls for reliable and trustworthy automated tools in clinician workflow. Therefore, accurate uncertainty quantification and its downstream utilization is critical. Methods: Here we propose uncertainty-aware deep learning for OPC GTVp segmentation, and illustrate the utility of uncertainty in multiple applications. We examine two Bayesian deep learning (BDL) models and eight uncertainty measures, and utilize a large multi-institute dataset of 292 PET/CT scans to systematically analyze our approach. Results: We show that our uncertainty-based approach accurately predicts the quality of the deep learning segmentation in 86.6% of cases, identifies low performance cases for semi-automated correction, and visualizes regions of the scans where the segmentations likely fail. Conclusions: Our BDL-based analysis provides a first-step towards more widespread implementation of uncertainty quantification in OPC GTVp segmentation. | 10.1038/s43856-024-00528-5 | application of simultaneous uncertainty quantification and segmentation for oropharyngeal cancer use-case with bayesian deep learning | background: radiotherapy is a core treatment modality for oropharyngeal cancer (opc), where the primary gross tumor volume (gtvp) is manually segmented with high interobserver variability. this calls for reliable and trustworthy automated tools in clinician workflow. therefore, accurate uncertainty quantification and its downstream utilization is critical. methods: here we propose uncertainty-aware deep learning for opc gtvp segmentation, and illustrate the utility of uncertainty in multiple applications. 
we examine two bayesian deep learning (bdl) models and eight uncertainty measures, and utilize a large multi-institute dataset of 292 pet/ct scans to systematically analyze our approach. results: we show that our uncertainty-based approach accurately predicts the quality of the deep learning segmentation in 86.6% of cases, identifies low performance cases for semi-automated correction, and visualizes regions of the scans where the segmentations likely fail. conclusions: our bdl-based analysis provides a first-step towards more widespread implementation of uncertainty quantification in opc gtvp segmentation. | [
"backgroundradiotherapy",
"a core treatment modality",
"oropharyngeal cancer",
"the primary gross tumor volume",
"gtvp",
"high interobserver variability",
"this",
"reliable and trustworthy automated tools",
"clinician workflow",
"accurate uncertainty quantification",
"its downstream utilization",
"we",
"uncertainty-aware deep learning",
"opc",
"gtvp segmentation",
"the utility",
"uncertainty",
"multiple applications",
"we",
"two bayesian deep learning",
"bdl) models",
"eight uncertainty measures",
"a large multi-institute dataset",
"292 pet/ct",
"our approach.resultswe show",
"our uncertainty-based approach",
"the quality",
"the deep learning segmentation",
"86.6%",
"cases",
"low performance cases",
"semi-automated correction",
"regions",
"the scans",
"the segmentations",
"bdl-based analysis",
"a first-step",
"more widespread implementation",
"uncertainty quantification",
"gtvp segmentation",
"bayesian",
"eight",
"292",
"86.6%",
"fail.conclusionsour bdl",
"first"
] |
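The record above uses Bayesian deep learning uncertainty measures for segmentation quality assurance. One standard measure of that family, predictive entropy over multiple Monte Carlo (e.g. dropout-enabled) forward passes, can be sketched as follows (toy probabilities; this does not reproduce the paper's models or its eight measures):

```python
import math

def predictive_entropy(prob_samples):
    """Per-pixel predictive entropy from T stochastic forward passes.

    prob_samples: list of T lists, each giving the foreground probability of
    every pixel from one pass. High entropy marks pixels where the ensemble
    disagrees, i.e. where the segmentation is likely to fail.
    """
    n_pixels = len(prob_samples[0])
    T = len(prob_samples)
    entropies = []
    for i in range(n_pixels):
        p = sum(sample[i] for sample in prob_samples) / T  # mean fg probability
        h = 0.0
        for q in (p, 1.0 - p):  # binary entropy, in nats
            if q > 0.0:
                h -= q * math.log(q)
        entropies.append(h)
    return entropies

# three passes over four pixels: pixel 0 is confidently foreground,
# pixel 2 is maximally ambiguous (mean probability 0.5)
samples = [[0.99, 0.90, 0.5, 0.2],
           [0.98, 0.80, 0.6, 0.9],
           [0.99, 0.85, 0.4, 0.1]]
h = predictive_entropy(samples)
```

Pixels with entropy near `log(2)` (the binary maximum) would be flagged for the semi-automated correction the abstract describes.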
Insights from EEG analysis of evoked memory recalls using deep learning for emotion charting | [
"Muhammad Najam Dar",
"Muhammad Usman Akram",
"Ahmad Rauf Subhani",
"Sajid Gul Khawaja",
"Constantino Carlos Reyes-Aldasoro",
"Sarah Gul"
] | Affect recognition in a real-world, less constrained environment is the principal prerequisite of the industrial-level usefulness of this technology. Monitoring the psychological profile using smart, wearable electroencephalogram (EEG) sensors during daily activities without external stimuli, such as memory-induced emotions, is a challenging research gap in emotion recognition. This paper proposed a deep learning framework for improved memory-induced emotion recognition leveraging a combination of 1D-CNN and LSTM as feature extractors integrated with an Extreme Learning Machine (ELM) classifier. The proposed deep learning architecture, combined with the EEG preprocessing, such as the removal of the average baseline signal from each sample and extraction of EEG rhythms (delta, theta, alpha, beta, and gamma), aims to capture repetitive and continuous patterns for memory-induced emotion recognition, underexplored with deep learning techniques. This work has analyzed EEG signals using a wearable, ultra-mobile sports cap while recalling autobiographical emotional memories evoked by affect-denoting words, with self-annotation on the scale of valence and arousal. With extensive experimentation using the same dataset, the proposed framework empirically outperforms existing techniques for the emerging area of memory-induced emotion recognition with an accuracy of 65.6%. The EEG rhythms analysis, such as delta, theta, alpha, beta, and gamma, achieved 65.5%, 52.1%, 65.1%, 64.6%, and 65.0% accuracies for classification with four quadrants of valence and arousal. These results underscore the significant advancement achieved by our proposed method for the real-world environment of memory-induced emotion recognition. 
| 10.1038/s41598-024-61832-7 | insights from eeg analysis of evoked memory recalls using deep learning for emotion charting | affect recognition in a real-world, less constrained environment is the principal prerequisite of the industrial-level usefulness of this technology. monitoring the psychological profile using smart, wearable electroencephalogram (eeg) sensors during daily activities without external stimuli, such as memory-induced emotions, is a challenging research gap in emotion recognition. this paper proposed a deep learning framework for improved memory-induced emotion recognition leveraging a combination of 1d-cnn and lstm as feature extractors integrated with an extreme learning machine (elm) classifier. the proposed deep learning architecture, combined with the eeg preprocessing, such as the removal of the average baseline signal from each sample and extraction of eeg rhythms (delta, theta, alpha, beta, and gamma), aims to capture repetitive and continuous patterns for memory-induced emotion recognition, underexplored with deep learning techniques. this work has analyzed eeg signals using a wearable, ultra-mobile sports cap while recalling autobiographical emotional memories evoked by affect-denoting words, with self-annotation on the scale of valence and arousal. with extensive experimentation using the same dataset, the proposed framework empirically outperforms existing techniques for the emerging area of memory-induced emotion recognition with an accuracy of 65.6%. the eeg rhythms analysis, such as delta, theta, alpha, beta, and gamma, achieved 65.5%, 52.1%, 65.1%, 64.6%, and 65.0% accuracies for classification with four quadrants of valence and arousal. these results underscore the significant advancement achieved by our proposed method for the real-world environment of memory-induced emotion recognition. | [
"recognition",
"a real-world, less constrained environment",
"the principal prerequisite",
"the industrial-level usefulness",
"this technology",
"the psychological profile",
"smart, wearable electroencephalogram (eeg) sensors",
"daily activities",
"external stimuli",
"memory-induced emotions",
"a challenging research gap",
"emotion recognition",
"this paper",
"a deep learning framework",
"improved memory-induced emotion recognition",
"a combination",
"1d-cnn",
"lstm",
"feature extractors",
"an extreme learning machine",
"(elm) classifier",
"the proposed deep learning architecture",
"the eeg preprocessing",
"the removal",
"the average baseline signal",
"each sample",
"extraction",
"eeg rhythms",
"delta",
"theta",
"alpha",
"beta",
"gamma",
"repetitive and continuous patterns",
"memory-induced emotion recognition",
"deep learning techniques",
"this work",
"eeg signals",
"a wearable, ultra-mobile sports cap",
"autobiographical emotional memories",
"affect-denoting words",
"self-annotation",
"the scale",
"valence",
"arousal",
"extensive experimentation",
"the same dataset",
"the proposed framework",
"existing techniques",
"the emerging area",
"memory-induced emotion recognition",
"an accuracy",
"65.6%",
"the eeg rhythms analysis",
"delta",
"theta",
"alpha",
"beta",
"gamma",
"65.5%",
"52.1%",
"65.1%",
"64.6%",
"65.0% accuracies",
"classification",
"four quadrants",
"valence",
"arousal",
"these results",
"the significant advancement",
"our proposed method",
"the real-world environment",
"memory-induced emotion recognition",
"daily",
"1d",
"65.6%",
"65.5%",
"52.1%",
"65.1%",
"64.6%",
"65.0%",
"four"
] |
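The EEG record above extracts the classic rhythms (delta, theta, alpha, beta, gamma) before classification. A minimal sketch of per-band spectral power via a naive DFT (illustrative band edges and a synthetic signal; not the authors' preprocessing pipeline):

```python
import math

# conventional EEG rhythm bands in Hz (assumed edges, for illustration)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal, fs):
    """Power of each EEG rhythm via a naive O(n^2) DFT (fine for short windows)."""
    n = len(signal)
    powers = {b: 0.0 for b in BANDS}
    for k in range(1, n // 2):
        freq = k * fs / n
        re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        p = (re * re + im * im) / n
        for band, (lo, hi) in BANDS.items():
            if lo <= freq < hi:
                powers[band] += p
    return powers

fs = 128  # Hz sampling rate, one-second window
eeg = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]  # pure 10 Hz tone
p = band_powers(eeg, fs)
# a 10 Hz oscillation lands essentially all its power in the alpha band
```

Real pipelines would use an FFT and bandpass filters, but the band bookkeeping is the same.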
Application of machine learning and deep learning for cancer vaccine (rapid review) | [
"Mohaddeseh Nasiri Hooshmand",
"Elham Maserat"
] | Cancer is a common and dangerous disease based on the World Health Organization. Much research has been done on new and effective cancer treatments, including cancer vaccines and the prediction of neoantigens using machine learning. The purpose of this study is to review articles that use machine learning to design cancer vaccines. This study is a rapid review study using search strategies and related keywords in Google Scholar, PubMed, and science direct databases from 2010 to 2021 in 2021 and revised in August 2023. 1250 articles were searched and 13 articles were selected for this review. We investigated them and then due to the importance and popularity of using machine learning in cancer vaccines recently, we compared them based on their machine learning technique. it is shown that neural networks with Python are used to predict neoantigens in 4 articles and with MATLAB in 2 articles, one article was about using the Fontom, one article with PERL, and one article with R; Other studies were about data mining with flowsom algorithm, multiple linear regression, logistics, and oncopepVCA, and the rest of articles do not provide information about machine learning implementation tools. Providing neural networks with Python is useful in the prediction of neoantigens due to the precision and examination of complex data sets. They use to predict HLA and peptide binding affinity, vaccines outcome, personalized cancer vaccines based on new data, the immune response, processing RNA and DNA sequences, and immunological analysis. | 10.1007/s11042-023-17589-8 | application of machine learning and deep learning for cancer vaccine (rapid review) | cancer is a common and dangerous disease based on the world health organization. much research has been done on new and effective cancer treatments, including cancer vaccines and the prediction of neoantigens using machine learning. 
the purpose of this study is to review articles that use machine learning to design cancer vaccines. this study is a rapid review study using search strategies and related keywords in google scholar, pubmed, and science direct databases from 2010 to 2021 in 2021 and revised in august 2023. 1250 articles were searched and 13 articles were selected for this review. we investigated them and then due to the importance and popularity of using machine learning in cancer vaccines recently, we compared them based on their machine learning technique. it is shown that neural networks with python are used to predict neoantigens in 4 articles and with matlab in 2 articles, one article was about using the fontom, one article with perl, and one article with r; other studies were about data mining with flowsom algorithm, multiple linear regression, logistics, and oncopepvca, and the rest of articles do not provide information about machine learning implementation tools. providing neural networks with python is useful in the prediction of neoantigens due to the precision and examination of complex data sets. they use to predict hla and peptide binding affinity, vaccines outcome, personalized cancer vaccines based on new data, the immune response, processing rna and dna sequences, and immunological analysis. | [
"cancer",
"a common and dangerous disease",
"the world health organization",
"much research",
"new and effective cancer treatments",
"cancer vaccines",
"the prediction",
"neoantigens",
"machine learning",
"the purpose",
"this study",
"articles",
"that",
"cancer vaccines",
"this study",
"a rapid review study",
"search strategies",
"related keywords",
"google scholar",
"direct databases",
"august",
"1250 articles",
"13 articles",
"this review",
"we",
"them",
"the importance",
"popularity",
"machine learning",
"cancer vaccines",
"we",
"them",
"their machine learning technique",
"it",
"neural networks",
"python",
"neoantigens",
"4 articles",
"matlab",
"2 articles",
"one article",
"the fontom",
"one article",
"perl",
"one article",
"r",
"other studies",
"data mining",
"flowsom algorithm",
"multiple linear regression",
"logistics",
"oncopepvca",
"the rest",
"articles",
"information",
"implementation tools",
"neural networks",
"python",
"the prediction",
"neoantigens",
"the precision",
"examination",
"complex data sets",
"they",
"hla and peptide binding affinity",
"vaccines",
"new data",
"the immune response",
"rna",
"dna sequences",
"immunological analysis",
"the world health organization",
"google",
"2010",
"2021",
"2021",
"august 2023.",
"13",
"4",
"2",
"one",
"one",
"one"
] |
A Resource-Efficient Deep Learning Approach to Visual-Based Cattle Geographic Origin Prediction | [
"Camellia Ray",
"Sambit Bakshi",
"Pankaj Kumar Sa",
"Ganapati Panda"
] | Customized healthcare for cattle health monitoring is essential, which aims to optimize individual animal health, thereby enhancing productivity, minimizing illness-related risks, and improving overall welfare. Tailoring healthcare practices to individual requirements guarantees that individual animals receive proper attention and intervention, resulting in better health outcomes and sustainable cattle farming practices. In this regard, the manuscript proposes a visual cues-based region prediction methodology to design a customized cattle healthcare system. The proposed automated AI healthcare system uses resource-efficient deep learning-inspired architecture for computer vision applications like performing region-wise classification. The classification mechanism can be used further to identify a cattle and the regions it belongs. Extensive experimentation has been conducted on a redesigned image dataset to identify the best-suited deep-learning framework to perform region classification for livestock, such as cattle. MobileNetV2 outperforms the considered state-of-the-art frameworks by achieving an accuracy of 93% in identifying the regions of the cattle. | 10.1007/s11036-024-02350-8 | a resource-efficient deep learning approach to visual-based cattle geographic origin prediction | customized healthcare for cattle health monitoring is essential, which aims to optimize individual animal health, thereby enhancing productivity, minimizing illness-related risks, and improving overall welfare. tailoring healthcare practices to individual requirements guarantees that individual animals receive proper attention and intervention, resulting in better health outcomes and sustainable cattle farming practices. in this regard, the manuscript proposes a visual cues-based region prediction methodology to design a customized cattle healthcare system. 
the proposed automated ai healthcare system uses resource-efficient deep learning-inspired architecture for computer vision applications like performing region-wise classification. the classification mechanism can be used further to identify a cattle and the regions it belongs. extensive experimentation has been conducted on a redesigned image dataset to identify the best-suited deep-learning framework to perform region classification for livestock, such as cattle. mobilenetv2 outperforms the considered state-of-the-art frameworks by achieving an accuracy of 93% in identifying the regions of the cattle. | [
"customized healthcare",
"cattle health monitoring",
"which",
"individual animal health",
"productivity",
"illness-related risks",
"overall welfare",
"healthcare practices",
"individual requirements",
"individual animals",
"proper attention",
"intervention",
"better health outcomes",
"sustainable cattle farming practices",
"this regard",
"the manuscript",
"a visual cues-based region prediction methodology",
"a customized cattle healthcare system",
"the proposed automated ai healthcare system",
"resource-efficient deep learning-inspired architecture",
"computer vision applications",
"region-wise classification",
"the classification mechanism",
"a cattle",
"the regions",
"it",
"extensive experimentation",
"a redesigned image dataset",
"the best-suited deep-learning framework",
"region classification",
"livestock",
"cattle",
"mobilenetv2",
"the-art",
"an accuracy",
"93%",
"the regions",
"the cattle",
"tailoring healthcare",
"mobilenetv2",
"93%"
] |

Hybrid deep learning framework for weather forecast with rainfall prediction using weather bigdata analytics | [
"C. Lalitha",
"D. Ravindran"
] | The volume and complexity of weather data, along with missing values and high correlation between collected variables, make it challenging to develop efficient deep learning frameworks that can handle data with more features. This leads to a lack of accurate and predictable weather forecasts. To develop a hybrid deep learning framework for weather forecast with rainfall prediction using weather big data analytics to ensure high detection rates. A modified planet optimization (MPO) algorithm is used for data preprocessing to remove unwanted artifacts. An improved Tuna optimization (ITO) algorithm is presented to select optimal features to avoid data dimensionality issues. A hybrid memory-augmented artificial neural network (MA-ANN) classifier is developed to improve weather early forecast detection rates. The proposed framework is validated against standard benchmark datasets such as weather underground and climate forecast system reanalysis (CFSR). The simulation results are compared with other existing state-of-the-art frameworks based on error measures (RMSE, MAPE, BIAS, R) and quality measures (accuracy, sensitivity, specificity, precision, F1-measure).The MA-ANN classifier accuracy obtained 97.65% for wunderground.com Delhi and 98.88% for Tamilnadu. The hybrid deep learning framework with rainfall prediction using weather big data analytics has shown promising results for accurate and predictable weather forecasts. The proposed framework outperforms other existing state-of-the-art frameworks, and the MA-ANN classifier has improved weather early forecast detection rates. The study demonstrates the potential of utilizing big data techniques in weather forecasting and highlights the importance of developing efficient deep learning frameworks to handle complex and high-dimensional weather data. 
| 10.1007/s11042-023-17801-9 | hybrid deep learning framework for weather forecast with rainfall prediction using weather bigdata analytics | the volume and complexity of weather data, along with missing values and high correlation between collected variables, make it challenging to develop efficient deep learning frameworks that can handle data with more features. this leads to a lack of accurate and predictable weather forecasts. to develop a hybrid deep learning framework for weather forecast with rainfall prediction using weather big data analytics to ensure high detection rates. a modified planet optimization (mpo) algorithm is used for data preprocessing to remove unwanted artifacts. an improved tuna optimization (ito) algorithm is presented to select optimal features to avoid data dimensionality issues. a hybrid memory-augmented artificial neural network (ma-ann) classifier is developed to improve weather early forecast detection rates. the proposed framework is validated against standard benchmark datasets such as weather underground and climate forecast system reanalysis (cfsr). the simulation results are compared with other existing state-of-the-art frameworks based on error measures (rmse, mape, bias, r) and quality measures (accuracy, sensitivity, specificity, precision, f1-measure).the ma-ann classifier accuracy obtained 97.65% for wunderground.com delhi and 98.88% for tamilnadu. the hybrid deep learning framework with rainfall prediction using weather big data analytics has shown promising results for accurate and predictable weather forecasts. the proposed framework outperforms other existing state-of-the-art frameworks, and the ma-ann classifier has improved weather early forecast detection rates. the study demonstrates the potential of utilizing big data techniques in weather forecasting and highlights the importance of developing efficient deep learning frameworks to handle complex and high-dimensional weather data. | [
"the volume",
"complexity",
"weather data",
"missing values",
"high correlation",
"collected variables",
"it",
"efficient deep learning frameworks",
"that",
"data",
"more features",
"this",
"a lack",
"accurate and predictable weather forecasts",
"a hybrid deep learning framework",
"weather forecast",
"rainfall prediction",
"weather big data analytics",
"high detection rates",
"a modified planet optimization",
"mpo",
"algorithm",
"data",
"unwanted artifacts",
"an improved tuna optimization",
"ito",
"algorithm",
"optimal features",
"data dimensionality issues",
"a hybrid memory-augmented artificial neural network",
"ma-ann) classifier",
"weather early forecast detection rates",
"the proposed framework",
"standard benchmark datasets",
"weather",
"climate forecast system reanalysis",
"cfsr",
"the simulation results",
"the-art",
"error measures",
"accuracy",
"sensitivity",
"specificity",
"precision",
"f1-measure).the ma-ann classifier accuracy",
"97.65%",
"98.88%",
"tamilnadu",
"the hybrid deep learning framework",
"rainfall prediction",
"weather big data analytics",
"promising results",
"accurate and predictable weather forecasts",
"the proposed framework",
"the-art",
"the ma-ann classifier",
"weather early forecast detection rates",
"the study",
"the potential",
"big data techniques",
"weather forecasting",
"the importance",
"efficient deep learning frameworks",
"complex and high-dimensional weather data",
"rmse",
"97.65%",
"wunderground.com delhi",
"98.88%"
] |
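The weather record above evaluates with the error measures RMSE, MAPE, BIAS, and R. Their standard definitions can be computed as below (toy rainfall values, purely illustrative, not the paper's data):

```python
import math

def error_measures(actual, predicted):
    """RMSE, MAPE (%), BIAS (mean error), and Pearson R between two series."""
    n = len(actual)
    errors = [p - a for a, p in zip(actual, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mape = 100.0 * sum(abs(e) / abs(a) for a, e in zip(actual, errors)) / n
    bias = sum(errors) / n
    ma, mp = sum(actual) / n, sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    var_a = sum((a - ma) ** 2 for a in actual)
    var_p = sum((p - mp) ** 2 for p in predicted)
    r = cov / math.sqrt(var_a * var_p)
    return rmse, mape, bias, r

# hypothetical daily rainfall (mm) vs. model forecast
actual = [10.0, 20.0, 30.0, 40.0]
predicted = [12.0, 18.0, 33.0, 39.0]
rmse, mape, bias, r = error_measures(actual, predicted)
```

A BIAS near zero and R near one indicate an unbiased, well-correlated forecast, which is what the abstract's comparisons measure.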
Image Classification Algorithm Based on Proposal Region Clustering Learning-Unsupervised Deep Learning | [
"Lei Li",
"Xiao-li Yin"
] | Although deep learning has achieved certain results in image classification, images are susceptible to factors such as lighting conditions, shooting angles, complex backgrounds, rotation transformations or scale scaling, and image data sets in some areas are difficult to obtain. They make the deep learning framework unable to give full play to its generalization ability and nonlinear modeling ability in image classification. Therefore, this paper first proposes a proposal region clustering learning algorithm, which clusters the proposal regions in each image so that each cluster corresponds to the category of the image. Then, different clusters can be regarded as different multi-instance learning packets, and each packet uses the multi-instance learning method to learn the unsupervised image classification detector. It can effectively improve the generalization and modeling capabilities of deep learning models. In addition, this paper proposes an unsupervised deep learning method, which designs an unsupervised deep learning network structure and loss function according to the characteristics of the classified image, and combines densely connected blocks to extract features from the source image. It retains the multi-scale features of the middle layer of the classified image, and effectively solves the problem of insufficient image feature extraction information caused by the lack of image data. It also guarantees the accuracy of subsequent image classification. The experimental results show that the image classification method proposed in this paper not only solves the problem of insufficient image data sets and the interference of various complex factors, but also can accurately classify various image data sets. The accuracy of the image classification method proposed in this paper is 1.38–19% higher than other mainstream deep learning methods. 
| 10.1007/s42835-022-01227-8 | image classification algorithm based on proposal region clustering learning-unsupervised deep learning | although deep learning has achieved certain results in image classification, images are susceptible to factors such as lighting conditions, shooting angles, complex backgrounds, rotation transformations or scale scaling, and image data sets in some areas are difficult to obtain. they make the deep learning framework unable to give full play to its generalization ability and nonlinear modeling ability in image classification. therefore, this paper first proposes a proposal region clustering learning algorithm, which clusters the proposal regions in each image so that each cluster corresponds to the category of the image. then, different clusters can be regarded as different multi-instance learning packets, and each packet uses the multi-instance learning method to learn the unsupervised image classification detector. it can effectively improve the generalization and modeling capabilities of deep learning models. in addition, this paper proposes an unsupervised deep learning method, which designs an unsupervised deep learning network structure and loss function according to the characteristics of the classified image, and combines densely connected blocks to extract features from the source image. it retains the multi-scale features of the middle layer of the classified image, and effectively solves the problem of insufficient image feature extraction information caused by the lack of image data. it also guarantees the accuracy of subsequent image classification. the experimental results show that the image classification method proposed in this paper not only solves the problem of insufficient image data sets and the interference of various complex factors, but also can accurately classify various image data sets. 
the accuracy of the image classification method proposed in this paper is 1.38–19% higher than other mainstream deep learning methods. | [
"deep learning",
"certain results",
"image classification",
"images",
"factors",
"lighting conditions",
"angles",
"complex backgrounds",
"rotation transformations",
"scale scaling",
"image data sets",
"some areas",
"they",
"the deep learning framework",
"full play",
"its generalization ability",
"modeling ability",
"image classification",
"this paper",
"a proposal region",
"learning algorithm",
"which",
"the proposal regions",
"each image",
"each cluster",
"the category",
"the image",
"different clusters",
"different multi-instance learning packets",
"each packet",
"the multi-instance learning method",
"the unsupervised image classification detector",
"it",
"the generalization",
"modeling",
"capabilities",
"deep learning models",
"addition",
"this paper",
"an unsupervised deep learning method",
"which",
"an unsupervised deep learning network structure and loss function",
"the characteristics",
"the classified image",
"densely connected blocks",
"features",
"the source image",
"it",
"the multi-scale features",
"the middle layer",
"the classified image",
"the problem",
"insufficient image feature extraction information",
"the lack",
"image data",
"it",
"the accuracy",
"subsequent image classification",
"the experimental results",
"the image classification method",
"this paper",
"the problem",
"insufficient image data sets",
"the interference",
"various complex factors",
"various image data sets",
"the accuracy",
"the image classification method",
"this paper",
"other mainstream deep learning methods",
"first",
"1.38–19%"
] |
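The record above clusters the proposal regions of each image so that each cluster can be treated as one multi-instance learning bag. A plain k-means sketch on toy 2-D region descriptors (deterministic initialization from the first k points; a simplification, not the paper's algorithm):

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: returns per-point cluster labels and final centers."""
    centers = [list(p) for p in points[:k]]  # deterministic init: first k points
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((pi - ci) ** 2 for pi, ci in zip(p, centers[c])),
            )
        # update step: each center moves to the mean of its members
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centers

# two well-separated groups of hypothetical region feature vectors
regions = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
           (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels, centers = kmeans(regions, k=2)
# the first three regions end up in one bag, the last three in the other
```

Each resulting cluster would then feed a multi-instance learner as one bag, as the abstract describes.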
SkinMultiNet: Advancements in Skin Cancer Prediction Using Deep Learning with Web Interface | [
"Md Nur Hosain Likhon",
"Sahab Uddin Rana",
"Sadeka Akter",
"Md. Shorup Ahmed",
"Khadiza Akter Tanha",
"Md. Mahbubur Rahman",
"Md Emran Hussain Nayeem"
] | Cancer remains the leading cause of death worldwide, significantly impacting individuals and healthcare systems alike. In recent decades, skin cancer has surged in prevalence compared to other major cancer types. Various factors such as texture, color, morphological characteristics, and structure are employed in categorizing different forms of skin cancer. However, traditional methods of identification often prove time-consuming and costly. Skin cancer classification predominantly relies on machine learning, with the primary method being convolutional neural networks (CNNs). Our ‘SkinMultiNet’ framework, presented in this study and based on transfer learning principles, integrates the InceptionV3 and Xception CNN models for predicting skin cancer using image data. While other machine learning models such as ResNet50, NasNet, and MobileNet were explored, the 'SkinMultiNet' framework demonstrated the most promising outcomes. Utilizing a publicly available dataset comprising 6086 skin images, we trained, tested, and evaluated our models extensively. Proposed system employed a train generator to feed image data into our deep learning CNN models, followed by implementing a learning rate reducer on the datasets within the model. Through rigorous testing and validation procedures, our models successfully processed a substantial volume of skin image data. In contrast to conventional approaches, our proposed architecture offers the potential for more reliable diagnoses, achieving an optimal accuracy rate of 94% in skin cancer prediction. This advancement holds promise for early detection and improved patient outcomes following therapy. | 10.1007/s44174-024-00205-0 | skinmultinet: advancements in skin cancer prediction using deep learning with web interface | cancer remains the leading cause of death worldwide, significantly impacting individuals and healthcare systems alike. in recent decades, skin cancer has surged in prevalence compared to other major cancer types. 
various factors such as texture, color, morphological characteristics, and structure are employed in categorizing different forms of skin cancer. however, traditional methods of identification often prove time-consuming and costly. skin cancer classification predominantly relies on machine learning, with the primary method being convolutional neural networks (cnns). our ‘skinmultinet’ framework, presented in this study and based on transfer learning principles, integrates the inceptionv3 and xception cnn models for predicting skin cancer using image data. while other machine learning models such as resnet50, nasnet, and mobilenet were explored, the 'skinmultinet' framework demonstrated the most promising outcomes. utilizing a publicly available dataset comprising 6086 skin images, we trained, tested, and evaluated our models extensively. the proposed system employed a train generator to feed image data into our deep learning cnn models, followed by implementing a learning rate reducer on the datasets within the model. through rigorous testing and validation procedures, our models successfully processed a substantial volume of skin image data. in contrast to conventional approaches, our proposed architecture offers the potential for more reliable diagnoses, achieving an optimal accuracy rate of 94% in skin cancer prediction. this advancement holds promise for early detection and improved patient outcomes following therapy. | [
"cancer",
"the leading cause",
"death",
"significantly impacting individuals",
"healthcare systems",
"recent decades",
"skin cancer",
"prevalence",
"other major cancer types",
"various factors",
"texture",
"color",
"morphological characteristics",
"structure",
"different forms",
"skin cancer",
"traditional methods",
"identification",
"skin cancer classification",
"machine learning",
"the primary method",
"convolutional neural networks",
"cnns",
"our ‘skinmultinet’ framework",
"this study",
"transfer learning principles",
"the inceptionv3 and xception cnn models",
"skin cancer",
"image data",
"other machine learning models",
"resnet50",
"nasnet",
"mobilenet",
"the 'skinmultinet' framework",
"the most promising outcomes",
"a publicly available dataset",
"6086 skin images",
"we",
"our models",
"proposed system",
"a train generator",
"image data",
"our deep learning cnn models",
"a learning rate reducer",
"the datasets",
"the model",
"rigorous testing and validation procedures",
"our models",
"a substantial volume",
"skin image data",
"contrast",
"conventional approaches",
"our proposed architecture",
"the potential",
"more reliable diagnoses",
"an optimal accuracy rate",
"94%",
"skin cancer prediction",
"this advancement",
"promise",
"early detection",
"improved patient outcomes",
"therapy",
"recent decades",
"inceptionv3",
"cnn",
"resnet50",
"6086",
"cnn",
"94%"
] |
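The SkinMultiNet record above describes fusing two pretrained CNN backbones (InceptionV3 and Xception) via transfer learning. The paper's fusion code is not given here; as a minimal illustrative sketch only, the core idea of combining two backbones can be rendered as weighted averaging of their class probabilities (the function name and equal weights are assumptions, not the authors' implementation):

```python
import numpy as np

def ensemble_predict(probs_a, probs_b, weights=(0.5, 0.5)):
    """Combine class probabilities from two backbones (e.g. InceptionV3, Xception)
    by weighted averaging, then take the argmax as the predicted class."""
    probs_a = np.asarray(probs_a, dtype=float)
    probs_b = np.asarray(probs_b, dtype=float)
    combined = weights[0] * probs_a + weights[1] * probs_b
    return combined.argmax(axis=-1), combined
```

In a real transfer-learning setup the two probability arrays would come from the softmax outputs of the fine-tuned backbones on the same batch of skin images.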
Deep learning-based prediction of post-pancreaticoduodenectomy pancreatic fistula | [
"Woohyung Lee",
"Hyo Jung Park",
"Hack-Jin Lee",
"Ki Byung Song",
"Dae Wook Hwang",
"Jae Hoon Lee",
"Kyongmook Lim",
"Yousun Ko",
"Hyoung Jung Kim",
"Kyung Won Kim",
"Song Cheol Kim"
] | Postoperative pancreatic fistula is a life-threatening complication with an unmet need for accurate prediction. This study aimed to develop preoperative artificial intelligence-based prediction models. Patients who underwent pancreaticoduodenectomy were enrolled and stratified into model development and validation sets by surgery between 2016 and 2017 or in 2018, respectively. Machine learning models based on clinical and body composition data, and deep learning models based on computed tomographic data, were developed, combined by ensemble voting, and final models were selected by comparison with an earlier model. Among the 1333 participants (training, n = 881; test, n = 452), postoperative pancreatic fistula occurred in 421 (47.8%) and 134 (31.8%) and clinically relevant postoperative pancreatic fistula occurred in 59 (6.7%) and 27 (6.0%) participants in the training and test datasets, respectively. In the test dataset, the area under the receiver operating curve [AUC (95% confidence interval)] of the selected preoperative model for predicting all and clinically relevant postoperative pancreatic fistula was 0.75 (0.71–0.80) and 0.68 (0.58–0.78). The ensemble model showed better predictive performance than the individual ML and DL models. | 10.1038/s41598-024-51777-2 | deep learning-based prediction of post-pancreaticoduodenectomy pancreatic fistula | postoperative pancreatic fistula is a life-threatening complication with an unmet need for accurate prediction. this study aimed to develop preoperative artificial intelligence-based prediction models. patients who underwent pancreaticoduodenectomy were enrolled and stratified into model development and validation sets by surgery between 2016 and 2017 or in 2018, respectively. machine learning models based on clinical and body composition data, and deep learning models based on computed tomographic data, were developed, combined by ensemble voting, and final models were selected by comparison with an earlier model.
among the 1333 participants (training, n = 881; test, n = 452), postoperative pancreatic fistula occurred in 421 (47.8%) and 134 (31.8%) and clinically relevant postoperative pancreatic fistula occurred in 59 (6.7%) and 27 (6.0%) participants in the training and test datasets, respectively. in the test dataset, the area under the receiver operating curve [auc (95% confidence interval)] of the selected preoperative model for predicting all and clinically relevant postoperative pancreatic fistula was 0.75 (0.71–0.80) and 0.68 (0.58–0.78). the ensemble model showed better predictive performance than the individual ml and dl models. | [
"postoperative pancreatic fistula",
"a life-threatening complication",
"an unmet need",
"accurate prediction",
"this study",
"preoperative artificial intelligence-based prediction models",
"patients",
"who",
"pancreaticoduodenectomy",
"model development",
"validation sets",
"surgery",
"machine learning models",
"clinical and body composition data",
"deep learning models",
"computed tomographic data",
"ensemble voting",
"final models",
"comparison",
"earlier model",
"the 1333 participants",
"training",
"test",
"postoperative pancreatic fistula",
"47.8%",
"31.8%",
"clinically relevant postoperative pancreatic fistula",
"6.7%",
"the training",
"test datasets",
"the test dataset",
"the area",
"the receiver operating curve",
"[auc",
"95% confidence interval",
"the selected preoperative model",
"all and clinically relevant postoperative pancreatic fistula",
"the ensemble model",
"better predictive performance",
"the individual ml",
"dl models",
"between 2016 and 2017",
"2018",
"1333",
"881",
"452",
"421",
"47.8%",
"134",
"31.8%",
"59",
"6.7%",
"27",
"6.0%",
"95%",
"0.75",
"0.71–0.80",
"0.68"
] |
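The fistula-prediction record above reports model performance as AUC with confidence intervals. As a generic illustration (not code from the study), the AUC of a binary classifier can be computed from scratch with the rank-based Mann-Whitney formulation, which is what standard libraries implement internally:

```python
def auc_score(y_true, y_score):
    """AUC via the rank (Mann-Whitney U) formulation; tied scores get average ranks."""
    order = sorted(range(len(y_score)), key=lambda i: y_score[i])
    ranks = [0.0] * len(y_score)
    i = 0
    while i < len(order):
        j = i
        # group tied scores and assign them the average 1-based rank
        while j + 1 < len(order) and y_score[order[j + 1]] == y_score[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    pos_ranks = [ranks[i] for i, t in enumerate(y_true) if t == 1]
    n_pos = len(pos_ranks)
    n_neg = len(y_true) - n_pos
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 0.75, as reported for the preoperative model, means a randomly chosen fistula case is ranked above a randomly chosen non-case 75% of the time.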
Debunking multi-lingual social media posts using deep learning | [
"Bina Kotiyal",
"Heman Pathak",
"Nipur Singh"
] | Fake news on social media has become a growing concern due to its potential impact on shaping public opinion. The proposed Debunking Multi-Lingual Social Media Posts using Deep Learning (DSMPD) approach offers a promising solution to detect fake news. The DSMPD approach involves creating a dataset of English and Hindi social media posts using web scraping and Natural Language Processing (NLP) techniques. This dataset is then used to train, test, and validate a deep learning-based model that extracts various features, including Embedding from Language Models (ELMo), word and n-gram counts, Term Frequency-Inverse Document Frequency (TF-IDF), sentiments, polarity, and Named Entity Recognition (NER). Based on these features, the model classifies news items into five categories: real, could be real, could be fabricated, fabricated, or dangerously fabricated. To evaluate the performance of the classifiers, the researchers used two datasets comprising over 45,000 articles. Machine learning (ML) algorithms and Deep learning (DL) models are compared to choose the best option for classification and prediction. | 10.1007/s41870-023-01288-6 | debunking multi-lingual social media posts using deep learning | fake news on social media has become a growing concern due to its potential impact on shaping public opinion. the proposed debunking multi-lingual social media posts using deep learning (dsmpd) approach offers a promising solution to detect fake news. the dsmpd approach involves creating a dataset of english and hindi social media posts using web scraping and natural language processing (nlp) techniques. this dataset is then used to train, test, and validate a deep learning-based model that extracts various features, including embedding from language models (elmo), word and n-gram counts, term frequency-inverse document frequency (tf-idf), sentiments, polarity, and named entity recognition (ner).
based on these features, the model classifies news items into five categories: real, could be real, could be fabricated, fabricated, or dangerously fabricated. to evaluate the performance of the classifiers, the researchers used two datasets comprising over 45,000 articles. machine learning (ml) algorithms and deep learning (dl) models are compared to choose the best option for classification and prediction. | [
"fake news",
"social media",
"a growing concern",
"its potential impact",
"public opinion",
"the proposed debunking multi-lingual social media posts",
"(dsmpd",
"a promising solution",
"fake news",
"the dsmpd approach",
"a dataset",
"english and hindi social media posts",
"web scraping",
"natural language processing (nlp) techniques",
"this dataset",
"a deep learning-based model",
"that",
"various features",
"language models",
"word and n-gram counts",
"term frequency-inverse document frequency",
"tf-idf",
"sentiments",
"polarity",
"entity recognition",
"ner",
"these features",
"the model classifies",
"five categories",
"the performance",
"the classifiers",
"the researchers",
"two datasets",
"over 45,000 articles",
"ml",
"the best option",
"classification",
"prediction",
"english",
"n-gram",
"ner",
"five",
"two",
"over 45,000"
] |
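Among the features listed in the DSMPD record above is TF-IDF. As an aside, and independent of the paper's actual pipeline, the TF-IDF weighting can be sketched from scratch (libraries such as scikit-learn use slightly different smoothing and normalization):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Minimal TF-IDF: term frequency scaled by smoothed inverse document frequency."""
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)
    doc_freq = Counter()
    for tokens in tokenized:
        doc_freq.update(set(tokens))  # count each term once per document
    vectors = []
    for tokens in tokenized:
        counts = Counter(tokens)
        vectors.append({
            term: (count / len(tokens)) * math.log((1 + n_docs) / (1 + doc_freq[term]))
            for term, count in counts.items()
        })
    return vectors
```

Terms that occur in every document get weight zero, while terms concentrated in a few posts get larger weights, which is what makes TF-IDF useful for distinguishing article styles.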
Designing observables for measurements with deep learning | [
"Owen Long",
"Benjamin Nachman"
] | Many analyses in particle and nuclear physics use simulations to infer fundamental, effective, or phenomenological parameters of the underlying physics models. When the inference is performed with unfolded cross sections, the observables are designed using physics intuition and heuristics. We propose to design targeted observables with machine learning. Unfolded, differential cross sections in a neural network output contain the most information about parameters of interest and can be well-measured by construction. The networks are trained using a custom loss function that rewards outputs that are sensitive to the parameter(s) of interest while simultaneously penalizing outputs that are different between particle-level and detector-level (to minimize detector distortions). We demonstrate this idea in simulation using two physics models for inclusive measurements in deep inelastic scattering. We find that the new approach is more sensitive than classical observables at distinguishing the two models and also has a reduced unfolding uncertainty due to the reduced detector distortions. | 10.1140/epjc/s10052-024-13135-4 | designing observables for measurements with deep learning | many analyses in particle and nuclear physics use simulations to infer fundamental, effective, or phenomenological parameters of the underlying physics models. when the inference is performed with unfolded cross sections, the observables are designed using physics intuition and heuristics. we propose to design targeted observables with machine learning. unfolded, differential cross sections in a neural network output contain the most information about parameters of interest and can be well-measured by construction. the networks are trained using a custom loss function that rewards outputs that are sensitive to the parameter(s) of interest while simultaneously penalizing outputs that are different between particle-level and detector-level (to minimize detector distortions). 
we demonstrate this idea in simulation using two physics models for inclusive measurements in deep inelastic scattering. we find that the new approach is more sensitive than classical observables at distinguishing the two models and also has a reduced unfolding uncertainty due to the reduced detector distortions. | [
"many analyses",
"particle",
"nuclear physics",
"simulations",
"fundamental, effective, or phenomenological parameters",
"the underlying physics models",
"the inference",
"unfolded cross sections",
"the observables",
"physics intuition",
"heuristics",
"we",
"targeted observables",
"machine learning",
"unfolded, differential cross sections",
"a neural network output",
"the most information",
"parameters",
"interest",
"construction",
"the networks",
"a custom loss function",
"that",
"outputs",
"that",
"the parameter(s",
"interest",
"outputs",
"that",
"particle-level",
"detector-level",
"detector distortions",
"we",
"this idea",
"simulation",
"two physics models",
"inclusive measurements",
"deep inelastic scattering",
"we",
"the new approach",
"classical observables",
"the two models",
"a reduced unfolding uncertainty",
"the reduced detector distortions",
"two",
"two"
] |
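The observables-design record above describes a custom loss that rewards sensitivity to the physics parameter while penalizing particle-level vs detector-level differences. The paper's actual loss is not reproduced here; the following is only a toy numpy rendering of that stated trade-off, with means standing in for full distribution distances:

```python
import numpy as np

def observable_loss(obs_model1, obs_model2, obs_particle, obs_detector, lam=1.0):
    """Toy loss in the spirit described above: minimizing it pushes the learned
    observable to separate the two physics models (large sensitivity term) while
    staying similar between particle level and detector level (small distortion)."""
    sensitivity = abs(np.mean(obs_model1) - np.mean(obs_model2))     # reward: larger is better
    distortion = abs(np.mean(obs_particle) - np.mean(obs_detector))  # penalty: smaller is better
    return -sensitivity + lam * distortion
```

The hyperparameter `lam` (an assumption here) would control how strongly detector distortions are penalized relative to model sensitivity.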
Autonomous multi-drone racing method based on deep reinforcement learning | [
"Yu Kang",
"Jian Di",
"Ming Li",
"Yunbo Zhao",
"Yuhui Wang"
] | Racing drones have attracted increasing attention due to their remarkable high speed and excellent maneuverability. However, autonomous multi-drone racing is quite difficult since it requires quick and agile flight in intricate surroundings and rich drone interaction. To address these issues, we propose a novel autonomous multi-drone racing method based on deep reinforcement learning. A new set of reward functions is proposed to make racing drones learn the racing skills of human experts. Unlike previous methods that required global information about tracks and track boundary constraints, the proposed method requires only limited localized track information within the range of its own onboard sensors. Further, the dynamic response characteristics of racing drones are incorporated into the training environment, so that the proposed method is more in line with the requirements of real drone racing scenarios. In addition, our method has a low computational cost and can meet the requirements of real-time racing. Finally, the effectiveness and superiority of the proposed method are verified by extensive comparison with the state-of-the-art methods in a series of simulations and real-world experiments. | 10.1007/s11432-023-4029-9 | autonomous multi-drone racing method based on deep reinforcement learning | racing drones have attracted increasing attention due to their remarkable high speed and excellent maneuverability. however, autonomous multi-drone racing is quite difficult since it requires quick and agile flight in intricate surroundings and rich drone interaction. to address these issues, we propose a novel autonomous multi-drone racing method based on deep reinforcement learning. a new set of reward functions is proposed to make racing drones learn the racing skills of human experts. 
unlike previous methods that required global information about tracks and track boundary constraints, the proposed method requires only limited localized track information within the range of its own onboard sensors. further, the dynamic response characteristics of racing drones are incorporated into the training environment, so that the proposed method is more in line with the requirements of real drone racing scenarios. in addition, our method has a low computational cost and can meet the requirements of real-time racing. finally, the effectiveness and superiority of the proposed method are verified by extensive comparison with the state-of-the-art methods in a series of simulations and real-world experiments. | [
"racing drones",
"increasing attention",
"their remarkable high speed",
"excellent maneuverability",
"autonomous multi-drone racing",
"it",
"quick and agile flight",
"intricate surroundings",
"rich drone interaction",
"these issues",
"we",
"a novel autonomous multi-drone racing method",
"deep reinforcement learning",
"a new set",
"reward functions",
"racing drones",
"the racing skills",
"human experts",
"previous methods",
"that",
"global information",
"tracks",
"boundary constraints",
"the proposed method",
"only limited localized track information",
"the range",
"its own onboard sensors",
"the dynamic response characteristics",
"racing drones",
"the training environment",
"the proposed method",
"line",
"the requirements",
"real drone racing scenarios",
"addition",
"our method",
"a low computational cost",
"the requirements",
"real-time racing",
"the effectiveness",
"superiority",
"the proposed method",
"extensive comparison",
"the-art",
"a series",
"simulations",
"real-world experiments"
] |
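The drone-racing record above mentions a new set of reward functions but does not state them. Purely as a generic illustration of reward shaping in a racing task (every name and constant below is hypothetical, not from the paper):

```python
def racing_reward(prev_dist, curr_dist, passed_gate, collided,
                  gate_bonus=10.0, crash_penalty=-10.0):
    """Hypothetical shaped reward for drone racing: dense progress toward the
    next gate, a bonus for passing it, and a penalty for collisions."""
    reward = prev_dist - curr_dist  # positive when the drone moves closer to the gate
    if passed_gate:
        reward += gate_bonus
    if collided:
        reward += crash_penalty
    return reward
```

Dense progress terms like this are a common way to make sparse gate-passing rewards learnable, which is consistent with the abstract's emphasis on learning racing skills from limited local track information.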
Enhancing parasitic organism detection in microscopy images through deep learning and fine-tuned optimizer | [
"Yogesh Kumar",
"Pertik Garg",
"Manu Raj Moudgil",
"Rupinder Singh",
"Marcin Woźniak",
"Jana Shafi",
"Muhammad Fazal Ijaz"
] | Parasitic organisms pose a major global health threat, mainly in regions that lack advanced medical facilities. Early and accurate detection of parasitic organisms is vital to saving lives. Deep learning models have uplifted the medical sector by providing promising results in diagnosing, detecting, and classifying diseases. This paper explores the role of deep learning techniques in detecting and classifying various parasitic organisms. The research works on a dataset consisting of 34,298 samples of parasites such as Toxoplasma Gondii, Trypanosome, Plasmodium, Leishmania, Babesia, and Trichomonad along with host cells like red blood cells and white blood cells. These images are initially converted from RGB to grayscale followed by the computation of morphological features such as perimeter, height, area, and width. Later, Otsu thresholding and watershed techniques are applied to differentiate foreground from background and create markers on the images for the identification of regions of interest. Deep transfer learning models such as VGG19, InceptionV3, ResNet50V2, ResNet152V2, EfficientNetB3, EfficientNetB0, MobileNetV2, Xception, DenseNet169, and a hybrid model, InceptionResNetV2, are employed. The parameters of these models are fine-tuned using three optimizers: SGD, RMSprop, and Adam. Experimental results reveal that when RMSprop is applied, VGG19, InceptionV3, and EfficientNetB0 achieve the highest accuracy of 99.1% with a loss of 0.09. Similarly, using the SGD optimizer, InceptionV3 performs exceptionally well, achieving the highest accuracy of 99.91% with a loss of 0.98. Finally, applying the Adam optimizer, InceptionResNetV2 excels, achieving the highest accuracy of 99.96% with a loss of 0.13, outperforming other optimizers. The findings of this research signify that using deep learning models coupled with image processing methods generates a highly accurate and efficient way to detect and classify parasitic organisms. 
| 10.1038/s41598-024-56323-8 | enhancing parasitic organism detection in microscopy images through deep learning and fine-tuned optimizer | parasitic organisms pose a major global health threat, mainly in regions that lack advanced medical facilities. early and accurate detection of parasitic organisms is vital to saving lives. deep learning models have uplifted the medical sector by providing promising results in diagnosing, detecting, and classifying diseases. this paper explores the role of deep learning techniques in detecting and classifying various parasitic organisms. the research works on a dataset consisting of 34,298 samples of parasites such as toxoplasma gondii, trypanosome, plasmodium, leishmania, babesia, and trichomonad along with host cells like red blood cells and white blood cells. these images are initially converted from rgb to grayscale followed by the computation of morphological features such as perimeter, height, area, and width. later, otsu thresholding and watershed techniques are applied to differentiate foreground from background and create markers on the images for the identification of regions of interest. deep transfer learning models such as vgg19, inceptionv3, resnet50v2, resnet152v2, efficientnetb3, efficientnetb0, mobilenetv2, xception, densenet169, and a hybrid model, inceptionresnetv2, are employed. the parameters of these models are fine-tuned using three optimizers: sgd, rmsprop, and adam. experimental results reveal that when rmsprop is applied, vgg19, inceptionv3, and efficientnetb0 achieve the highest accuracy of 99.1% with a loss of 0.09. similarly, using the sgd optimizer, inceptionv3 performs exceptionally well, achieving the highest accuracy of 99.91% with a loss of 0.98. finally, applying the adam optimizer, inceptionresnetv2 excels, achieving the highest accuracy of 99.96% with a loss of 0.13, outperforming other optimizers. 
the findings of this research signify that using deep learning models coupled with image processing methods generates a highly accurate and efficient way to detect and classify parasitic organisms. | [
"parasitic organisms",
"a major global health threat",
"regions",
"that",
"advanced medical facilities",
"early and accurate detection",
"parasitic organisms",
"lives",
"deep learning models",
"the medical sector",
"promising results",
"diseases",
"this paper",
"the role",
"deep learning techniques",
"various parasitic organisms",
"the research",
"a dataset",
"34,298 samples",
"parasites",
"toxoplasma gondii",
"trypanosome",
"plasmodium",
"leishmania",
"babesia",
"trichomonad",
"host cells",
"red blood cells",
"white blood cells",
"these images",
"rgb",
"grayscale",
"the computation",
"morphological features",
"perimeter",
"height",
"area",
"width",
"otsu thresholding and watershed techniques",
"foreground",
"background",
"markers",
"the images",
"the identification",
"regions",
"interest",
"deep transfer learning models",
"vgg19",
"inceptionv3",
"resnet50v2",
"resnet152v2",
"efficientnetb3",
"efficientnetb0",
"mobilenetv2",
"xception",
"densenet169",
"a hybrid model",
"the parameters",
"these models",
"three optimizers",
"sgd",
"rmsprop",
"adam",
"experimental results",
"rmsprop",
"efficientnetb0",
"the highest accuracy",
"99.1%",
"a loss",
"the sgd optimizer",
"inceptionv3",
"the highest accuracy",
"99.91%",
"a loss",
"the adam optimizer",
"inceptionresnetv2 excels",
"the highest accuracy",
"99.96%",
"a loss",
"other optimizers",
"the findings",
"this research",
"deep learning models",
"image processing methods",
"a highly accurate and efficient way",
"parasitic organisms",
"34,298",
"leishmania",
"babesia",
"rgb",
"inceptionv3",
"efficientnetb3",
"efficientnetb0",
"mobilenetv2",
"inceptionresnetv2",
"three",
"inceptionv3",
"efficientnetb0",
"99.1%",
"0.09",
"inceptionv3",
"99.91%",
"0.98",
"inceptionresnetv2",
"99.96%",
"0.13"
] |
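The parasite-detection record above applies Otsu thresholding before watershed segmentation. Otsu's method is a standard algorithm (the paper presumably uses a library implementation); written from scratch it picks the gray level that maximizes between-class variance:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method for an 8-bit grayscale image: return the threshold that
    maximizes the between-class variance of foreground vs background."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum_p = np.cumsum(prob)                       # class-0 weight up to each level
    cum_mean = np.cumsum(prob * np.arange(256))   # cumulative intensity mean
    mean_total = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum_p[t], 1.0 - cum_p[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (mean_total - cum_mean[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned threshold form the foreground mask on which watershed markers can then be placed.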
A survey on automatic generation of medical imaging reports based on deep learning | [
"Ting Pang",
"Peigao Li",
"Lijie Zhao"
] | Recent advances in deep learning have shown great potential for the automatic generation of medical imaging reports. Deep learning techniques, inspired by image captioning, have made significant progress in the field of diagnostic report generation. This paper provides a comprehensive overview of recent research efforts in deep learning-based medical imaging report generation and proposes future directions in this field. First, we summarize and analyze the data set, architecture, application, and evaluation of deep learning-based medical imaging report generation. Specifically, we survey the deep learning architectures used in diagnostic report generation, including hierarchical RNN-based frameworks, attention-based frameworks, and reinforcement learning-based frameworks. In addition, we identify potential challenges and suggest future research directions to support clinical applications and decision-making using medical imaging report generation systems. | 10.1186/s12938-023-01113-y | a survey on automatic generation of medical imaging reports based on deep learning | recent advances in deep learning have shown great potential for the automatic generation of medical imaging reports. deep learning techniques, inspired by image captioning, have made significant progress in the field of diagnostic report generation. this paper provides a comprehensive overview of recent research efforts in deep learning-based medical imaging report generation and proposes future directions in this field. first, we summarize and analyze the data set, architecture, application, and evaluation of deep learning-based medical imaging report generation. specifically, we survey the deep learning architectures used in diagnostic report generation, including hierarchical rnn-based frameworks, attention-based frameworks, and reinforcement learning-based frameworks.
in addition, we identify potential challenges and suggest future research directions to support clinical applications and decision-making using medical imaging report generation systems. | [
"recent advances",
"deep learning",
"great potential",
"the automatic generation",
"medical imaging reports",
"deep learning techniques",
"image captioning",
"significant progress",
"the field",
"diagnostic report generation",
"this paper",
"a comprehensive overview",
"recent research efforts",
"deep learning-based medical imaging report generation",
"future directions",
"this field",
"we",
"the data set",
"architecture",
"application",
"evaluation",
"deep learning-based medical imaging report generation",
"we",
"the deep learning architectures",
"diagnostic report generation",
"hierarchical rnn-based frameworks",
"attention-based frameworks",
"reinforcement learning-based frameworks",
"addition",
"we",
"potential challenges",
"future research directions",
"clinical applications",
"decision-making",
"medical imaging report generation systems",
"first"
] |
Development of Deep Learning Color Recognition Model for Color Measurement Processes | [
"Sanghun Lee",
"Ki-Sub Kim",
"Jeong Won Kang"
] | We present a deep learning color recognition model for the color measurement process in the paint industry. Currently, spectrophotometers are primarily used for color measurements owing to their accuracy. The measurement method involves manually injecting the sample into a spectrophotometer. Our proposed method uses a webcam with a deep learning model on the stand of a spectrophotometer. Deep learning models are widely used for image and color detection. In this study, the “you only look once (YOLO)” algorithm is applied for real-time detection of color samples. Upon training various sample images using YOLO, the model could detect the sample area in real time using a webcam. An open source computer vision (OpenCV) library was used for the color recognition model, and the detected RGB color value was converted to the international commission on illumination color space (CIELAB) value, which is primarily used in the color measuring process. However, because of the mirror-like reflection of light from a surface with specular reflection, it is difficult to implement the color value using a camera. To address this problem, we compare several specular removal methods and propose the most suitable model for the color recognition model of color samples. The accuracy of the proposed model was verified by comparing the colors of various samples. Our proposed approach can easily detect samples and color values, which can contribute significantly to automatically calculating the exact amount of coloring required for the target color. | 10.1007/s42835-024-01791-1 | development of deep learning color recognition model for color measurement processes | we present a deep learning color recognition model for the color measurement process in the paint industry. currently, spectrophotometers are primarily used for color measurements owing to their accuracy. the measurement method involves manually injecting the sample into a spectrophotometer. 
our proposed method uses a webcam with a deep learning model on the stand of a spectrophotometer. deep learning models are widely used for image and color detection. in this study, the “you only look once (yolo)” algorithm is applied for real-time detection of color samples. upon training various sample images using yolo, the model could detect the sample area in real time using a webcam. an open source computer vision (opencv) library was used for the color recognition model, and the detected rgb color value was converted to the international commission on illumination color space (cielab) value, which is primarily used in the color measuring process. however, because of the mirror-like reflection of light from a surface with specular reflection, it is difficult to implement the color value using a camera. to address this problem, we compare several specular removal methods and propose the most suitable model for the color recognition model of color samples. the accuracy of the proposed model was verified by comparing the colors of various samples. our proposed approach can easily detect samples and color values, which can contribute significantly to automatically calculating the exact amount of coloring required for the target color. | [
"we",
"a deep learning color recognition model",
"the color measurement process",
"the paint industry",
"spectrophotometers",
"color measurements",
"their accuracy",
"the measurement method",
"the sample",
"a spectrophotometer",
"our proposed method",
"a webcam",
"a deep learning model",
"the stand",
"a spectrophotometer",
"deep learning models",
"image",
"color detection",
"this study",
"you",
"algorithm",
"real-time detection",
"color samples",
"various sample images",
"yolo",
"the model",
"the sample area",
"real time",
"a webcam",
"an open source computer vision",
"(opencv) library",
"the color recognition model",
"the detected rgb color value",
"the international commission",
"illumination color space",
"(cielab) value",
"which",
"the color measuring process",
"the mirror-like reflection",
"light",
"a surface",
"specular reflection",
"it",
"the color value",
"a camera",
"this problem",
"we",
"several specular removal methods",
"the most suitable model",
"the color recognition model",
"color samples",
"the accuracy",
"the proposed model",
"the colors",
"various samples",
"our proposed approach",
"samples",
"color values",
"which",
"the exact amount",
"the target color",
"deep"
] |
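The color-measurement record above converts detected RGB values to CIELAB. The standard sRGB to XYZ (D65) to Lab conversion can be written out directly; library routines such as OpenCV's color conversion implement the same math up to scaling conventions (this sketch is not the paper's code):

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65 reference white, 2-degree observer)."""
    def linearize(c):
        c /= 255.0  # undo gamma encoding
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(float(r)), linearize(float(g)), linearize(float(b))
    # Linear RGB -> XYZ (sRGB matrix, D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white

    def f(t):
        # cube root above the CIE cutoff, linear segment below it
        return t ** (1.0 / 3.0) if t > (6.0 / 29.0) ** 3 else t / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)
```

Working in Lab rather than RGB matters for the color-matching use case because Euclidean distances in Lab approximate perceptual color differences, which camera RGB values do not.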
Rapid diagnosis of celiac disease based on plasma Raman spectroscopy combined with deep learning | [
"Tian Shi",
"Jiahe Li",
"Na Li",
"Cheng Chen",
"Chen Chen",
"Chenjie Chang",
"Shenglong Xue",
"Weidong Liu",
"Ainur Maimaiti Reyim",
"Feng Gao",
"Xiaoyi Lv"
] | Celiac Disease (CD) is a primary malabsorption syndrome resulting from the interplay of genetic, immune, and dietary factors. CD negatively impacts daily activities and may lead to conditions such as osteoporosis, malignancies in the small intestine, ulcerative jejunitis, and enteritis, ultimately causing severe malnutrition. Therefore, an effective and rapid differentiation between healthy individuals and those with celiac disease is crucial for early diagnosis and treatment. This study utilizes Raman spectroscopy combined with deep learning models to achieve a non-invasive, rapid, and accurate diagnostic method for celiac disease and healthy controls. A total of 59 plasma samples, comprising 29 celiac disease cases and 30 healthy controls, were collected for experimental purposes. Convolutional Neural Network (CNN), Multi-Scale Convolutional Neural Network (MCNN), Residual Network (ResNet), and Deep Residual Shrinkage Network (DRSN) classification models were employed. The accuracy rates for these models were found to be 86.67%, 90.76%, 86.67% and 95.00%, respectively. Comparative validation results revealed that the DRSN model exhibited the best performance, with an AUC value and accuracy of 97.60% and 95%, respectively. This confirms the superiority of Raman spectroscopy combined with deep learning in the diagnosis of celiac disease. | 10.1038/s41598-024-64621-4 | rapid diagnosis of celiac disease based on plasma raman spectroscopy combined with deep learning | celiac disease (cd) is a primary malabsorption syndrome resulting from the interplay of genetic, immune, and dietary factors. cd negatively impacts daily activities and may lead to conditions such as osteoporosis, malignancies in the small intestine, ulcerative jejunitis, and enteritis, ultimately causing severe malnutrition. therefore, an effective and rapid differentiation between healthy individuals and those with celiac disease is crucial for early diagnosis and treatment. 
this study utilizes raman spectroscopy combined with deep learning models to achieve a non-invasive, rapid, and accurate diagnostic method for celiac disease and healthy controls. a total of 59 plasma samples, comprising 29 celiac disease cases and 30 healthy controls, were collected for experimental purposes. convolutional neural network (cnn), multi-scale convolutional neural network (mcnn), residual network (resnet), and deep residual shrinkage network (drsn) classification models were employed. the accuracy rates for these models were found to be 86.67%, 90.76%, 86.67% and 95.00%, respectively. comparative validation results revealed that the drsn model exhibited the best performance, with an auc value and accuracy of 97.60% and 95%, respectively. this confirms the superiority of raman spectroscopy combined with deep learning in the diagnosis of celiac disease. | [
"celiac disease",
"cd",
"a primary malabsorption syndrome",
"the interplay",
"genetic, immune, and dietary factors",
"cd",
"daily activities",
"conditions",
"osteoporosis",
"malignancies",
"the small intestine",
"ulcerative jejunitis",
"enteritis",
"severe malnutrition",
"an effective and rapid differentiation",
"healthy individuals",
"those",
"celiac disease",
"early diagnosis",
"treatment",
"this study",
"raman spectroscopy",
"deep learning models",
"a non-invasive, rapid, and accurate diagnostic method",
"celiac disease",
"healthy controls",
"a total",
"59 plasma samples",
"29 celiac disease cases",
"30 healthy controls",
"experimental purposes",
"convolutional neural network",
"cnn",
"multi-scale convolutional neural network",
"mcnn",
"residual network",
"resnet",
"deep residual shrinkage network",
"drsn) classification models",
"the accuracy rates",
"these models",
"86.67%",
"90.76%",
"86.67%",
"95.00%",
"comparative validation results",
"the drsn model",
"the best performance",
"an auc value",
"accuracy",
"97.60%",
"95%",
"this",
"the superiority",
"raman spectroscopy",
"deep learning",
"the diagnosis",
"celiac disease",
"daily",
"59",
"29",
"30",
"cnn",
"86.67%",
"90.76%",
"86.67%",
"95.00%",
"97.60% and",
"95%"
] |
White-box inference attack: compromising the security of deep learning-based COVID-19 diagnosis systems | [
"Burhan Ul Haque Sheikh",
"Aasim Zafar"
] | The COVID-19 pandemic has necessitated the exploration of innovative diagnostic approaches, including the utilization of machine learning (ML) and deep learning (DL) technologies. However, recent findings shed light on the susceptibility of deep learning-based models to adversarial attacks, leading to erroneous predictions. This study investigates the vulnerability of a deep COVID-19 diagnosis model to the Fast Gradient Sign Method (FGSM) adversarial attack. Leveraging transfer learning of EfficientNet-B2 on a publicly available dataset, a deep learning-based COVID-19 diagnosis model is developed, achieving an impressive average accuracy of 94.56% on clean test data. However, when subjected to an untargeted FGSM attack with varying epsilon values, the model’s accuracy is severely compromised, plummeting to 21.72% at epsilon 0.008. Notably, the attack successfully misclassifies adversarial COVID-19 images as normal with 100% confidence. This study underscores the critical need for further research and development to address these vulnerabilities and ensure the reliability and accuracy of deep learning models in the diagnosis of COVID-19 patients. | 10.1007/s41870-023-01538-7 | white-box inference attack: compromising the security of deep learning-based covid-19 diagnosis systems | the covid-19 pandemic has necessitated the exploration of innovative diagnostic approaches, including the utilization of machine learning (ml) and deep learning (dl) technologies. however, recent findings shed light on the susceptibility of deep learning-based models to adversarial attacks, leading to erroneous predictions. this study investigates the vulnerability of a deep covid-19 diagnosis model to the fast gradient sign method (fgsm) adversarial attack. leveraging transfer learning of efficientnet-b2 on a publicly available dataset, a deep learning-based covid-19 diagnosis model is developed, achieving an impressive average accuracy of 94.56% on clean test data. 
however, when subjected to an untargeted fgsm attack with varying epsilon values, the model’s accuracy is severely compromised, plummeting to 21.72% at epsilon 0.008. notably, the attack successfully misclassifies adversarial covid-19 images as normal with 100% confidence. this study underscores the critical need for further research and development to address these vulnerabilities and ensure the reliability and accuracy of deep learning models in the diagnosis of covid-19 patients. | [
"the exploration",
"innovative diagnostic approaches",
"the utilization",
"machine learning",
"ml",
"deep learning",
"(dl) technologies",
"recent findings",
"light",
"the susceptibility",
"deep learning-based models",
"adversarial attacks",
"erroneous predictions",
"this study",
"the vulnerability",
"a deep covid-19 diagnosis model",
"the fast gradient sign method",
"fgsm) adversarial attack",
"transfer learning",
"efficientnet-b2",
"a publicly available dataset",
"a deep learning-based covid-19 diagnosis model",
"an impressive average accuracy",
"94.56%",
"clean test data",
"an untargeted fgsm attack",
"varying epsilon values",
"the model’s accuracy",
"21.72%",
"epsilon",
"the attack",
"adversarial covid-19 images",
"100% confidence",
"this study",
"the critical need",
"further research",
"development",
"these vulnerabilities",
"the reliability",
"accuracy",
"deep learning models",
"the diagnosis",
"covid-19 patients",
"covid-19",
"covid-19",
"covid-19",
"94.56%",
"21.72%",
"0.008",
"covid-19",
"100%",
"covid-19"
] |
Prediction of intraoperative hypotension using deep learning models based on non-invasive monitoring devices | [
"Heejoon Jeong",
"Donghee Kim",
"Dong Won Kim",
"Seungho Baek",
"Hyung-Chul Lee",
"Yusung Kim",
"Hyun Joo Ahn"
] | PurposeIntraoperative hypotension is associated with adverse outcomes. Predicting and proactively managing hypotension can reduce its incidence. Previously, hypotension prediction algorithms using artificial intelligence were developed for invasive arterial blood pressure monitors. This study tested whether routine non-invasive monitors could also predict intraoperative hypotension using deep learning algorithms.MethodsAn open-source database of non-cardiac surgery patients (https://vitadb.net/dataset) was used to develop the deep learning algorithm. The algorithm was validated using external data obtained from a tertiary Korean hospital. Intraoperative hypotension was defined as a systolic blood pressure less than 90 mmHg. The input data included five monitors: non-invasive blood pressure, electrocardiography, photoplethysmography, capnography, and bispectral index. The primary outcome was the performance of the deep learning model as assessed by the area under the receiver operating characteristic curve (AUROC).ResultsData from 4754 and 421 patients were used for algorithm development and external validation, respectively. The fully connected model of Multi-head Attention architecture and the Globally Attentive Locally Recurrent model with Focal Loss function were able to predict intraoperative hypotension 5 min before its occurrence. The AUROC of the algorithm was 0.917 (95% confidence interval [CI], 0.915–0.918) for the original data and 0.833 (95% CI, 0.830–0.836) for the external validation data. Attention map, which quantified the contributions of each monitor, showed that our algorithm utilized data from each monitor with weights ranging from 8 to 22% for determining hypotension.ConclusionsA deep learning model utilizing multi-channel non-invasive monitors could predict intraoperative hypotension with high accuracy. 
Future prospective studies are needed to determine whether this model can assist clinicians in preventing hypotension in patients undergoing surgery with non-invasive monitoring. | 10.1007/s10877-024-01206-6 | prediction of intraoperative hypotension using deep learning models based on non-invasive monitoring devices | purposeintraoperative hypotension is associated with adverse outcomes. predicting and proactively managing hypotension can reduce its incidence. previously, hypotension prediction algorithms using artificial intelligence were developed for invasive arterial blood pressure monitors. this study tested whether routine non-invasive monitors could also predict intraoperative hypotension using deep learning algorithms.methodsan open-source database of non-cardiac surgery patients (https://vitadb.net/dataset) was used to develop the deep learning algorithm. the algorithm was validated using external data obtained from a tertiary korean hospital. intraoperative hypotension was defined as a systolic blood pressure less than 90 mmhg. the input data included five monitors: non-invasive blood pressure, electrocardiography, photoplethysmography, capnography, and bispectral index. the primary outcome was the performance of the deep learning model as assessed by the area under the receiver operating characteristic curve (auroc).resultsdata from 4754 and 421 patients were used for algorithm development and external validation, respectively. the fully connected model of multi-head attention architecture and the globally attentive locally recurrent model with focal loss function were able to predict intraoperative hypotension 5 min before its occurrence. the auroc of the algorithm was 0.917 (95% confidence interval [ci], 0.915–0.918) for the original data and 0.833 (95% ci, 0.830–0.836) for the external validation data. 
attention map, which quantified the contributions of each monitor, showed that our algorithm utilized data from each monitor with weights ranging from 8 to 22% for determining hypotension.conclusionsa deep learning model utilizing multi-channel non-invasive monitors could predict intraoperative hypotension with high accuracy. future prospective studies are needed to determine whether this model can assist clinicians in preventing hypotension in patients undergoing surgery with non-invasive monitoring. | [
"purposeintraoperative hypotension",
"adverse outcomes",
"hypotension",
"its incidence",
"hypotension prediction algorithms",
"artificial intelligence",
"invasive arterial blood pressure monitors",
"this study",
"routine non-invasive monitors",
"intraoperative hypotension",
"algorithms.methodsan open-source database",
"non-cardiac surgery patients",
"https://vitadb.net/dataset",
"the deep learning algorithm",
"the algorithm",
"external data",
"a tertiary korean hospital",
"intraoperative hypotension",
"a systolic blood pressure",
"less than 90 mmhg",
"the input data",
"five monitors",
"non-invasive blood pressure",
"electrocardiography",
"photoplethysmography",
"capnography",
"bispectral index",
"the primary outcome",
"the performance",
"the deep learning model",
"the area",
"the receiver operating characteristic curve",
"auroc).resultsdata",
"421 patients",
"algorithm development",
"external validation",
"the fully connected model",
"multi-head attention architecture",
"the globally attentive locally recurrent model",
"focal loss function",
"intraoperative hypotension 5 min",
"its occurrence",
"the auroc",
"the algorithm",
"95% confidence interval",
"the original data",
"(95% ci",
"the external validation data",
"attention map",
"which",
"the contributions",
"each monitor",
"our algorithm utilized data",
"each monitor",
"weights",
"8 to 22%",
"hypotension.conclusionsa deep learning model",
"multi-channel non-invasive monitors",
"intraoperative hypotension",
"high accuracy",
"future prospective studies",
"this model",
"clinicians",
"hypotension",
"patients",
"surgery",
"non-invasive monitoring",
"algorithms.methodsan",
"tertiary",
"korean",
"less than",
"five",
"4754",
"421",
"5",
"0.917",
"95%",
"0.915–0.918",
"0.833",
"95%",
"8 to 22%",
"hypotension.conclusionsa",
"clinicians"
] |
Deep learning nomogram for predicting neoadjuvant chemotherapy response in locally advanced gastric cancer patients | [
"Jingjing Zhang",
"Qiang Zhang",
"Bo Zhao",
"Gaofeng Shi"
] | PurposeDeveloped and validated a deep learning radiomics nomogram using multi-phase contrast-enhanced computed tomography (CECT) images to predict neoadjuvant chemotherapy (NAC) response in locally advanced gastric cancer (LAGC) patients.MethodsThis multi-center study retrospectively included 322 patients diagnosed with gastric cancer from January 2013 to June 2023 at two hospitals. Handcrafted radiomics technique and the EfficientNet V2 neural network were applied to arterial, portal venous, and delayed phase CT images to extract two-dimensional handcrafted and deep learning features. A nomogram model was built by integrating the handcrafted signature, the deep learning signature, with clinical features. Discriminative ability was assessed using the receiver operating characteristics (ROC) curve and the precision-recall (P-R) curve. Model fitting was evaluated using calibration curves, and clinical utility was assessed through decision curve analysis (DCA).ResultsThe nomogram exhibited excellent performance. The area under the ROC curve (AUC) was 0.848 [95% confidence interval (CI), 0.793–0.893)], 0.802 (95% CI 0.688–0.889), and 0.751 (95% CI 0.652–0.833) for the training, internal validation, and external validation sets, respectively. The AUCs of the P-R curves were 0.838 (95% CI 0.756–0.895), 0.541 (95% CI 0.329–0.740), and 0.556 (95% CI 0.376–0.722) for the corresponding sets. The nomogram outperformed the clinical model and handcrafted signature across all sets (all P < 0.05). 
The nomogram model demonstrated good calibration and provided greater net benefit within the relevant threshold range compared to other models.ConclusionThis study created a deep learning nomogram using CECT images and clinical data to predict NAC response in LAGC patients undergoing surgical resection, offering personalized treatment insights.Graphical abstract | 10.1007/s00261-024-04331-7 | deep learning nomogram for predicting neoadjuvant chemotherapy response in locally advanced gastric cancer patients | purposedeveloped and validated a deep learning radiomics nomogram using multi-phase contrast-enhanced computed tomography (cect) images to predict neoadjuvant chemotherapy (nac) response in locally advanced gastric cancer (lagc) patients.methodsthis multi-center study retrospectively included 322 patients diagnosed with gastric cancer from january 2013 to june 2023 at two hospitals. handcrafted radiomics technique and the efficientnet v2 neural network were applied to arterial, portal venous, and delayed phase ct images to extract two-dimensional handcrafted and deep learning features. a nomogram model was built by integrating the handcrafted signature, the deep learning signature, with clinical features. discriminative ability was assessed using the receiver operating characteristics (roc) curve and the precision-recall (p-r) curve. model fitting was evaluated using calibration curves, and clinical utility was assessed through decision curve analysis (dca).resultsthe nomogram exhibited excellent performance. the area under the roc curve (auc) was 0.848 [95% confidence interval (ci), 0.793–0.893)], 0.802 (95% ci 0.688–0.889), and 0.751 (95% ci 0.652–0.833) for the training, internal validation, and external validation sets, respectively. the aucs of the p-r curves were 0.838 (95% ci 0.756–0.895), 0.541 (95% ci 0.329–0.740), and 0.556 (95% ci 0.376–0.722) for the corresponding sets. 
the nomogram outperformed the clinical model and handcrafted signature across all sets (all p < 0.05). the nomogram model demonstrated good calibration and provided greater net benefit within the relevant threshold range compared to other models.conclusionthis study created a deep learning nomogram using cect images and clinical data to predict nac response in lagc patients undergoing surgical resection, offering personalized treatment insights.graphical abstract | [
"a deep learning radiomics nomogram",
"multi-phase contrast-enhanced computed tomography",
"cect",
"neoadjuvant chemotherapy (nac) response",
"locally advanced gastric cancer",
"lagc",
"patients.methodsthis multi-center study",
"322 patients",
"gastric cancer",
"january",
"june",
"two hospitals",
"handcrafted radiomics technique",
"the efficientnet v2 neural network",
"arterial, portal venous, and delayed phase ct images",
"two-dimensional handcrafted and deep learning features",
"a nomogram model",
"the handcrafted signature",
"the deep learning signature",
"clinical features",
"discriminative ability",
"the receiver operating characteristics",
"(roc) curve",
"the precision-recall (p-r) curve",
"model fitting",
"calibration curves",
"clinical utility",
"decision curve analysis",
"dca).resultsthe nomogram",
"excellent performance",
"the area",
"the roc curve",
"auc",
"[95% confidence interval",
"ci",
"0.793–0.893",
"0.802 (95%",
"ci 0.688–0.889",
"(95%",
"ci",
"the training",
"internal validation",
"external validation sets",
"the aucs",
"the p-r curves",
"0.756–0.895",
"(95%",
"ci 0.329–0.740",
"ci 0.376–0.722",
"the corresponding sets",
"the nomogram",
"the clinical model",
"signature",
"all sets",
"all p",
"the nomogram model",
"good calibration",
"greater net benefit",
"the relevant threshold range",
"other models.conclusionthis study",
"a deep learning nomogram",
"cect images",
"clinical data",
"nac response",
"lagc patients",
"surgical resection",
"personalized treatment insights.graphical abstract",
"322",
"january 2013 to june 2023",
"two",
"two",
"roc",
"roc",
"0.848",
"95%",
"0.802",
"95%",
"0.688–0.889",
"0.751",
"95%",
"0.652–0.833",
"0.838",
"95%",
"0.541",
"95%",
"0.329–0.740",
"0.556",
"95%"
] |
Deep learning in cancer genomics and histopathology | [
"Michaela Unger",
"Jakob Nikolas Kather"
] | Histopathology and genomic profiling are cornerstones of precision oncology and are routinely obtained for patients with cancer. Traditionally, histopathology slides are manually reviewed by highly trained pathologists. Genomic data, on the other hand, is evaluated by engineered computational pipelines. In both applications, the advent of modern artificial intelligence methods, specifically machine learning (ML) and deep learning (DL), have opened up a fundamentally new way of extracting actionable insights from raw data, which could augment and potentially replace some aspects of traditional evaluation workflows. In this review, we summarize current and emerging applications of DL in histopathology and genomics, including basic diagnostic as well as advanced prognostic tasks. Based on a growing body of evidence, we suggest that DL could be the groundwork for a new kind of workflow in oncology and cancer research. However, we also point out that DL models can have biases and other flaws that users in healthcare and research need to know about, and we propose ways to address them. | 10.1186/s13073-024-01315-6 | deep learning in cancer genomics and histopathology | histopathology and genomic profiling are cornerstones of precision oncology and are routinely obtained for patients with cancer. traditionally, histopathology slides are manually reviewed by highly trained pathologists. genomic data, on the other hand, is evaluated by engineered computational pipelines. in both applications, the advent of modern artificial intelligence methods, specifically machine learning (ml) and deep learning (dl), have opened up a fundamentally new way of extracting actionable insights from raw data, which could augment and potentially replace some aspects of traditional evaluation workflows. in this review, we summarize current and emerging applications of dl in histopathology and genomics, including basic diagnostic as well as advanced prognostic tasks. 
based on a growing body of evidence, we suggest that dl could be the groundwork for a new kind of workflow in oncology and cancer research. however, we also point out that dl models can have biases and other flaws that users in healthcare and research need to know about, and we propose ways to address them. | [
"histopathology",
"genomic profiling",
"cornerstones",
"precision oncology",
"patients",
"cancer",
"histopathology slides",
"highly trained pathologists",
"genomic data",
"the other hand",
"engineered computational pipelines",
"both applications",
"the advent",
"modern artificial intelligence methods",
"specifically machine learning",
"deep learning",
"dl",
"a fundamentally new way",
"actionable insights",
"raw data",
"which",
"some aspects",
"traditional evaluation workflows",
"this review",
"we",
"current and emerging applications",
"dl",
"histopathology",
"genomics",
"basic diagnostic as well as advanced prognostic tasks",
"a growing body",
"evidence",
"we",
"dl",
"the groundwork",
"a new kind",
"workflow",
"oncology and cancer research",
"we",
"dl models",
"biases",
"other flaws",
"that",
"users",
"healthcare",
"research",
"we",
"ways",
"them"
] |
Phase unwrapping based on deep learning in light field fringe projection 3D measurement | [
"Xinjun Zhu",
"Haichuan Zhao",
"Mengkai Yuan",
"Zhizhi Zhang",
"Hongyi Wang",
"Limei Song"
] | Phase unwrapping is one of the key roles in fringe projection three-dimensional (3D) measurement technology. We propose a new method to achieve phase unwrapping in camera array light filed fringe projection 3D measurement based on deep learning. A multi-stream convolutional neural network (CNN) is proposed to learn the mapping relationship between camera array light filed wrapped phases and fringe orders of the expected central view, and is used to predict the fringe order to achieve the phase unwrapping. Experiments are performed on the light field fringe projection data generated by the simulated camera array fringe projection measurement system in Blender and by the experimental 3×3 camera array light field fringe projection system. The performance of the proposed network with light field wrapped phases using multiple directions as network input data is studied, and the advantages of phase unwrapping based on deep learning in light filed fringe projection are demonstrated. | 10.1007/s11801-023-3002-4 | phase unwrapping based on deep learning in light field fringe projection 3d measurement | phase unwrapping is one of the key roles in fringe projection three-dimensional (3d) measurement technology. we propose a new method to achieve phase unwrapping in camera array light filed fringe projection 3d measurement based on deep learning. a multi-stream convolutional neural network (cnn) is proposed to learn the mapping relationship between camera array light filed wrapped phases and fringe orders of the expected central view, and is used to predict the fringe order to achieve the phase unwrapping. experiments are performed on the light field fringe projection data generated by the simulated camera array fringe projection measurement system in blender and by the experimental 3×3 camera array light field fringe projection system. 
the performance of the proposed network with light field wrapped phases using multiple directions as network input data is studied, and the advantages of phase unwrapping based on deep learning in light filed fringe projection are demonstrated. | [
"phase",
"the key roles",
"fringe projection three-dimensional (3d) measurement technology",
"we",
"a new method",
"phase",
"camera array light",
"fringe projection 3d measurement",
"deep learning",
"a multi-stream convolutional neural network",
"cnn",
"the mapping relationship",
"camera array light",
"phases",
"fringe orders",
"the expected central view",
"the fringe order",
"the phase",
"experiments",
"the light field fringe projection data",
"the simulated camera array fringe projection measurement system",
"blender",
"the experimental 3×3 camera array light field fringe projection system",
"the performance",
"the proposed network",
"light field",
"phases",
"multiple directions",
"network input data",
"the advantages",
"phase",
"deep learning",
"light filed fringe projection",
"three",
"3d",
"3d",
"cnn",
"3×3"
] |
Radiological age assessment based on clavicle ossification in CT: enhanced accuracy through deep learning | [
"Philipp Wesp",
"Balthasar Maria Schachtner",
"Katharina Jeblick",
"Johanna Topalis",
"Marvin Weber",
"Florian Fischer",
"Randolph Penning",
"Jens Ricke",
"Michael Ingrisch",
"Bastian Oliver Sabel"
] | BackgroundRadiological age assessment using reference studies is inherently limited in accuracy due to a finite number of assignable skeletal maturation stages. To overcome this limitation, we present a deep learning approach for continuous age assessment based on clavicle ossification in computed tomography (CT).MethodsThoracic CT scans were retrospectively collected from the picture archiving and communication system. Individuals aged 15.0 to 30.0 years examined in routine clinical practice were included. All scans were automatically cropped around the medial clavicular epiphyseal cartilages. A deep learning model was trained to predict a person’s chronological age based on these scans. Performance was evaluated using mean absolute error (MAE). Model performance was compared to an optimistic human reader performance estimate for an established reference study method.ResultsThe deep learning model was trained on 4,400 scans of 1,935 patients (training set: mean age = 24.2 years ± 4.0, 1132 female) and evaluated on 300 scans of 300 patients with a balanced age and sex distribution (test set: mean age = 22.5 years ± 4.4, 150 female). Model MAE was 1.65 years, and the highest absolute error was 6.40 years for females and 7.32 years for males. However, performance could be attributed to norm-variants or pathologic disorders. Human reader estimate MAE was 1.84 years and the highest absolute error was 3.40 years for females and 3.78 years for males.ConclusionsWe present a deep learning approach for continuous age predictions using CT volumes highlighting the medial clavicular epiphyseal cartilage with performance comparable to the human reader estimate. | 10.1007/s00414-024-03167-6 | radiological age assessment based on clavicle ossification in ct: enhanced accuracy through deep learning | backgroundradiological age assessment using reference studies is inherently limited in accuracy due to a finite number of assignable skeletal maturation stages. 
to overcome this limitation, we present a deep learning approach for continuous age assessment based on clavicle ossification in computed tomography (ct).methodsthoracic ct scans were retrospectively collected from the picture archiving and communication system. individuals aged 15.0 to 30.0 years examined in routine clinical practice were included. all scans were automatically cropped around the medial clavicular epiphyseal cartilages. a deep learning model was trained to predict a person’s chronological age based on these scans. performance was evaluated using mean absolute error (mae). model performance was compared to an optimistic human reader performance estimate for an established reference study method.resultsthe deep learning model was trained on 4,400 scans of 1,935 patients (training set: mean age = 24.2 years ± 4.0, 1132 female) and evaluated on 300 scans of 300 patients with a balanced age and sex distribution (test set: mean age = 22.5 years ± 4.4, 150 female). model mae was 1.65 years, and the highest absolute error was 6.40 years for females and 7.32 years for males. however, performance could be attributed to norm-variants or pathologic disorders. human reader estimate mae was 1.84 years and the highest absolute error was 3.40 years for females and 3.78 years for males.conclusionswe present a deep learning approach for continuous age predictions using ct volumes highlighting the medial clavicular epiphyseal cartilage with performance comparable to the human reader estimate. | [
"backgroundradiological age assessment",
"reference studies",
"accuracy",
"a finite number",
"assignable skeletal maturation stages",
"this limitation",
"we",
"a deep learning approach",
"continuous age assessment",
"clavicle ossification",
"computed tomography",
"(ct).methodsthoracic ct scans",
"the picture archiving and communication system",
"individuals",
"15.0 to 30.0 years",
"routine clinical practice",
"all scans",
"the medial clavicular epiphyseal cartilages",
"a deep learning model",
"a person’s chronological age",
"these scans",
"performance",
"mean absolute error",
"mae",
"model performance",
"an optimistic human reader performance estimate",
"an established reference study",
"method.resultsthe deep learning model",
"4,400 scans",
"1,935 patients",
"training set",
"mean age",
"±",
"1132 female",
"300 scans",
"300 patients",
"a balanced age",
"sex distribution",
"mean age",
"150 female",
"model mae",
"1.65 years",
"the highest absolute error",
"6.40 years",
"females",
"7.32 years",
"males",
"performance",
"norm-variants",
"pathologic disorders",
"human reader",
"mae",
"1.84 years",
"the highest absolute error",
"3.40 years",
"females",
"3.78 years",
"a deep learning approach",
"continuous age predictions",
"ct volumes",
"the medial clavicular epiphyseal cartilage",
"performance",
"the human reader estimate",
"15.0 to 30.0 years",
"4,400",
"1,935",
"24.2 years",
"4.0, 1132",
"300",
"300",
"22.5 years",
"4.4",
"150",
"mae",
"1.65 years",
"6.40 years",
"7.32 years",
"1.84 years",
"3.40 years",
"3.78 years"
] |
Development and application of a deep learning-based comprehensive early diagnostic model for chronic obstructive pulmonary disease | [
"Zecheng Zhu",
"Shunjin Zhao",
"Jiahui Li",
"Yuting Wang",
"Luopiao Xu",
"Yubing Jia",
"Zihan Li",
"Wenyuan Li",
"Gang Chen",
"Xifeng Wu"
] | BackgroundChronic obstructive pulmonary disease (COPD) is a frequently diagnosed yet treatable condition, provided it is identified early and managed effectively. This study aims to develop an advanced COPD diagnostic model by integrating deep learning and radiomics features.MethodsWe utilized a dataset comprising CT images from 2,983 participants, of which 2,317 participants also provided epidemiological data through questionnaires. Deep learning features were extracted using a Variational Autoencoder, and radiomics features were obtained using the PyRadiomics package. Multi-Layer Perceptrons were used to construct models based on deep learning and radiomics features independently, as well as a fusion model integrating both. Subsequently, epidemiological questionnaire data were incorporated to establish a more comprehensive model. The diagnostic performance of standalone models, the fusion model and the comprehensive model was evaluated and compared using metrics including accuracy, precision, recall, F1-score, Brier score, receiver operating characteristic curves, and area under the curve (AUC).ResultsThe fusion model exhibited outstanding performance with an AUC of 0.952, surpassing the standalone models based solely on deep learning features (AUC = 0.844) or radiomics features (AUC = 0.944). Notably, the comprehensive model, incorporating deep learning features, radiomics features, and questionnaire variables demonstrated the highest diagnostic performance among all models, yielding an AUC of 0.971.ConclusionWe developed and implemented a data fusion strategy to construct a state-of-the-art COPD diagnostic model integrating deep learning features, radiomics features, and questionnaire variables. Our data fusion strategy proved effective, and the model can be easily deployed in clinical settings.Trial registrationNot applicable. This study is NOT a clinical trial, it does not report the results of a health care intervention on human participants. 
| 10.1186/s12931-024-02793-3 | development and application of a deep learning-based comprehensive early diagnostic model for chronic obstructive pulmonary disease | backgroundchronic obstructive pulmonary disease (copd) is a frequently diagnosed yet treatable condition, provided it is identified early and managed effectively. this study aims to develop an advanced copd diagnostic model by integrating deep learning and radiomics features.methodswe utilized a dataset comprising ct images from 2,983 participants, of which 2,317 participants also provided epidemiological data through questionnaires. deep learning features were extracted using a variational autoencoder, and radiomics features were obtained using the pyradiomics package. multi-layer perceptrons were used to construct models based on deep learning and radiomics features independently, as well as a fusion model integrating both. subsequently, epidemiological questionnaire data were incorporated to establish a more comprehensive model. the diagnostic performance of standalone models, the fusion model and the comprehensive model was evaluated and compared using metrics including accuracy, precision, recall, f1-score, brier score, receiver operating characteristic curves, and area under the curve (auc).resultsthe fusion model exhibited outstanding performance with an auc of 0.952, surpassing the standalone models based solely on deep learning features (auc = 0.844) or radiomics features (auc = 0.944). notably, the comprehensive model, incorporating deep learning features, radiomics features, and questionnaire variables demonstrated the highest diagnostic performance among all models, yielding an auc of 0.971.conclusionwe developed and implemented a data fusion strategy to construct a state-of-the-art copd diagnostic model integrating deep learning features, radiomics features, and questionnaire variables. 
our data fusion strategy proved effective, and the model can be easily deployed in clinical settings. trial registration: not applicable. this study is not a clinical trial; it does not report the results of a health care intervention on human participants. | [
"backgroundchronic obstructive pulmonary disease",
"copd",
"a frequently diagnosed yet treatable condition",
"it",
"this study",
"an advanced copd diagnostic model",
"deep learning and radiomics features.methodswe",
"a dataset",
"ct images",
"2,983 participants",
"which",
"2,317 participants",
"epidemiological data",
"questionnaires",
"deep learning features",
"a variational autoencoder",
"radiomics features",
"the pyradiomics package",
"multi-layer perceptrons",
"models",
"deep learning",
"radiomics",
"a fusion model",
"both",
"epidemiological questionnaire data",
"a more comprehensive model",
"the diagnostic performance",
"standalone models",
"the fusion model",
"the comprehensive model",
"metrics",
"accuracy",
"precision",
"recall, f1-score",
"brier score",
"the curve",
"auc).resultsthe fusion model",
"outstanding performance",
"an auc",
"the standalone models",
"deep learning features",
"auc",
"radiomics features",
"auc =",
"the comprehensive model",
"deep learning features",
"radiomics features",
"questionnaire variables",
"the highest diagnostic performance",
"all models",
"an auc",
"0.971.conclusionwe",
"a data fusion strategy",
"the-art",
"deep learning features",
"radiomics features",
"questionnaire variables",
"our data fusion strategy",
"the model",
"clinical settings.trial registrationnot",
"this study",
"a clinical trial",
"it",
"the results",
"a health care intervention",
"human participants",
"2,983",
"2,317",
"0.952",
"0.844",
"0.944",
"0.971.conclusionwe"
] |
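The fusion step described in the record above — concatenating per-patient deep-learning (VAE) and radiomics feature vectors before a Multi-Layer Perceptron — can be sketched in a few lines. This is an illustrative numpy sketch, not the authors' code: the feature dimensions, patient count, and the tiny randomly initialized MLP are all assumptions.

```python
import numpy as np

def fuse_features(deep_feats, radiomics_feats):
    """Concatenate per-patient deep-learning and radiomics feature vectors."""
    return np.concatenate([deep_feats, radiomics_feats], axis=1)

def mlp_forward(x, w1, b1, w2, b2):
    """Minimal two-layer perceptron: ReLU hidden layer, sigmoid output."""
    h = np.maximum(0.0, x @ w1 + b1)               # hidden activations
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))    # probability in (0, 1)

rng = np.random.default_rng(0)
deep = rng.normal(size=(4, 64))    # stand-in for VAE latent features, 4 patients
radio = rng.normal(size=(4, 107))  # stand-in for PyRadiomics features
x = fuse_features(deep, radio)     # fused vector, shape (4, 171)
p = mlp_forward(x,
                rng.normal(size=(171, 16)) * 0.1, np.zeros(16),
                rng.normal(size=(16, 1)) * 0.1, np.zeros(1))
```

In practice the MLP weights would be learned (e.g., with PyTorch or scikit-learn) and the features standardized before fusion; this only shows the data-fusion shape.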
Deep learning-based phenotype imputation on population-scale biobank data increases genetic discoveries | [
"Ulzee An",
"Ali Pazokitoroudi",
"Marcus Alvarez",
"Lianyun Huang",
"Silviu Bacanu",
"Andrew J. Schork",
"Kenneth Kendler",
"Päivi Pajukanta",
"Jonathan Flint",
"Noah Zaitlen",
"Na Cai",
"Andy Dahl",
"Sriram Sankararaman"
] | Biobanks that collect deep phenotypic and genomic data across many individuals have emerged as a key resource in human genetics. However, phenotypes in biobanks are often missing across many individuals, limiting their utility. We propose AutoComplete, a deep learning-based imputation method to impute or ‘fill-in’ missing phenotypes in population-scale biobank datasets. When applied to collections of phenotypes measured across ~300,000 individuals from the UK Biobank, AutoComplete substantially improved imputation accuracy over existing methods. On three traits with notable amounts of missingness, we show that AutoComplete yields imputed phenotypes that are genetically similar to the originally observed phenotypes while increasing the effective sample size by about twofold on average. Further, genome-wide association analyses on the resulting imputed phenotypes led to a substantial increase in the number of associated loci. Our results demonstrate the utility of deep learning-based phenotype imputation to increase power for genetic discoveries in existing biobank datasets. | 10.1038/s41588-023-01558-w | deep learning-based phenotype imputation on population-scale biobank data increases genetic discoveries | biobanks that collect deep phenotypic and genomic data across many individuals have emerged as a key resource in human genetics. however, phenotypes in biobanks are often missing across many individuals, limiting their utility. we propose autocomplete, a deep learning-based imputation method to impute or ‘fill-in’ missing phenotypes in population-scale biobank datasets. when applied to collections of phenotypes measured across ~300,000 individuals from the uk biobank, autocomplete substantially improved imputation accuracy over existing methods. 
on three traits with notable amounts of missingness, we show that autocomplete yields imputed phenotypes that are genetically similar to the originally observed phenotypes while increasing the effective sample size by about twofold on average. further, genome-wide association analyses on the resulting imputed phenotypes led to a substantial increase in the number of associated loci. our results demonstrate the utility of deep learning-based phenotype imputation to increase power for genetic discoveries in existing biobank datasets. | [
"that",
"deep phenotypic and genomic data",
"many individuals",
"a key resource",
"human genetics",
"phenotypes",
"biobanks",
"many individuals",
"their utility",
"we",
"a deep learning-based imputation method",
"population-scale biobank datasets",
"collections",
"phenotypes",
"individuals",
"the uk biobank",
"existing methods",
"three traits",
"notable amounts",
"missingness",
"we",
"autocomplete yields",
"phenotypes",
"that",
"the originally observed phenotypes",
"the effective sample size",
"genome-wide association",
"the resulting imputed phenotypes",
"a substantial increase",
"the number",
"associated loci",
"our results",
"the utility",
"deep learning-based phenotype imputation",
"power",
"genetic discoveries",
"existing biobank datasets",
"uk",
"three",
"about twofold"
] |
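The masked-reconstruction idea behind autoencoder-style phenotype imputation, as in the record above, can be illustrated without the actual AutoComplete model. In this sketch a trivial column-mean predictor stands in for the trained network; the phenotype matrix, masking fraction, and seed are arbitrary assumptions.

```python
import numpy as np

def mask_entries(x, frac, rng):
    """Randomly hide a fraction of entries (set to NaN), mimicking missingness."""
    mask = rng.random(x.shape) < frac
    x_masked = x.copy()
    x_masked[mask] = np.nan
    return x_masked, mask

def impute_column_means(x):
    """Stand-in imputer: fill NaNs with each phenotype's observed mean."""
    col_means = np.nanmean(x, axis=0)
    return np.where(np.isnan(x), col_means, x)

rng = np.random.default_rng(1)
pheno = rng.normal(loc=[0.0, 5.0, -2.0], scale=1.0, size=(1000, 3))
masked, mask = mask_entries(pheno, 0.2, rng)   # ~20% of entries hidden
imputed = impute_column_means(masked)          # every entry filled in
```

A real imputation model replaces `impute_column_means` with a network trained to reconstruct the masked entries; the masking/fill-in scaffolding stays the same.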
Automatic early detection of rice leaf diseases using hybrid deep learning and machine learning methods | [
"Vikram Rajpoot",
"Akhilesh Tiwari",
"Anand Singh Jalal"
] | Plant leaf disease detection is critical for long-term agricultural viability. Numerous Artificial Intelligence (AI) and Machine Learning (ML) technologies have been implemented for detecting rice diseases. However, such methods failed to identify or have slow recognition causing severe output loss. Therefore, an advanced and precise detection method has become necessary to overcome this issue. This study analyzes plant diseases that affect rice, comprising three different forms of diseases. Bacterial leaf blight, Brown spot, and Leaf smut are three of the six diseases that can affect rice plants. In the proposed approach a VGG-16 transfer learning with Faster R-CNN deep architecture is used to extract features. After completing the transfer learning step, the gathered characteristics are categorized using the random forest method. The random forest classifier divided the radish field into three distinct regions. The images of rice plant leaves are taken from UCI Machine Learning Repository. The proposed approach obtains an average predicting accuracy of 97.3% for rice disease imagery class prediction. The extensive experiment outcomes demonstrate the suggested technique’s validity, so it effectively detects rice diseases. | 10.1007/s11042-023-14969-y | automatic early detection of rice leaf diseases using hybrid deep learning and machine learning methods | plant leaf disease detection is critical for long-term agricultural viability. numerous artificial intelligence (ai) and machine learning (ml) technologies have been implemented for detecting rice diseases. however, such methods failed to identify or have slow recognition causing severe output loss. therefore, an advanced and precise detection method has become necessary to overcome this issue. this study analyzes plant diseases that affect rice, comprising three different forms of diseases. bacterial leaf blight, brown spot, and leaf smut are three of the six diseases that can affect rice plants. 
in the proposed approach a vgg-16 transfer learning with faster r-cnn deep architecture is used to extract features. after completing the transfer learning step, the gathered characteristics are categorized using the random forest method. the random forest classifier divided the radish field into three distinct regions. the images of rice plant leaves are taken from uci machine learning repository. the proposed approach obtains an average predicting accuracy of 97.3% for rice disease imagery class prediction. the extensive experiment outcomes demonstrate the suggested technique’s validity, so it effectively detects rice diseases. | [
"plant leaf disease detection",
"long-term agricultural viability",
"numerous artificial intelligence",
"ai",
"machine learning (ml) technologies",
"rice diseases",
"such methods",
"slow recognition",
"severe output loss",
"an advanced and precise detection method",
"this issue",
"this study",
"plant diseases",
"that",
"rice",
"three different forms",
"diseases",
"bacterial leaf blight",
"brown spot",
"leaf smut",
"the six diseases",
"that",
"rice plants",
"the proposed approach",
"a vgg-16 transfer",
"faster r-cnn deep architecture",
"features",
"the transfer learning step",
"the gathered characteristics",
"the random forest method",
"the random forest classifier",
"the radish field",
"three distinct regions",
"the images",
"rice plant leaves",
"uci machine learning repository",
"the proposed approach",
"an average predicting accuracy",
"97.3%",
"rice disease imagery class prediction",
"the extensive experiment outcomes",
"the suggested technique’s validity",
"it",
"rice diseases",
"three",
"three",
"six",
"three",
"uci",
"97.3%"
] |
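The two-stage pipeline in the record above — features from a pretrained backbone fed to a separate classifier — can be sketched without the actual networks. Here a random projection stands in for VGG-16/Faster R-CNN feature extraction and a nearest-centroid classifier stands in for the random forest, so the example needs no extra libraries; all sizes and the three-class toy data are assumptions.

```python
import numpy as np

def extract_features(images, projection):
    """Stand-in for deep feature extraction: flatten and project each image."""
    flat = images.reshape(images.shape[0], -1)
    return flat @ projection

def fit_centroids(feats, labels):
    """One mean feature vector per class (stand-in for the random forest)."""
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(feats, centroids):
    """Assign each sample to the class with the nearest centroid."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(feats - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

rng = np.random.default_rng(2)
labels = np.repeat(np.arange(3), 10)  # e.g. blight / brown spot / leaf smut
images = rng.normal(size=(30, 8, 8)) + 3.0 * labels[:, None, None]
proj = rng.normal(size=(64, 16))
feats = extract_features(images, proj)
pred = predict(feats, fit_centroids(feats, labels))
```

Swapping the centroid classifier for `sklearn.ensemble.RandomForestClassifier` over the same `feats` matrix reproduces the paper's pipeline shape.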
Enhancing Weld Inspection Through Comparative Analysis of Traditional Algorithms and Deep Learning Approaches | [
"Baoxin Zhang",
"Xiaopeng Wang",
"Jinhan Cui",
"Juntao Wu",
"Zhi Xiong",
"Wenpin Zhang",
"Xinghua Yu"
] | Automated inspection is vital in modern industrial manufacturing, optimizing production processes and ensuring product quality. Welding, a widely used joining technique, is susceptible to defects like porosity and cracks, compromising product reliability. Traditional nondestructive testing (NDT) methods suffer from inefficiency and limited accuracy. Many researchers have tried to apply deep learning for defect detection to address these limitations. This study compares traditional algorithms with deep learning methods, specifically evaluating the SwinUNet model for weld segmentation. The model achieves an impressive F1 score of 96.31, surpassing traditional algorithms. Feature analysis utilizing class activation maps confirms the model's robust recognition and generalization capabilities. Additionally, segmentation results for different welding defects were compared among various models, further substantiating the recognition capabilities of SwinUNet. The findings contribute to the automation of weld identification and segmentation, driving industrial production efficiency and enhancing defect detection. | 10.1007/s10921-024-01047-y | enhancing weld inspection through comparative analysis of traditional algorithms and deep learning approaches | automated inspection is vital in modern industrial manufacturing, optimizing production processes and ensuring product quality. welding, a widely used joining technique, is susceptible to defects like porosity and cracks, compromising product reliability. traditional nondestructive testing (ndt) methods suffer from inefficiency and limited accuracy. many researchers have tried to apply deep learning for defect detection to address these limitations. this study compares traditional algorithms with deep learning methods, specifically evaluating the swinunet model for weld segmentation. the model achieves an impressive f1 score of 96.31, surpassing traditional algorithms. 
feature analysis utilizing class activation maps confirms the model's robust recognition and generalization capabilities. additionally, segmentation results for different welding defects were compared among various models, further substantiating the recognition capabilities of swinunet. the findings contribute to the automation of weld identification and segmentation, driving industrial production efficiency and enhancing defect detection. | [
"automated inspection",
"modern industrial manufacturing",
"production processes",
"product quality",
"welding",
"a widely used joining technique",
"defects",
"porosity",
"cracks",
"product reliability",
"traditional nondestructive testing (ndt) methods",
"inefficiency",
"limited accuracy",
"many researchers",
"deep learning",
"defect detection",
"these limitations",
"this study",
"traditional algorithms",
"deep learning methods",
"the swinunet model",
"weld segmentation",
"the model",
"an impressive f1 score",
"traditional algorithms",
"feature analysis",
"class activation maps",
"the model's robust recognition",
"generalization capabilities",
"segmentation results",
"different welding defects",
"various models",
"the recognition capabilities",
"swinunet",
"the findings",
"the automation",
"weld identification",
"segmentation",
"industrial production efficiency",
"defect detection",
"96.31"
] |
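The F1 score reported in the record above, applied pixel-wise to binary segmentation masks, is equivalent to the Dice coefficient. A minimal numpy sketch; the toy masks are illustrative, not from the paper.

```python
import numpy as np

def f1_score(pred, target):
    """Pixel-wise F1 / Dice for binary masks: 2*TP / (2*TP + FP + FN)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return 2 * tp / (2 * tp + fp + fn)

target = np.zeros((8, 8), dtype=bool); target[2:6, 2:6] = True  # 16-px region
pred = np.zeros((8, 8), dtype=bool);   pred[2:6, 3:7] = True    # shifted guess
```

Here `f1_score(pred, target)` is 0.75: the shifted prediction overlaps 12 of 16 target pixels, with 4 false positives and 4 false negatives.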
Deep learning in cancer genomics and histopathology | [
"Michaela Unger",
"Jakob Nikolas Kather"
] | Histopathology and genomic profiling are cornerstones of precision oncology and are routinely obtained for patients with cancer. Traditionally, histopathology slides are manually reviewed by highly trained pathologists. Genomic data, on the other hand, is evaluated by engineered computational pipelines. In both applications, the advent of modern artificial intelligence methods, specifically machine learning (ML) and deep learning (DL), have opened up a fundamentally new way of extracting actionable insights from raw data, which could augment and potentially replace some aspects of traditional evaluation workflows. In this review, we summarize current and emerging applications of DL in histopathology and genomics, including basic diagnostic as well as advanced prognostic tasks. Based on a growing body of evidence, we suggest that DL could be the groundwork for a new kind of workflow in oncology and cancer research. However, we also point out that DL models can have biases and other flaws that users in healthcare and research need to know about, and we propose ways to address them. | 10.1186/s13073-024-01315-6 | deep learning in cancer genomics and histopathology | histopathology and genomic profiling are cornerstones of precision oncology and are routinely obtained for patients with cancer. traditionally, histopathology slides are manually reviewed by highly trained pathologists. genomic data, on the other hand, is evaluated by engineered computational pipelines. in both applications, the advent of modern artificial intelligence methods, specifically machine learning (ml) and deep learning (dl), have opened up a fundamentally new way of extracting actionable insights from raw data, which could augment and potentially replace some aspects of traditional evaluation workflows. in this review, we summarize current and emerging applications of dl in histopathology and genomics, including basic diagnostic as well as advanced prognostic tasks. 
based on a growing body of evidence, we suggest that dl could be the groundwork for a new kind of workflow in oncology and cancer research. however, we also point out that dl models can have biases and other flaws that users in healthcare and research need to know about, and we propose ways to address them. | [
"histopathology",
"genomic profiling",
"cornerstones",
"precision oncology",
"patients",
"cancer",
"histopathology slides",
"highly trained pathologists",
"genomic data",
"the other hand",
"engineered computational pipelines",
"both applications",
"the advent",
"modern artificial intelligence methods",
"specifically machine learning",
"deep learning",
"dl",
"a fundamentally new way",
"actionable insights",
"raw data",
"which",
"some aspects",
"traditional evaluation workflows",
"this review",
"we",
"current and emerging applications",
"dl",
"histopathology",
"genomics",
"basic diagnostic as well as advanced prognostic tasks",
"a growing body",
"evidence",
"we",
"dl",
"the groundwork",
"a new kind",
"workflow",
"oncology and cancer research",
"we",
"dl models",
"biases",
"other flaws",
"that",
"users",
"healthcare",
"research",
"we",
"ways",
"them"
] |
Phase unwrapping based on deep learning in light field fringe projection 3D measurement | [
"Xinjun Zhu",
"Haichuan Zhao",
"Mengkai Yuan",
"Zhizhi Zhang",
"Hongyi Wang",
"Limei Song"
] | Phase unwrapping is one of the key roles in fringe projection three-dimensional (3D) measurement technology. We propose a new method to achieve phase unwrapping in camera array light filed fringe projection 3D measurement based on deep learning. A multi-stream convolutional neural network (CNN) is proposed to learn the mapping relationship between camera array light filed wrapped phases and fringe orders of the expected central view, and is used to predict the fringe order to achieve the phase unwrapping. Experiments are performed on the light field fringe projection data generated by the simulated camera array fringe projection measurement system in Blender and by the experimental 3×3 camera array light field fringe projection system. The performance of the proposed network with light field wrapped phases using multiple directions as network input data is studied, and the advantages of phase unwrapping based on deep learning in light filed fringe projection are demonstrated. | 10.1007/s11801-023-3002-4 | phase unwrapping based on deep learning in light field fringe projection 3d measurement | phase unwrapping is one of the key roles in fringe projection three-dimensional (3d) measurement technology. we propose a new method to achieve phase unwrapping in camera array light filed fringe projection 3d measurement based on deep learning. a multi-stream convolutional neural network (cnn) is proposed to learn the mapping relationship between camera array light filed wrapped phases and fringe orders of the expected central view, and is used to predict the fringe order to achieve the phase unwrapping. experiments are performed on the light field fringe projection data generated by the simulated camera array fringe projection measurement system in blender and by the experimental 3×3 camera array light field fringe projection system. 
the performance of the proposed network with light field wrapped phases using multiple directions as network input data is studied, and the advantages of phase unwrapping based on deep learning in light filed fringe projection are demonstrated. | [
"phase",
"the key roles",
"fringe projection three-dimensional (3d) measurement technology",
"we",
"a new method",
"phase",
"camera array light",
"fringe projection 3d measurement",
"deep learning",
"a multi-stream convolutional neural network",
"cnn",
"the mapping relationship",
"camera array light",
"phases",
"fringe orders",
"the expected central view",
"the fringe order",
"the phase",
"experiments",
"the light field fringe projection data",
"the simulated camera array fringe projection measurement system",
"blender",
"the experimental 3×3 camera array light field fringe projection system",
"the performance",
"the proposed network",
"light field",
"phases",
"multiple directions",
"network input data",
"the advantages",
"phase",
"deep learning",
"light filed fringe projection",
"three",
"3d",
"3d",
"cnn",
"3×3"
] |
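The relationship the network in the record above learns — recovering the fringe order k so that the absolute phase equals the wrapped phase plus 2πk — can be written down directly in one dimension. A toy numpy illustration; the linear phase ramp is an arbitrary example, not measurement data.

```python
import numpy as np

true_phase = np.linspace(0, 6 * np.pi, 200)   # absolute (unwrapped) phase ramp
wrapped = np.angle(np.exp(1j * true_phase))   # wrapped into (-pi, pi]

# Fringe order k per sample: the integer number of 2*pi jumps removed by wrapping.
fringe_order = np.round((true_phase - wrapped) / (2 * np.pi))

# Phase unwrapping: absolute phase = wrapped phase + 2*pi*k.
recovered = wrapped + 2 * np.pi * fringe_order
```

For smooth 1-D signals `np.unwrap(wrapped)` recovers the same result; the learning-based approach in the record predicts k per pixel, which also handles discontinuous scenes where sequential unwrapping fails.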
Radiological age assessment based on clavicle ossification in CT: enhanced accuracy through deep learning | [
"Philipp Wesp",
"Balthasar Maria Schachtner",
"Katharina Jeblick",
"Johanna Topalis",
"Marvin Weber",
"Florian Fischer",
"Randolph Penning",
"Jens Ricke",
"Michael Ingrisch",
"Bastian Oliver Sabel"
] | Background: Radiological age assessment using reference studies is inherently limited in accuracy due to a finite number of assignable skeletal maturation stages. To overcome this limitation, we present a deep learning approach for continuous age assessment based on clavicle ossification in computed tomography (CT). Methods: Thoracic CT scans were retrospectively collected from the picture archiving and communication system. Individuals aged 15.0 to 30.0 years examined in routine clinical practice were included. All scans were automatically cropped around the medial clavicular epiphyseal cartilages. A deep learning model was trained to predict a person’s chronological age based on these scans. Performance was evaluated using mean absolute error (MAE). Model performance was compared to an optimistic human reader performance estimate for an established reference study method. Results: The deep learning model was trained on 4,400 scans of 1,935 patients (training set: mean age = 24.2 years ± 4.0, 1132 female) and evaluated on 300 scans of 300 patients with a balanced age and sex distribution (test set: mean age = 22.5 years ± 4.4, 150 female). Model MAE was 1.65 years, and the highest absolute error was 6.40 years for females and 7.32 years for males. However, performance could be attributed to norm-variants or pathologic disorders. Human reader estimate MAE was 1.84 years and the highest absolute error was 3.40 years for females and 3.78 years for males. Conclusions: We present a deep learning approach for continuous age predictions using CT volumes highlighting the medial clavicular epiphyseal cartilage with performance comparable to the human reader estimate. | 10.1007/s00414-024-03167-6 | radiological age assessment based on clavicle ossification in ct: enhanced accuracy through deep learning | background: radiological age assessment using reference studies is inherently limited in accuracy due to a finite number of assignable skeletal maturation stages. 
to overcome this limitation, we present a deep learning approach for continuous age assessment based on clavicle ossification in computed tomography (ct). methods: thoracic ct scans were retrospectively collected from the picture archiving and communication system. individuals aged 15.0 to 30.0 years examined in routine clinical practice were included. all scans were automatically cropped around the medial clavicular epiphyseal cartilages. a deep learning model was trained to predict a person’s chronological age based on these scans. performance was evaluated using mean absolute error (mae). model performance was compared to an optimistic human reader performance estimate for an established reference study method. results: the deep learning model was trained on 4,400 scans of 1,935 patients (training set: mean age = 24.2 years ± 4.0, 1132 female) and evaluated on 300 scans of 300 patients with a balanced age and sex distribution (test set: mean age = 22.5 years ± 4.4, 150 female). model mae was 1.65 years, and the highest absolute error was 6.40 years for females and 7.32 years for males. however, performance could be attributed to norm-variants or pathologic disorders. human reader estimate mae was 1.84 years and the highest absolute error was 3.40 years for females and 3.78 years for males. conclusions: we present a deep learning approach for continuous age predictions using ct volumes highlighting the medial clavicular epiphyseal cartilage with performance comparable to the human reader estimate. | [
"backgroundradiological age assessment",
"reference studies",
"accuracy",
"a finite number",
"assignable skeletal maturation stages",
"this limitation",
"we",
"a deep learning approach",
"continuous age assessment",
"clavicle ossification",
"computed tomography",
"(ct).methodsthoracic ct scans",
"the picture archiving and communication system",
"individuals",
"15.0 to 30.0 years",
"routine clinical practice",
"all scans",
"the medial clavicular epiphyseal cartilages",
"a deep learning model",
"a person’s chronological age",
"these scans",
"performance",
"mean absolute error",
"mae",
"model performance",
"an optimistic human reader performance estimate",
"an established reference study",
"method.resultsthe deep learning model",
"4,400 scans",
"1,935 patients",
"training set",
"mean age",
"±",
"1132 female",
"300 scans",
"300 patients",
"a balanced age",
"sex distribution",
"mean age",
"150 female",
"model mae",
"1.65 years",
"the highest absolute error",
"6.40 years",
"females",
"7.32 years",
"males",
"performance",
"norm-variants",
"pathologic disorders",
"human reader",
"mae",
"1.84 years",
"the highest absolute error",
"3.40 years",
"females",
"3.78 years",
"a deep learning approach",
"continuous age predictions",
"ct volumes",
"the medial clavicular epiphyseal cartilage",
"performance",
"the human reader estimate",
"15.0 to 30.0 years",
"4,400",
"1,935",
"24.2 years",
"4.0, 1132",
"300",
"300",
"22.5 years",
"4.4",
"150",
"mae",
"1.65 years",
"6.40 years",
"7.32 years",
"1.84 years",
"3.40 years",
"3.78 years"
] |
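The mean absolute error used throughout the record above is the average of the absolute differences between predicted and chronological ages. A minimal sketch with made-up ages, not the study's data:

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """MAE: mean of |true - predicted|, here in years."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

ages = np.array([18.2, 22.5, 27.9, 16.4])   # hypothetical chronological ages
preds = np.array([19.0, 21.0, 29.4, 17.0])  # hypothetical model predictions
```

For these four cases the errors are 0.8, 1.5, 1.5, and 0.6 years, giving an MAE of 1.1 years; the paper's reported 1.65 years is this quantity over its 300-scan test set.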
Active Exploration Deep Reinforcement Learning for Continuous Action Space with Forward Prediction | [
"Dongfang Zhao",
"Xu Huanshi",
"Zhang Xun"
] | The application of reinforcement learning (RL) to the field of autonomous robotics has high requirements about sample efficiency, since the agent expends for interaction with the environment. One method for sample efficiency is to extract knowledge from existing samples and used to exploration. Typical RL algorithms achieve exploration using task-specific knowledge or adding exploration noise. These methods are limited to current policy improvement level and lack of long-term planning. We propose a novel active exploration deep RL algorithm for the continuous action space problem named active exploration deep reinforcement learning (AEDRL). Our method uses the Gaussian process to model dynamic model, enabling the probability description of prediction sample. Action selection is formulated as the solution of the optimization problem. Thus, the optimization objective is specifically designed for selecting samples that can minimize the uncertainty of the dynamic model. Active exploration is achieved through long-term optimized action selection. This long-term considered action exploration method is more guidance for learning. Enable intelligent agents to explore more interesting action spaces. The proposed AEDRL algorithm is evaluated on several robotic control task including classic pendulum problem and five complex articulated robots. The AEDRL can learn a controller using fewer episodes and demonstrates performance and sample efficiency. | 10.1007/s44196-023-00389-1 | active exploration deep reinforcement learning for continuous action space with forward prediction | the application of reinforcement learning (rl) to the field of autonomous robotics has high requirements about sample efficiency, since the agent expends for interaction with the environment. one method for sample efficiency is to extract knowledge from existing samples and used to exploration. typical rl algorithms achieve exploration using task-specific knowledge or adding exploration noise. 
these methods are limited to current policy improvement level and lack of long-term planning. we propose a novel active exploration deep rl algorithm for the continuous action space problem named active exploration deep reinforcement learning (aedrl). our method uses the gaussian process to model dynamic model, enabling the probability description of prediction sample. action selection is formulated as the solution of the optimization problem. thus, the optimization objective is specifically designed for selecting samples that can minimize the uncertainty of the dynamic model. active exploration is achieved through long-term optimized action selection. this long-term considered action exploration method is more guidance for learning. enable intelligent agents to explore more interesting action spaces. the proposed aedrl algorithm is evaluated on several robotic control task including classic pendulum problem and five complex articulated robots. the aedrl can learn a controller using fewer episodes and demonstrates performance and sample efficiency. | [
"the application",
"reinforcement learning",
"the field",
"autonomous robotics",
"high requirements",
"sample efficiency",
"the agent",
"interaction",
"the environment",
"one method",
"sample efficiency",
"knowledge",
"existing samples",
"typical rl algorithms",
"exploration",
"task-specific knowledge",
"exploration noise",
"these methods",
"current policy improvement level",
"lack",
"long-term planning",
"we",
"a novel active exploration",
"deep rl algorithm",
"the continuous action space problem",
"active exploration deep reinforcement learning",
"our method",
"the gaussian process",
"dynamic model",
"the probability description",
"prediction sample",
"action selection",
"the solution",
"the optimization problem",
"the optimization objective",
"samples",
"that",
"the uncertainty",
"the dynamic model",
"active exploration",
"long-term optimized action selection",
"this long-term considered action exploration method",
"more guidance",
"intelligent agents",
"more interesting action spaces",
"the proposed aedrl algorithm",
"several robotic control task",
"classic pendulum problem",
"five complex articulated robots",
"the aedrl",
"a controller",
"fewer episodes",
"performance",
"sample efficiency",
"one",
"five"
] |
Automatic early detection of rice leaf diseases using hybrid deep learning and machine learning methods | [
"Vikram Rajpoot",
"Akhilesh Tiwari",
"Anand Singh Jalal"
] | Plant leaf disease detection is critical for long-term agricultural viability. Numerous Artificial Intelligence (AI) and Machine Learning (ML) technologies have been implemented for detecting rice diseases. However, such methods failed to identify or have slow recognition causing severe output loss. Therefore, an advanced and precise detection method has become necessary to overcome this issue. This study analyzes plant diseases that affect rice, comprising three different forms of diseases. Bacterial leaf blight, Brown spot, and Leaf smut are three of the six diseases that can affect rice plants. In the proposed approach a VGG-16 transfer learning with Faster R-CNN deep architecture is used to extract features. After completing the transfer learning step, the gathered characteristics are categorized using the random forest method. The random forest classifier divided the radish field into three distinct regions. The images of rice plant leaves are taken from UCI Machine Learning Repository. The proposed approach obtains an average predicting accuracy of 97.3% for rice disease imagery class prediction. The extensive experiment outcomes demonstrate the suggested technique’s validity, so it effectively detects rice diseases. | 10.1007/s11042-023-14969-y | automatic early detection of rice leaf diseases using hybrid deep learning and machine learning methods | plant leaf disease detection is critical for long-term agricultural viability. numerous artificial intelligence (ai) and machine learning (ml) technologies have been implemented for detecting rice diseases. however, such methods failed to identify or have slow recognition causing severe output loss. therefore, an advanced and precise detection method has become necessary to overcome this issue. this study analyzes plant diseases that affect rice, comprising three different forms of diseases. bacterial leaf blight, brown spot, and leaf smut are three of the six diseases that can affect rice plants. 
in the proposed approach a vgg-16 transfer learning with faster r-cnn deep architecture is used to extract features. after completing the transfer learning step, the gathered characteristics are categorized using the random forest method. the random forest classifier divided the radish field into three distinct regions. the images of rice plant leaves are taken from uci machine learning repository. the proposed approach obtains an average predicting accuracy of 97.3% for rice disease imagery class prediction. the extensive experiment outcomes demonstrate the suggested technique’s validity, so it effectively detects rice diseases. | [
"plant leaf disease detection",
"long-term agricultural viability",
"numerous artificial intelligence",
"ai",
"machine learning (ml) technologies",
"rice diseases",
"such methods",
"slow recognition",
"severe output loss",
"an advanced and precise detection method",
"this issue",
"this study",
"plant diseases",
"that",
"rice",
"three different forms",
"diseases",
"bacterial leaf blight",
"brown spot",
"leaf smut",
"the six diseases",
"that",
"rice plants",
"the proposed approach",
"a vgg-16 transfer",
"faster r-cnn deep architecture",
"features",
"the transfer learning step",
"the gathered characteristics",
"the random forest method",
"the random forest classifier",
"the radish field",
"three distinct regions",
"the images",
"rice plant leaves",
"uci machine learning repository",
"the proposed approach",
"an average predicting accuracy",
"97.3%",
"rice disease imagery class prediction",
"the extensive experiment outcomes",
"the suggested technique’s validity",
"it",
"rice diseases",
"three",
"three",
"six",
"three",
"uci",
"97.3%"
] |
A literature review on deep learning algorithms for analysis of X-ray images | [
"Gokhan Seyfi",
"Engin Esme",
"Merve Yilmaz",
"Mustafa Servet Kiran"
] | Since the invention of the X-ray beam, it has been used for useful applications in various fields, such as medical diagnosis, fluoroscopy, radiation therapy, and computed tomography. In addition, it is also widely used to identify prohibited or illegal materials using X-ray imaging in the security field. However, these procedures are generally dependent on the human factor. An operator detects prohibited objects by projecting pseudo-color images onto a computer screen. Because these processes are prone to error, much work has gone into automating the processes involved. Initial research on this topic consisted mainly of machine learning and methods using hand-crafted features. The newly developed deep learning methods have subsequently been more successful. For this reason, deep learning algorithms are a trend in recent studies and the number of publications has increased in areas such as X-ray imaging. Therefore, we surveyed the studies published in the literature on Deep Learning-based X-ray imaging to attract new readers and provide new perspectives. | 10.1007/s13042-023-01961-z | a literature review on deep learning algorithms for analysis of x-ray images | since the invention of the x-ray beam, it has been used for useful applications in various fields, such as medical diagnosis, fluoroscopy, radiation therapy, and computed tomography. in addition, it is also widely used to identify prohibited or illegal materials using x-ray imaging in the security field. however, these procedures are generally dependent on the human factor. an operator detects prohibited objects by projecting pseudo-color images onto a computer screen. because these processes are prone to error, much work has gone into automating the processes involved. initial research on this topic consisted mainly of machine learning and methods using hand-crafted features. the newly developed deep learning methods have subsequently been more successful. 
for this reason, deep learning algorithms are a trend in recent studies and the number of publications has increased in areas such as x-ray imaging. therefore, we surveyed the studies published in the literature on deep learning-based x-ray imaging to attract new readers and provide new perspectives. | [
"the invention",
"the x-ray beam",
"it",
"useful applications",
"various fields",
"medical diagnosis",
"radiation therapy",
"tomography",
"addition",
"it",
"prohibited or illegal materials",
"x-ray imaging",
"the security field",
"these procedures",
"the human factor",
"an operator detects",
"objects",
"pseudo-color images",
"a computer screen",
"these processes",
"error",
"much work",
"the processes",
"initial research",
"this topic",
"machine learning",
"methods",
"hand-crafted features",
"the newly developed deep learning methods",
"this reason",
"deep learning algorithms",
"a trend",
"recent studies",
"the number",
"publications",
"areas",
"x-ray imaging",
"we",
"the studies",
"the literature",
"deep learning-based x",
"-ray imaging",
"new readers",
"new perspectives"
] |
Systematic review of deep learning image analyses for the diagnosis and monitoring of skin disease | [
"Shern Ping Choy",
"Byung Jin Kim",
"Alexandra Paolino",
"Wei Ren Tan",
"Sarah Man Lin Lim",
"Jessica Seo",
"Sze Ping Tan",
"Luc Francis",
"Teresa Tsakok",
"Michael Simpson",
"Jonathan N. W. N. Barker",
"Magnus D. Lynch",
"Mark S. Corbett",
"Catherine H. Smith",
"Satveer K. Mahil"
] | Skin diseases affect one-third of the global population, posing a major healthcare burden. Deep learning may optimise healthcare workflows through processing skin images via neural networks to make predictions. A focus of deep learning research is skin lesion triage to detect cancer, but this may not translate to the wider scope of >2000 other skin diseases. We searched for studies applying deep learning to skin images, excluding benign/malignant lesions (1/1/2000-23/6/2022, PROSPERO CRD42022309935). The primary outcome was accuracy of deep learning algorithms in disease diagnosis or severity assessment. We modified QUADAS-2 for quality assessment. Of 13,857 references identified, 64 were included. The most studied diseases were acne, psoriasis, eczema, rosacea, vitiligo, urticaria. Deep learning algorithms had high specificity and variable sensitivity in diagnosing these conditions. Accuracy of algorithms in diagnosing acne (median 94%, IQR 86–98; n = 11), rosacea (94%, 90–97; n = 4), eczema (93%, 90–99; n = 9) and psoriasis (89%, 78–92; n = 8) was high. Accuracy for grading severity was highest for psoriasis (range 93–100%, n = 2), eczema (88%, n = 1), and acne (67–86%, n = 4). However, 59 (92%) studies had high risk-of-bias judgements and 62 (97%) had high-level applicability concerns. Only 12 (19%) reported participant ethnicity/skin type. Twenty-four (37.5%) evaluated the algorithm in an independent dataset, clinical setting or prospectively. These data indicate potential of deep learning image analysis in diagnosing and monitoring common skin diseases. Current research has important methodological/reporting limitations. Real-world, prospectively-acquired image datasets with external validation/testing will advance deep learning beyond the current experimental phase towards clinically-useful tools to mitigate rising health and cost impacts of skin disease. 
| 10.1038/s41746-023-00914-8 | systematic review of deep learning image analyses for the diagnosis and monitoring of skin disease | skin diseases affect one-third of the global population, posing a major healthcare burden. deep learning may optimise healthcare workflows through processing skin images via neural networks to make predictions. a focus of deep learning research is skin lesion triage to detect cancer, but this may not translate to the wider scope of >2000 other skin diseases. we searched for studies applying deep learning to skin images, excluding benign/malignant lesions (1/1/2000-23/6/2022, prospero crd42022309935). the primary outcome was accuracy of deep learning algorithms in disease diagnosis or severity assessment. we modified quadas-2 for quality assessment. of 13,857 references identified, 64 were included. the most studied diseases were acne, psoriasis, eczema, rosacea, vitiligo, urticaria. deep learning algorithms had high specificity and variable sensitivity in diagnosing these conditions. accuracy of algorithms in diagnosing acne (median 94%, iqr 86–98; n = 11), rosacea (94%, 90–97; n = 4), eczema (93%, 90–99; n = 9) and psoriasis (89%, 78–92; n = 8) was high. accuracy for grading severity was highest for psoriasis (range 93–100%, n = 2), eczema (88%, n = 1), and acne (67–86%, n = 4). however, 59 (92%) studies had high risk-of-bias judgements and 62 (97%) had high-level applicability concerns. only 12 (19%) reported participant ethnicity/skin type. twenty-four (37.5%) evaluated the algorithm in an independent dataset, clinical setting or prospectively. these data indicate potential of deep learning image analysis in diagnosing and monitoring common skin diseases. current research has important methodological/reporting limitations. 
real-world, prospectively-acquired image datasets with external validation/testing will advance deep learning beyond the current experimental phase towards clinically-useful tools to mitigate rising health and cost impacts of skin disease. | [
"skin diseases",
"one-third",
"the global population",
"a major healthcare burden",
"deep learning",
"healthcare workflows",
"skin images",
"neural networks",
"predictions",
"a focus",
"deep learning research",
"skin lesion triage",
"cancer",
"this",
"the wider scope",
"other skin diseases",
"we",
"studies",
"deep learning",
"skin images",
"benign/malignant lesions",
"prospero",
"the primary outcome",
"accuracy",
"deep learning algorithms",
"disease diagnosis",
"severity assessment",
"we",
"quadas-2",
"quality assessment",
"13,857 references",
"the most studied diseases",
"urticaria",
"deep learning algorithms",
"high specificity",
"variable sensitivity",
"these conditions",
"accuracy",
"algorithms",
"acne",
"iqr",
"rosacea",
"94%",
"90–97",
"eczema",
"93%",
"90–99",
"psoriasis",
"89%",
"n = 8)",
"accuracy",
"grading severity",
"psoriasis",
"n",
"eczema",
"88%",
"acne",
"67–86%",
"92%",
"bias",
"62 (97%",
"high-level applicability concerns",
"only 12 (19%",
"participant ethnicity/skin type",
"twenty-four (37.5%",
"the algorithm",
"an independent dataset, clinical setting",
"these data",
"potential",
"deep learning image analysis",
"common skin diseases",
"current research",
"important methodological/reporting limitations",
"real-world, prospectively-acquired image datasets",
"external validation/testing",
"deep learning",
"the current experimental phase",
"clinically-useful tools",
"rising health and cost impacts",
"skin disease",
"one-third",
"2000",
"1/1/2000-23/6/2022",
"13,857",
"64",
"94%",
"11",
"94%",
"90–97",
"4",
"93%",
"90–99",
"9",
"89%",
"78–92",
"8)",
"93–100%",
"2",
"88%",
"1",
"67–86%",
"4",
"59",
"92%",
"62",
"97%",
"only 12",
"19%",
"twenty-four",
"37.5%"
] |
Deep learning-based automatic measurement system for patellar height: a multicenter retrospective study | [
"Zeyu Liu",
"Jiangjiang Wu",
"Xu Gao",
"Zhipeng Qin",
"Run Tian",
"Chunsheng Wang"
] | BackgroundThe patellar height index is important; however, the measurement procedures are time-consuming and prone to significant variability among and within observers. We developed a deep learning-based automatic measurement system for the patellar height and evaluated its performance and generalization ability to accurately measure the patellar height index.MethodsWe developed a dataset containing 3,923 lateral knee X-ray images. Notably, all X-ray images were from three tertiary level A hospitals, and 2,341 cases were included in the analysis after screening. By manually labeling key points, the model was trained using the residual network (ResNet) and high-resolution network (HRNet) for human pose estimation architectures to measure the patellar height index. Various data enhancement techniques were used to enhance the robustness of the model. The root mean square error (RMSE), object keypoint similarity (OKS), and percentage of correct keypoint (PCK) metrics were used to evaluate the training results. In addition, we used the intraclass correlation coefficient (ICC) to assess the consistency between manual and automatic measurements.ResultsThe HRNet model performed excellently in keypoint detection tasks by comparing different deep learning models. Furthermore, the pose_hrnet_w48 model was particularly outstanding in the RMSE, OKS, and PCK metrics, and the Insall–Salvati index (ISI) automatically calculated by this model was also highly consistent with the manual measurements (intraclass correlation coefficient [ICC], 0.809–0.885). This evidence demonstrates the accuracy and generalizability of this deep learning system in practical applications.ConclusionWe successfully developed a deep learning-based automatic measurement system for the patellar height. The system demonstrated accuracy comparable to that of experienced radiologists and a strong generalizability across different datasets. 
It provides an essential tool for assessing and treating knee diseases early and monitoring and rehabilitation after knee surgery. Due to the potential bias in the selection of datasets in this study, different datasets should be examined in the future to optimize the model so that it can be reliably applied in clinical practice.Trial registrationThe study was registered at the Medical Research Registration and Filing Information System (medicalresearch.org.cn) MR-61-23-013065. Date of registration: May 04, 2023 (retrospectively registered). | 10.1186/s13018-024-04809-6 | deep learning-based automatic measurement system for patellar height: a multicenter retrospective study | backgroundthe patellar height index is important; however, the measurement procedures are time-consuming and prone to significant variability among and within observers. we developed a deep learning-based automatic measurement system for the patellar height and evaluated its performance and generalization ability to accurately measure the patellar height index.methodswe developed a dataset containing 3,923 lateral knee x-ray images. notably, all x-ray images were from three tertiary level a hospitals, and 2,341 cases were included in the analysis after screening. by manually labeling key points, the model was trained using the residual network (resnet) and high-resolution network (hrnet) for human pose estimation architectures to measure the patellar height index. various data enhancement techniques were used to enhance the robustness of the model. the root mean square error (rmse), object keypoint similarity (oks), and percentage of correct keypoint (pck) metrics were used to evaluate the training results. in addition, we used the intraclass correlation coefficient (icc) to assess the consistency between manual and automatic measurements.resultsthe hrnet model performed excellently in keypoint detection tasks by comparing different deep learning models. 
furthermore, the pose_hrnet_w48 model was particularly outstanding in the rmse, oks, and pck metrics, and the insall–salvati index (isi) automatically calculated by this model was also highly consistent with the manual measurements (intraclass correlation coefficient [icc], 0.809–0.885). this evidence demonstrates the accuracy and generalizability of this deep learning system in practical applications.conclusionwe successfully developed a deep learning-based automatic measurement system for the patellar height. the system demonstrated accuracy comparable to that of experienced radiologists and a strong generalizability across different datasets. it provides an essential tool for assessing and treating knee diseases early and monitoring and rehabilitation after knee surgery. due to the potential bias in the selection of datasets in this study, different datasets should be examined in the future to optimize the model so that it can be reliably applied in clinical practice.trial registrationthe study was registered at the medical research registration and filing information system (medicalresearch.org.cn) mr-61-23-013065. date of registration: may 04, 2023 (retrospectively registered). | [
"backgroundthe patellar height index",
"the measurement procedures",
"significant variability",
"observers",
"we",
"a deep learning-based automatic measurement system",
"the patellar height",
"its performance and generalization ability",
"the patellar height index.methodswe",
"a dataset",
"3,923 lateral knee x-ray images",
"all x-ray images",
"three tertiary level",
"a hospitals",
"2,341 cases",
"the analysis",
"key points",
"the model",
"the residual network",
"resnet",
"high-resolution network",
"hrnet",
"human pose estimation",
"the patellar height index",
"various data enhancement techniques",
"the robustness",
"the model",
"the root mean square error",
"rmse",
"object keypoint similarity",
"oks",
"percentage",
"pck",
"the training results",
"addition",
"we",
"the intraclass correlation",
"coefficient",
"icc",
"the consistency",
"manual and automatic measurements.resultsthe hrnet model",
"keypoint detection tasks",
"different deep learning models",
"the pose_hrnet_w48 model",
"the rmse",
"oks",
"pck metrics",
"salvati index",
"isi",
"this model",
"the manual measurements",
"intraclass correlation",
"this evidence",
"the accuracy",
"generalizability",
"this deep learning system",
"a deep learning-based automatic measurement system",
"the patellar height",
"the system",
"accuracy",
"that",
"experienced radiologists",
"a strong generalizability",
"different datasets",
"it",
"an essential tool",
"knee diseases",
"knee surgery",
"the potential bias",
"the selection",
"datasets",
"this study",
"different datasets",
"the future",
"the model",
"it",
"clinical practice.trial registrationthe study",
"the medical research registration and filing information system",
"date",
"registration",
"3,923",
"three",
"2,341",
"rmse",
"oks",
"isi",
"mr-61",
"may 04, 2023"
] |
Addressing data imbalance challenges in oral cavity histopathological whole slide images with advanced deep learning techniques | [
"Tabasum Majeed",
"Tariq Ahmad Masoodi",
"Muzafar Ahmad Macha",
"Muzafar Rasool Bhat",
"Khalid Muzaffar",
"Assif Assad"
] | Oral Cavity Squamous Cell Carcinoma (OCSCC) represents a common form of head and neck cancer originating from the mucosal lining of the oral cavity, often detected in advanced stages. Traditional detection methods rely on analyzing hematoxylin and eosin (H&E)-stained histopathological whole-slide images, which are time-consuming and require expert pathology skills. Hence, automated analysis is urgently needed to expedite diagnosis and improve patient outcomes. Deep learning, through automated feature extraction, offers a promising avenue for capturing high-level abstract features with greater accuracy than traditional methods. However, the imbalance in class distribution within datasets significantly affects the performance of deep learning models during training, necessitating specialized approaches. To address the issue, various methods have been proposed at both data and algorithmic levels. This study investigates strategies to mitigate class imbalance by employing a publicly available OCSCC imbalance dataset. We evaluated undersampling methods (Near Miss, Edited Nearest Neighbors) and oversampling techniques (SMOTE, Deep SMOTE, ADASYN) integrated with transfer learning across different imbalance ratios (0.1, 0.15, 0.20, 0.30). Our findings demonstrate the effectiveness of SMOTE in improving test performance, highlighting the efficacy of strategic oversampling combined with transfer learning in classifying imbalanced medical datasets. This enhances OCSCC diagnostic accuracy, streamlines clinical decisions, and reduces reliance on costly histopathological tests. | 10.1007/s13198-024-02440-6 | addressing data imbalance challenges in oral cavity histopathological whole slide images with advanced deep learning techniques | oral cavity squamous cell carcinoma (ocscc) represents a common form of head and neck cancer originating from the mucosal lining of the oral cavity, often detected in advanced stages. 
traditional detection methods rely on analyzing hematoxylin and eosin (h&e)-stained histopathological whole-slide images, which are time-consuming and require expert pathology skills. hence, automated analysis is urgently needed to expedite diagnosis and improve patient outcomes. deep learning, through automated feature extraction, offers a promising avenue for capturing high-level abstract features with greater accuracy than traditional methods. however, the imbalance in class distribution within datasets significantly affects the performance of deep learning models during training, necessitating specialized approaches. to address the issue, various methods have been proposed at both data and algorithmic levels. this study investigates strategies to mitigate class imbalance by employing a publicly available ocscc imbalance dataset. we evaluated undersampling methods (near miss, edited nearest neighbors) and oversampling techniques (smote, deep smote, adasyn) integrated with transfer learning across different imbalance ratios (0.1, 0.15, 0.20, 0.30). our findings demonstrate the effectiveness of smote in improving test performance, highlighting the efficacy of strategic oversampling combined with transfer learning in classifying imbalanced medical datasets. this enhances ocscc diagnostic accuracy, streamlines clinical decisions, and reduces reliance on costly histopathological tests. | [
"oral cavity squamous cell carcinoma",
"ocscc",
"a common form",
"head and neck cancer",
"the mucosal lining",
"the oral cavity",
"advanced stages",
"traditional detection methods",
"hematoxylin",
"eosin",
"histopathological whole-slide images",
"which",
"expert pathology skills",
"automated analysis",
"diagnosis",
"patient outcomes",
"deep learning",
"automated feature extraction",
"a promising avenue",
"high-level abstract features",
"greater accuracy",
"traditional methods",
"the imbalance",
"class distribution",
"datasets",
"the performance",
"deep learning models",
"training",
"specialized approaches",
"the issue",
"various methods",
"both data",
"algorithmic levels",
"this study",
"strategies",
"class imbalance",
"a publicly available ocscc imbalance dataset",
"we",
"undersampling methods",
"near miss",
"neighbors",
"techniques",
"smote",
"deep smote",
"adasyn",
"different imbalance ratios",
"our findings",
"the effectiveness",
"smote",
"test performance",
"the efficacy",
"strategic oversampling",
"imbalanced medical datasets",
"diagnostic accuracy",
"clinical decisions",
"reliance",
"costly histopathological tests",
"hematoxylin",
"0.1",
"0.15",
"0.20",
"0.30"
] |
Heat-vision based drone surveillance augmented by deep learning for critical industrial monitoring | [
"Do Yeong Lim",
"Ik Jae Jin",
"In Cheol Bang"
] | This study examines the application of drone-assisted infrared (IR) imaging with vision grayscale imaging and deep learning for enhanced abnormal detection in nuclear power plants. A scaled model, replicating the modern pressurized water reactor, facilitated the data collection for normal and abnormal conditions. A drone, equipped with dual vision and IR cameras, captured detailed operational imagery, crucial for detecting subtle anomalies within the plant's primary systems. Deep learning algorithms were deployed to interpret these images, aiming to identify component abnormals not easily discernible by traditional monitoring. The object detection model was trained to classify normal and abnormal component states within the facility, marked by color-coded bounding boxes for clarity. Models like YOLO and Mask R-CNN were evaluated for their precision in anomaly detection. Results indicated that the YOLO v8m model was particularly effective, showcasing high accuracy in both detecting and adapting to system anomalies, as validated by high mAP scores. The integration of drone technology with IR imaging and deep learning illustrates a significant stride toward automating abnormal detection in complex industrial environments, enhancing operational safety and efficiency. This approach has the potential to revolutionize real-time monitoring in safety–critical settings by providing a comprehensive, automated solution to abnormal detection. | 10.1038/s41598-023-49589-x | heat-vision based drone surveillance augmented by deep learning for critical industrial monitoring | this study examines the application of drone-assisted infrared (ir) imaging with vision grayscale imaging and deep learning for enhanced abnormal detection in nuclear power plants. a scaled model, replicating the modern pressurized water reactor, facilitated the data collection for normal and abnormal conditions. 
a drone, equipped with dual vision and ir cameras, captured detailed operational imagery, crucial for detecting subtle anomalies within the plant's primary systems. deep learning algorithms were deployed to interpret these images, aiming to identify component abnormals not easily discernible by traditional monitoring. the object detection model was trained to classify normal and abnormal component states within the facility, marked by color-coded bounding boxes for clarity. models like yolo and mask r-cnn were evaluated for their precision in anomaly detection. results indicated that the yolo v8m model was particularly effective, showcasing high accuracy in both detecting and adapting to system anomalies, as validated by high map scores. the integration of drone technology with ir imaging and deep learning illustrates a significant stride toward automating abnormal detection in complex industrial environments, enhancing operational safety and efficiency. this approach has the potential to revolutionize real-time monitoring in safety–critical settings by providing a comprehensive, automated solution to abnormal detection. | [
"this study",
"the application",
"vision grayscale imaging",
"deep learning",
"enhanced abnormal detection",
"nuclear power plants",
"a scaled model",
"the modern pressurized water reactor",
"the data collection",
"normal and abnormal conditions",
"a drone",
"dual vision",
"ir cameras",
"detailed operational imagery",
"subtle anomalies",
"the plant's primary systems",
"deep learning algorithms",
"these images",
"component abnormals",
"traditional monitoring",
"the object detection model",
"normal and abnormal component states",
"the facility",
"color-coded bounding boxes",
"clarity",
"models",
"yolo",
"mask r-cnn",
"their precision",
"anomaly detection",
"results",
"the yolo v8m model",
"high accuracy",
"system anomalies",
"high map scores",
"the integration",
"drone technology",
"ir imaging",
"deep learning",
"a significant stride",
"abnormal detection",
"complex industrial environments",
"operational safety",
"efficiency",
"this approach",
"the potential",
"real-time monitoring",
"safety",
"critical settings",
"a comprehensive, automated solution",
"abnormal detection",
"anomaly detection"
] |
COVID-19 Fake News Detection using Deep Learning Model | [
"Mahabuba Akhter",
"Syed Md. Minhaz Hossain",
"Rizma Sijana Nigar",
"Srabanti Paul",
"Khaleque Md. Aashiq Kamal",
"Anik Sen",
"Iqbal H. Sarker"
] | People may now receive and share information more quickly and easily than ever due to the widespread use of mobile networked devices. However, this can occasionally lead to the spread of false information. Such information is being disseminated widely, which may cause people to make incorrect decisions about potentially crucial topics. This occurred in 2020, the year of the fatal and extremely contagious Coronavirus Disease (COVID-19) outbreak. The spread of false information about COVID-19 on social media has already been labeled as an “infodemic” by the World Health Organization (WHO), causing serious difficulties for governments attempting to control the pandemic. Consequently, it is crucial to have a model for detecting fake news related to COVID-19. In this paper, we present an effective Convolutional Neural Network (CNN)-based deep learning model using word embedding. For selecting the best CNN architecture, we take into account the optimal values of model hyper-parameters using grid search. Further, for measuring the effectiveness of our proposed CNN model, various state-of-the-art machine learning algorithms are evaluated for COVID-19 fake news detection. Among them, CNN outperforms the others with 96.19% mean accuracy, 95% mean F1-score, and 0.985 area under ROC curve (AUC). | 10.1007/s40745-023-00507-y | covid-19 fake news detection using deep learning model | people may now receive and share information more quickly and easily than ever due to the widespread use of mobile networked devices. however, this can occasionally lead to the spread of false information. such information is being disseminated widely, which may cause people to make incorrect decisions about potentially crucial topics. this occurred in 2020, the year of the fatal and extremely contagious coronavirus disease (covid-19) outbreak. 
the spread of false information about covid-19 on social media has already been labeled as an “infodemic” by the world health organization (who), causing serious difficulties for governments attempting to control the pandemic. consequently, it is crucial to have a model for detecting fake news related to covid-19. in this paper, we present an effective convolutional neural network (cnn)-based deep learning model using word embedding. for selecting the best cnn architecture, we take into account the optimal values of model hyper-parameters using grid search. further, for measuring the effectiveness of our proposed cnn model, various state-of-the-art machine learning algorithms are evaluated for covid-19 fake news detection. among them, cnn outperforms the others with 96.19% mean accuracy, 95% mean f1-score, and 0.985 area under roc curve (auc). | [
"people",
"information",
"the widespread use",
"mobile networked devices",
"this",
"the spread",
"false information",
"such information",
"which",
"people",
"incorrect decisions",
"potentially crucial topics",
"this",
"the year",
"the fatal and extremely contagious coronavirus disease",
"covid-19",
"the spread",
"false information",
"covid-19",
"social media",
"the world health organization",
"who",
"serious difficulties",
"governments",
"it",
"a model",
"fake news",
"covid-19",
"this paper",
"we",
"an effective convolutional neural network",
"cnn)-based deep learning model",
"word",
"the best cnn architecture",
"we",
"account",
"the optimal values",
"model hyper-parameters",
"grid search",
"the effectiveness",
"our proposed cnn model",
"the-art",
"covid-19 fake news detection",
"them",
"cnn",
"96.19%",
"mean accuracy",
"95%",
"0.985 area",
"roc curve",
"auc",
"2020",
"the year",
"covid-19",
"covid-19",
"the world health organization",
"covid-19",
"cnn",
"cnn",
"covid-19",
"cnn",
"96.19%",
"95%",
"0.985",
"roc"
] |
Deep learning-based magnetic resonance image super-resolution: a survey | [
"Zexin Ji",
"Beiji Zou",
"Xiaoyan Kui",
"Jun Liu",
"Wei Zhao",
"Chengzhang Zhu",
"Peishan Dai",
"Yulan Dai"
] | Magnetic resonance imaging (MRI) is a medical imaging technique used to show anatomical structures and physiological processes of the human body. Due to limitations like image acquisition time, hardware capabilities, or uncooperative patients, the resolution of MR images is insufficient. Super-resolution (SR) is a crucial method to enhance the resolution of images without expensive scanning equipment. Recent years have witnessed significant progress in MR image super-resolution. Therefore, this survey presents a thorough overview of current developments in deep learning-based MR image super-resolution methods. In general, we can roughly divide the MRI super-resolution methods into single-contrast MR image SR methods and multi-contrast MR image SR methods. Additionally, we introduce the multi-task learning approaches about the MR image super-resolution. We also summarize other crucial topics, such as the degradation model, the definition of the super-resolution problem, the dataset, loss functions, and image quality assessment. Lastly, we indicate the challenges in the field of super-resolution and draw a conclusion to our survey. | 10.1007/s00521-024-09890-w | deep learning-based magnetic resonance image super-resolution: a survey | magnetic resonance imaging (mri) is a medical imaging technique used to show anatomical structures and physiological processes of the human body. due to limitations like image acquisition time, hardware capabilities, or uncooperative patients, the resolution of mr images is insufficient. super-resolution (sr) is a crucial method to enhance the resolution of images without expensive scanning equipment. recent years have witnessed significant progress in mr image super-resolution. therefore, this survey presents a thorough overview of current developments in deep learning-based mr image super-resolution methods. 
in general, we can roughly divide the mri super-resolution methods into single-contrast mr image sr methods and multi-contrast mr image sr methods. additionally, we introduce the multi-task learning approaches about the mr image super-resolution. we also summarize other crucial topics, such as the degradation model, the definition of the super-resolution problem, the dataset, loss functions, and image quality assessment. lastly, we indicate the challenges in the field of super-resolution and draw a conclusion to our survey. | [
"magnetic resonance imaging",
"mri",
"a medical imaging technique",
"anatomical structures",
"physiological processes",
"the human body",
"limitations",
"image acquisition time",
"hardware capabilities",
"uncooperative patients",
"the resolution",
"mr images",
"-",
"resolution",
"sr",
"a crucial method",
"the resolution",
"images",
"expensive scanning equipment",
"recent years",
"significant progress",
"mr image",
"super",
"-",
"resolution",
"this survey",
"a thorough overview",
"current developments",
"deep learning-based mr image super-resolution methods",
"we",
"the mri super-resolution methods",
"single-contrast mr image sr methods",
"-",
"contrast mr image sr methods",
"we",
"the multi-task learning approaches",
"the mr image",
"super",
"-",
"resolution",
"we",
"other crucial topics",
"the degradation model",
"the definition",
"the super-resolution problem",
"the dataset",
"loss functions",
"image quality assessment",
"we",
"the challenges",
"the field",
"super",
"-",
"resolution",
"a conclusion",
"our survey",
"recent years"
] |
sqFm: a novel adaptive optimization scheme for deep learning model | [
"Shubhankar Bhakta",
"Utpal Nandi",
"Madhab Mondal",
"Kuheli Ray Mahapatra",
"Partha Chowdhuri",
"Pabitra Pal"
] | For deep model training, an optimization technique is required that minimizes loss and maximizes accuracy. The development of an effective optimization method is one of the most important study areas. The diffGrad optimization method uses gradient changes during optimization phases but does not update 2nd order moments based on 1st order moments, and the AngularGrad optimization method uses the angular value of the gradient, which necessitates additional calculation. Due to these factors, both of those approaches result in zigzag trajectories that take a long time and require additional calculations to attain a global minimum. To overcome those limitations, a novel adaptive deep learning optimization method based on square of first momentum (sqFm) has been proposed. By adjusting 2nd order moments depending on 1st order moments and changing step size according to the present gradient on the non-negative function, the suggested sqFm delivers a smoother trajectory and better image classification accuracy. The empirical research comparing the performance of the proposed sqFm with Adam, diffGrad, and AngularGrad applying non-convex functions demonstrates that the suggested method delivers the best convergence and parameter values. In comparison to SGD, Adam, diffGrad, RAdam, and AngularGrad(tan) using the Rosenbrock function, the proposed sqFm method can attain the global minima gradually with less overshoot. Additionally, it is demonstrated that the proposed sqFm gives consistently good classification accuracy when training CNN networks (ResNet16, ResNet50, VGG34, ResNet18, and DenseNet121) on the CIFAR10, CIFAR100, and MNIST datasets, in contrast to SGDM, diffGrad, Adam, AngularGrad(Cos), and AngularGrad(Tan). The proposed method also gives better classification accuracy than SGD, Adam, AdaBelief, Yogi, RAdam, and AngularGrad using the ImageNet dataset on the ResNet18 network. 
Source code link: https://github.com/UtpalNandi/sqFm-A-novel-adaptive-optimization-scheme-for-deep-learning-model. | 10.1007/s12065-023-00897-1 | sqfm: a novel adaptive optimization scheme for deep learning model | for deep model training, an optimization technique is required that minimizes loss and maximizes accuracy. the development of an effective optimization method is one of the most important study areas. the diffgrad optimization method uses gradient changes during optimization phases but does not update 2nd order moments based on 1st order moments, and the angulargrad optimization method uses the angular value of the gradient, which necessitates additional calculation. due to these factors, both of those approaches result in zigzag trajectories that take a long time and require additional calculations to attain a global minimum. to overcome those limitations, a novel adaptive deep learning optimization method based on square of first momentum (sqfm) has been proposed. by adjusting 2nd order moments depending on 1st order moments and changing step size according to the present gradient on the non-negative function, the suggested sqfm delivers a smoother trajectory and better image classification accuracy. the empirical research comparing the performance of the proposed sqfm with adam, diffgrad, and angulargrad applying non-convex functions demonstrates that the suggested method delivers the best convergence and parameter values. in comparison to sgd, adam, diffgrad, radam, and angulargrad(tan) using the rosenbrock function, the proposed sqfm method can attain the global minima gradually with less overshoot. additionally, it is demonstrated that the proposed sqfm gives consistently good classification accuracy when training cnn networks (resnet16, resnet50, vgg34, resnet18, and densenet121) on the cifar10, cifar100, and mnist datasets, in contrast to sgdm, diffgrad, adam, angulargrad(cos), and angulargrad(tan). 
the proposed method also gives better classification accuracy than sgd, adam, adabelief, yogi, radam, and angulargrad using the imagenet dataset on the resnet18 network. source code link: https://github.com/utpalnandi/sqfm-a-novel-adaptive-optimization-scheme-for-deep-learning-model. | [
"deep model training",
"an optimization technique",
"the development",
"an effective optimization method",
"the most important study areas",
"the diffgrad optimization method",
"gradient changes",
"optimization phases",
"2nd order moments",
"1st order moments",
"the angulargrad optimization method",
"the angular value",
"the gradient",
"which",
"additional calculation",
"these factors",
"both",
"those approaches",
"zigzag trajectories",
"that",
"a long time",
"additional calculations",
"a global minimum",
"those limitations",
"a novel adaptive deep learning optimization method",
"square",
"first momentum",
"(sqfm",
"2nd order moments",
"1st order moments",
"step size",
"the present gradient",
"the non-negative function",
"the suggested sqfm",
"a smoother trajectory",
"better image classification accuracy",
"the empirical research",
"the performance",
"the proposed sqfm",
"adam",
"diffgrad",
"angulargrad",
"non-convex functions",
"the suggested method",
"the best convergence and parameter values",
"comparison",
"the rosenbrock function",
"the proposed sqfm method",
"the global minima",
"less overshoot",
"it",
"the proposed sqfm",
"consistently good classification accuracy",
"cnn networks",
"resnet16",
"resnet50",
"vgg34",
"resnet18",
"densenet121",
"the cifar10",
"cifar100",
"mnist datasets",
"contrast",
"diffgrad",
"adam",
"angulargrad(cos",
"angulargrad(tan",
"the proposed method",
"the best classification accuracy",
"sgd",
"adam",
"adabelief",
"yogi",
"radam",
"angulargrad",
"the imagenet",
"the resnet18 network",
"source code link",
"https://github.com/utpalnandi/sqfm-a-novel-adaptive-optimization-scheme-for-deep-learning-model",
"2nd",
"1st",
"first",
"2nd",
"1st",
"diffgrad",
"adam",
"diffgrad",
"angulargrad(tan",
"cnn",
"resnet16",
"resnet50",
"resnet18",
"cifar10",
"diffgrad",
"adam",
"angulargrad(cos",
"resnet18"
] |
Deep source transfer learning for the estimation of internal brain dynamics using scalp EEG | [
"Haitao Yu",
"Zhiwen Hu",
"Quanfa Zhao",
"Jing Liu"
] | Electroencephalography (EEG) provides high temporal resolution neural data for brain-computer interfacing via noninvasive electrophysiological recording. Estimating the internal brain activity by means of source imaging techniques can further improve the spatial resolution of EEG and enhance the reliability of neural decoding and brain-computer interaction. In this work, we propose a novel EEG data-driven source imaging scheme for precise and efficient estimation of macroscale spatiotemporal brain dynamics across thalamus and cortical regions with deep learning methods. A deep source imaging framework with a convolutional-recurrent neural network is designed to estimate the internal brain dynamics from high-density EEG recordings. Moreover, a brain model including 210 cortical regions and 16 thalamic nuclei is established based on human brain connectome to provide synthetic training data, which manifests intrinsic characteristics of underlying brain dynamics in spontaneous, stimulation-evoked, and pathological states. Transfer learning algorithm is further applied to the trained network to reduce the dynamical differences between synthetic and realistic EEG. Extensive experiments exhibit that the proposed deep-learning method can accurately estimate the spatial and temporal activity of brain sources and achieves superior performance compared to the state-of-the-art approaches. Moreover, the EEG data-driven source imaging framework is effective in the location of seizure onset zone in epilepsy and reconstruction of dynamical thalamocortical interactions during sensory processing of acupuncture stimulation, implying its applicability in brain-computer interfacing for neuroscience research and clinical applications. 
| 10.1007/s11571-024-10149-2 | deep source transfer learning for the estimation of internal brain dynamics using scalp eeg | electroencephalography (eeg) provides high temporal resolution neural data for brain-computer interfacing via noninvasive electrophysiological recording. estimating the internal brain activity by means of source imaging techniques can further improve the spatial resolution of eeg and enhance the reliability of neural decoding and brain-computer interaction. in this work, we propose a novel eeg data-driven source imaging scheme for precise and efficient estimation of macroscale spatiotemporal brain dynamics across thalamus and cortical regions with deep learning methods. a deep source imaging framework with a convolutional-recurrent neural network is designed to estimate the internal brain dynamics from high-density eeg recordings. moreover, a brain model including 210 cortical regions and 16 thalamic nuclei is established based on human brain connectome to provide synthetic training data, which manifests intrinsic characteristics of underlying brain dynamics in spontaneous, stimulation-evoked, and pathological states. transfer learning algorithm is further applied to the trained network to reduce the dynamical differences between synthetic and realistic eeg. extensive experiments exhibit that the proposed deep-learning method can accurately estimate the spatial and temporal activity of brain sources and achieves superior performance compared to the state-of-the-art approaches. moreover, the eeg data-driven source imaging framework is effective in the location of seizure onset zone in epilepsy and reconstruction of dynamical thalamocortical interactions during sensory processing of acupuncture stimulation, implying its applicability in brain-computer interfacing for neuroscience research and clinical applications. | [
"electroencephalography",
"(eeg",
"high temporal resolution neural data",
"brain-computer",
"noninvasive electrophysiological recording",
"the internal brain activity",
"means",
"source",
"imaging techniques",
"the spatial resolution",
"eeg",
"the reliability",
"neural decoding and brain-computer interaction",
"this work",
"we",
"a novel eeg data-driven source",
"scheme",
"precise and efficient estimation",
"macroscale spatiotemporal brain dynamics",
"thalamus and cortical regions",
"deep learning methods",
"a deep source imaging framework",
"a convolutional-recurrent neural network",
"the internal brain dynamics",
"high-density eeg recordings",
"a brain model",
"210 cortical regions",
"16 thalamic nuclei",
"human brain connectome",
"synthetic training data",
"which",
"intrinsic characteristics",
"underlying brain dynamics",
"spontaneous, stimulation-evoked, and pathological states",
"transfer learning algorithm",
"the trained network",
"the dynamical differences",
"synthetic and realistic eeg",
"extensive experiments",
"the proposed deep-learning method",
"the spatial and temporal activity",
"brain sources",
"superior performance",
"the-art",
"the eeg data-driven source imaging framework",
"the location",
"seizure onset zone",
"epilepsy",
"reconstruction",
"dynamical thalamocortical interactions",
"sensory processing",
"acupuncture stimulation",
"its applicability",
"brain-computer",
"neuroscience research and clinical applications",
"electroencephalography",
"210",
"16"
] |
New hybrid deep learning models for multi-target NILM disaggregation | [
"Jamila Ouzine",
"Manal Marzouq",
"Saad Dosse Bennani",
"Khadija Lahrech",
"Hakim EL Fadili"
] | Non-Intrusive Load Monitoring (NILM) technique or energy disaggregation is a technique used to detect the appliance’s states and estimate their individual energy consumption, given the aggregated data through the main smart meter. Indeed, energy efficiency is the main goal of the NILM techniques, which can be achieved by providing energy disaggregation feedback to the consumers. Unlike single models where training must be performed for each appliance, this work proposes multi-target disaggregation which is more appropriate due to the drastic reduction of resources when training is performed for all target appliances simultaneously. For this purpose, new hybrid models are proposed by combining well-known deep learning models: Convolutional Neural Network (CNN), Denoising Autoencoder (DAE), Recurrent Neural Network (RNN), and Long Short-Term Memory network (LSTM). An implementation and detailed comparative study is then suggested between the proposed hybrid deep learning models and conventional single models in terms of various performance metrics on the UK-Domestic Appliance-Level Electricity (UKDALE) benchmarking database. The experimental results show that the proposed hybrid models provide the best disaggregation performances for multi-target disaggregation compared to single models. Specifically, the CNN-LSTM and the DAE-LSTM are the best hybrid models with the highest overall F1-score of 78.90% and 72.94% respectively. | 10.1007/s12053-023-10161-1 | new hybrid deep learning models for multi-target nilm disaggregation | non-intrusive load monitoring (nilm) technique or energy disaggregation is a technique used to detect the appliance’s states and estimate their individual energy consumption, given the aggregated data through the main smart meter. indeed, energy efficiency is the main goal of the nilm techniques, which can be achieved by providing energy disaggregation feedback to the consumers. 
unlike single models where training must be performed for each appliance, this work proposes multi-target disaggregation which is more appropriate due to the drastic reduction of resources when training is performed for all target appliances simultaneously. for this purpose, new hybrid models are proposed by combining well-known deep learning models: convolutional neural network (cnn), denoising autoencoder (dae), recurrent neural network (rnn), and long short-term memory network (lstm). an implementation and detailed comparative study is then suggested between the proposed hybrid deep learning models and conventional single models in terms of various performance metrics on the uk-domestic appliance-level electricity (ukdale) benchmarking database. the experimental results show that the proposed hybrid models provide the best disaggregation performances for multi-target disaggregation compared to single models. specifically, the cnn-lstm and the dae-lstm are the best hybrid models with the highest overall f1-score of 78.90% and 72.94% respectively. | [
"non-intrusive load monitoring (nilm) technique or energy disaggregation",
"a technique",
"the appliance’s states",
"their individual energy consumption",
"the aggregated data",
"the main smart meter",
"energy efficiency",
"the main goal",
"the nilm techniques",
"which",
"energy disaggregation feedback",
"the consumers",
"single models",
"training",
"each appliance",
"this work",
"multi-target disaggregation",
"which",
"the drastic reduction",
"resources",
"training",
"all target appliances",
"this purpose",
"new hybrid models",
"well-known deep learning models",
"convolutional neural network",
"cnn",
"autoencoder",
"dae",
"recurrent neural network",
"rnn",
"long short-term memory network",
"lstm",
"an implementation",
"detailed comparative study",
"the proposed hybrid deep learning models",
"conventional single models",
"terms",
"various performance metrics",
"the uk-domestic appliance-level electricity",
"(ukdale",
"benchmarking database",
"the experimental results",
"the proposed hybrid models",
"the best disaggregation performances",
"multi-target disaggregation",
"single models",
"the cnn-lstm",
"the dae-lstm",
"the best hybrid models",
"the highest overall f1-score",
"78.90%",
"72.94%",
"cnn",
"dae",
"uk",
"cnn",
"78.90%",
"72.94%"
] |
Advancing brain tumor classification accuracy through deep learning: harnessing radimagenet pre-trained convolutional neural networks, ensemble learning, and machine learning classifiers on MRI brain images | [
"Nihal Remzan",
"Karim Tahiry",
"Abdelmajid Farchi"
] | Brain tumors, a severe health concern across all age groups, present challenges for accurate grading in health monitoring and automated diagnosis. Choosing MRI scans for their superior quality and comprehensive anatomical insight, this study navigates the complexities of brain tumor classification. Deep learning (DL) algorithms revolutionize this field, empowering radiologists with enhanced diagnostic precision. Despite the pivotal role of extensive training data in DL success, medical image datasets often suffer from limitations in diversity and quantity. Transfer learning emerges as a solution, bridging gaps between similar yet distinct areas. The inadequacy of general pre-trained models on ImageNet for medical imaging characteristics prompts our recommendation for RadImageNet. This tailored solution aligns pre-trained models with the specific demands of medical imaging datasets, addressing the current mismatch. Our dataset includes 7023 MR images of Normal brain, Glioma, Meningioma, and Pituitary tumors. We employ two approaches: 'Feature Ensemble,' combining 2-top features, and 'Stacking Ensemble,' utilizing SVM, k-NN, Extra Trees, and MLP as base learners, and Logistic Regression (LR) as a meta-learner. In our initial approach with ResNet-50 and DenseNet121 features guided by MLP, we attained an impressive 97.71% accuracy and a remarkable Area Under the Curve (AUC) of 99.87%. Shifting to the Stacking Ensemble method, where DenseNet121 serves as the optimal feature extractor, we secured a commendable 97.40% accuracy, accompanied by an exceptional AUC of 99.83%. These outcomes highlight the efficacy of our ensemble learning strategies in enhancing both accuracy and the overall discriminative power of the model. 
| 10.1007/s11042-024-18780-1 | advancing brain tumor classification accuracy through deep learning: harnessing radimagenet pre-trained convolutional neural networks, ensemble learning, and machine learning classifiers on mri brain images | brain tumors, a severe health concern across all age groups, present challenges for accurate grading in health monitoring and automated diagnosis. choosing mri scans for their superior quality and comprehensive anatomical insight, this study navigates the complexities of brain tumor classification. deep learning (dl) algorithms revolutionize this field, empowering radiologists with enhanced diagnostic precision. despite the pivotal role of extensive training data in dl success, medical image datasets often suffer from limitations in diversity and quantity. transfer learning emerges as a solution, bridging gaps between similar yet distinct areas. the inadequacy of general pre-trained models on imagenet for medical imaging characteristics prompts our recommendation for radimagenet. this tailored solution aligns pre-trained models with the specific demands of medical imaging datasets, addressing the current mismatch. our dataset includes 7023 mr images of normal brain, glioma, meningioma, and pituitary tumors. we employ two approaches: 'feature ensemble,' combining 2-top features, and 'stacking ensemble,' utilizing svm, k-nn, extra trees, and mlp as base learners, and logistic regression (lr) as a meta-learner. in our initial approach with resnet-50 and densenet121 features guided by mlp, we attained an impressive 97.71% accuracy and a remarkable area under the curve (auc) of 99.87%. shifting to the stacking ensemble method, where densenet121 serves as the optimal feature extractor, we secured a commendable 97.40% accuracy, accompanied by an exceptional auc of 99.83%. these outcomes highlight the efficacy of our ensemble learning strategies in enhancing both accuracy and the overall discriminative power of the model. | [
"brain tumors",
"a severe health concern",
"all age groups",
"present challenges",
"accurate grading",
"health monitoring",
"automated diagnosis",
"mri scans",
"their superior quality",
"comprehensive anatomical insight",
"this study",
"the complexities",
"brain tumor classification",
"deep learning (dl) algorithms",
"this field",
"radiologists",
"enhanced diagnostic precision",
"the pivotal role",
"extensive training data",
"dl success",
"medical image datasets",
"limitations",
"diversity",
"quantity",
"transfer learning",
"a solution",
"gaps",
"similar yet distinct areas",
"the inadequacy",
"general pre-trained models",
"imagenet",
"medical imaging characteristics",
"our recommendation",
"radimagenet",
"this tailored solution",
"pre-trained models",
"the specific demands",
"medical imaging datasets",
"the current mismatch",
"our dataset",
"mr images",
"normal brain",
"glioma",
"meningioma",
"pituitary tumors",
"we",
"two approaches",
"feature ensemble",
"2-top features",
"ensemble",
"svm",
"extra trees",
"mlp",
"base learners",
"logistic regression",
"lr",
"a meta-learner",
"our initial approach",
"resnet-50 and densenet121 features",
"mlp",
"we",
"an impressive 97.71% accuracy",
"a remarkable area",
"the curve",
"auc",
"99.87%",
"the stacking ensemble method",
"densenet121",
"the optimal feature extractor",
"we",
"a commendable 97.40% accuracy",
"an exceptional auc",
"99.83%",
"these outcomes",
"the efficacy",
"our ensemble learning strategies",
"both accuracy",
"the overall discriminative power",
"the model",
"7023",
"glioma",
"two",
"2",
"resnet-50",
"97.71%",
"99.87%",
"97.40%",
"99.83%"
] |
Optimization Based Deep Learning for COVID-19 Detection Using Respiratory Sound Signals | [
"Jawad Ahmad Dar",
"Kamal Kr Srivastava",
"Sajaad Ahmed Lone"
] | The COVID-19 prediction process is indispensable for handling the spread and death rate caused by COVID-19. However, early and precise prediction of COVID-19 is more difficult, because of different sizes and resolutions of input images. Thus, these challenges and problems experienced by traditional COVID-19 detection methods are considered as a major motivation to develop the SJHBO-based Deep Q Network. The classification issue of respiratory sound has received great attention from clinical scientists as well as the community of medical researchers in the previous year for the identification of COVID-19 disease. The major contribution of this research is to design an effectual COVID-19 detection model using the devised SJHBO-based Deep Q Network. In this paper, the COVID-19 detection is carried out by deep learning with an optimization technique, namely the Snake Jaya Honey Badger Optimization (SJHBO) algorithm-driven Deep Q Network. Here, the SJHBO algorithm is the incorporation of Jaya Honey Badger Optimization (JHBO) along with Snake optimization (SO). Here, COVID-19 is detected by the Deep Q Network, wherein the weights of the Deep Q Network are tuned by the SJHBO algorithm. Moreover, JHBO is modelled by hybrids, which are the Jaya algorithm and the Honey Badger Optimization (HBO) algorithm. Furthermore, the features, such as spectral contrast, Mel frequency cepstral coefficients (MFCC), empirical mode decomposition (EMD) algorithm, spectral flux, fast Fourier transform (FFT), spectral roll-off, spectral centroid, zero-crossing rate, root mean square energy, spectral bandwidth, spectral flatness, power spectral density, mobility complexity, fluctuation index and relative amplitude, are mined for improving the detection performance. The developed method realized better performance, with accuracy, sensitivity and specificity of 0.9511, 0.9506 and 0.9469. 
All test results are validated with k-fold cross-validation in order to assess the generalizability of these results. Statistical analysis is performed to analyze the performance of the proposed method based on testing accuracy, sensitivity, and specificity. Hence, this paper presents the newly devised SJHBO-based Deep Q-Net for COVID-19 detection. This research takes audio samples, acquired from the Coswara dataset, as input. The SJHBO-based Deep Q Network approach is developed for COVID-19 detection. The developed approach can be extended with other hybrid optimization algorithms, as well as additional features, to further improve detection performance. The proposed COVID-19 detection method is useful in various applications, such as medicine. Developed SJHBO-enabled Deep Q Network for COVID-19 detection: an effective COVID-19 detection technique is introduced based on a hybrid optimization-driven deep learning model. The Deep Q Network is used for detecting COVID-19 and classifies the feature vector as COVID-19 or non-COVID-19. Moreover, the Deep Q Network is trained by the devised SJHBO approach, which is the incorporation of Jaya Honey Badger Optimization (JHBO) along with Snake Optimization (SO). | 10.1007/s12559-024-10300-5 | optimization based deep learning for covid-19 detection using respiratory sound signals | the covid-19 prediction process is more indispensable to handle the spread and death occurred rate because of covid-19. however, early and precise prediction of covid-19 is more difficult, because of different sizes and resolutions of input image. thus, these challenges and problems experienced by traditional covid-19 detection methods are considered as major motivation to develop sjhbo-based deep q network. 
the classification issue of respiratory sound has perceived a great focus from the clinical scientists as well as the community of medical researcher in the previous year for the identification of covid-19 disease. the major contribution of this research is to design an effectual covid-19 detection model using devised sjhbo-based deep q network. in this paper, the covid-19 detection is carried out by the deep learning with optimization technique, namely snake jaya honey badger optimization (sjhbo) algorithm-driven deep q network. here, the sjhbo algorithm is the incorporation of jaya honey badger optimization (jhbo) along with snake optimization (so). here, the covid-19 is detected by the deep q network wherein the weights of deep q network are tuned by the sjhbo algorithm. moreover, jhbo is modelled by hybrids, which are the jaya algorithm and honey badger optimization (hbo) algorithm. furthermore, the features, such as spectral contrast, mel frequency cepstral coefficients (mfcc), empirical mode decomposition (emd) algorithm, spectral flux, fast fourier transform (fft), spectral roll-off, spectral centroid, zero-crossing rate, root mean square energy, spectral bandwidth, spectral flatness, power spectral density, mobility complexity, fluctuation index and relative amplitude, are mined for enlightening the detection performance. the developed method realized the better performance based on the accuracy, sensitivity and specificity of 0.9511, 0.9506 and 0.9469. all test results are validated with the k-fold cross validation method in order to make an assessment of the generalizability of these results. statistical analysis is performed to analyze the performance of the proposed method based on testing accuracy, sensitivity and specificity. hence, this paper presents the newly devised sjhbo-based deep q-net for covid-19 detection. this research considers the audio samples as an input, which is acquired from the coswara dataset. 
the sjhbo-based deep q network approach is developed for covid-19 detection. the developed approach can be extended by including other hybrid optimization algorithms as well as other features that can be extracted for further improving the detection performance. the proposed covid-19 detection method is useful in various applications, like medical and so on. developed sjhbo-enabled deep q network for covid-19 detection: an effective covid-19 detection technique is introduced based on hybrid optimization–driven deep learning model. the deep q network is used for detecting covid-19, which classifies the feature vector as covid-19 or non-covid-19. moreover, the deep q network is trained by devised sjhbo approach, which is the incorporation of jaya honey badger optimization (jhbo) along with snake optimization (so). | [
"the covid-19 prediction process",
"the spread",
"death",
"occurred rate",
"covid-19",
"early and precise prediction",
"covid-19",
"different sizes",
"resolutions",
"input image",
"these challenges",
"problems",
"traditional covid-19 detection methods",
"major motivation",
"sjhbo-based deep q network",
"the classification issue",
"respiratory sound",
"a great focus",
"the clinical scientists",
"the community",
"medical researcher",
"the previous year",
"the identification",
"covid-19 disease",
"the major contribution",
"this research",
"an effectual covid-19 detection model",
"devised sjhbo-based deep q network",
"this paper",
"the covid-19 detection",
"the deep learning",
"optimization technique",
"namely snake jaya honey badger optimization",
"sjhbo",
"algorithm-driven deep q network",
"the sjhbo algorithm",
"the incorporation",
"jaya honey badger optimization",
"jhbo",
"snake optimization",
"the covid-19",
"the deep q network",
"the weights",
"deep q network",
"the sjhbo algorithm",
"jhbo",
"hybrids",
"which",
"the jaya algorithm",
"honey badger optimization",
"hbo",
"algorithm",
"the features",
"spectral contrast",
"empirical mode decomposition",
"emd",
"algorithm",
"spectral flux",
"fast fourier transform",
"fft",
"spectral roll-off",
"spectral centroid",
"zero-crossing rate",
"root mean square energy",
"spectral flatness",
"power spectral density",
"mobility complexity",
"fluctuation index",
"relative amplitude",
"the detection performance",
"the developed method",
"the better performance",
"the accuracy",
"sensitivity",
"specificity",
"all test results",
"the k-fold cross validation method",
"order",
"an assessment",
"the generalizability",
"these results",
"statistical analysis",
"the performance",
"the proposed method",
"testing accuracy",
"sensitivity",
"specificity",
"this paper",
"the newly devised sjhbo-based deep q-net",
"covid-19 detection",
"this research",
"the audio samples",
"an input",
"which",
"the coswara dataset",
"the sjhbo-based deep q network approach",
"covid-19 detection",
"the developed approach",
"other features",
"that",
"the detection performance",
"the proposed covid-19 detection method",
"various applications",
"developed sjhbo-enabled deep q network",
"covid-19 detection",
"an effective covid-19 detection technique",
"hybrid optimization",
"driven deep learning model",
"the deep q network",
"covid-19",
"which",
"the feature vector",
"covid-19",
"non",
"-",
"covid-19",
"the deep q network",
"devised sjhbo approach",
"which",
"the incorporation",
"jaya honey badger optimization",
"jhbo",
"snake optimization",
"covid-19",
"covid-19",
"covid-19",
"covid-19",
"the previous year",
"covid-19",
"covid-19",
"covid-19",
"jaya honey badger optimization",
"covid-19",
"jhbo",
"mel",
"zero",
"0.9511",
"0.9506",
"0.9469",
"the k-fold",
"covid-19",
"coswara",
"covid-19",
"covid-19",
"covid-19",
"covid-19",
"covid-19",
"covid-19",
"non-covid-19",
"jaya honey badger optimization"
] |
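The hand-crafted audio features listed in the abstract above (zero-crossing rate, RMS energy, spectral centroid, and so on) are standard signal-processing quantities. As a minimal, hypothetical sketch (not the paper's code), three of them can be computed with plain NumPy:

```python
import numpy as np

def audio_features(x, sr=16000):
    """Compute three of the hand-crafted features named in the abstract:
    zero-crossing rate, root mean square energy, and spectral centroid."""
    x = np.asarray(x, dtype=float)
    # Zero-crossing rate: fraction of adjacent sample pairs that change sign.
    zcr = float(np.mean(np.signbit(x[:-1]) != np.signbit(x[1:])))
    # Root mean square energy of the frame.
    rms = float(np.sqrt(np.mean(x ** 2)))
    # Spectral centroid: magnitude-weighted mean frequency of the spectrum.
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    centroid = float(np.sum(freqs * mag) / np.sum(mag))
    return {"zcr": zcr, "rms": rms, "centroid": centroid}

# One second of a pure 1 kHz tone sampled at 16 kHz.
t = np.arange(16000) / 16000.0
feats = audio_features(np.sin(2 * np.pi * 1000 * t))
# Expected: centroid near 1000 Hz, rms near 0.707, zcr near 0.125.
```

Libraries such as librosa provide the full feature set (MFCC, spectral contrast, roll-off, flatness, bandwidth); the EMD and Deep Q Network stages of the paper are not reproduced here.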
Image Analysis of the Automatic Welding Defects Detection Based on Deep Learning | [
"Xiaopeng Wang",
"Baoxin Zhang",
"Jinhan Cui",
"Juntao Wu",
"Yan Li",
"Jinhang Li",
"Yunhua Tan",
"Xiaoming Chen",
"Wenliang Wu",
"Xinghua Yu"
] | Automatic detection of welding flaws based on deep learning methods has aroused great interest in non-destructive testing. However, few studies focus on the characteristics of welding flaws in X-ray images. This study uses four deep learning models to train and test on a dataset containing 15,194 X-ray images. A hybrid prediction based on OR logic is proposed to avoid missed detections as much as possible, reducing the miss-detection rate to 0.61%, which is state of the art. Quantitative analysis of the flaws’ characteristics, including the area, aspect ratio, mean, and variance, suggests that the aspect ratios of missed flaws are smaller than 2 and their coefficients of variation are smaller than 0.2. Tracking the critical pixels of X-ray images shows that salt noise leads to false-alarm predictions. Error analysis indicates that when using deep learning methods for automatic welding flaw detection, the characteristics of the flaws and the factors caused by inappropriate X-ray exposure techniques should also be noted. | 10.1007/s10921-023-00992-4 | image analysis of the automatic welding defects detection based on deep learning | automatic detection of welding flaws based on deep learning methods has aroused great interest in the non-destructive testing. however, few studies focus on the characteristics of welding flaws in the x-ray image. this study uses four deep learning models to train and test on a dataset containing 15,194 x-ray images. a hybrid prediction based on or logic is proposed to avoid the miss detection as much as possible and reduce the miss detection rate to 0.61%, which is state of the art. quantitative analysis of flaws’ characteristics, including the area, aspect ratio, mean, and variance, suggests the aspect ratios of miss detected flaws are smaller than 2, and the coefficient variances of miss detected flaws are smaller than 0.2. 
tracking the critical pixels of x-ray images show that salt noises lead to false alarmed predictions. error analysis indicates that when using the deep learning method for automatic welding flaws detection, the characteristics of flaws and the factors caused by inappropriate x-ray exposure techniques also should be noted. | [
"automatic detection",
"welding flaws",
"deep learning methods",
"great interest",
"the non-destructive testing",
"few studies",
"the characteristics",
"welding flaws",
"the x-ray image",
"this study",
"four deep learning models",
"a dataset",
"15,194 x-ray images",
"a hybrid prediction",
"logic",
"the miss detection",
"the miss detection rate",
"0.61%",
"which",
"state",
"the art",
"quantitative analysis",
"flaws’ characteristics",
"the area",
"aspect ratio",
"variance",
"the aspect ratios",
"miss detected flaws",
"the coefficient variances",
"miss detected flaws",
"the critical pixels",
"x-ray images",
"salt noises",
"false alarmed predictions",
"error analysis",
"the deep learning method",
"automatic welding flaws detection",
"the characteristics",
"flaws",
"the factors",
"inappropriate x-ray exposure techniques",
"four",
"15,194",
"0.61%",
"2",
"0.2"
] |
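The OR-logic hybrid prediction described in the abstract above (flag a weld as defective if any model flags it) can be sketched as follows; the model outputs are hypothetical stand-ins, not the paper's four networks:

```python
import numpy as np

def hybrid_or_prediction(model_outputs):
    """OR-logic combination of binary flaw predictions: an image is flagged
    as defective if ANY model flags it, which lowers the miss
    (false-negative) rate at the cost of more false alarms."""
    preds = np.asarray(model_outputs, dtype=bool)  # shape (n_models, n_images)
    return np.logical_or.reduce(preds, axis=0)

# Three hypothetical models scoring the same four radiographs.
m1 = [1, 0, 0, 1]
m2 = [0, 0, 1, 1]
m3 = [0, 0, 0, 1]
combined = hybrid_or_prediction([m1, m2, m3])  # [True, False, True, True]
```

The design choice matches the abstract's goal: in weld inspection a missed flaw is far more costly than a false alarm, so the combiner is biased toward recall.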
Ensemble Deep Learning Approach for Turbidity Prediction of Dooskal Lake Using Remote Sensing Data | [
"Janjhyam Venkata Naga Ramesh",
"Pavithra Roy Patibandla",
"Manjula Shanbhog",
"Srinivas Ambala",
"Mohd Ashraf",
"Ajmeera Kiran"
] | The summer season in India is marked by a severe shortage of water, which poses significant challenges for daily usage and agricultural practices. With unpredictable weather patterns and irregular rainfall, it is crucial to monitor and maintain water bodies such as domestic ponds and lakes in urban areas to ensure they provide clean and safe water for regular use, free from industrial pollutants. In this research paper, we propose an innovative ensemble deep learning approach (e-DLA) that leverages deep learning models to predict the turbidity of Dooskal Lake, located in Telangana, India, using remote sensing data. The proposed approach utilizes various deep learning models, including bagging, boosting, and stacking, to analyze the complex relationships between remote sensing data and turbidity levels in the lake. The study aims to provide accurate and efficient predictions of turbidity levels, which can aid in the management and conservation of water resources in the region. Hyperparameter tuning is employed, and dynamic climatic features are extracted and integrated with the ensemble learning global protective intelligent algorithm to reveal the complex relationship between in situ and measured values of turbidity during the measuring timeline. The proposed approach provides accurate predictions of turbidity levels, enabling the implementation of effective control measures to maintain water quality standards. Experimental results demonstrate that the proposed approach significantly reduces prediction errors compared to existing deep learning models. Overall, this research highlights the potential of machine learning techniques in monitoring and maintaining water resources, particularly in urban areas, to support sustainable water management and usage, and addresses an urgent and pressing issue in India and around the world. 
| 10.1007/s41976-023-00098-5 | ensemble deep learning approach for turbidity prediction of dooskal lake using remote sensing data | the summer season in india is marked by a severe shortage of water, which poses significant challenges for daily usage and agricultural practices. with unpredictable weather patterns and irregular rainfall, it is crucial to monitor and maintain water bodies such as domestic ponds and lakes in urban areas to ensure they provide clean and safe water for regular use, free from industrial pollutants. in this research paper, we propose an innovative ensemble deep learning approach (e-dla) that leverages deep learning models to predict the turbidity of dooskal lake, located in telangana, india, using remote sensing data. the proposed approach utilizes various deep learning models, including bagging, boosting, and stacking, to analyze the complex relationships between remote sensing data and turbidity levels in the lake. the study aims to provide accurate and efficient predictions of turbidity levels, which can aid in the management and conservation of water resources in the region. hyperparameter tuning is employed, and dynamic climatic features are extracted and integrated with the ensemble learning global protective intelligent algorithm to reveal the complex relationship between in situ and measured values of turbidity during the measuring timeline. the proposed approach provides accurate predictions of turbidity levels, enabling the implementation of effective control measures to maintain water quality standards. experimental results demonstrate that the proposed approach significantly reduces prediction errors compared to existing deep learning models. overall, this research highlights the potential of machine learning techniques in monitoring and maintaining water resources, particularly in urban areas, to support sustainable water management and usage, and addresses an urgent and pressing issue in india and around the world. | [
"the summer season",
"india",
"a severe shortage",
"water",
"which",
"significant challenges",
"daily usage",
"agricultural practices",
"unpredictable weather patterns",
"irregular rainfall",
"it",
"water bodies",
"domestic ponds",
"lakes",
"urban areas",
"they",
"clean and safe water",
"regular use",
"industrial pollutants",
"this research paper",
"we",
"an innovative ensemble deep learning approach",
"e",
"-",
"dla",
"that",
"deep learning models",
"the turbidity",
"dooskal lake",
"telangana",
"india",
"remote sensing data",
"the proposed approach",
"various deep learning models",
"bagging",
"stacking",
"the complex relationships",
"remote sensing data",
"turbidity levels",
"the lake",
"the study",
"accurate and efficient predictions",
"turbidity levels",
"which",
"the management",
"conservation",
"water resources",
"the region",
"hyperparameter tuning",
"dynamic climatic features",
"global protective intelligent algorithm",
"the complex relationship",
"situ",
"measured values",
"turbidity",
"the measuring timeline",
"the proposed approach",
"accurate predictions",
"turbidity levels",
"the implementation",
"effective control measures",
"water quality standards",
"experimental results",
"the proposed approach",
"prediction errors",
"existing deep learning models",
"this research",
"the potential",
"machine learning techniques",
"monitoring",
"water resources",
"urban areas",
"sustainable water management",
"usage",
"an urgent and pressing issue",
"india",
"the world",
"the summer season",
"india",
"daily",
"telangana",
"india",
"india"
] |
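A bagging-style combiner of the kind the abstract above mentions can be sketched by simply averaging base-model outputs; the toy regressors below are hypothetical stand-ins, not the paper's turbidity networks (its stacking and boosting variants would learn the combination instead):

```python
import numpy as np

def ensemble_predict(models, x):
    """Bagging-style combiner: average the predictions of several
    independently trained base models for one input."""
    return float(np.mean([m(x) for m in models]))

# Three hypothetical base regressors standing in for trained networks.
models = [lambda x: 2 * x, lambda x: 2 * x + 1, lambda x: 2 * x - 1]
avg = ensemble_predict(models, 3.0)  # mean of 6.0, 7.0 and 5.0 -> 6.0
```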
Analyzing tourism reviews using Deep Learning and AI to predict sentiments | [
"Piergiorgio Marigliano"
] | In this study, we investigate the application of Artificial Intelligence (AI), specifically through Deep Learning and neural networks, in analyzing and predicting sentiments expressed in tourism reviews. Our dataset comprised various hotel reviews, with the objective to predict whether each textual review indicates positive or negative feedback. The primary challenge was to use solely the textual data of the reviews for this prediction. Through meticulous data processing and analysis, we developed neural network-based models that highlight the efficacy of Deep Learning in the accurate interpretation of reviews. Our findings reveal a significant correlation between the content of reviews and their overall ratings, thereby providing new insights into the application of AI in automating and enhancing understanding of customer needs and perceptions in the tourism sector. The principal contribution of this study is the practical demonstration of how AI techniques can be effectively employed to analyze large volumes of textual data, opening new avenues for marketing strategies and service optimization in the hospitality industry. Each review represents a client’s assessment of a hotel. For each textual review, we aim to predict whether it corresponds to a positive review (the customer is satisfied) or a negative review (the customer is dissatisfied). The overall ratings of the reviews can range from 2.5/10 to 10/10. To simplify the issue, we’ll categorize them as follows: negative reviews have overall ratings of less than 5; positive reviews have ratings of 5 or higher. The challenge lies in predicting this information using only the raw textual data of the review. 
| 10.1007/s11135-024-01840-x | analyzing tourism reviews using deep learning and ai to predict sentiments | in this study, we investigate the application of artificial intelligence (ai), specifically through deep learning and neural networks, in analyzing and predicting sentiments expressed in tourism reviews. our dataset comprised various hotel reviews, with the objective to predict whether each textual review indicates positive or negative feedback. the primary challenge was to use solely the textual data of the reviews for this prediction. through meticulous data processing and analysis, we developed neural network-based models that highlight the efficacy of deep learning in the accurate interpretation of reviews. our findings reveal a significant correlation between the content of reviews and their overall ratings, thereby providing new insights into the application of ai in automating and enhancing understanding of customer needs and perceptions in the tourism sector. the principal contribution of this study is the practical demonstration of how ai techniques can be effectively employed to analyze large volumes of textual data, opening new avenues for marketing strategies and service optimization in the hospitality industry. each review represents a client’s assessment of a hotel. for each textual review, we aim to predict whether it corresponds to a positive review (the customer is satisfied) or a negative review (the customer is dissatisfied). the overall ratings of the reviews can range from 2.5/10 to 10/10. to simplify the issue, we’ll categorize them as follows: negative reviews have overall ratings of less than 5; positive reviews have ratings of 5 or higher. the challenge lies in predicting this information using only the raw textual data of the review. | [
"this study",
"we",
"the application",
"artificial intelligence",
"(ai",
"deep learning",
"neural networks",
"sentiments",
"tourism reviews",
"our dataset",
"various hotel reviews",
"the objective",
"each textual review",
"positive or negative feedback",
"the primary challenge",
"the textual data",
"the reviews",
"this prediction",
"meticulous data processing",
"analysis",
"we",
"neural network-based models",
"that",
"the efficacy",
"deep learning",
"the accurate interpretation",
"reviews",
"our findings",
"a significant correlation",
"the content",
"reviews",
"their overall ratings",
"new insights",
"the application",
"ai",
"understanding",
"customer needs",
"perceptions",
"the tourism sector",
"the principal contribution",
"this study",
"the practical demonstration",
"techniques",
"large volumes",
"textual data",
"new avenues",
"marketing strategies",
"service optimization",
"the hospitality industry",
"each review",
"a client’s assessment",
"a hotel",
"each textual review",
"we",
"it",
"a positive review",
"the customer",
"a negative review",
"the customer",
"the overall ratings",
"the reviews",
"the issue",
"we",
"them",
"negative reviews",
"overall ratings",
"positive reviews",
"ratings",
"the challenge",
"this information",
"only the raw textual data",
"the review",
"2.5/10",
"10/10",
"less than 5",
"5"
] |
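The labeling rule stated in the abstract above (overall ratings run from 2.5/10 to 10/10; below 5 is negative, 5 or higher is positive) can be written directly:

```python
def label_review(overall_rating):
    """Binarize an overall rating on the 2.5-10 scale described in the
    abstract: below 5 is a negative review, 5 or higher is positive."""
    if not 2.5 <= overall_rating <= 10:
        raise ValueError("rating must lie in [2.5, 10]")
    return "positive" if overall_rating >= 5 else "negative"

# label_review(4.5) -> "negative"; label_review(8.0) -> "positive"
```

These binary labels are what the neural network is trained to predict from the raw review text.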
Automated assembly quality inspection by deep learning with 2D and 3D synthetic CAD data | [
"Xiaomeng Zhu",
"Pär Mårtensson",
"Lars Hanson",
"Mårten Björkman",
"Atsuto Maki"
] | In the manufacturing industry, automatic quality inspections can lead to improved product quality and productivity. Deep learning-based computer vision technologies, with their superior performance in many applications, can be a possible solution for automatic quality inspections. However, collecting a large amount of annotated training data for deep learning is expensive and time-consuming, especially for processes involving various products and human activities such as assembly. To address this challenge, we propose a method for automated assembly quality inspection using synthetic data generated from computer-aided design (CAD) models. The method involves two steps: automatic data generation and model implementation. In the first step, we generate synthetic data in two formats: two-dimensional (2D) images and three-dimensional (3D) point clouds. In the second step, we apply different state-of-the-art deep learning approaches to the data for quality inspection, including unsupervised domain adaptation, i.e., a method of adapting models across different data distributions, and transfer learning, which transfers knowledge between related tasks. We evaluate the methods in a case study of pedal car front-wheel assembly quality inspection to identify the possible optimal approach for assembly quality inspection. Our results show that the method using Transfer Learning on 2D synthetic images achieves superior performance compared with others. Specifically, it attained 95% accuracy through fine-tuning with only five annotated real images per class. With promising results, our method may be suggested for other similar quality inspection use cases. By utilizing synthetic CAD data, our method reduces the need for manual data collection and annotation. Furthermore, our method performs well on test data with different backgrounds, making it suitable for different manufacturing environments. 
| 10.1007/s10845-024-02375-6 | automated assembly quality inspection by deep learning with 2d and 3d synthetic cad data | in the manufacturing industry, automatic quality inspections can lead to improved product quality and productivity. deep learning-based computer vision technologies, with their superior performance in many applications, can be a possible solution for automatic quality inspections. however, collecting a large amount of annotated training data for deep learning is expensive and time-consuming, especially for processes involving various products and human activities such as assembly. to address this challenge, we propose a method for automated assembly quality inspection using synthetic data generated from computer-aided design (cad) models. the method involves two steps: automatic data generation and model implementation. in the first step, we generate synthetic data in two formats: two-dimensional (2d) images and three-dimensional (3d) point clouds. in the second step, we apply different state-of-the-art deep learning approaches to the data for quality inspection, including unsupervised domain adaptation, i.e., a method of adapting models across different data distributions, and transfer learning, which transfers knowledge between related tasks. we evaluate the methods in a case study of pedal car front-wheel assembly quality inspection to identify the possible optimal approach for assembly quality inspection. our results show that the method using transfer learning on 2d synthetic images achieves superior performance compared with others. specifically, it attained 95% accuracy through fine-tuning with only five annotated real images per class. with promising results, our method may be suggested for other similar quality inspection use cases. by utilizing synthetic cad data, our method reduces the need for manual data collection and annotation. 
furthermore, our method performs well on test data with different backgrounds, making it suitable for different manufacturing environments. | [
"the manufacturing industry",
"automatic quality inspections",
"improved product quality",
"productivity",
"deep learning-based computer vision technologies",
"their superior performance",
"many applications",
"a possible solution",
"automatic quality inspections",
"a large amount",
"annotated training data",
"deep learning",
"processes",
"various products",
"human activities",
"assembly",
"this challenge",
"we",
"a method",
"automated assembly quality inspection",
"synthetic data",
"computer-aided design (cad) models",
"the method",
"two steps",
"automatic data generation and model implementation",
"the first step",
"we",
"synthetic data",
"two formats",
"two-dimensional (2d) images",
"three-dimensional (3d",
"the second step",
"we",
"the-art",
"the data",
"quality inspection",
"unsupervised domain adaptation",
"i.e., a method",
"adapting models",
"different data distributions",
"transfer learning",
"which",
"knowledge",
"related tasks",
"we",
"the methods",
"a case study",
"pedal car front-wheel assembly quality inspection",
"the possible optimal approach",
"assembly quality inspection",
"our results",
"the method",
"transfer learning",
"2d synthetic images",
"superior performance",
"others",
"it",
"95% accuracy",
"fine-tuning",
"only five annotated real images",
"class",
"promising results",
"our method",
"other similar quality inspection use cases",
"synthetic cad data",
"our method",
"the need",
"manual data collection",
"annotation",
"our method",
"test data",
"different backgrounds",
"it",
"different manufacturing environments",
"two",
"first",
"two",
"two",
"2d",
"three",
"3d",
"second",
"2d",
"95%",
"only five"
] |
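The transfer-learning setup described above (fine-tuning with only five annotated real images per class) can be illustrated, under the assumption of a frozen feature extractor, by fitting only a small logistic-regression head. Everything below is a hypothetical sketch, not the paper's pipeline:

```python
import numpy as np

def finetune_head(features, labels, lr=0.1, steps=500):
    """Transfer-learning sketch: a frozen backbone has already mapped images
    to feature vectors; only a logistic-regression "head" is fitted on a
    handful of labeled real examples."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels, dtype=float)
    w, b = np.zeros(features.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid probabilities
        grad = p - labels                              # d(cross-entropy)/d(logit)
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Hypothetical 2-D features from a frozen backbone; class 1 iff x0 > x1.
X = [[2.0, 0.0], [0.0, 2.0], [3.0, 1.0], [1.0, 3.0]]
y = [1, 0, 1, 0]
w, b = finetune_head(X, y)
```

Freezing the backbone is what makes so few labeled examples sufficient: only the head's handful of parameters is estimated from the real data.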
ConIQA: A deep learning method for perceptual image quality assessment with limited data | [
"M. Hossein Eybposh",
"Changjia Cai",
"Aram Moossavi",
"Jose Rodriguez-Romaguera",
"Nicolas C. Pégard"
] | Effectively assessing the realism and naturalness of images in virtual (VR) and augmented (AR) reality applications requires Full Reference Image Quality Assessment (FR-IQA) metrics that closely align with human perception. Deep learning-based IQAs that are trained on human-labeled data have recently shown promise in generic computer vision tasks. However, their performance decreases in applications where perfect matches between the reference and the distorted images should not be expected, or whenever distortion patterns are restricted to specific domains. Tackling this issue necessitates training a task-specific neural network, yet generating human-labeled FR-IQAs is costly, and deep learning typically demands substantial labeled data. To address these challenges, we developed ConIQA, a deep learning-based IQA that leverages consistency training and a novel data augmentation method to learn from both labeled and unlabeled data. This makes ConIQA well-suited for contexts with scarce labeled data. To validate ConIQA, we considered the example application of Computer-Generated Holography (CGH) where specific artifacts such as ringing, speckle, and quantization errors routinely occur, yet are not explicitly accounted for by existing IQAs. We developed a new dataset, HQA1k, that comprises 1000 natural images each paired with an image rendered using various popular CGH algorithms, and quality-rated by thirteen human participants. Our results show that ConIQA achieves superior Pearson (0.98), Spearman (0.965), and Kendall’s tau (0.86) correlations over fifteen FR-IQA metrics by up to 5%, showcasing significant improvements in aligning with human perception on the HQA1k dataset. 
| 10.1038/s41598-024-70469-5 | coniqa: a deep learning method for perceptual image quality assessment with limited data | effectively assessing the realism and naturalness of images in virtual (vr) and augmented (ar) reality applications requires full reference image quality assessment (fr-iqa) metrics that closely align with human perception. deep learning-based iqas that are trained on human-labeled data have recently shown promise in generic computer vision tasks. however, their performance decreases in applications where perfect matches between the reference and the distorted images should not be expected, or whenever distortion patterns are restricted to specific domains. tackling this issue necessitates training a task-specific neural network, yet generating human-labeled fr-iqas is costly, and deep learning typically demands substantial labeled data. to address these challenges, we developed coniqa, a deep learning-based iqa that leverages consistency training and a novel data augmentation method to learn from both labeled and unlabeled data. this makes coniqa well-suited for contexts with scarce labeled data. to validate coniqa, we considered the example application of computer-generated holography (cgh) where specific artifacts such as ringing, speckle, and quantization errors routinely occur, yet are not explicitly accounted for by existing iqas. we developed a new dataset, hqa1k, that comprises 1000 natural images each paired with an image rendered using various popular cgh algorithms, and quality-rated by thirteen human participants. our results show that coniqa achieves superior pearson (0.98), spearman (0.965), and kendall’s tau (0.86) correlations over fifteen fr-iqa metrics by up to 5%, showcasing significant improvements in aligning with human perception on the hqa1k dataset. | [
"the realism",
"naturalness",
"images",
"virtual (vr",
"augmented (ar) reality applications",
"full reference image quality assessment",
"fr-iqa",
"metrics",
"that",
"human perception",
"deep learning-based iqas",
"that",
"human-labeled data",
"promise",
"generic computer vision tasks",
"their performance",
"applications",
"perfect matches",
"the reference",
"the distorted images",
"distortion patterns",
"specific domains",
"this issue necessitates",
"a task-specific neural network",
"human-labeled fr-iqas",
"deep learning",
"substantial labeled data",
"these challenges",
"we",
"coniqa",
"that",
"consistency training",
"a novel data augmentation method",
"both labeled and unlabeled data",
"this",
"contexts",
"scarce labeled data",
"validate coniqa",
"we",
"the example application",
"computer-generated holography",
"cgh",
"where specific artifacts",
"ringing",
"speckle",
"quantization errors",
"existing iqas",
"we",
"a new dataset",
"hqa1k",
"that",
"1000 natural images",
"each",
"an image",
"various popular cgh algorithms",
"thirteen human participants",
"our results",
"coniqa",
"superior pearson",
"spearman",
"kendall’s tau (0.86) correlations",
"fifteen fr-iqa metrics",
"up to 5%",
"significant improvements",
"human perception",
"the hqa1k dataset",
"1000",
"thirteen",
"0.98",
"0.965",
"0.86",
"fifteen",
"up to 5%"
] |
DCLGM: Fusion Recommendation Model Based on LightGBM and Deep Learning | [
"Bin Zhao",
"Bin Li",
"Jiqun Zhang",
"Wei Cao",
"Yilong Gao"
] | The recommendation system can mine valuable information according to user preferences, so it is widely used in various industries. However, the performance of recommendation systems is generally affected by the problem of data sparsity, and LightGBM can alleviate the impact caused by data sparsity to a certain extent. To this end, this paper proposes a fusion recommendation model based on the LightGBM and deep learning—CLGM model. The model is composed of LightGBM, cross network and deep neural network. First, the features in the dataset are fused and extracted through LightGBM, and the feature with the highest classification accuracy is selected as the input of the neural network layer; Then, using the cross network and the deep neural network, the linear cross combination feature relationship and nonlinear correlation relationship between high-order features are respectively obtained; finally, the results obtained by the pre-order network are linearly weighted and combined to obtain the final recommendation result. In this paper, AUC and Logloss are used as evaluation indicators to verify the model on the public dataset Criteo and dataset Avazu. The simulation experiment results show that, compared with the four typical recommendation models, the recommendation effect of this model is better. | 10.1007/s11063-024-11504-4 | dclgm: fusion recommendation model based on lightgbm and deep learning | the recommendation system can mine valuable information according to user preferences, so it is widely used in various industries. however, the performance of recommendation systems is generally affected by the problem of data sparsity, and lightgbm can alleviate the impact caused by data sparsity to a certain extent. to this end, this paper proposes a fusion recommendation model based on the lightgbm and deep learning—clgm model. the model is composed of lightgbm, cross network and deep neural network. 
first, the features in the dataset are fused and extracted through lightgbm, and the feature with the highest classification accuracy is selected as the input of the neural network layer; then, using the cross network and the deep neural network, the linear cross combination feature relationship and nonlinear correlation relationship between high-order features are respectively obtained; finally, the results obtained by the pre-order network are linearly weighted and combined to obtain the final recommendation result. in this paper, auc and logloss are used as evaluation indicators to verify the model on the public dataset criteo and dataset avazu. the simulation experiment results show that, compared with the four typical recommendation models, the recommendation effect of this model is better. | [
"the recommendation system",
"valuable information",
"user preferences",
"it",
"various industries",
"the performance",
"recommendation systems",
"the problem",
"data sparsity",
"lightgbm",
"the impact",
"data sparsity",
"a certain extent",
"this end",
"this paper",
"a fusion recommendation model",
"the lightgbm",
"deep learning",
"clgm model",
"the model",
"lightgbm",
"cross network",
"deep neural network",
"the features",
"the dataset",
"lightgbm",
"the feature",
"the highest classification accuracy",
"the input",
"the neural network layer",
"the cross network",
"the deep neural network",
"the linear cross combination feature relationship",
"nonlinear correlation relationship",
"high-order features",
"the results",
"the pre-order network",
"the final recommendation result",
"this paper",
"auc",
"logloss",
"evaluation indicators",
"the model",
"the public dataset criteo",
"dataset avazu",
"the simulation experiment results",
"the four typical recommendation models",
"the recommendation effect",
"this model",
"first",
"four"
] |
Deep learning versus manual morphology-based embryo selection in IVF: a randomized, double-blind noninferiority trial | [
"Peter J. Illingworth",
"Christos Venetis",
"David K. Gardner",
"Scott M. Nelson",
"Jørgen Berntsen",
"Mark G. Larman",
"Franca Agresta",
"Saran Ahitan",
"Aisling Ahlström",
"Fleur Cattrall",
"Simon Cooke",
"Kristy Demmers",
"Anette Gabrielsen",
"Johnny Hindkjær",
"Rebecca L. Kelley",
"Charlotte Knight",
"Lisa Lee",
"Robert Lahoud",
"Manveen Mangat",
"Hannah Park",
"Anthony Price",
"Geoffrey Trew",
"Bettina Troest",
"Anna Vincent",
"Susanne Wennerström",
"Lyndsey Zujovic",
"Thorir Hardarson"
] | To assess the value of deep learning in selecting the optimal embryo for in vitro fertilization, a multicenter, randomized, double-blind, noninferiority parallel-group trial was conducted across 14 in vitro fertilization clinics in Australia and Europe. Women under 42 years of age with at least two early-stage blastocysts on day 5 were randomized to either the control arm, using standard morphological assessment, or the study arm, employing a deep learning algorithm, intelligent Data Analysis Score (iDAScore), for embryo selection. The primary endpoint was a clinical pregnancy rate with a noninferiority margin of 5%. The trial included 1,066 patients (533 in the iDAScore group and 533 in the morphology group). The iDAScore group exhibited a clinical pregnancy rate of 46.5% (248 of 533 patients), compared to 48.2% (257 of 533 patients) in the morphology arm (risk difference −1.7%; 95% confidence interval −7.7, 4.3; P = 0.62). This study was not able to demonstrate noninferiority of deep learning for clinical pregnancy rate when compared to standard morphology and a predefined prioritization scheme. Australian New Zealand Clinical Trials Registry (ANZCTR) registration: 379161. | 10.1038/s41591-024-03166-5 | deep learning versus manual morphology-based embryo selection in ivf: a randomized, double-blind noninferiority trial | to assess the value of deep learning in selecting the optimal embryo for in vitro fertilization, a multicenter, randomized, double-blind, noninferiority parallel-group trial was conducted across 14 in vitro fertilization clinics in australia and europe. women under 42 years of age with at least two early-stage blastocysts on day 5 were randomized to either the control arm, using standard morphological assessment, or the study arm, employing a deep learning algorithm, intelligent data analysis score (idascore), for embryo selection. the primary endpoint was a clinical pregnancy rate with a noninferiority margin of 5%. 
the trial included 1,066 patients (533 in the idascore group and 533 in the morphology group). the idascore group exhibited a clinical pregnancy rate of 46.5% (248 of 533 patients), compared to 48.2% (257 of 533 patients) in the morphology arm (risk difference −1.7%; 95% confidence interval −7.7, 4.3; p = 0.62). this study was not able to demonstrate noninferiority of deep learning for clinical pregnancy rate when compared to standard morphology and a predefined prioritization scheme. australian new zealand clinical trials registry (anzctr) registration: 379161. | [
"the value",
"deep learning",
"the optimal embryo",
"in vitro fertilization",
"a multicenter",
"randomized, double-blind, noninferiority parallel-group trial",
"in vitro fertilization clinics",
"australia",
"europe",
"women",
"42 years",
"age",
"at least two early-stage blastocysts",
"day",
"either the control arm",
"standard morphological assessment",
"the study arm",
"a deep learning algorithm",
"intelligent data analysis score",
"(idascore",
"embryo selection",
"the primary endpoint",
"a clinical pregnancy rate",
"a noninferiority margin",
"5%",
"the trial",
"1,066 patients",
"the idascore group",
"the morphology group",
"the idascore group",
"a clinical pregnancy rate",
"46.5%",
"533 patients",
"48.2%",
"533 patients",
"the morphology arm",
"risk difference",
"−1.7%",
"95% confidence interval −7.7",
"p",
"this study",
"noninferiority",
"deep learning",
"clinical pregnancy rate",
"standard morphology",
"a predefined prioritization scheme",
"australian new zealand clinical trials registry (anzctr) registration",
"14",
"australia",
"europe",
"42 years of age",
"at least two",
"day 5",
"5%",
"1,066",
"533",
"533",
"46.5%",
"248",
"533",
"48.2%",
"257",
"533",
"−1.7%",
"95%",
"−7.7",
"4.3",
"0.62",
"australian",
"new zealand",
"379161"
] |
Cyber Digital Twin with Deep Learning Model for Enterprise Products Management | [
"Ziqian Wang"
] | Time series abnormalities might be signs of upcoming problems; thus, new computational anomaly detection techniques are needed for early warning systems and real-time system condition monitoring. Security and intrusion detection systems (IDS) are critical components of Internet of Things (IoT) devices. Current approaches are inadequate for handling complex data and unique intrusion detection systems (IDSs) in today’s network security platforms; deep learning techniques are needed. The Cybertwin-Enhanced self-attention mechanisms with long short-term memory anomaly detection (DL-Cyberwin-Enhanced SAM-LSTM-AD) model for business solutions that may achieve higher prediction accuracy for IoT devices is the main component of this suggested study. It is based on deep learning. To find the absolute error rate threshold of a new model, this model examines assaults against the Cybertwin-neural network. We looked at the CSE-CIC-IDS-2018 dataset to gauge the classifiers’ performance. Utilising the model's capacity to do time series analysis, this research combines the processed data into its suggested framework. These models show that the suggested model is feasible based on the high true positive rate (TPR) and low false positive rate (FPR) that were achieved. The model is evaluated using the test dataset using important metrics including F1-score, ROC-AUC, TPR, FPR, accuracy, and precision. | 10.1007/s11277-024-11146-8 | cyber digital twin with deep learning model for enterprise products management | time series abnormalities might be signs of upcoming problems; thus, new computational anomaly detection techniques are needed for early warning systems and real-time system condition monitoring. security and intrusion detection systems (ids) are critical components of internet of things (iot) devices. 
current approaches are inadequate for handling complex data and unique intrusion detection systems (idss) in today’s network security platforms; deep learning techniques are needed. the cybertwin-enhanced self-attention mechanisms with long short-term memory anomaly detection (dl-cyberwin-enhanced sam-lstm-ad) model for business solutions that may achieve higher prediction accuracy for iot devices is the main component of this suggested study. it is based on deep learning. to find the absolute error rate threshold of a new model, this model examines assaults against the cybertwin-neural network. we looked at the cse-cic-ids-2018 dataset to gauge the classifiers’ performance. utilising the model's capacity to do time series analysis, this research combines the processed data into its suggested framework. these models show that the suggested model is feasible based on the high true positive rate (tpr) and low false positive rate (fpr) that were achieved. the model is evaluated using the test dataset using important metrics including f1-score, roc-auc, tpr, fpr, accuracy, and precision. | [
"series abnormalities",
"signs",
"upcoming problems",
"new computational anomaly detection techniques",
"early warning systems",
"real-time system condition monitoring",
"security and intrusion detection systems",
"ids",
"critical components",
"internet",
"things",
"(iot) devices",
"current approaches",
"complex data",
"unique intrusion detection systems",
"idss",
"today’s network security platforms",
"deep learning techniques",
"the cybertwin-enhanced self-attention mechanisms",
"long short-term memory anomaly detection",
"(dl-cyberwin-enhanced sam-lstm-ad) model",
"business solutions",
"that",
"higher prediction accuracy",
"iot devices",
"the main component",
"this suggested study",
"it",
"deep learning",
"the absolute error rate threshold",
"a new model",
"this model",
"assaults",
"the cybertwin-neural network",
"we",
"the cse-cic-ids-2018 dataset",
"the classifiers’ performance",
"the model's capacity",
"time series analysis",
"this research",
"the processed data",
"its suggested framework",
"these models",
"the suggested model",
"the high true positive rate",
"tpr",
"low false positive rate",
"fpr",
"that",
"the model",
"the test dataset",
"important metrics",
"f1-score",
"roc-auc",
"tpr",
"fpr",
"accuracy",
"precision",
"today",
"anomaly detection (dl-cyberwin",
"sam-lstm-ad",
"roc"
] |
Deep transfer learning-based automated detection of blast disease in paddy crop | [
"Amandeep Singh",
"Jaspreet Kaur",
"Kuldeep Singh",
"Maninder Lal Singh"
] | A major proportion of the loss faced by the agricultural industry originates from the diseases of the crop during cultivation. Paddy crop is one of the dominant crops which provides food to a huge population. In this crop, the losses caused by such diseases vary from 30 to 90% of the yield. Therefore, the automated detection of different diseases in paddy crops seeks the attention of the research community. In this context, the present work proposes a deep transfer learning solution for the automated detection of blast disease of paddy, which is the major cause of its yield reduction. For this purpose, an image dataset of healthy and blast disease-infected leaf images of paddy crop has been developed. These images are fed to five convolutional neural network-based deep transfer learning algorithms, viz., LeNet, AlexNet, VGG 16, Inception v1, and Xception models for binary classification. The performance analysis of given algorithms reveals that AlexNet provides better results for binary classification with an average accuracy of 98.7% followed by VGG 16 and LeNet architectures having accuracies of 98.2% and 97.8%. So, this deep transfer learning-based approach may assist in reducing the gap between experts and farmers by providing an automated expert advice platform for the timely detection of diseases in paddy crop. | 10.1007/s11760-023-02735-4 | deep transfer learning-based automated detection of blast disease in paddy crop | a major proportion of the loss faced by the agricultural industry originates from the diseases of the crop during cultivation. paddy crop is one of the dominant crops which provides food to a huge population. in this crop, the losses caused by such diseases vary from 30 to 90% of the yield. therefore, the automated detection of different diseases in paddy crops seeks the attention of the research community. 
in this context, the present work proposes a deep transfer learning solution for the automated detection of blast disease of paddy, which is the major cause of its yield reduction. for this purpose, an image dataset of healthy and blast disease-infected leaf images of paddy crop has been developed. these images are fed to five convolutional neural network-based deep transfer learning algorithms, viz., lenet, alexnet, vgg 16, inception v1, and xception models for binary classification. the performance analysis of given algorithms reveals that alexnet provides better results for binary classification with an average accuracy of 98.7% followed by vgg 16 and lenet architectures having accuracies of 98.2% and 97.8%. so, this deep transfer learning-based approach may assist in reducing the gap between experts and farmers by providing an automated expert advice platform for the timely detection of diseases in paddy crop. | [
"a major proportion",
"the loss",
"the agricultural industry originates",
"the diseases",
"the crop",
"cultivation",
"paddy crop",
"the dominant crops",
"which",
"food",
"a huge population",
"this crop",
"the losses",
"such diseases",
"30 to 90%",
"the yield",
"the automated detection",
"different diseases",
"paddy crops",
"the attention",
"the research community",
"this context",
"the present work",
"a deep transfer learning solution",
"the automated detection",
"blast disease",
"paddy",
"which",
"the major cause",
"its yield reduction",
"this purpose",
"paddy crop",
"these images",
"five convolutional neural network-based deep transfer learning algorithms",
"viz",
"lenet",
"alexnet",
"vgg",
"inception v1",
"xception models",
"binary classification",
"the performance analysis",
"given algorithms",
"alexnet",
"better results",
"binary classification",
"an average accuracy",
"98.7%",
"accuracies",
"98.2%",
"97.8%",
"this deep transfer learning-based approach",
"the gap",
"experts",
"farmers",
"an automated expert advice platform",
"the timely detection",
"diseases",
"paddy crop",
"30 to 90%",
"fed",
"five",
"16",
"98.7%",
"16",
"98.2%",
"97.8%"
] |
Network intrusion detection and mitigation in SDN using deep learning models | [
"Mamatha Maddu",
"Yamarthi Narasimha Rao"
] | Software-Defined Networking (SDN) is a contemporary network strategy utilized instead of a traditional network structure. It provides significantly more administrative efficiency and ease than traditional networks. However, the centralized control used in SDN entails an elevated risk of single-point failure that is more susceptible to different kinds of network assaults like Distributed Denial of Service (DDoS), DoS, spoofing, and API exploitation which are very complex to identify and mitigate. Thus, a powerful intrusion detection system (IDS) based on deep learning is created in this study for the detection and mitigation of network intrusions. This system contains several stages and begins with the data augmentation method named Deep Convolutional Generative Adversarial Networks (DCGAN) to overcome the data imbalance problem. Then, the features are extracted from the input data using a CenterNet-based approach. After extracting effective characteristics, ResNet152V2 with Slime Mold Algorithm (SMA) based deep learning is implemented to categorize the assaults in InSDN and Edge IIoT datasets. Once the network intrusion is detected, the proposed defense module is activated to restore regular network connectivity quickly. Finally, several experiments are carried out to validate the algorithm's robustness, and the outcomes reveal that the proposed system can successfully detect and mitigate network intrusions. | 10.1007/s10207-023-00771-2 | network intrusion detection and mitigation in sdn using deep learning models | software-defined networking (sdn) is a contemporary network strategy utilized instead of a traditional network structure. it provides significantly more administrative efficiency and ease than traditional networks. 
however, the centralized control used in sdn entails an elevated risk of single-point failure that is more susceptible to different kinds of network assaults like distributed denial of service (ddos), dos, spoofing, and api exploitation which are very complex to identify and mitigate. thus, a powerful intrusion detection system (ids) based on deep learning is created in this study for the detection and mitigation of network intrusions. this system contains several stages and begins with the data augmentation method named deep convolutional generative adversarial networks (dcgan) to overcome the data imbalance problem. then, the features are extracted from the input data using a centernet-based approach. after extracting effective characteristics, resnet152v2 with slime mold algorithm (sma) based deep learning is implemented to categorize the assaults in insdn and edge iiot datasets. once the network intrusion is detected, the proposed defense module is activated to restore regular network connectivity quickly. finally, several experiments are carried out to validate the algorithm's robustness, and the outcomes reveal that the proposed system can successfully detect and mitigate network intrusions. | [
"software-defined networking",
"sdn",
"a contemporary network strategy",
"a traditional network structure",
"it",
"significantly more administrative efficiency",
"ease",
"traditional networks",
"the centralized control",
"sdn",
"an elevated risk",
"single-point failure",
"that",
"different kinds",
"network assaults",
"distributed denial",
"service",
"ddos",
"api exploitation",
"which",
"a powerful intrusion detection system",
"ids",
"deep learning",
"this study",
"the detection",
"mitigation",
"network intrusions",
"this system",
"several stages",
"the data augmentation method",
"deep convolutional generative adversarial networks",
"dcgan",
"the data imbalance problem",
"the features",
"the input data",
"a centernet-based approach",
"effective characteristics",
"slime mold",
"algorithm",
"sma",
"based deep learning",
"the assaults",
"insdn",
"iiot datasets",
"the network intrusion",
"the proposed defense module",
"regular network connectivity",
"several experiments",
"the algorithm's robustness",
"the outcomes",
"the proposed system",
"network intrusions"
] |
Crop disease detection via ensembled-deep-learning paradigm and ABC Coyote pack optimization algorithm (ABC-CPOA) | [
"M. Chithambarathanu",
"M. K. Jeyakumar"
] | Crop disease is a significant issue that affects the growth and yield of crops, leading to financial loss for farmers. Identification and treatment of crop diseases have become challenging due to the increase in the variety of diseases and the lack of knowledge among farmers. To address this issue, this investigation uses an ensembled-deep-learning paradigm to propose a deep learning-based model for crop disease identification trained with an ABC-CPOA. Initially, collected raw images are pre-processed via Bilateral filter and gamma correction Feature Extraction: Then, from the pre-processed images, the features like texture feature (Local Quinary Pattern (LQP), Local Gradient Pattern (LGP), Enriched Local Binary Pattern (E-LBP), color features (Color Histogram, Color Moments), shape features (Contour-based features, Convex Hull). Optimal feature selection- Among the extracted features, the optimal features is designated by means of a self-improved meta-heuristic optimization model referred as ABC-CPOA. This ABC-CPOA model is an extended version of standard Coyote Optimization Algorithm (COA). Crop disease detection phase is modelled with a new ensembled-deep-learning paradigm. Ensembled-deep-learning paradigm comprises Attention-based Bi-LSTM, Recurrent Neural Networks (RNNs) and Optimized Deep Neural Network (O-DNN). The weight function of O-DNN is fine-tuned using the new ABC-CPOA. Precision, recall, sensitivity, and specificity, in addition to TPR, FPR, FNR, and TNR, F1-score, and accuracy are used to assess the suggested approach. The implementation was performed by the MATLAB tool (version: 2022B). | 10.1007/s11042-024-19329-y | crop disease detection via ensembled-deep-learning paradigm and abc coyote pack optimization algorithm (abc-cpoa) | crop disease is a significant issue that affects the growth and yield of crops, leading to financial loss for farmers. 
identification and treatment of crop diseases have become challenging due to the increase in the variety of diseases and the lack of knowledge among farmers. to address this issue, this investigation uses an ensembled-deep-learning paradigm to propose a deep learning-based model for crop disease identification trained with an abc-cpoa. initially, collected raw images are pre-processed via bilateral filter and gamma correction feature extraction: then, from the pre-processed images, the features like texture feature (local quinary pattern (lqp), local gradient pattern (lgp), enriched local binary pattern (e-lbp), color features (color histogram, color moments), shape features (contour-based features, convex hull). optimal feature selection- among the extracted features, the optimal features is designated by means of a self-improved meta-heuristic optimization model referred as abc-cpoa. this abc-cpoa model is an extended version of standard coyote optimization algorithm (coa). crop disease detection phase is modelled with a new ensembled-deep-learning paradigm. ensembled-deep-learning paradigm comprises attention-based bi-lstm, recurrent neural networks (rnns) and optimized deep neural network (o-dnn). the weight function of o-dnn is fine-tuned using the new abc-cpoa. precision, recall, sensitivity, and specificity, in addition to tpr, fpr, fnr, and tnr, f1-score, and accuracy are used to assess the suggested approach. the implementation was performed by the matlab tool (version: 2022b). | [
"crop disease",
"a significant issue",
"that",
"the growth",
"yield",
"crops",
"financial loss",
"farmers",
"identification",
"treatment",
"crop diseases",
"the increase",
"the variety",
"diseases",
"the lack",
"knowledge",
"farmers",
"this issue",
"this investigation",
"an ensembled-deep-learning paradigm",
"a deep learning-based model",
"crop disease identification",
"an abc-cpoa",
"raw images",
"bilateral filter",
"gamma correction",
"feature extraction",
"the pre-processed images",
"texture feature",
"local quinary pattern",
"lqp",
"local gradient pattern",
"lgp",
"enriched local binary pattern",
"e",
"-",
"lbp",
"color features",
"color histogram",
"color moments",
"shape features",
"(contour-based features",
"convex hull",
"optimal feature",
"the extracted features",
"the optimal features",
"means",
"a self-improved meta-heuristic optimization model",
"abc-cpoa",
"this abc-cpoa model",
"an extended version",
"standard coyote optimization algorithm",
"coa",
"crop disease detection phase",
"a new ensembled-deep-learning paradigm",
"ensembled-deep-learning paradigm",
"attention-based bi",
"-",
"lstm",
"recurrent neural networks",
"rnns",
"deep neural network",
"o",
"dnn",
"the weight function",
"o",
"-",
"dnn",
"the new abc-cpoa",
"precision",
"recall",
"sensitivity",
"specificity",
"addition",
"tpr",
"fpr",
"fnr",
"tnr",
"f1-score",
"accuracy",
"the suggested approach",
"the implementation",
"the matlab tool",
"version",
"abc",
"abc",
"abc",
"abc"
] |
Development of a deep learning model for cancer diagnosis by inspecting cell-free DNA end-motifs | [
"Hongru Shen",
"Meng Yang",
"Jilei Liu",
"Kexin Chen",
"Xiangchun Li"
] | Accurate discrimination between patients with and without cancer from cfDNA is crucial for early cancer diagnosis. Herein, we develop and validate a deep-learning-based model entitled end-motif inspection via transformer (EMIT) for discriminating individuals with and without cancer by learning feature representations from cfDNA end-motifs. EMIT is a self-supervised learning approach that models rankings of cfDNA end-motifs. We include 4606 samples subjected to different types of cfDNA sequencing to develop EMIT, and subsequently evaluate classification performance of linear projections of EMIT on six datasets and an additional inhouse testing set encompassing whole-genome, whole-genome bisulfite and 5-hydroxymethylcytosine sequencing. The linear projection of representations from EMIT achieved area under the receiver operating curve (AUROC) values ranging from 0.895 (0.835–0.955) to 0.996 (0.994–0.997) across these six datasets, outperforming its baseline by significant margins. Additionally, we showed that linear projection of EMIT representations can achieve an AUROC of 0.962 (0.914–1.0) in identification of lung cancer on an independent testing set subjected to whole-exome sequencing. The findings of this study indicate that a transformer-based deep learning model can learn cancer-discrimative representations from cfDNA end-motifs. The representations of this deep learning model can be exploited for discriminating patients with and without cancer. | 10.1038/s41698-024-00635-5 | development of a deep learning model for cancer diagnosis by inspecting cell-free dna end-motifs | accurate discrimination between patients with and without cancer from cfdna is crucial for early cancer diagnosis. herein, we develop and validate a deep-learning-based model entitled end-motif inspection via transformer (emit) for discriminating individuals with and without cancer by learning feature representations from cfdna end-motifs. 
emit is a self-supervised learning approach that models rankings of cfdna end-motifs. we include 4606 samples subjected to different types of cfdna sequencing to develop emit, and subsequently evaluate classification performance of linear projections of emit on six datasets and an additional inhouse testing set encompassing whole-genome, whole-genome bisulfite and 5-hydroxymethylcytosine sequencing. the linear projection of representations from emit achieved area under the receiver operating curve (auroc) values ranging from 0.895 (0.835–0.955) to 0.996 (0.994–0.997) across these six datasets, outperforming its baseline by significant margins. additionally, we showed that linear projection of emit representations can achieve an auroc of 0.962 (0.914–1.0) in identification of lung cancer on an independent testing set subjected to whole-exome sequencing. the findings of this study indicate that a transformer-based deep learning model can learn cancer-discrimative representations from cfdna end-motifs. the representations of this deep learning model can be exploited for discriminating patients with and without cancer. | [
"accurate discrimination",
"patients",
"cancer",
"cfdna",
"early cancer diagnosis",
"we",
"a deep-learning-based model entitled end-motif inspection",
"transformer",
"individuals",
"cancer",
"feature representations",
"cfdna end-motifs",
"emit",
"a self-supervised learning approach",
"that",
"cfdna end-motifs",
"we",
"4606 samples",
"different types",
"cfdna",
"emit",
"classification performance",
"linear projections",
"six datasets",
"an additional inhouse testing",
"whole-genome, whole-genome bisulfite",
"5-hydroxymethylcytosine",
"the linear projection",
"representations",
"emit",
"area",
"the receiver operating curve (auroc) values",
"(0.835–0.955",
"0.994–0.997",
"these six datasets",
"its baseline",
"significant margins",
"we",
"linear projection",
"emit representations",
"an auroc",
"identification",
"lung cancer",
"an independent testing",
"the findings",
"this study",
"a transformer-based deep learning model",
"cancer-discrimative representations",
"cfdna end-motifs",
"the representations",
"this deep learning model",
"patients",
"cancer",
"emit",
"linear",
"six",
"5",
"0.895",
"0.996",
"six",
"linear",
"0.962"
] |
An automated weed detection approach using deep learning and UAV imagery in smart agriculture system | [
"Baozhong Liu"
] | Weed detection plays a critical role in smart and precise agriculture systems by enabling targeted weed management and reducing environmental impact. Unmanned aerial vehicles (UAVs) and their associated imagery have emerged as powerful tools for weed detection. Traditional and deep learning methods have been explored for weed detection, with deep learning methods being favored due to their ability to handle complex patterns. However, accuracy rate and computation cost challenges persist in deep learning-based weed detection methods. To address this, we propose a method on the basis of the YOLOv5 algorithm to deal with high accuracy demand and low computation cost requirements. The approach involves model generation using a custom dataset and training, validation, and testing sets. Experimental results and performance evaluation validate the proposed method that indicates the research contributes to advancing weed detection in smart and precise agriculture systems, leveraging deep learning techniques for enhanced accuracy and efficiency. | 10.1007/s12596-023-01445-x | an automated weed detection approach using deep learning and uav imagery in smart agriculture system | weed detection plays a critical role in smart and precise agriculture systems by enabling targeted weed management and reducing environmental impact. unmanned aerial vehicles (uavs) and their associated imagery have emerged as powerful tools for weed detection. traditional and deep learning methods have been explored for weed detection, with deep learning methods being favored due to their ability to handle complex patterns. however, accuracy rate and computation cost challenges persist in deep learning-based weed detection methods. to address this, we propose a method on the basis of the yolov5 algorithm to deal with high accuracy demand and low computation cost requirements. the approach involves model generation using a custom dataset and training, validation, and testing sets. 
experimental results and performance evaluation validate the proposed method that indicates the research contributes to advancing weed detection in smart and precise agriculture systems, leveraging deep learning techniques for enhanced accuracy and efficiency. | [
"weed detection",
"a critical role",
"smart and precise agriculture systems",
"targeted weed management",
"environmental impact",
"unmanned aerial vehicles",
"their associated imagery",
"powerful tools",
"weed detection",
"traditional and deep learning methods",
"weed detection",
"deep learning methods",
"their ability",
"complex patterns",
"accuracy rate",
"computation cost challenges",
"deep learning-based weed detection methods",
"this",
"we",
"a method",
"the basis",
"the yolov5 algorithm",
"high accuracy demand",
"low computation cost requirements",
"the approach",
"model generation",
"a custom dataset",
"training",
"validation",
"testing sets",
"experimental results",
"performance evaluation",
"the proposed method",
"that",
"the research",
"weed detection",
"smart and precise agriculture systems",
"deep learning techniques",
"enhanced accuracy",
"efficiency",
"yolov5"
] |
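The row above describes a YOLOv5-based weed detector. YOLO-family detectors score predicted boxes against ground truth with Intersection-over-Union; below is a minimal, generic IoU sketch (the paper's exact evaluation protocol is not given in the abstract, so this is illustrative only):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned (x1, y1, x2, y2) boxes.

    This is the standard overlap measure used by YOLO-style detectors for
    matching predictions to ground truth and for non-max suppression.
    """
    # Coordinates of the intersection rectangle (empty if boxes are disjoint).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ≈ 0.1429
```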
Performance prediction in online academic course: a deep learning approach with time series imaging | [
"Ahmed Ben Said",
"Abdel-Salam G. Abdel-Salam",
"Khalifa A. Hazaa"
] | With the COVID-19 outbreak, schools and universities have massively adopted online learning to ensure the continuation of the learning process. However, in such setting, instructors lack efficient mechanisms to evaluate the learning gains and get insights about difficulties learners encounter. In this research work, we tackle the problem of predicting learner performance in online learning using a deep learning-based approach. Our proposed solution allows stakeholders involved in the online learning to anticipate the learner outcome ahead of the final assessment hence offering the opportunity for proactive measures to assist the learners. We propose a two-pathway deep learning model to classify learner performance using their interaction during the online sessions in the form of clickstreams. We also propose to transform these time series of clicks into images using the Gramian Angular Field. The learning model makes use of the available extra demographic and assessment information. We evaluate our approach on the Open University Learning Analytics Dataset. Comprehensive comparative study is conducted with evaluation against state-of-art approaches under different experimental settings. We also demonstrate the importance of including extra demographic and assessment data in the prediction process. | 10.1007/s11042-023-17596-9 | performance prediction in online academic course: a deep learning approach with time series imaging | with the covid-19 outbreak, schools and universities have massively adopted online learning to ensure the continuation of the learning process. however, in such setting, instructors lack efficient mechanisms to evaluate the learning gains and get insights about difficulties learners encounter. in this research work, we tackle the problem of predicting learner performance in online learning using a deep learning-based approach. 
our proposed solution allows stakeholders involved in the online learning to anticipate the learner outcome ahead of the final assessment hence offering the opportunity for proactive measures to assist the learners. we propose a two-pathway deep learning model to classify learner performance using their interaction during the online sessions in the form of clickstreams. we also propose to transform these time series of clicks into images using the gramian angular field. the learning model makes use of the available extra demographic and assessment information. we evaluate our approach on the open university learning analytics dataset. comprehensive comparative study is conducted with evaluation against state-of-art approaches under different experimental settings. we also demonstrate the importance of including extra demographic and assessment data in the prediction process. | [
"the covid-19 outbreak",
"schools",
"universities",
"online learning",
"the continuation",
"the learning process",
"such setting",
"instructors",
"efficient mechanisms",
"the learning gains",
"insights",
"difficulties",
"learners",
"this research work",
"we",
"the problem",
"learner performance",
"online learning",
"a deep learning-based approach",
"our proposed solution",
"stakeholders",
"the online learning",
"the learner outcome",
"the final assessment",
"the opportunity",
"proactive measures",
"the learners",
"we",
"a two-pathway deep learning model",
"learner performance",
"their interaction",
"the online sessions",
"the form",
"clickstreams",
"we",
"these time series",
"clicks",
"images",
"the gramian angular field",
"the learning model",
"use",
"the available extra demographic and assessment information",
"we",
"our approach",
"the open university learning analytics",
"comprehensive comparative study",
"evaluation",
"art",
"different experimental settings",
"we",
"the importance",
"extra demographic and assessment data",
"the prediction process",
"covid-19",
"two",
"gramian"
] |
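The row above encodes clickstream time series as images via the Gramian Angular Field before feeding them to a CNN. A minimal sketch of the Gramian Angular Summation Field variant is below; the click values are hypothetical and the paper's exact preprocessing may differ:

```python
import numpy as np

def gramian_angular_field(x):
    """Encode a 1-D time series as a Gramian Angular Summation Field image.

    The series is min-max rescaled to [-1, 1], mapped to polar angles via
    arccos, and the pairwise angular sums form the image:
    G[i, j] = cos(phi_i + phi_j).
    """
    x = np.asarray(x, dtype=float)
    # Rescale to [-1, 1] so arccos is defined for every sample.
    x_scaled = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    # Outer angular sum yields a square, symmetric "image".
    return np.cos(phi[:, None] + phi[None, :])

clicks = [0, 3, 1, 4, 2, 5]            # hypothetical per-session click counts
image = gramian_angular_field(clicks)
print(image.shape)                      # (6, 6) square image for a CNN input
```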
Deep Learning Enhanced Snapshot Generation for Efficient Hyper-reduction in Nonlinear Structural Dynamics | [
"Hossein Najafi",
"Morteza Karamooz Mahdiabadi"
] | PurposeThis study presents a novel approach to enhancing hyper-reduction in nonlinear structural dynamics by utilizing the predictive capabilities of stacked Long Short-Term Memory (LSTM) neural networks. Hyper-reduction methods are crucial for overcoming the limitations of traditional model order reduction techniques, particularly in accurately capturing the nonlinear behavior of internal force vectors in complex structures.Method The proposed technique employs stacked LSTM neural networks to generate training snapshots for the Energy Conserving Mesh Sampling and Weighting (ECSW) hyper-reduction method. By training the model on a well-defined dataset, we achieve an impressive accuracy of 97.5%. The effectiveness of our method is demonstrated through a geometrically nonlinear dynamic analysis of a leaf spring, resulting in only a 3.24% error when compared to full simulation results. This study emphasizes the potential of deep learning techniques in improving hyper-reduction methods and underscores the importance of computational efficiency in simulations of complex structural dynamics.ResultsThe findings reveal significant advancements in the application of deep learning for hyper-reduction methods, showcasing the ability to accurately model nonlinear structural behaviors while maintaining computational efficiency. This research contributes valuable insights into the integration of advanced machine learning techniques within the field of structural dynamics. | 10.1007/s42417-024-01528-4 | deep learning enhanced snapshot generation for efficient hyper-reduction in nonlinear structural dynamics | purposethis study presents a novel approach to enhancing hyper-reduction in nonlinear structural dynamics by utilizing the predictive capabilities of stacked long short-term memory (lstm) neural networks. 
hyper-reduction methods are crucial for overcoming the limitations of traditional model order reduction techniques, particularly in accurately capturing the nonlinear behavior of internal force vectors in complex structures.method the proposed technique employs stacked lstm neural networks to generate training snapshots for the energy conserving mesh sampling and weighting (ecsw) hyper-reduction method. by training the model on a well-defined dataset, we achieve an impressive accuracy of 97.5%. the effectiveness of our method is demonstrated through a geometrically nonlinear dynamic analysis of a leaf spring, resulting in only a 3.24% error when compared to full simulation results. this study emphasizes the potential of deep learning techniques in improving hyper-reduction methods and underscores the importance of computational efficiency in simulations of complex structural dynamics.resultsthe findings reveal significant advancements in the application of deep learning for hyper-reduction methods, showcasing the ability to accurately model nonlinear structural behaviors while maintaining computational efficiency. this research contributes valuable insights into the integration of advanced machine learning techniques within the field of structural dynamics. | [
"purposethis study",
"a novel approach",
"hyper-reduction",
"nonlinear structural dynamics",
"the predictive capabilities",
"stacked long short-term memory",
"lstm",
"neural networks",
"hyper-reduction methods",
"the limitations",
"traditional model order reduction techniques",
"the nonlinear behavior",
"internal force vectors",
"complex structures.method",
"the proposed technique employs",
"lstm neural networks",
"training snapshots",
"the energy conserving mesh sampling",
"weighting",
"hyper-reduction method",
"the model",
"a well-defined dataset",
"we",
"an impressive accuracy",
"97.5%",
"the effectiveness",
"our method",
"a geometrically nonlinear dynamic analysis",
"a leaf spring",
"only a 3.24% error",
"full simulation results",
"this study",
"the potential",
"deep learning techniques",
"hyper-reduction methods",
"the importance",
"computational efficiency",
"simulations",
"complex structural dynamics.resultsthe findings",
"significant advancements",
"the application",
"deep learning",
"hyper-reduction methods",
"the ability",
"nonlinear structural behaviors",
"computational efficiency",
"this research",
"valuable insights",
"the integration",
"advanced machine learning techniques",
"the field",
"structural dynamics",
"97.5%",
"a leaf spring",
"3.24%"
] |
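The row above uses LSTM-generated snapshots to drive ECSW hyper-reduction. The reduction stage such snapshots typically feed (a Proper Orthogonal Decomposition basis via SVD) can be sketched as follows; the ECSW weighting step itself is not shown, and the toy snapshot matrix is an assumption:

```python
import numpy as np

def pod_basis(snapshots, tol=1e-8):
    """Proper Orthogonal Decomposition basis from a snapshot matrix.

    Columns of `snapshots` are sampled states; the returned orthonormal
    columns span the dominant modes, keeping singular values above a
    relative tolerance.
    """
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    k = int(np.sum(s / s[0] > tol))     # number of retained modes
    return u[:, :k]

# Rank-2 toy snapshot matrix: the third column is the sum of the first two.
snaps = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 2.0]])
basis = pod_basis(snaps)
print(basis.shape[1])                   # 2 retained modes
```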
Optimization of job shop scheduling problem based on deep reinforcement learning | [
"Dongping Qiao",
"Lvqi Duan",
"HongLei Li",
"Yanqiu Xiao"
] | Aiming at the optimization problem of minimizing the maximum completion time in job shop scheduling, a deep reinforcement learning optimization algorithm is proposed. First, a deep reinforcement learning scheduling environment is built based on the disjunctive graph model, and three channels of state characteristics are established. The action space consists of 20 designed combination scheduling rules. The reward function is designed based on the proportional relationship between the total work of the scheduled operation and the current maximum completion time. The deep convolutional neural network is used to construct action network and target network, and the state features are used as inputs to output the Q value of each action. Then, the action is selected by using the action validity exploration and exploitation strategy. Finally, the immediate reward is calculated and the scheduling environment is updated. Experiments are carried out using benchmark instances to verify the algorithm. The results show that it can balance solution quality and computation time effectively, and the trained agent has good generalization ability to the scheduling problem in the non-zero initial state. | 10.1007/s12065-023-00885-5 | optimization of job shop scheduling problem based on deep reinforcement learning | aiming at the optimization problem of minimizing the maximum completion time in job shop scheduling, a deep reinforcement learning optimization algorithm is proposed. first, a deep reinforcement learning scheduling environment is built based on the disjunctive graph model, and three channels of state characteristics are established. the action space consists of 20 designed combination scheduling rules. the reward function is designed based on the proportional relationship between the total work of the scheduled operation and the current maximum completion time. 
the deep convolutional neural network is used to construct action network and target network, and the state features are used as inputs to output the q value of each action. then, the action is selected by using the action validity exploration and exploitation strategy. finally, the immediate reward is calculated and the scheduling environment is updated. experiments are carried out using benchmark instances to verify the algorithm. the results show that it can balance solution quality and computation time effectively, and the trained agent has good generalization ability to the scheduling problem in the non-zero initial state. | [
"the optimization problem",
"the maximum completion time",
"job shop scheduling",
"a deep reinforcement learning optimization algorithm",
"a deep reinforcement learning scheduling environment",
"the disjunctive graph model",
"three channels",
"state characteristics",
"the action space",
"20 designed combination scheduling rules",
"the reward function",
"the proportional relationship",
"the total work",
"the scheduled operation",
"the current maximum completion time",
"the deep convolutional neural network",
"action network",
"target network",
"the state features",
"inputs",
"the q value",
"each action",
"the action",
"the action validity exploration and exploitation strategy",
"the immediate reward",
"the scheduling environment",
"experiments",
"benchmark instances",
"the algorithm",
"the results",
"it",
"solution quality and computation time",
"the trained agent",
"good generalization ability",
"the scheduling problem",
"the non-zero initial state",
"first",
"three",
"20"
] |
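The row above selects among 20 combination scheduling rules using an "action validity exploration and exploitation strategy". One plausible reading is a masked epsilon-greedy policy over the action network's Q-values; the masking scheme below is an assumption, not the paper's exact rule:

```python
import numpy as np

def select_action(q_values, valid_mask, epsilon, rng=None):
    """Masked epsilon-greedy selection over scheduling rules.

    `q_values` is the action network's output; `valid_mask` flags which
    rules are applicable in the current scheduling state. Invalid actions
    are never chosen, whether exploring or exploiting.
    """
    rng = rng or np.random.default_rng()
    valid = np.flatnonzero(valid_mask)
    if rng.random() < epsilon:                        # explore: random valid action
        return int(rng.choice(valid))
    masked = np.where(valid_mask, q_values, -np.inf)  # exploit: best valid Q-value
    return int(np.argmax(masked))
```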
A Study on Different Deep Learning Algorithms Used in Deep Neural Nets: MLP SOM and DBN | [
"J. Naskath",
"G. Sivakamasundari",
"A. Alif Siddiqua Begum"
] | Deep learning is a wildly popular topic in machine learning and is structured as a series of nonlinear layers that learns various levels of data representations. Deep learning employs numerous layers to represent data abstractions to implement various computer models. Deep learning approaches like generative, discriminative models and model transfer have transformed information processing. This article proposes a comprehensive review of various deep learning algorithms Multi layer perception, Self-organizing map and deep belief networks algorithms. It first briefly introduces historical and recent state-of-the-art reviews with suitable architectures and implementation steps. Moreover, the various applications of those algorithms in various fields such as wireless networks, Adhoc networks, Mobile ad-hoc and vehicular ad-hoc networks, speech recognition engineering, medical applications, natural language processing, material science and remote sensing applications, etc. are classified. | 10.1007/s11277-022-10079-4 | a study on different deep learning algorithms used in deep neural nets: mlp som and dbn | deep learning is a wildly popular topic in machine learning and is structured as a series of nonlinear layers that learns various levels of data representations. deep learning employs numerous layers to represent data abstractions to implement various computer models. deep learning approaches like generative, discriminative models and model transfer have transformed information processing. this article proposes a comprehensive review of various deep learning algorithms multi layer perception, self-organizing map and deep belief networks algorithms. it first briefly introduces historical and recent state-of-the-art reviews with suitable architectures and implementation steps. 
moreover, the various applications of those algorithms in various fields such as wireless networks, adhoc networks, mobile ad-hoc and vehicular ad-hoc networks, speech recognition engineering, medical applications, natural language processing, material science and remote sensing applications, etc. are classified. | [
"deep learning",
"a wildly popular topic",
"machine learning",
"a series",
"nonlinear layers",
"that",
"various levels",
"data representations",
"deep learning",
"numerous layers",
"data abstractions",
"various computer models",
"approaches",
"discriminative models",
"model transfer",
"information processing",
"this article",
"a comprehensive review",
"various deep learning algorithms",
"multi layer perception",
"self-organizing map",
"deep belief networks",
"it",
"the-art",
"suitable architectures",
"implementation steps",
"the various applications",
"those algorithms",
"various fields",
"wireless networks",
"adhoc networks",
"mobile ad-hoc and vehicular ad-hoc networks",
"speech recognition engineering",
"medical applications",
"natural language processing",
"material science",
"remote sensing applications",
"first"
] |
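The row above surveys MLP, SOM, and DBN architectures. Of these, the Self-Organizing Map has the most distinctive training rule (competition, then neighborhood-weighted pull toward the input); a minimal one-step sketch on a 1-D grid, with illustrative learning rate and neighborhood width:

```python
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One Self-Organizing Map update on a 1-D grid of units.

    Finds the best-matching unit (BMU) for input `x`, then pulls every
    unit toward `x` with strength decaying by grid distance to the BMU.
    `weights` has shape (n_units, n_features).
    """
    dists = np.linalg.norm(weights - x, axis=1)
    bmu = int(np.argmin(dists))                        # competition step
    grid = np.arange(len(weights))
    h = np.exp(-((grid - bmu) ** 2) / (2 * sigma**2))  # neighborhood kernel
    return weights + lr * h[:, None] * (x - weights), bmu
```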
A Review of Deep Learning Techniques for Glaucoma Detection | [
"Takfarines Guergueb",
"Moulay A. Akhloufi"
] | Glaucoma is one of the major reasons for visual impairment all across the globe. The recent advancements in machine learning techniques have greatly facilitated ophthalmologists in the early diagnosis of ocular diseases through the employment of automated systems. Several studies have been published lately to address the timely detection of glaucoma using deep learning approaches. A comprehensive review of the deep learning approaches employed for glaucoma detection using retinal fundus images is presented in this paper. The available retinal image datasets, image pre-processing techniques, state-of-the-art models, and performance evaluation metrics used in the recent studies are reviewed. This systematic review aims to provide critical insights and potential research directions to the ophthalmologists and researchers in this domain. | 10.1007/s42979-023-01734-z | a review of deep learning techniques for glaucoma detection | glaucoma is one of the major reasons for visual impairment all across the globe. the recent advancements in machine learning techniques have greatly facilitated ophthalmologists in the early diagnosis of ocular diseases through the employment of automated systems. several studies have been published lately to address the timely detection of glaucoma using deep learning approaches. a comprehensive review of the deep learning approaches employed for glaucoma detection using retinal fundus images is presented in this paper. the available retinal image datasets, image pre-processing techniques, state-of-the-art models, and performance evaluation metrics used in the recent studies are reviewed. this systematic review aims to provide critical insights and potential research directions to the ophthalmologists and researchers in this domain. | [
"glaucoma",
"the major reasons",
"visual impairment",
"the globe",
"the recent advancements",
"machine learning techniques",
"the early diagnosis",
"ocular diseases",
"the employment",
"automated systems",
"several studies",
"the timely detection",
"glaucoma",
"deep learning approaches",
"a comprehensive review",
"the deep learning approaches",
"glaucoma detection",
"retinal fundus images",
"this paper",
"the available retinal image datasets",
"image pre-processing techniques",
"the-art",
"performance evaluation metrics",
"the recent studies",
"this systematic review",
"critical insights",
"potential research directions",
"the ophthalmologists",
"researchers",
"this domain",
"glaucoma"
] |