Dataset schema (column: type, min–max length):
  title:            string, 31–206
  authors:          sequence, 1–85
  abstract:         string, 428–3.21k
  doi:              string, 21–31
  cleaned_title:    string, 31–206
  cleaned_abstract: string, 428–3.21k
  key_phrases:      sequence, 19–150
Question classification task based on deep learning models with self-attention mechanism
[ "Subhash Mondal", "Manas Barman", "Amitava Nag" ]
Question classification (QC) is a process that involves classifying questions based on their type to enable systems to provide accurate responses by matching the question type with relevant information. To understand and respond to natural language questions posed by humans, machine learning models or systems must comprehend the type of information requested, which can often be inferred from the structure and wording of the question. The high dimensionality and sparse nature of text data lead to challenges for text classification. These tasks can be improved using deep learning (DL) approaches to process complex patterns and features within input data. By training on large amounts of labeled data, deep learning algorithms can automatically extract relevant features and representations from text, resulting in more accurate and robust classification. This study utilizes a dataset comprising 5452 instances of questions and six output labels and uses two word embedding techniques, namely GloVe and Word2Vec, tested on the dataset using three deep learning models, LSTM, BiLSTM, and GRU, followed by a convolution layer. Additionally, a self-attention layer is included, which helps the model focus on the most relevant information when making predictions. Finally, an analytical discussion of the proposed models and their performance results provides insight into how GloVe and Word2Vec perform on the above-mentioned models. The GloVe embedding outperforms Word2Vec, achieving 97.68% accuracy and a moderate loss of 16.98 with the GRU model.
10.1007/s11042-024-19239-z
question classification task based on deep learning models with self-attention mechanism
question classification (qc) is a process that involves classifying questions based on their type to enable systems to provide accurate responses by matching the question type with relevant information. to understand and respond to natural language questions posed by humans, machine learning models or systems must comprehend the type of information requested, which can often be inferred from the structure and wording of the question. the high dimensionality and sparse nature of text data lead to challenges for text classification. these tasks can be improved using deep learning (dl) approaches to process complex patterns and features within input data. by training on large amounts of labeled data, deep learning algorithms can automatically extract relevant features and representations from text, resulting in more accurate and robust classification. this study utilizes a dataset comprising 5452 instances of questions and six output labels and uses two word embedding techniques, namely glove and word2vec, tested on the dataset using three deep learning models, lstm, bilstm, and gru, followed by a convolution layer. additionally, a self-attention layer is included, which helps the model focus on the most relevant information when making predictions. finally, an analytical discussion of the proposed models and their performance results provides insight into how glove and word2vec perform on the above-mentioned models. the glove embedding outperforms word2vec, achieving 97.68% accuracy and a moderate loss of 16.98 with the gru model.
[ "question classification", "qc", "a process", "that", "questions", "their type", "systems", "accurate responses", "the question type", "relevant information", "natural language questions", "humans", "machine learning models", "systems", "the type", "information", "which", "the structure", "wording", "the question", "the high dimensionality", "sparse nature", "text data", "challenges", "text classification", "these tasks", "dl", "complex patterns", "features", "input data", "training", "large amounts", "labeled data", "deep learning algorithms", "relevant features", "representations", "text", "more accurate and robust classification", "this study", "a dataset", "5452 instances", "questions", "six output labels", "two different word", "techniques", "glove", "word2vec", "the dataset", "three deep learning models", "lstm", "bilstm", "gru", "a convolution layer", "a self-attention layer", "which", "the model", "more relevant information", "predictions", "an analytical discussion", "the proposed models", "their performance results", "insight", "how glove", "the above-mentioned models", "the glove", "outperforms", "97.68% accuracy", "a moderate loss", "the gru model", "5452", "six", "two", "three", "97.68%", "16.98" ]
Deep multi-metric training: the need of multi-metric curve evaluation to avoid weak learning
[ "Michail Mamalakis", "Abhirup Banerjee", "Surajit Ray", "Craig Wilkie", "Richard H. Clayton", "Andrew J. Swift", "George Panoutsos", "Bart Vorselaars" ]
The development and application of artificial intelligence-based computer vision systems in medicine, environment, and industry are playing an increasingly prominent role. Hence, optimal and efficient hyperparameter tuning strategies are crucial for delivering the highest performance of deep learning networks on large and demanding datasets. In our study, we have developed and evaluated a new training methodology named deep multi-metric training (DMMT) for enhanced training performance. DMMT delivers a state of robust learning for deep networks using an important new criterion of multi-metric performance evaluation. We have tested the DMMT methodology on multi-class (three, four, and ten classes), multi-vendor (different X-ray imaging devices), and multi-size (large, medium, and small) datasets. The validity of the DMMT methodology has been tested in three different classification problems: (i) medical disease classification, (ii) environmental classification, and (iii) ecological classification. For disease classification, we have used two large COVID-19 chest X-ray datasets, namely the BIMCV COVID-19+ and Sheffield hospital datasets. The environmental application is related to the classification of weather images into cloudy, rainy, shine, or sunrise conditions. The ecological classification task involves a classification of three animal species (cat, dog, wild) and a classification of ten animal and transportation vehicle categories (CIFAR-10). We have used the state-of-the-art networks DenseNet-121, ResNet-50, VGG-16, VGG-19, and DenResCov-19 (DenRes-131) to verify that our novel methodology is applicable to a variety of different deep learning networks. To the best of our knowledge, this is the first work that proposes a training methodology to deliver robust learning over a variety of deep learning networks and multi-field classification problems.
10.1007/s00521-024-10182-6
deep multi-metric training: the need of multi-metric curve evaluation to avoid weak learning
the development and application of artificial intelligence-based computer vision systems in medicine, environment, and industry are playing an increasingly prominent role. hence, optimal and efficient hyperparameter tuning strategies are crucial for delivering the highest performance of deep learning networks on large and demanding datasets. in our study, we have developed and evaluated a new training methodology named deep multi-metric training (dmmt) for enhanced training performance. dmmt delivers a state of robust learning for deep networks using an important new criterion of multi-metric performance evaluation. we have tested the dmmt methodology on multi-class (three, four, and ten classes), multi-vendor (different x-ray imaging devices), and multi-size (large, medium, and small) datasets. the validity of the dmmt methodology has been tested in three different classification problems: (i) medical disease classification, (ii) environmental classification, and (iii) ecological classification. for disease classification, we have used two large covid-19 chest x-ray datasets, namely the bimcv covid-19+ and sheffield hospital datasets. the environmental application is related to the classification of weather images into cloudy, rainy, shine, or sunrise conditions. the ecological classification task involves a classification of three animal species (cat, dog, wild) and a classification of ten animal and transportation vehicle categories (cifar-10). we have used the state-of-the-art networks densenet-121, resnet-50, vgg-16, vgg-19, and denrescov-19 (denres-131) to verify that our novel methodology is applicable to a variety of different deep learning networks. to the best of our knowledge, this is the first work that proposes a training methodology to deliver robust learning over a variety of deep learning networks and multi-field classification problems.
[ "the development", "application", "artificial intelligence-based computer vision systems", "medicine", "environment", "industry", "an increasingly prominent role", "the need", "optimal and efficient hyperparameter tuning strategies", "the highest performance", "the deep learning networks", "large and demanding datasets", "our study", "we", "a new training methodology", "deep multi-metric training", "dmmt", "enhanced training performance", "the dmmt", "a state", "robust learning", "deep networks", "a new important criterion", "multi-metric performance evaluation", "we", "the dmmt methodology", "-", "-", "vendors (different x-ray imaging devices", "multi-size (large, medium, and small) datasets", "the validity", "the dmmt methodology", "three different classification problems", "(i) medical disease classification", "(ii) environmental classification", "(iii) ecological classification", "disease classification", "we", "two large covid-19 chest x-rays datasets", "namely the bimcv covid-19", "hospital datasets", "the environmental application", "the classification", "weather images", "shine", "sunrise conditions", "the ecological classification task", "a classification", "three animal species", "a classification", "ten animals", "transportation vehicles categories", "cifar-10", "we", "the-art", "resnet-50", "vgg-16", "vgg-19", "denrescov-19", "(denres-131", "our novel methodology", "a variety", "different deep learning networks", "our knowledge", "this", "the first work", "that", "a training methodology", "robust learning", "a variety", "deep learning networks", "multi-field classification problems", "three", "four", "ten", "three", "two", "covid-19", "covid-19+", "three", "ten", "cifar-10", "densenet-121, resnet-50", "vgg-16", "vgg-19", "denrescov-19", "first" ]
Deep Reinforcement Learning Model for Stock Portfolio Management Based on Data Fusion
[ "Haifeng Li", "Mo Hai" ]
Deep reinforcement learning (DRL) can be used to extract deep features that can be incorporated into reinforcement learning systems to enable improved decision-making; DRL can therefore also be used for managing stock portfolios. Traditional methods cannot fully exploit the advantages of DRL because they are generally based on real-time stock quotes, which do not have sufficient features for making comprehensive decisions. In this study, in addition to stock quotes, we introduced stock financial indices as additional stock features. Moreover, we used Markowitz mean-variance theory for determining stock correlation. A three-agent deep reinforcement learning model called Collaborative Multi-agent reinforcement learning-based stock Portfolio management System (CMPS) was designed and trained based on fused data. In CMPS, each agent was implemented with a deep Q-network to obtain the features of time-series stock data, and a self-attention network was used to combine the output of each agent. We added a risk-free asset strategy to CMPS to prevent risks and referred to this model as CMPS-Risk Free (CMPS-RF). We conducted experiments under different market conditions using the stock data of China Shanghai Stock Exchange 50 and compared our model with the state-of-the-art models. The results showed that CMPS could obtain better profits than the compared benchmark models, and CMPS-RF was able to accurately recognize the market risk and achieved the best Sharpe and Calmar ratios. The study findings are expected to aid in the development of an efficient investment-trading strategy.
10.1007/s11063-024-11582-4
deep reinforcement learning model for stock portfolio management based on data fusion
deep reinforcement learning (drl) can be used to extract deep features that can be incorporated into reinforcement learning systems to enable improved decision-making; drl can therefore also be used for managing stock portfolios. traditional methods cannot fully exploit the advantages of drl because they are generally based on real-time stock quotes, which do not have sufficient features for making comprehensive decisions. in this study, in addition to stock quotes, we introduced stock financial indices as additional stock features. moreover, we used markowitz mean-variance theory for determining stock correlation. a three-agent deep reinforcement learning model called collaborative multi-agent reinforcement learning-based stock portfolio management system (cmps) was designed and trained based on fused data. in cmps, each agent was implemented with a deep q-network to obtain the features of time-series stock data, and a self-attention network was used to combine the output of each agent. we added a risk-free asset strategy to cmps to prevent risks and referred to this model as cmps-risk free (cmps-rf). we conducted experiments under different market conditions using the stock data of china shanghai stock exchange 50 and compared our model with the state-of-the-art models. the results showed that cmps could obtain better profits than the compared benchmark models, and cmps-rf was able to accurately recognize the market risk and achieved the best sharpe and calmar ratios. the study findings are expected to aid in the development of an efficient investment-trading strategy.
[ "deep reinforcement learning", "drl", "deep features", "that", "reinforcement learning systems", "improved decision-making", "drl", "stock portfolios", "traditional methods", "the advantages", "drl", "they", "real-time stock quotes", "which", "sufficient features", "comprehensive decisions", "this study", "addition", "stock quotes", "we", "stock financial indices", "additional stock features", "we", "markowitz mean-variance theory", "stock correlation", "a three-agent deep reinforcement learning model", "collaborative multi-agent reinforcement learning-based stock portfolio management system", "cmps", "fused data", "cmps", "each agent", "a deep q-network", "the features", "time-series stock data", "a self-attention network", "the output", "each agent", "we", "a risk-free asset strategy", "cmps", "risks", "this model", "cmps-risk free (cmps", "we", "experiments", "different market conditions", "the stock data", "china shanghai stock exchange", "our model", "the-art", "the results", "cmps", "better profits", "the compared benchmark models", "cmps-rf", "the market risk", "the best sharpe", "calmar ratios", "the study findings", "the development", "an efficient investment-trading strategy", "three", "china shanghai stock exchange", "50", "sharpe" ]
Sub-trajectory clustering with deep reinforcement learning
[ "Anqi Liang", "Bin Yao", "Bo Wang", "Yinpei Liu", "Zhida Chen", "Jiong Xie", "Feifei Li" ]
Sub-trajectory clustering is a fundamental problem in many trajectory applications. Existing approaches usually divide the clustering procedure into two phases: segmenting trajectories into sub-trajectories and then clustering these sub-trajectories. However, researchers need to develop complex human-crafted segmentation rules for specific applications, making the clustering results sensitive to the segmentation rules and lacking in generality. To solve this problem, we propose a novel algorithm using the clustering results to guide the segmentation, which is based on reinforcement learning (RL). The novelty is that the segmentation and clustering components cooperate closely and improve each other continuously to yield better clustering results. To devise our RL-based algorithm, we model the procedure of trajectory segmentation as a Markov decision process (MDP). We apply Deep-Q-Network (DQN) learning to train an RL model for the segmentation and achieve excellent clustering results. Experimental results on real datasets demonstrate the superior performance of the proposed RL-based approach over state-of-the-art methods.
10.1007/s00778-023-00833-w
sub-trajectory clustering with deep reinforcement learning
sub-trajectory clustering is a fundamental problem in many trajectory applications. existing approaches usually divide the clustering procedure into two phases: segmenting trajectories into sub-trajectories and then clustering these sub-trajectories. however, researchers need to develop complex human-crafted segmentation rules for specific applications, making the clustering results sensitive to the segmentation rules and lacking in generality. to solve this problem, we propose a novel algorithm using the clustering results to guide the segmentation, which is based on reinforcement learning (rl). the novelty is that the segmentation and clustering components cooperate closely and improve each other continuously to yield better clustering results. to devise our rl-based algorithm, we model the procedure of trajectory segmentation as a markov decision process (mdp). we apply deep-q-network (dqn) learning to train an rl model for the segmentation and achieve excellent clustering results. experimental results on real datasets demonstrate the superior performance of the proposed rl-based approach over state-of-the-art methods.
[ "sub-trajectory clustering", "a fundamental problem", "many trajectory applications", "existing approaches", "the clustering procedure", "two phases", "trajectories", "sub", "-", "trajectories", "these sub", "-", "trajectories", "researchers", "complex human-crafted segmentation rules", "specific applications", "the clustering results", "the segmentation rules", "generality", "this problem", "we", "a novel algorithm", "the clustering results", "the segmentation", "which", "reinforcement learning", "rl", "the novelty", "the segmentation", "clustering components", "better clustering results", "our rl-based algorithm", "we", "the procedure", "trajectory segmentation", "a markov decision process", "mdp", "we", "q", "dqn", "an rl model", "the segmentation", "excellent clustering results", "experimental results", "real datasets", "the superior performance", "the proposed rl-based approach", "the-art", "two" ]
Prediction of non-muscle invasive bladder cancer recurrence using deep learning of pathology image
[ "Guang-Yue Wang", "Jing-Fei Zhu", "Qi-Chao Wang", "Jia-Xin Qin", "Xin-Lei Wang", "Xing Liu", "Xin-Yu Liu", "Jun-Zhi Chen", "Jie-Fei Zhu", "Shi-Chao Zhuo", "Di Wu", "Na Li", "Liu Chao", "Fan-Lai Meng", "Hao Lu", "Zhen-Duo Shi", "Zhi-Gang Jia", "Cong-Hui Han" ]
We aimed to build a deep learning-based pathomics model to predict the early recurrence of non-muscle-invasive bladder cancer (NMIBC) in this work. A total of 147 patients from Xuzhou Central Hospital were enrolled as the training cohort, and 63 patients from Suqian Affiliated Hospital of Xuzhou Medical University were enrolled as the test cohort. Based on two consecutive phases of patch-level prediction and WSI-level prediction, we built a pathomics model, with the initial model developed in the training cohort and subjected to transfer learning, and then the test cohort was validated for generalization. The features extracted from the visualization model were used for model interpretation. After transfer learning, the area under the receiver operating characteristic curve for the deep learning-based pathomics model in the test cohort was 0.860 (95% CI 0.752–0.969), with good agreement between the transfer learning training cohort and the test cohort in predicting recurrence, and the predicted values matched well with the observed values, with p values of 0.667766 and 0.140233 for the Hosmer–Lemeshow test, respectively. Good clinical applicability was observed using a decision curve analysis method. The deep learning-based pathomics model we developed showed promising performance in predicting recurrence within one year in NMIBC patients. In addition, 10 pathology features predictive of NMIBC recurrence can be visualized, which may be used to facilitate personalized management of NMIBC patients and to avoid ineffective or unnecessary treatment for the benefit of patients.
10.1038/s41598-024-66870-9
prediction of non-muscle invasive bladder cancer recurrence using deep learning of pathology image
we aimed to build a deep learning-based pathomics model to predict the early recurrence of non-muscle-invasive bladder cancer (nmibc) in this work. a total of 147 patients from xuzhou central hospital were enrolled as the training cohort, and 63 patients from suqian affiliated hospital of xuzhou medical university were enrolled as the test cohort. based on two consecutive phases of patch-level prediction and wsi-level prediction, we built a pathomics model, with the initial model developed in the training cohort and subjected to transfer learning, and then the test cohort was validated for generalization. the features extracted from the visualization model were used for model interpretation. after transfer learning, the area under the receiver operating characteristic curve for the deep learning-based pathomics model in the test cohort was 0.860 (95% ci 0.752–0.969), with good agreement between the transfer learning training cohort and the test cohort in predicting recurrence, and the predicted values matched well with the observed values, with p values of 0.667766 and 0.140233 for the hosmer–lemeshow test, respectively. good clinical applicability was observed using a decision curve analysis method. the deep learning-based pathomics model we developed showed promising performance in predicting recurrence within one year in nmibc patients. in addition, 10 pathology features predictive of nmibc recurrence can be visualized, which may be used to facilitate personalized management of nmibc patients and to avoid ineffective or unnecessary treatment for the benefit of patients.
[ "we", "a deep learning-based pathomics model", "the early recurrence", "non-muscle-infiltrating bladder cancer", "nmibc", "this work", "a total", "147 patients", "xuzhou central hospital", "the training cohort", "63 patients", "suqian affiliated hospital", "xuzhou medical university", "the test cohort", "two consecutive phases", "patch level prediction", "wsi-level predictione", "we", "a pathomics model", "the initial model", "the training cohort", "learning", "the test cohort", "generalization", "the features", "the visualization model", "model interpretation", "migration learning", "the area", "the receiver operating characteristic curve", "the deep learning-based pathomics model", "the test cohort", "(95%", "good agreement", "the migration training cohort", "the test cohort", "recurrence", "the predicted values", "the observed values", "p values", "the hosmer", "lemeshow test", "the good clinical application", "a decision curve analysis method", "we", "a deep learning-based pathomics model", "promising performance", "recurrence", "one year", "nmibc patients", "10 state prediction nmibc recurrence group pathology features", "which", "personalized management", "nmibc patients", "ineffective or unnecessary treatment", "the benefit", "patients", "147", "xuzhou", "63", "xuzhou medical university", "two", "0.860", "95%", "0.667766", "0.140233", "one year", "10" ]
Pre-operative lung ablation prediction using deep learning
[ "Krishna Nand Keshavamurthy", "Carsten Eickhoff", "Etay Ziv" ]
Objective: Microwave lung ablation (MWA) is a minimally invasive and inexpensive alternative cancer treatment for patients who are not candidates for surgery/radiotherapy. However, a major challenge for MWA is its relatively high tumor recurrence rate, due to incomplete treatment as a result of inaccurate planning. We introduce a patient-specific, deep-learning model to accurately predict post-treatment ablation zones to aid planning and enable effective treatments. Materials and methods: Our IRB-approved retrospective study consisted of ablations with a single applicator/burn/vendor between 01/2015 and 01/2019. The input data included pre-procedure computerized tomography (CT), ablation power/time, and applicator position. The ground truth ablation zone was segmented from follow-up CT post-treatment. Novel deformable image registration optimized for ablation scans and an applicator-centric co-ordinate system for data analysis were applied. Our prediction model was based on the U-net architecture. The registrations were evaluated using target registration error (TRE), and predictions were evaluated using Bland-Altman plots, Dice coefficient, precision, and recall, compared against the applicator vendor’s estimates. Results: The data included 113 unique ablations from 72 patients (median age 57, interquartile range (IQR) 49–67; 41 women). We obtained a TRE ≤ 2 mm on 52 ablations. Our prediction had no bias from ground truth ablation volumes (p = 0.169), unlike the vendor’s estimate (p < 0.001), and had smaller limits of agreement (p < 0.001). An 11% improvement was achieved in the Dice score. The ability to account for patient-specific in-vivo anatomical effects due to vessels, chest wall, heart, lung boundaries, and fissures was shown. Conclusions: We demonstrated a patient-specific deep-learning model to predict the ablation treatment effect prior to the procedure, with the potential to improve planning, achieve complete treatments, and reduce tumor recurrence. Clinical relevance statement: Our method addresses the current lack of reliable tools to estimate ablation extents, required for ensuring successful ablation treatments. The potential clinical implications include improved treatment planning, ensuring complete treatments, and reducing tumor recurrence.
10.1007/s00330-024-10767-8
pre-operative lung ablation prediction using deep learning
objective: microwave lung ablation (mwa) is a minimally invasive and inexpensive alternative cancer treatment for patients who are not candidates for surgery/radiotherapy. however, a major challenge for mwa is its relatively high tumor recurrence rate, due to incomplete treatment as a result of inaccurate planning. we introduce a patient-specific, deep-learning model to accurately predict post-treatment ablation zones to aid planning and enable effective treatments. materials and methods: our irb-approved retrospective study consisted of ablations with a single applicator/burn/vendor between 01/2015 and 01/2019. the input data included pre-procedure computerized tomography (ct), ablation power/time, and applicator position. the ground truth ablation zone was segmented from follow-up ct post-treatment. novel deformable image registration optimized for ablation scans and an applicator-centric co-ordinate system for data analysis were applied. our prediction model was based on the u-net architecture. the registrations were evaluated using target registration error (tre), and predictions were evaluated using bland-altman plots, dice coefficient, precision, and recall, compared against the applicator vendor’s estimates. results: the data included 113 unique ablations from 72 patients (median age 57, interquartile range (iqr) 49–67; 41 women). we obtained a tre ≤ 2 mm on 52 ablations. our prediction had no bias from ground truth ablation volumes (p = 0.169), unlike the vendor’s estimate (p < 0.001), and had smaller limits of agreement (p < 0.001). an 11% improvement was achieved in the dice score. the ability to account for patient-specific in-vivo anatomical effects due to vessels, chest wall, heart, lung boundaries, and fissures was shown. conclusions: we demonstrated a patient-specific deep-learning model to predict the ablation treatment effect prior to the procedure, with the potential to improve planning, achieve complete treatments, and reduce tumor recurrence. clinical relevance statement: our method addresses the current lack of reliable tools to estimate ablation extents, required for ensuring successful ablation treatments. the potential clinical implications include improved treatment planning, ensuring complete treatments, and reducing tumor recurrence.
[ "objectivemicrowave lung ablation", "mwa", "a minimally invasive and inexpensive alternative cancer treatment", "patients", "who", "candidates", "surgery/radiotherapy", "a major challenge", "mwa", "its relatively high tumor recurrence rates", "incomplete treatment", "a result", "inaccurate planning", "we", "a patient-specific, deep-learning model", "post-treatment ablation zones", "planning", "effective treatments.materials", "methodsour irb-approved retrospective study", "ablations", "a single applicator/burn/vendor", "01/2015", "01/2019", "the input data", "pre-procedure computerized tomography", "ablation power/time", "applicator position", "the ground truth ablation zone", "follow-up", "-", "treatment", "novel deformable image registration", "ablation scans", "an applicator-centric co-ordinate system", "data analysis", "our prediction model", "the u-net architecture", "the registrations", "target registration error", "tre", "predictions", "bland-altman plots", "dice co", "precision", "recall", "the applicator vendor’s estimates.resultsthe data", "113 unique ablations", "72 patients", "median age", "interquartile range", "iqr", "41 women", "we", "a tre ≤", "2 mm", "52 ablations", "our prediction", "no bias", "ground truth ablation volumes", "the vendor’s estimate", "p", "smaller limits", "agreement", "p", "an 11% improvement", "the dice score", "the ability", "patient-specific in-vivo anatomical effects", "vessels", "chest wall", "heart", "lung boundaries", "fissures", "shown.conclusionswe", "a patient-specific deep-learning model", "the ablation treatment effect", "the procedure", "the potential", "improved planning", "complete treatments", "the current lack", "reliable tools", "ablation extents", "successful ablation treatments", "the potential clinical implications", "improved treatment planning", "complete treatments", "tumor recurrence", "mwa", "mwa", "methodsour irb-", "113", "72", "age 57", "49–67", "41", "52", "0.169", "p < 0.001", "11%" ]
Building trust in deep learning-based immune response predictors with interpretable explanations
[ "Piyush Borole", "Ajitha Rajan" ]
The ability to predict whether a peptide will get presented on Major Histocompatibility Complex (MHC) class I molecules has profound implications in designing vaccines. Numerous deep learning-based predictors for peptide presentation on MHC class I molecules exist with high levels of accuracy. However, these MHC class I predictors are treated as black-box functions, providing little insight into their decision making. To build trust in these predictors, it is crucial to understand the rationale behind their decisions with human-interpretable explanations. We present MHCXAI, a set of eXplainable AI (XAI) techniques to help interpret the outputs from MHC class I predictors in terms of input peptide features. In our experiments, we explain the outputs of four state-of-the-art MHC class I predictors over a large dataset of peptides and MHC alleles. Additionally, we evaluate the reliability of the explanations by comparing them against ground truth and checking their robustness. MHCXAI seeks to increase understanding of deep learning-based predictors in the immune response domain and build trust with validated explanations.
10.1038/s42003-024-05968-2
building trust in deep learning-based immune response predictors with interpretable explanations
the ability to predict whether a peptide will get presented on major histocompatibility complex (mhc) class i molecules has profound implications in designing vaccines. numerous deep learning-based predictors for peptide presentation on mhc class i molecules exist with high levels of accuracy. however, these mhc class i predictors are treated as black-box functions, providing little insight into their decision making. to build trust in these predictors, it is crucial to understand the rationale behind their decisions with human-interpretable explanations. we present mhcxai, a set of explainable ai (xai) techniques to help interpret the outputs from mhc class i predictors in terms of input peptide features. in our experiments, we explain the outputs of four state-of-the-art mhc class i predictors over a large dataset of peptides and mhc alleles. additionally, we evaluate the reliability of the explanations by comparing them against ground truth and checking their robustness. mhcxai seeks to increase understanding of deep learning-based predictors in the immune response domain and build trust with validated explanations.
[ "the ability", "a peptide", "major histocompatibility complex (mhc) class", "i molecules", "profound implications", "vaccines", "numerous deep learning-based predictors", "peptide presentation", "mhc class", "i molecules", "high levels", "accuracy", "these mhc class", "i predictors", "black-box functions", "little insight", "their decision making", "turst", "these predictors", "it", "the rationale", "their decisions", "human-interpretable explanations", "we", "mhcxai, explainable ai (xai) techniques", "the outputs", "mhc class", "i", "terms", "input peptide features", "our experiments", "we", "the outputs", "the-art", "i", "a large dataset", "peptides", "mhc alleles", "we", "the reliability", "the explanations", "ground truth", "their robustness", "mhcxai", "understanding", "deep learning-based predictors", "the immune response domain", "trust", "validated explanations", "four" ]
Deep learning approach to detect cyberbullying on twitter
[ "Çinare Oğuz Aliyeva", "Mete Yağanoğlu" ]
In recent years, children and adolescents in particular have shown increased interest in social media, making them a potential risk group for cyberbullying. Cyberbullying posts spread very quickly, often taking a long time to be deleted and sometimes remaining online indefinitely. Cyberbullying can have severe mental, psychological, and emotional effects on children and adolescents, and in extreme cases, it can lead to suicide. Turkey is among the top 10 countries with the highest number of children who are victims of cyberbullying. However, there are very few studies conducted in the Turkish language on this topic. This study aims to identify cyberbullying in Turkish Twitter posts. A Multi-Layer Perceptron (MLP) based model was evaluated using a dataset of 5000 tweets. The model was trained using both social media features and textual features extracted from the dataset. Textual features were obtained using various feature extraction methods such as Bag of Words (BOW), Term Frequency-Inverse Document Frequency (TF-IDF), Hashing Vectorizer, N-gram, and word embedding. These features were utilized in training the model, and their effectiveness was evaluated. The experiments revealed that the features obtained from the TF-IDF and unigram methods significantly improved the model’s performance. Subsequently, unnecessary features were eliminated using the Chi-Square feature selection method. The proposed model achieved a higher accuracy of 93.2% compared to the machine learning (ML) methods used in previous studies on the same dataset. Additionally, the proposed model was compared with popular deep learning models in the literature, such as LSTM, BLSTM, and CNN, demonstrating promising results.
10.1007/s11042-024-19869-3
deep learning approach to detect cyberbullying on twitter
in recent years, children and adolescents in particular have shown increased interest in social media, making them a potential risk group for cyberbullying. cyberbullying posts spread very quickly, often taking a long time to be deleted and sometimes remaining online indefinitely. cyberbullying can have severe mental, psychological, and emotional effects on children and adolescents, and in extreme cases, it can lead to suicide. turkey is among the top 10 countries with the highest number of children who are victims of cyberbullying. however, there are very few studies conducted in the turkish language on this topic. this study aims to identify cyberbullying in turkish twitter posts. a multi-layer perceptron (mlp) based model was evaluated using a dataset of 5000 tweets. the model was trained using both social media features and textual features extracted from the dataset. textual features were obtained using various feature extraction methods such as bag of words (bow), term frequency-inverse document frequency (tf-idf), hashing vectorizer, n-gram, and word embedding. these features were utilized in training the model, and their effectiveness was evaluated. the experiments revealed that the features obtained from the tf-idf and unigram methods significantly improved the model’s performance. subsequently, unnecessary features were eliminated using the chi-square feature selection method. the proposed model achieved a higher accuracy of 93.2% compared to the machine learning (ml) methods used in previous studies on the same dataset. additionally, the proposed model was compared with popular deep learning models in the literature, such as lstm, blstm, and cnn, demonstrating promising results.
[ "recent years", "especially children", "adolescents", "increased interest", "social media", "them", "cyberbullying posts", "a long time", "cyberbullying", "severe mental, psychological, and emotional effects", "children", "adolescents", "extreme cases", "it", "suicide", "turkey", "the top 10 countries", "the highest number", "children", "who", "victims", "very few studies", "the turkish language", "this topic", "this study", "turkish twitter posts", "the multi-layer detection", "mlp) based model", "a dataset", "5000 tweets", "the model", "both social media features", "textual features", "the dataset", "textual features", "various feature extraction methods", "bag", "words", "bow", "tf-idf", "vectorizer", "word", "these features", "the model", "their effectiveness", "the experiments", "the features", "tf-idf and unigram methods", "the model’s performance", "unnecessary features", "the chi-square feature selection method", "the proposed model", "a higher accuracy", "93.2%", "machine learning (ml) methods", "previous studies", "the same dataset", "the proposed model", "popular deep learning models", "the literature", "lstm", "blstm", "cnn", "promising results", "recent years", "turkey", "10", "5000", "n-gram", "93.2%", "cnn" ]
Impact of log parsing on deep learning-based anomaly detection
[ "Zanis Ali Khan", "Donghwan Shin", "Domenico Bianculli", "Lionel C. Briand" ]
Software systems log massive amounts of data, recording important runtime information. Such logs are used, for example, for log-based anomaly detection, which aims to automatically detect abnormal behaviors of the system under analysis by processing the information recorded in its logs. Many log-based anomaly detection techniques based on deep learning models include a pre-processing step called log parsing. However, understanding the impact of log parsing on the accuracy of anomaly detection techniques has received surprisingly little attention so far. Investigating which key properties log parsing techniques should ideally have to support anomaly detection is therefore warranted. In this paper, we report on a comprehensive empirical study on the impact of log parsing on anomaly detection accuracy, using 13 log parsing techniques and seven anomaly detection techniques (five based on deep learning and two based on traditional machine learning) on three publicly available log datasets. Our empirical results show that, despite what is widely assumed, there is no strong correlation between log parsing accuracy and anomaly detection accuracy, regardless of the metric used for measuring log parsing accuracy. Moreover, we experimentally confirm existing theoretical results showing that it is a property that we refer to as distinguishability in log parsing results—as opposed to their accuracy—that plays an essential role in achieving accurate anomaly detection.
10.1007/s10664-024-10533-w
impact of log parsing on deep learning-based anomaly detection
software systems log massive amounts of data, recording important runtime information. such logs are used, for example, for log-based anomaly detection, which aims to automatically detect abnormal behaviors of the system under analysis by processing the information recorded in its logs. many log-based anomaly detection techniques based on deep learning models include a pre-processing step called log parsing. however, understanding the impact of log parsing on the accuracy of anomaly detection techniques has received surprisingly little attention so far. investigating which key properties log parsing techniques should ideally have to support anomaly detection is therefore warranted. in this paper, we report on a comprehensive empirical study on the impact of log parsing on anomaly detection accuracy, using 13 log parsing techniques and seven anomaly detection techniques (five based on deep learning and two based on traditional machine learning) on three publicly available log datasets. our empirical results show that, despite what is widely assumed, there is no strong correlation between log parsing accuracy and anomaly detection accuracy, regardless of the metric used for measuring log parsing accuracy. moreover, we experimentally confirm existing theoretical results showing that it is a property that we refer to as distinguishability in log parsing results—as opposed to their accuracy—that plays an essential role in achieving accurate anomaly detection.
[ "software systems", "massive amounts", "data", "important runtime information", "such logs", "example", "log-based anomaly detection", "which", "abnormal behaviors", "the system", "analysis", "the information", "its logs", "many log-based anomaly detection techniques", "deep learning models", "a pre-processing step", "the impact", "log", "the accuracy", "anomaly detection techniques", "surprisingly little attention", "what", "the key properties", "techniques", "anomaly detection", "this paper", "we", "a comprehensive empirical study", "the impact", "log", "anomaly detection accuracy", "13 log parsing techniques", "seven anomly detection techniques", "deep learning", "traditional machine learning", "three publicly available log datasets", "our empirical results", "what", "no strong correlation", "log", "accuracy", "anomaly detection accuracy", "the metric", "log", "accuracy", "we", "existing theoretical results", "it", "a property", "that", "we", "distinguishability", "log parsing results", "their accuracy", "that", "an essential role", "accurate anomaly detection", "software systems", "anomaly detection", "anomaly detection", "anomaly", "anomaly detection", "anomaly detection accuracy", "13", "seven", "five", "two", "three", "anomaly detection" ]
Chest X-ray Images for Lung Disease Detection Using Deep Learning Techniques: A Comprehensive Survey
[ "Mohammed A. A. Al-qaness", "Jie Zhu", "Dalal AL-Alimi", "Abdelghani Dahou", "Saeed Hamood Alsamhi", "Mohamed Abd Elaziz", "Ahmed A. Ewees" ]
In medical imaging, the last decade has witnessed a remarkable increase in the availability and diversity of chest X-ray (CXR) datasets. Concurrently, there has been a significant advancement in deep learning techniques, noted for their escalating accuracy. These developments have catalyzed a surge in the application of deep learning in various medical studies, particularly in detecting and classifying lung diseases. This study delves into an extensive compilation of over 200 studies from the recent five years (2018–2023), employing advanced machine learning, including deep learning methodologies to analyze CXR images. Our exploration is twofold: it categorizes these studies based on the methods used and the types of lung diseases addressed. It also presents an in-depth examination of the current limitations and prospective trajectories in this rapidly evolving field. Our findings underscore the transformative impact and continual progress of deep learning models in enhancing the accuracy and efficiency of lung disease detection using CXR images. This survey culminates by emphasizing the critical need for further technological advancement in this domain, aiming to bridge gaps in healthcare provision and improve patient outcomes. The overarching goal is to pave the way for more precise, efficient, and accessible diagnostic tools in the battle against lung diseases, reinforcing the indispensable role of technology in modern healthcare.
10.1007/s11831-024-10081-y
chest x-ray images for lung disease detection using deep learning techniques: a comprehensive survey
in medical imaging, the last decade has witnessed a remarkable increase in the availability and diversity of chest x-ray (cxr) datasets. concurrently, there has been a significant advancement in deep learning techniques, noted for their escalating accuracy. these developments have catalyzed a surge in the application of deep learning in various medical studies, particularly in detecting and classifying lung diseases. this study delves into an extensive compilation of over 200 studies from the recent five years (2018–2023), employing advanced machine learning, including deep learning methodologies to analyze cxr images. our exploration is twofold: it categorizes these studies based on the methods used and the types of lung diseases addressed. it also presents an in-depth examination of the current limitations and prospective trajectories in this rapidly evolving field. our findings underscore the transformative impact and continual progress of deep learning models in enhancing the accuracy and efficiency of lung disease detection using cxr images. this survey culminates by emphasizing the critical need for further technological advancement in this domain, aiming to bridge gaps in healthcare provision and improve patient outcomes. the overarching goal is to pave the way for more precise, efficient, and accessible diagnostic tools in the battle against lung diseases, reinforcing the indispensable role of technology in modern healthcare.
[ "medical imaging", "the last decade", "a remarkable increase", "the availability", "diversity", "chest x", "-ray (cxr) datasets", "a significant advancement", "deep learning techniques", "their escalating accuracy", "these developments", "a surge", "the application", "deep learning", "various medical studies", "lung diseases", "this study", "an extensive compilation", "over 200 studies", "the recent five years", "advanced machine learning", "deep learning methodologies", "cxr images", "our exploration", "it", "these studies", "the methods", "the types", "lung diseases", "it", "-depth", "the current limitations", "prospective trajectories", "this rapidly evolving field", "our findings", "the transformative impact", "continual progress", "deep learning models", "the accuracy", "efficiency", "lung disease detection", "cxr images", "this survey", "the critical need", "further technological advancement", "this domain", "bridge gaps", "healthcare provision", "patient outcomes", "the overarching goal", "the way", "the battle", "lung diseases", "the indispensable role", "technology", "modern healthcare", "the last decade", "over 200", "the recent five years", "2018–2023" ]
Comparative analysis of deep learning models for dysarthric speech detection
[ "P. Shanmugapriya", "V. Mohan" ]
Dysarthria is a speech communication disorder that is associated with neurological impairments. To detect this disorder from speech, we present an experimental comparison of deep models developed based on frequency-domain features. A comparative analysis of deep models is performed for the detection of dysarthria using scalograms of dysarthric speech. The results of this detection can also assist physicians, specialists, and doctors. Since dysarthric speech signals have breathy and semi-whispery segments, experiments are performed only on the frequency-domain representation of speech signals. The time-domain speech signal is transformed into a 2-D scalogram image through wavelet transformation. Then, the scalogram images are applied to pre-trained convolutional neural networks. The layers of the pre-trained networks are tuned for our scalogram images through transfer learning. The proposed method of applying the scalogram images as input to pre-trained CNNs is evaluated on the TORGO database, and the classification performance of these networks is compared. In this work, AlexNet, GoogLeNet, ResNet-50, and two pre-trained sound CNNs, namely VGGish and YAMNet, are considered as the pre-trained convolutional neural networks. The proposed method of using a pre-trained, transfer-learned CNN with scalogram image features achieved better accuracy when compared to other machine learning models in the dysarthria detection system.
10.1007/s00500-023-09302-6
comparative analysis of deep learning models for dysarthric speech detection
dysarthria is a speech communication disorder that is associated with neurological impairments. to detect this disorder from speech, we present an experimental comparison of deep models developed based on frequency-domain features. a comparative analysis of deep models is performed for the detection of dysarthria using scalograms of dysarthric speech. the results of this detection can also assist physicians, specialists, and doctors. since dysarthric speech signals have breathy and semi-whispery segments, experiments are performed only on the frequency-domain representation of speech signals. the time-domain speech signal is transformed into a 2-d scalogram image through wavelet transformation. then, the scalogram images are applied to pre-trained convolutional neural networks. the layers of the pre-trained networks are tuned for our scalogram images through transfer learning. the proposed method of applying the scalogram images as input to pre-trained cnns is evaluated on the torgo database, and the classification performance of these networks is compared. in this work, alexnet, googlenet, resnet-50, and two pre-trained sound cnns, namely vggish and yamnet, are considered as the pre-trained convolutional neural networks. the proposed method of using a pre-trained, transfer-learned cnn with scalogram image features achieved better accuracy when compared to other machine learning models in the dysarthria detection system.
[ "dysarthria", "a speech communication disorder", "that", "neurological impairments", "this disorder", "speech", "we", "an experimental comparison", "deep models", "frequency domain features", "a comparative analysis", "deep models", "the detection", "dysarthria", "scalogram", "dysarthric speech", "it", "physicians", "specialists", "doctors", "the results", "its detection", "dysarthric speech signals", "segments", "-", "whispery", "experiments", "the frequency-domain representation", "speech signals", "time-domain speech signal", "a 2-d scalogram image", "wavelet transformation", "the scalogram images", "pre-trained convolutional neural networks", "the layers", "pre-trained networks", "our scalogram images", "transfer learning", "the proposed method", "the scalogram images", "input", "pre-trained cnns", "the torgo database", "the classification performance", "these networks", "this work", "yamnet", "deep models", "pre-trained convolutional neural networks", "the proposed method", "pre-trained and transfer", "cnn", "scalogram image feature", "better accuracy", "other machine learning models", "the dysarthria detection system", "dysarthria", "2", "50", "two", "cnn" ]
Deep Learning Radiomics Analysis of CT Imaging for Differentiating Between Crohn’s Disease and Intestinal Tuberculosis
[ "Ming Cheng", "Hanyue Zhang", "Wenpeng Huang", "Fei Li", "Jianbo Gao" ]
This study aimed to develop and evaluate a CT-based deep learning radiomics model for differentiating between Crohn’s disease (CD) and intestinal tuberculosis (ITB). A total of 330 patients pathologically confirmed as CD or ITB from the First Affiliated Hospital of Zhengzhou University were divided into validation dataset one (CD: 167; ITB: 57) and validation dataset two (CD: 78; ITB: 28). Based on validation dataset one, the synthetic minority oversampling technique (SMOTE) was adopted to create a balanced dataset as training data for feature selection and model construction. The handcrafted and deep learning (DL) radiomics features were extracted from the arterial and venous phase images, respectively. Interobserver consistency analysis, Spearman’s correlation, univariate analysis, and least absolute shrinkage and selection operator (LASSO) regression were used to select features. Based on the extracted multi-phase radiomics features, six logistic regression models were finally constructed. The diagnostic performances of the different models were compared using ROC analysis and the Delong test. The arterial-venous combined deep learning radiomics model for differentiating between CD and ITB showed high prediction quality, with AUCs of 0.885, 0.877, and 0.800 in the SMOTE dataset, validation dataset one, and validation dataset two, respectively. Moreover, the deep learning radiomics model outperformed the handcrafted radiomics model on same-phase images. In validation dataset one, the Delong test results indicated a significant difference in the AUC of the arterial models (p = 0.037), but not in the venous and arterial-venous combined models (p = 0.398 and p = 0.265), when comparing the deep learning radiomics models and the handcrafted radiomics models. In our study, the arterial-venous combined model based on deep learning radiomics analysis exhibited good performance in differentiating between CD and ITB.
10.1007/s10278-024-01059-0
deep learning radiomics analysis of ct imaging for differentiating between crohn’s disease and intestinal tuberculosis
this study aimed to develop and evaluate a ct-based deep learning radiomics model for differentiating between crohn’s disease (cd) and intestinal tuberculosis (itb). a total of 330 patients pathologically confirmed as cd or itb from the first affiliated hospital of zhengzhou university were divided into validation dataset one (cd: 167; itb: 57) and validation dataset two (cd: 78; itb: 28). based on validation dataset one, the synthetic minority oversampling technique (smote) was adopted to create a balanced dataset as training data for feature selection and model construction. the handcrafted and deep learning (dl) radiomics features were extracted from the arterial and venous phase images, respectively. interobserver consistency analysis, spearman’s correlation, univariate analysis, and least absolute shrinkage and selection operator (lasso) regression were used to select features. based on the extracted multi-phase radiomics features, six logistic regression models were finally constructed. the diagnostic performances of the different models were compared using roc analysis and the delong test. the arterial-venous combined deep learning radiomics model for differentiating between cd and itb showed high prediction quality, with aucs of 0.885, 0.877, and 0.800 in the smote dataset, validation dataset one, and validation dataset two, respectively. moreover, the deep learning radiomics model outperformed the handcrafted radiomics model on same-phase images. in validation dataset one, the delong test results indicated a significant difference in the auc of the arterial models (p = 0.037), but not in the venous and arterial-venous combined models (p = 0.398 and p = 0.265), when comparing the deep learning radiomics models and the handcrafted radiomics models. in our study, the arterial-venous combined model based on deep learning radiomics analysis exhibited good performance in differentiating between cd and itb.
[ "this study", "a ct-based deep learning radiomics model", "crohn’s disease", "cd", "intestinal tuberculosis", "itb", "a total", "330 patients", "cd", "itb", "the first affiliated hospital", "zhengzhou university", "the validation", "itb", "validation", "(cd", "itb", "the validation", "the synthetic minority oversampling technique", "smote", "balanced dataset", "training data", "feature selection", "model construction", "the handcrafted and deep learning", "(dl) radiomics features", "the arterial and venous phases images", "the interobserver consistency analysis", "spearman’s correlation", "analysis", "the least absolute shrinkage and selection operator", "lasso) regression", "features", "extracted multi-phase radiomics features", "six logistic regression models", "the diagnostic performances", "different models", "roc analysis", "delong test", "the arterial-venous combined deep learning radiomics model", "cd", "itb", "a high prediction quality", "aucs", "smote dataset", "validation", "one", "validation", "the deep learning radiomics model", "the handcrafted radiomics model", "same phase images", "validation", "the delong test results", "a significant difference", "the auc", "the arterial models", "venous and arterial-venous combined models", "p =", "deep learning radiomics models", "handcrafted radiomics models", "our study", "the arterial-venous combined model", "deep learning radiomics analysis", "good performance", "cd", "itb", "itb", "330", "itb", "first", "167", "itb", "57", "two", "78", "28", "six", "roc", "itb", "0.885", "0.877", "0.800", "one", "two", "0.037", "0.398", "0.265", "itb" ]
Guarding Against the Unknown: Deep Transfer Learning for Hardware Image-Based Malware Detection
[ "Zhangying He", "Houman Homayoun", "Hossein Sayadi" ]
Malware is increasingly becoming a significant threat to computing systems, and detecting zero-day (unknown) malware is crucial to ensure the security of modern systems. These attacks exploit software security vulnerabilities that are not documented or known in the detection mechanism’s database, making it a particularly pressing challenge to address. In recent times, there has been a shift in focus by security researchers toward the architecture of underlying processors. They have suggested implementing hardware-based malware detection (HMD) countermeasures to address the shortcomings of software-based detection methods. HMD techniques involve applying standard machine learning (ML) algorithms to low-level events of processors that are gathered from hardware performance counter (HPC) registers. While these techniques have shown promising results for detecting known malware, accurately recognizing zero-day malware remains an unsolved issue in the existing HPC-based detection methods. Our comprehensive analysis has revealed that standard ML classifiers are ineffective in identifying zero-day malware traces using HPC events. In response, we propose Deep-HMD, a multi-level intelligent and flexible approach based on deep neural networks and transfer learning, for accurate zero-day malware detection using image-based hardware events. Deep-HMD first converts HPC-based malware and benign data into images, and subsequently employs a lightweight deep transfer learning methodology to obtain a high malware detection performance for both known and unknown test scenarios. To conduct a thorough analysis, three deep learning-based and nine standard ML algorithms are implemented and evaluated for hardware-based malware detection. The experimental results indicate that our proposed image-based malware detection solution achieves superior performance compared to all other methods, with a 97% detection performance (measured by F-measure and area under the curve) for run-time zero-day malware detection utilizing solely the top four performance counter events. Specifically, our novel approach outperforms the binarized MLP by 16% and the best classical ML algorithm by 18% in F-measure, while maintaining a minimal false positive rate and without incurring any hardware redesign overhead.
10.1007/s41635-024-00146-6
guarding against the unknown: deep transfer learning for hardware image-based malware detection
malware is increasingly becoming a significant threat to computing systems, and detecting zero-day (unknown) malware is crucial to ensure the security of modern systems. these attacks exploit software security vulnerabilities that are not documented or known in the detection mechanism’s database, making it a particularly pressing challenge to address. in recent times, there has been a shift in focus by security researchers toward the architecture of underlying processors. they have suggested implementing hardware-based malware detection (hmd) countermeasures to address the shortcomings of software-based detection methods. hmd techniques involve applying standard machine learning (ml) algorithms to low-level events of processors that are gathered from hardware performance counter (hpc) registers. while these techniques have shown promising results for detecting known malware, accurately recognizing zero-day malware remains an unsolved issue in the existing hpc-based detection methods. our comprehensive analysis has revealed that standard ml classifiers are ineffective in identifying zero-day malware traces using hpc events. in response, we propose deep-hmd, a multi-level intelligent and flexible approach based on deep neural networks and transfer learning, for accurate zero-day malware detection using image-based hardware events. deep-hmd first converts hpc-based malware and benign data into images, and subsequently employs a lightweight deep transfer learning methodology to obtain a high malware detection performance for both known and unknown test scenarios. to conduct a thorough analysis, three deep learning-based and nine standard ml algorithms are implemented and evaluated for hardware-based malware detection. the experimental results indicate that our proposed image-based malware detection solution achieves superior performance compared to all other methods, with a 97% detection performance (measured by f-measure and area under the curve) for run-time zero-day malware detection utilizing solely the top four performance counter events. specifically, our novel approach outperforms the binarized mlp by 16% and the best classical ml algorithm by 18% in f-measure, while maintaining a minimal false positive rate and without incurring any hardware redesign overhead.
[ "malware", "a significant threat", "computing systems", "zero-day (unknown) malware", "the security", "modern systems", "these attacks", "software security vulnerabilities", "that", "the detection mechanism’s database", "it", "recent times", "a shift", "focus", "security researchers", "the architecture", "underlying processors", "they", "hardware-based malware detection", "hmd", "the shortcomings", "software-based detection methods", "hmd techniques", "standard machine learning", "ml", "low-level events", "processors", "that", "hardware performance counter (hpc) registers", "these techniques", "promising results", "known malware", "zero-day malware", "an unsolved issue", "the existing hpc-based detection methods", "our comprehensive analysis", "standard ml classifiers", "hpc events", "response", "we", "deep-hmd", "a multi-level intelligent and flexible approach", "deep neural network", "transfer", "learning", "accurate zero-day malware detection", "image-based hardware events", "deep-hmd", "hpc-based malware", "benign data", "images", "a lightweight deep transfer", "methodology", "a high malware detection performance", "both known and unknown test scenarios", "a thorough analysis", "three deep learning-based and nine standard ml algorithms", "hardware-based malware detection", "the experimental results", "our proposed image-based malware detection solution", "superior performance", "all other methods", "a 97% detection performance", "f-measure", "area", "the curve", "run-time zero-day malware detection", "soley", "the top four performance counter events", "our novel approach", "the binarized mlp", "16%", "the best classical ml algorithm", "18%", "f-measure", "a minimal false positive rate", "any hardware redesign", "malware", "zero-day", "zero-day", "zero-day", "zero-day", "first", "three", "nine", "97%", "zero-day", "four", "16%", "18%" ]
Deep learning of causal structures in high dimensions under data limitations
[ "Kai Lagemann", "Christian Lagemann", "Bernd Taschler", "Sach Mukherjee" ]
Causal learning is a key challenge in scientific artificial intelligence as it allows researchers to go beyond purely correlative or predictive analyses towards learning underlying cause-and-effect relationships, which are important for scientific understanding as well as for a wide range of downstream tasks. Here, motivated by emerging biomedical questions, we propose a deep neural architecture for learning causal relationships between variables from a combination of high-dimensional data and prior causal knowledge. We combine convolutional and graph neural networks within a causal risk framework to provide an approach that is demonstrably effective under the conditions of high dimensionality, noise and data limitations that are characteristic of many applications, including in large-scale biology. In experiments, we find that the proposed learners can effectively identify novel causal relationships across thousands of variables. Results include extensive (linear and nonlinear) simulations (where the ground truth is known and can be directly compared against), as well as real biological examples where the models are applied to high-dimensional molecular data and their outputs compared against entirely unseen validation experiments. These results support the notion that deep learning approaches can be used to learn causal networks at large scale.
10.1038/s42256-023-00744-z
deep learning of causal structures in high dimensions under data limitations
causal learning is a key challenge in scientific artificial intelligence as it allows researchers to go beyond purely correlative or predictive analyses towards learning underlying cause-and-effect relationships, which are important for scientific understanding as well as for a wide range of downstream tasks. here, motivated by emerging biomedical questions, we propose a deep neural architecture for learning causal relationships between variables from a combination of high-dimensional data and prior causal knowledge. we combine convolutional and graph neural networks within a causal risk framework to provide an approach that is demonstrably effective under the conditions of high dimensionality, noise and data limitations that are characteristic of many applications, including in large-scale biology. in experiments, we find that the proposed learners can effectively identify novel causal relationships across thousands of variables. results include extensive (linear and nonlinear) simulations (where the ground truth is known and can be directly compared against), as well as real biological examples where the models are applied to high-dimensional molecular data and their outputs compared against entirely unseen validation experiments. these results support the notion that deep learning approaches can be used to learn causal networks at large scale.
[ "causal learning", "a key challenge", "scientific artificial intelligence", "it", "researchers", "purely correlative or predictive analyses", "underlying cause-and-effect relationships", "which", "scientific understanding", "a wide range", "downstream tasks", "biomedical questions", "we", "a deep neural architecture", "causal relationships", "variables", "a combination", "high-dimensional data", "prior causal knowledge", "we", "convolutional and graph neural networks", "a causal risk framework", "an approach", "that", "the conditions", "high dimensionality, noise and data limitations", "that", "many applications", "large-scale biology", "experiments", "we", "the proposed learners", "novel causal relationships", "thousands", "variables", "results", "extensive (linear and nonlinear) simulations", "the ground truth", "real biological examples", "the models", "high-dimensional molecular data", "their outputs", "entirely unseen validation experiments", "these results", "the notion", "deep learning approaches", "causal networks", "large scale", "thousands", "linear" ]
Deep learning-based biometric cryptographic key generation with post-quantum security
[ "Oleksandr Kuznetsov", "Dmytro Zakharov", "Emanuele Frontoni" ]
In contemporary digital security systems, the generation and management of cryptographic keys, such as passwords and pin codes, often rely on stochastic random processes and intricate mathematical transformations. While these keys ensure robust security, their storage and distribution necessitate sophisticated and costly mechanisms. This study explores an alternative approach that leverages biometric data for generating cryptographic keys, thereby eliminating the need for complex storage and distribution processes. The paper investigates biometric key generation technologies based on deep learning models, specifically utilizing convolutional neural networks to extract biometric features from human facial images. Subsequently, code-based cryptographic extractors are employed to process the primary extracted features. The performance of various deep learning models and the extractor is evaluated by considering Type 1 and Type 2 errors. The optimized algorithm parameters yield an error rate of less than \(10\%\), rendering the generated keys suitable for biometric authentication. Additionally, this study demonstrates that the application of code-based cryptographic extractors provides a post-quantum level of security, further enhancing the practicality and effectiveness of biometric key generation technologies in modern information security systems. This research contributes to the ongoing efforts towards secure, efficient, and user-friendly authentication and encryption methods, harnessing the power of biometric data and deep learning techniques.
10.1007/s11042-023-17714-7
deep learning-based biometric cryptographic key generation with post-quantum security
in contemporary digital security systems, the generation and management of cryptographic keys, such as passwords and pin codes, often rely on stochastic random processes and intricate mathematical transformations. while these keys ensure robust security, their storage and distribution necessitate sophisticated and costly mechanisms. this study explores an alternative approach that leverages biometric data for generating cryptographic keys, thereby eliminating the need for complex storage and distribution processes. the paper investigates biometric key generation technologies based on deep learning models, specifically utilizing convolutional neural networks to extract biometric features from human facial images. subsequently, code-based cryptographic extractors are employed to process the primary extracted features. the performance of various deep learning models and the extractor is evaluated by considering type 1 and type 2 errors. the optimized algorithm parameters yield an error rate of less than \(10\%\), rendering the generated keys suitable for biometric authentication. additionally, this study demonstrates that the application of code-based cryptographic extractors provides a post-quantum level of security, further enhancing the practicality and effectiveness of biometric key generation technologies in modern information security systems. this research contributes to the ongoing efforts towards secure, efficient, and user-friendly authentication and encryption methods, harnessing the power of biometric data and deep learning techniques.
[ "contemporary digital security systems", "the generation", "management", "cryptographic keys", "passwords", "pin codes", "stochastic random processes", "intricate mathematical transformations", "these keys", "robust security", "their storage and distribution necessitate sophisticated and costly mechanisms", "this study", "an alternative approach", "that", "biometric data", "cryptographic keys", "the need", "complex storage and distribution processes", "the paper investigates", "key generation technologies", "deep learning models", "convolutional neural networks", "biometric features", "human facial images", "code-based cryptographic extractors", "the primary extracted features", "the performance", "various deep learning models", "the extractor", "type", "2 errors", "the optimized algorithm parameters", "an error rate", "the generated keys", "biometric authentication", "this study", "the application", "code-based cryptographic extractors", "a post-quantum level", "security", "the practicality", "effectiveness", "biometric key generation technologies", "modern information security systems", "this research", "the ongoing efforts", "user-friendly authentication and encryption methods", "the power", "biometric data", "deep learning techniques", "1", "2" ]
Connecting national flags – a deep learning approach
[ "Theofanis Kalampokas", "Dimitrios Mentizis", "Eleni Vrochidou", "George A. Papakostas" ]
National flags are the most recognizable symbols of the identity of a country. Similarities between flags may be observed due to cultural, historical, or ethnic connections between nations, because they may have originated from the same group of people, or due to unrelated sharing of common symbols and colors. Although the fact that similar flags exist is indisputable, this has never been quantified. Quantifying flags’ similarities could provide a useful body of knowledge for vexillologists and historians. To this end, this work aims to develop a supporting tool for the scientific study of nations’ history and symbolisms, through the quantification of the varying degrees of similarity between their flags, by considering three initially stated hypotheses and by using a novel feature inclusion (FI) measure. The proposed FI measure aims to objectively quantify the overall similarity between flags based on optical multi-scaled features extracted from flag images. State-of-the-art deep learning models built for other applications were tested for the first time on the problem under study, using transfer learning to calculate the FI measure. More specifically, FI was quantified by six deep learning models: Yolo (V4 and V5), SSD, RetinaNet, Fast R-CNN, FCOS and CornerNet. The flag image dataset included the flags of the 195 nations officially recognized by the United Nations. Experimental results reported maximum feature inclusion between flags of up to 99%. The extracted degrees of similarity were subsequently justified with the help of the Vexillology scientific domain, to support research findings and to raise questions for further investigation. Experimental results reveal that the proposed approach and FI measure are reliable and able to serve as a supporting tool to social sciences for knowledge extraction and quantification.
10.1007/s11042-023-15056-y
connecting national flags – a deep learning approach
national flags are the most recognizable symbols of the identity of a country. similarities between flags may be observed due to cultural, historical, or ethnic connections between nations, because they may have originated from the same group of people, or due to unrelated sharing of common symbols and colors. although the fact that similar flags exist is indisputable, this has never been quantified. quantifying flags’ similarities could provide a useful body of knowledge for vexillologists and historians. to this end, this work aims to develop a supporting tool for the scientific study of nations’ history and symbolisms, through the quantification of the varying degrees of similarity between their flags, by considering three initially stated hypotheses and by using a novel feature inclusion (fi) measure. the proposed fi measure aims to objectively quantify the overall similarity between flags based on optical multi-scaled features extracted from flag images. state-of-the-art deep learning models built for other applications were tested for the first time on the problem under study, using transfer learning to calculate the fi measure. more specifically, fi was quantified by six deep learning models: yolo (v4 and v5), ssd, retinanet, fast r-cnn, fcos and cornernet. the flag image dataset included the flags of the 195 nations officially recognized by the united nations. experimental results reported maximum feature inclusion between flags of up to 99%. the extracted degrees of similarity were subsequently justified with the help of the vexillology scientific domain, to support research findings and to raise questions for further investigation. experimental results reveal that the proposed approach and fi measure are reliable and able to serve as a supporting tool to social sciences for knowledge extraction and quantification.
[ "national flags", "the most recognizable symbols", "the identity", "a country", "similarities", "flags", "cultural, historical, or ethical connections", "nations", "they", "the same group", "people", "unrelated sharing", "common symbols", "colors", "the fact", "similar flags", "this", "flags", "a useful body", "knowledge", "vexillologists", "historians", "this end", "this work", "a supporting tool", "the scientific study", "nations’ history", "symbolisms", "the quantification", "the varying degrees", "similarity", "their flags", "three initially stated hypotheses", "a novel feature inclusion (fi) measure", "the proposed fi measure", "the overall similarity", "flags", "optical multi-scaled features", "flag images", "the-art", "other applications", "their capability", "the first time", "the problem", "study", "transfer learning", "the fi measure", "fi", "six deep learning models", "v4", "v5", "flags", "flags", "195 nations", "the united nations", "experimental results", "maximum feature inclusion", "flags", "up to 99%", "the extracted degrees", "similarity", "the help", "the vexillology scientific domain", "research findings", "questions", "further investigation", "experimental results", "the proposed approach", "fi measure", "a supporting tool", "social sciences", "knowledge extraction", "quantification", "three", "first", "six", "fcos", "195", "the united nations", "up to 99%" ]
Secure Communications with THz Reconfigurable Intelligent Surfaces and Deep Learning in 6G Systems
[ "Ajmeera Kiran", "Abhilash Sonker", "Sachin Jadhav", "Makarand Mohan Jadhav", "Janjhyam Venkata Naga Ramesh", "Elangovan Muniyandy" ]
In anticipation of the 6G era, this paper explores the integration of terahertz (THz) communications with Reconfigurable Intelligent Surfaces (RIS) and deep learning to establish a secure wireless network capable of ultra-high data rates. Addressing the non-convex challenge of maximizing secure energy efficiency, we introduce a novel deep learning framework that employs a variety of neural network architectures for optimizing RIS reflection and beamforming. Our simulations, set against scenarios with varying eavesdropper cooperation, confirm the efficacy of the proposed solution, achieving 97% of the optimal performance benchmarked against a genie-aided model. This research underlines a significant advancement in 6G network security, potentially influencing future standards and laying the groundwork for practical deployment, thereby marking a milestone in the convergence of THz technology, intelligent surfaces, and AI for future-proof secure communications.
10.1007/s11277-024-11163-7
secure communications with thz reconfigurable intelligent surfaces and deep learning in 6g systems
in anticipation of the 6g era, this paper explores the integration of terahertz (thz) communications with reconfigurable intelligent surfaces (ris) and deep learning to establish a secure wireless network capable of ultra-high data rates. addressing the non-convex challenge of maximizing secure energy efficiency, we introduce a novel deep learning framework that employs a variety of neural network architectures for optimizing ris reflection and beamforming. our simulations, set against scenarios with varying eavesdropper cooperation, confirm the efficacy of the proposed solution, achieving 97% of the optimal performance benchmarked against a genie-aided model. this research underlines a significant advancement in 6g network security, potentially influencing future standards and laying the groundwork for practical deployment, thereby marking a milestone in the convergence of thz technology, intelligent surfaces, and ai for future-proof secure communications.
[ "anticipation", "the 6g era", "this paper", "the integration", "thz", "reconfigurable intelligent surfaces", "ris", "deep learning", "a secure wireless network", "ultra-high data rates", "the non-convex challenge", "secure energy efficiency", "we", "a novel deep learning framework", "that", "a variety", "neural network", "ris reflection", "beamforming", "our simulations", "scenarios", "varying eavesdropper cooperation", "the efficacy", "the proposed solution", "97%", "the optimal performance", "a genie-aided model", "this research", "a significant advancement", "6g network security", "future standards", "the groundwork", "practical deployment", "a milestone", "the convergence", "thz technology", "intelligent surfaces", "future-proof secure communications", "6", "97%", "genie", "6" ]
A deep learning-based automated diagnosis system for SPECT myocardial perfusion imaging
[ "Dai Kusumoto", "Takumi Akiyama", "Masahiro Hashimoto", "Yu Iwabuchi", "Toshiomi Katsuki", "Mai Kimura", "Yohei Akiba", "Hiromune Sawada", "Taku Inohara", "Shinsuke Yuasa", "Keiichi Fukuda", "Masahiro Jinzaki", "Masaki Ieda" ]
Images obtained from single-photon emission computed tomography for myocardial perfusion imaging (MPI SPECT) contain noise and artifacts, making cardiovascular disease diagnosis difficult. We developed a deep learning-based diagnosis support system using MPI SPECT images. Single-center datasets of MPI SPECT images (n = 5443) were obtained and labeled as healthy or coronary artery disease based on diagnosis reports. Three axes of four-dimensional datasets (three-dimensional reconstruction data under resting and stress conditions) were reconstructed, and an AI model was trained to classify them. The trained convolutional neural network showed high performance [area under the curve (AUC) of the ROC curve: approximately 0.91; area under the precision-recall curve: 0.87]. Additionally, using unsupervised learning and the Grad-CAM method, diseased lesions were successfully visualized. The AI-based automated diagnosis system had the highest performance (88%), followed by cardiologists with AI-guided diagnosis (80%) and cardiologists alone (65%). Furthermore, diagnosis time was shorter for AI-guided diagnosis (12 min) than for cardiologists alone (31 min). Our high-quality deep learning-based diagnosis support system may benefit cardiologists by improving diagnostic accuracy and reducing working hours.
10.1038/s41598-024-64445-2
a deep learning-based automated diagnosis system for spect myocardial perfusion imaging
images obtained from single-photon emission computed tomography for myocardial perfusion imaging (mpi spect) contain noise and artifacts, making cardiovascular disease diagnosis difficult. we developed a deep learning-based diagnosis support system using mpi spect images. single-center datasets of mpi spect images (n = 5443) were obtained and labeled as healthy or coronary artery disease based on diagnosis reports. three axes of four-dimensional datasets (three-dimensional reconstruction data under resting and stress conditions) were reconstructed, and an ai model was trained to classify them. the trained convolutional neural network showed high performance [area under the curve (auc) of the roc curve: approximately 0.91; area under the precision-recall curve: 0.87]. additionally, using unsupervised learning and the grad-cam method, diseased lesions were successfully visualized. the ai-based automated diagnosis system had the highest performance (88%), followed by cardiologists with ai-guided diagnosis (80%) and cardiologists alone (65%). furthermore, diagnosis time was shorter for ai-guided diagnosis (12 min) than for cardiologists alone (31 min). our high-quality deep learning-based diagnosis support system may benefit cardiologists by improving diagnostic accuracy and reducing working hours.
[ "images", "single-photon emission", "tomography", "myocardial perfusion imaging", "(mpi spect", "noises", "artifacts", "cardiovascular disease diagnosis", "we", "a deep learning-based diagnosis support system", "mpi spect images", "single-center datasets", "mpi spect images", "healthy or coronary artery disease", "diagnosis reports", "three axes", "four-dimensional datasets", "stress conditions", "three-dimensional reconstruction data", "an ai model", "them", "the trained convolutional neural network", "high performance", "[area", "the curve", "auc", "the roc curve", "area", "the recall precision curve", "unsupervised learning", "the grad-cam method", "diseased lesions", "the ai-based automated diagnosis system", "the highest performance", "88%", "cardiologists", "ai-guided diagnosis", "80%", "cardiologists", "65%", "diagnosis time", "ai-guided diagnosis", "12 min", "cardiologists", "31 min", "our high-quality deep learning-based diagnosis support system", "cardiologists", "diagnostic accuracy", "working hours", "5443", "three", "four", "three", "roc", "approximately 0.91", "0.87", "88%", "80%", "65%", "12", "31", "working hours" ]
Deep learning based water leakage detection for shield tunnel lining
[ "Shichang Liu", "Xu Xu", "Gwanggil Jeon", "Junxin Chen", "Ben-Guo He" ]
Shield tunnel lining is prone to water leakage, which may further bring about corrosion and structural damage to the walls, potentially leading to dangerous accidents. To avoid tedious and inefficient manual inspection, many projects use artificial intelligence (AI) to detect cracks and water leakage. A novel method for water leakage inspection in shield tunnel lining that utilizes deep learning is introduced in this paper. Our proposal includes a ConvNeXt-S backbone, a deconvolutional-feature pyramid network (D-FPN), a spatial attention module (SPAM), and a detection head. It can extract representative features of leaking areas to aid inspection processes. To further improve the model’s robustness, we innovatively use an inversed low-light enhancement method to convert normally illuminated images to low-light ones and introduce them into the training samples. Validation experiments are performed, achieving an average precision (AP) score of 56.8%, which outperforms previous work by a margin of 5.7%. Visualization illustrations also support our method’s practical effectiveness.
10.1007/s11709-024-1071-5
deep learning based water leakage detection for shield tunnel lining
shield tunnel lining is prone to water leakage, which may further bring about corrosion and structural damage to the walls, potentially leading to dangerous accidents. to avoid tedious and inefficient manual inspection, many projects use artificial intelligence (ai) to detect cracks and water leakage. a novel method for water leakage inspection in shield tunnel lining that utilizes deep learning is introduced in this paper. our proposal includes a convnext-s backbone, a deconvolutional-feature pyramid network (d-fpn), a spatial attention module (spam), and a detection head. it can extract representative features of leaking areas to aid inspection processes. to further improve the model’s robustness, we innovatively use an inversed low-light enhancement method to convert normally illuminated images to low-light ones and introduce them into the training samples. validation experiments are performed, achieving an average precision (ap) score of 56.8%, which outperforms previous work by a margin of 5.7%. visualization illustrations also support our method’s practical effectiveness.
[ "shield tunnel lining", "water leakage", "which", "corrosion", "structural damage", "the walls", "dangerous accidents", "tedious and inefficient manual inspection", "many projects", "artificial intelligence", "(ai", "cracks", "water leakage", "a novel method", "water leakage inspection", "shield tunnel lining", "that", "deep learning", "this paper", "our proposal", "a convnext-s backbone, deconvolutional-feature pyramid network", "d", "fpn", "spatial attention module", "spam", "and a detection head", "it", "representative features", "leaking areas", "inspection processes", "the model’s robustness", "we", "an inversed low-light enhancement method", "normally illuminated images", "low light ones", "them", "the training samples", "validation experiments", "the average precision", "ap", ") score", "56.8%", "which", "previous work", "a margin", "5.7%", "visualization illustrations", "our method’s practical effectiveness", "56.8%", "5.7%" ]
HCCNet Fusion: a synergistic approach for accurate hepatocellular carcinoma staging using deep learning paradigm
[ "Devi Rajeev", "S. Remya", "Anand Nayyar" ]
Hepatocellular carcinoma (HCC) stands as the second most prevalent cancer and a leading cause of cancer-related mortality globally, necessitating precise diagnostic and prognostic methodologies. The study introduces an innovative approach centered around the HCCNet Fusion model, a robust integration of advanced deep-learning techniques designed to elevate the accuracy of HCC stage recognition. Leveraging the synergies between the VGG16 architecture and U-Net and incorporating sophisticated data pre-processing methods such as Otsu’s binary thresholding and marker-based watershed segmentation, this approach aims to strengthen the precision of HCC stage identification. Furthermore, transfer learning plays a pivotal role in HCCNet Fusion, enabling the models to integrate knowledge from diverse medical image settings through pre-trained weights from VGG16 and U-Net architectures. This strategic integration demonstrates the efficacy of advanced deep learning strategies in addressing intricate medical challenges, and outperforms conventional methods with a remarkable accuracy rate of 95%, underscoring the potential of cutting-edge deep learning techniques in medical diagnostics. The evaluation and validation of the proposed HCCNet Fusion model demonstrate its strong performance across many metrics, including AUC ROC, loss, accuracy, precision, recall, and F1. Additionally, a comparative study was done against well-known methods, including CNN, Inception ResNetV2, VGG16, Inception V3, EfficientNet-B0, and ResNet50, and the results show that the proposed system not only advances HCC detection but also sets a pattern for leveraging state-of-the-art methodologies in addressing complex medical issues.
10.1007/s11042-024-19446-8
hccnet fusion: a synergistic approach for accurate hepatocellular carcinoma staging using deep learning paradigm
hepatocellular carcinoma (hcc) stands as the second most prevalent cancer and a leading cause of cancer-related mortality globally, necessitating precise diagnostic and prognostic methodologies. the study introduces an innovative approach centered around the hccnet fusion model, a robust integration of advanced deep-learning techniques designed to elevate the accuracy of hcc stage recognition. leveraging the synergies between the vgg16 architecture and u-net and incorporating sophisticated data pre-processing methods such as otsu’s binary thresholding and marker-based watershed segmentation, this approach aims to strengthen the precision of hcc stage identification. furthermore, transfer learning plays a pivotal role in hccnet fusion, enabling the models to integrate knowledge from diverse medical image settings through pre-trained weights from vgg16 and u-net architectures. this strategic integration demonstrates the efficacy of advanced deep learning strategies in addressing intricate medical challenges, and outperforms conventional methods with a remarkable accuracy rate of 95%, underscoring the potential of cutting-edge deep learning techniques in medical diagnostics. the evaluation and validation of the proposed hccnet fusion model demonstrate its strong performance across many metrics, including auc roc, loss, accuracy, precision, recall, and f1. additionally, a comparative study was done against well-known methods, including cnn, inception resnetv2, vgg16, inception v3, efficientnet-b0, and resnet50, and the results show that the proposed system not only advances hcc detection but also sets a pattern for leveraging state-of-the-art methodologies in addressing complex medical issues.
[ "hepatocellular carcinoma", "hcc", "the second most prevalent cancer", "a leading cause", "cancer-related mortality", "precise diagnostic and prognostic methodologies", "the study", "an innovative approach", "the hccnet fusion model", "a robust integration", "advanced deep-learning techniques", "the accuracy", "hcc stage recognition", "the synergies", "the vgg16 architecture", "u", "-", "net", "sophisticated data pre-processing methods", "otsu’s binary thresholding and marker-based watershed segmentation", "this approach", "the precision", "hcc stage identification", "transfer learning", "a pivotal role", "hccnet fusion", "the models", "knowledge", "diverse medical image settings", "pre-trained weights", "vgg16 and u-net architectures", "this strategic integration", "the efficacy", "advanced deep learning strategies", "intricate medical challenges", "conventional methods", "a remarkable accuracy rate", "95%", "the potential", "cutting-edge deep learning techniques", "medical diagnostics", "the evaluation", "validation", "the proposed hccnet fusion model", "its strong performance", "many metrics", "auc roc", "loss", "accuracy", "precision", "recall", "f1", "a comparative study", "well-known methods", "cnn", "inception resnetv2", "vgg16", "inception v3", "efficientnet-b0", "resnet50", "the results", "the proposed system", "hcc detection", "a pattern", "the-art", "complex medical issues", "second", "95%", "roc", "cnn", "resnetv2", "v3", "resnet50" ]
Visual sentiment analysis using data-augmented deep transfer learning techniques
[ "Haoran Hong", "Waneeza Zaheer", "Aamir Wali" ]
The use of visual content to express emotions on social media platforms has become increasingly popular. Visual sentiment analysis can be used to understand the sentiment conveyed by the users using images. Compared to text, visual sentiment analysis is a challenging task since images are a more condensed form of data, have ambiguity and do not have explicit textual clues. Recently, a few studies used deep transfer learning techniques for visual sentiment analysis but the results reported can be significantly improved. In this research paper, we introduce a novel architecture that combines data augmentation and transfer learning. Our approach involves feature fusion of the pretrained VGG16 and MobileNetV1 models, followed by fine-tuning using an SVM classifier using augmented training data. For evaluation, we used two image datasets. To augment these datasets, we apply various techniques. The proposed model is also compared with three other transfer techniques, as well as four machine learning models (excluding SVM). VGG16+MobileNetV1-SVM has the best accuracy of 96% and recall of 99% for both datasets. Compared to other studies that also employed the same dataset, the proposed model produced the best results.
10.1007/s00530-024-01308-w
visual sentiment analysis using data-augmented deep transfer learning techniques
the use of visual content to express emotions on social media platforms has become increasingly popular. visual sentiment analysis can be used to understand the sentiment conveyed by the users using images. compared to text, visual sentiment analysis is a challenging task since images are a more condensed form of data, have ambiguity and do not have explicit textual clues. recently, a few studies used deep transfer learning techniques for visual sentiment analysis but the results reported can be significantly improved. in this research paper, we introduce a novel architecture that combines data augmentation and transfer learning. our approach involves feature fusion of the pretrained vgg16 and mobilenetv1 models, followed by fine-tuning using an svm classifier using augmented training data. for evaluation, we used two image datasets. to augment these datasets, we apply various techniques. the proposed model is also compared with three other transfer techniques, as well as four machine learning models (excluding svm). vgg16+mobilenetv1-svm has the best accuracy of 96% and recall of 99% for both datasets. compared to other studies that also employed the same dataset, the proposed model produced the best results.
[ "the use", "visual content", "emotions", "social media platforms", "visual sentiment analysis", "the sentiment", "the users", "images", "text", "visual sentiment analysis", "a challenging task", "images", "a more condensed form", "data", "ambiguity", "explicit textual clues", "a few studies", "techniques", "visual sentiment analysis", "the results", "this research paper", "we", "a novel architecture", "that", "data augmentation", "transfer learning", "our approach", "feature fusion", "the pretrained vgg16 and mobilenetv1 models", "fine-tuning", "an svm classifier", "augmented training data", "evaluation", "we", "two image datasets", "these datasets", "we", "various techniques", "the proposed model", "three other transfer techniques", "four machine learning models", "svm", "vgg16+mobilenetv1-svm", "the best accuracy", "96%", "recall", "99%", "both datasets", "other studies", "that", "the same dataset", "the proposed model", "the best results", "mobilenetv1", "two", "three", "four", "96%", "99%" ]
Deep Learning Methods for Binding Site Prediction in Protein Structures
[ "E. P. Geraseva" ]
This work is an overview of deep machine learning methods aimed at predicting binding sites in protein structures. Several classes of methods are selected: prediction of binding sites for small molecules, proteins, and nucleic acids. For each class, various approaches to prediction are considered (prediction of binding atoms, residues, surfaces, pockets). Specifics of feature selection and neural network architectures inherent to each class and approach are highlighted, and an attempt is made to explain these specifics and foresee the further direction of their development.
10.1134/S1990750823600498
deep learning methods for binding site prediction in protein structures
this work is an overview of deep machine learning methods aimed at predicting binding sites in protein structures. several classes of methods are selected: prediction of binding sites for small molecules, proteins, and nucleic acids. for each class, various approaches to prediction are considered (prediction of binding atoms, residues, surfaces, pockets). specifics of feature selection and neural network architectures inherent to each class and approach are highlighted, and an attempt is made to explain these specifics and foresee the further direction of their development.
[ "abstractthis work", "an overview", "deep machine learning methods", "binding sites", "protein structures", "several classes", "methods", "prediction", "binding sites", "small molecules", "proteins", "nucleic acids", "each class", "various approaches", "prediction", "(prediction", "binding atoms", "residues", "surfaces", "pockets", "specifics", "feature selection", "neural network", "each class", "approach", "an attempt", "these specifics", "the further direction", "their development" ]
Deep learning for tumor margin identification in electromagnetic imaging
[ "Amir Mirbeik", "Negar Ebadi" ]
In this work, a novel method for tumor margin identification in electromagnetic imaging is proposed to optimize the tumor removal surgery. This capability will enable the visualization of the border of the cancerous tissue for the surgeon prior to or during the excision surgery. To this end, the border between the normal and tumor parts needs to be identified. Therefore, the images need to be segmented into tumor and normal areas. We propose a deep learning technique which divides the electromagnetic images into two regions: tumor and normal, with high accuracy. We formulate deep learning from a perspective relevant to electromagnetic image reconstruction. A recurrent auto-encoder network architecture (termed here DeepTMI) is presented. The effectiveness of the algorithm is demonstrated by segmenting the reconstructed images of an experimental tissue-mimicking phantom. The structure similarity measure (SSIM) and mean-square-error (MSE) averages of the normalized reconstructed results from the DeepTMI method are about 0.94 and 0.04, respectively, while the corresponding averages obtained from the conventional backpropagation (BP) method are only about 0.35 and 0.41.
10.1038/s41598-023-42625-w
deep learning for tumor margin identification in electromagnetic imaging
in this work, a novel method for tumor margin identification in electromagnetic imaging is proposed to optimize the tumor removal surgery. this capability will enable the visualization of the border of the cancerous tissue for the surgeon prior to or during the excision surgery. to this end, the border between the normal and tumor parts needs to be identified. therefore, the images need to be segmented into tumor and normal areas. we propose a deep learning technique which divides the electromagnetic images into two regions: tumor and normal, with high accuracy. we formulate deep learning from a perspective relevant to electromagnetic image reconstruction. a recurrent auto-encoder network architecture (termed here deeptmi) is presented. the effectiveness of the algorithm is demonstrated by segmenting the reconstructed images of an experimental tissue-mimicking phantom. the structure similarity measure (ssim) and mean-square-error (mse) averages of the normalized reconstructed results from the deeptmi method are about 0.94 and 0.04, respectively, while the corresponding averages obtained from the conventional backpropagation (bp) method are only about 0.35 and 0.41.
[ "this work", "a novel method", "tumor margin identification", "electromagnetic imaging", "the tumor removal surgery", "this capability", "the visualization", "the border", "the cancerous tissue", "the surgeon", "the excision surgery", "this end", "the border", "the normal and tumor parts", "the images", "tumor and normal areas", "we", "a deep learning technique", "which", "the electromagnetic images", "two regions", "tumor", "high accuracy", "we", "deep learning", "a perspective", "electromagnetic image reconstruction", "a recurrent auto-encoder network architecture", "deeptmi", "the effectiveness", "the algorithm", "the reconstructed images", "an experimental tissue-mimicking phantom", "the structure similarity measure", "ssim", "mean-square-error (mse) average", "normalized reconstructed results", "the deeptmi method", "that average", "the conventional backpropagation (bp) method", "two", "about 0.94", "0.04", "0.35", "0.41" ]
High-dimensional stochastic control models for newsvendor problems and deep learning resolution
[ "Jingtang Ma", "Shan Yang" ]
This paper studies continuous-time models for newsvendor problems with dynamic replenishment, financial hedging and Stackelberg competition. These factors are considered simultaneously and the high-dimensional stochastic control models are established. High-dimensional Hamilton-Jacobi-Bellman (HJB) equations are derived for the value functions. To circumvent the curse of dimensionality, a deep learning algorithm is proposed to solve the HJB equations. A projection is introduced in the algorithm to avoid the gradient explosion during the training phase. The deep learning algorithm is implemented for HJB equations derived from the newsvendor models with dimensions up to six. Numerical outcomes validate the algorithm’s accuracy and demonstrate that the high-dimensional stochastic control models can successfully mitigate the risk.
10.1007/s10479-024-05872-2
high-dimensional stochastic control models for newsvendor problems and deep learning resolution
this paper studies continuous-time models for newsvendor problems with dynamic replenishment, financial hedging and stackelberg competition. these factors are considered simultaneously and the high-dimensional stochastic control models are established. high-dimensional hamilton-jacobi-bellman (hjb) equations are derived for the value functions. to circumvent the curse of dimensionality, a deep learning algorithm is proposed to solve the hjb equations. a projection is introduced in the algorithm to avoid the gradient explosion during the training phase. the deep learning algorithm is implemented for hjb equations derived from the newsvendor models with dimensions up to six. numerical outcomes validate the algorithm’s accuracy and demonstrate that the high-dimensional stochastic control models can successfully mitigate the risk.
[ "this paper", "newsvendor problems", "dynamic replenishment", "financial hedging", "stackelberg competition", "these factors", "the high-dimensional stochastic control models", "hjb", "the value functions", "the curse", "dimensionality", "a deep learning algorithm", "the hjb equations", "a projection", "the algorithm", "the gradient explosion", "the training phase", "the deep learning algorithm", "hjb equations", "the newsvendor models", "dimensions", "numerical outcomes", "the algorithm’s accuracy", "the high-dimensional stochastic control models", "the risk", "hamilton-jacobi-bellman", "six" ]
An improved federated deep learning for plant leaf disease detection
[ "Pragya Hari", "Maheshwari Prasad Singh", "Amit Kumar Singh" ]
Leaf diseases are hazardous to the yield and quality of food crops. Many deep learning algorithms have been developed for their detection, but they may require large computational resources to train a single model on voluminous amounts of data. In the agriculture domain, various plant diseases are found across the country. Accumulating such a huge dataset from various regions is a tedious task, and training on it with a single model can also be challenging. This paper proposes Federated Deep Learning (FDL) for Plant Leaf Disease Detection. This concept allows multiple local models to get trained with their region-based datasets and share their knowledge with siblings through the parent, instead of sharing complete datasets. Knowledge transfer significantly reduces the computational costs. This paper formulates the federated dataset using PlantVillage to simulate the configuration of the FDL. A lightweight and efficient Hierarchical Convolutional Neural Network (H-CNN) is proposed for the parent model and child models, with 0.09 million parameters and a 0.35 MB model size. The simulation results show that the proposed FDL attains 93% testing accuracy, outperforming the state-of-the-art methods FedAdam and FedAvg, which achieve 86.8% and 87.7% respectively. In addition, the proposed FDL achieves weighted precision, weighted recall, and weighted F1-scores of 95.7%, 95.4%, and 95.3% for the first local model, and 92.1%, 90.8%, and 91.2% for the second local model.
10.1007/s11042-024-18867-9
an improved federated deep learning for plant leaf disease detection
leaf diseases are hazardous to the yield and quality of food crops. many deep learning algorithms have been developed for their detection, but they may require large computational resources to train a single model on voluminous amounts of data. in the agriculture domain, various plant diseases are found across the country. accumulating such a huge dataset from various regions is a tedious task, and training on it with a single model can also be challenging. this paper proposes federated deep learning (fdl) for plant leaf disease detection. this concept allows multiple local models to get trained with their region-based datasets and share their knowledge with siblings through the parent, instead of sharing complete datasets. knowledge transfer significantly reduces the computational costs. this paper formulates the federated dataset using plantvillage to simulate the configuration of the fdl. a lightweight and efficient hierarchical convolutional neural network (h-cnn) is proposed for the parent model and child models, with 0.09 million parameters and a 0.35 mb model size. the simulation results show that the proposed fdl attains 93% testing accuracy, outperforming the state-of-the-art methods fedadam and fedavg, which achieve 86.8% and 87.7% respectively. in addition, the proposed fdl achieves weighted precision, weighted recall, and weighted f1-scores of 95.7%, 95.4%, and 95.3% for the first local model, and 92.1%, 90.8%, and 91.2% for the second local model.
[ "leaf diseases", "the yield", "quality", "food crops", "many deep learning algorithms", "their detection", "that", "large computational resources", "a single model", "voluminous amounts", "data", "the agriculture domain", "various plant diseases", "the country", "such a huge dataset", "various regions", "a tedious task", "them", "a single model", "deep learning", "(fdl", "plant leaf disease detection", "this concept", "multiple local models", "their region-based datasets", "their knowledge", "siblings", "the parent", "complete datasets", "knowledge transfer", "the computational costs", "this paper", "the federated dataset", "plantvillage", "the configuration", "the fdl", "a lightweight and efficient hierarchical convolutional neural network", "h-cnn", "parent model", "child models", "0.09 million parameters", "0.35mb model size", "the simulation results", "93% testing accuracy", "the-art", "86.8%", "87.7%", "addition", "this", "the proposed fdl", "95.7%", "95.4%", "95.3%", "weighted precision", "recall", "f1-score", "first local model", "92.1%", "90.8%", "91.2%", "weighted precision", "recall", "f1-score", "second local model", "0.09 million", "0.35", "93%", "86.8%", "87.7%", "95.7%", "95.4%", "95.3%", "first", "92.1%", "90.8%", "91.2%", "second" ]
Deep learning-based power usage effectiveness optimization for IoT-enabled data center
[ "Yu Sun", "Yanyi Wang", "Gaoxiang Jiang", "Bo Cheng", "Haibo Zhou" ]
The proliferation of data centers is driving increased energy consumption, leading to environmentally unacceptable carbon emissions. As the use of Internet-of-Things (IoT) techniques for extensive data collection in data centers continues to grow, deep learning-based solutions have emerged as attractive alternatives to suboptimal traditional methods. However, existing approaches suffer from unsatisfactory performance, unrealistic assumptions, and an inability to address practical data center optimization. In this paper, we focus on power usage effectiveness (PUE) optimization in IoT-enabled data centers using deep learning algorithms. We first develop a deep learning-based PUE optimization framework tailored to IoT-enabled data centers. We then formulate the general PUE optimization problem, simplifying and specifying it for the minimization of long-term energy consumption in chiller cooling systems. Additionally, we introduce a transformer-based prediction network designed for energy consumption forecasting. Subsequently, we transform this formulation into a Markov decision process (MDP) and present the branching double dueling deep Q-network. This approach effectively tackles the challenges posed by enormous action spaces within MDP by branching actions into sub-actions. Extensive experiments conducted on real-world datasets demonstrate the exceptional performance of our algorithms, excelling in prediction precision, optimization convergence, and optimality while effectively managing a substantial number of actions on the order of \(10^{13}\).
10.1007/s12083-024-01663-5
deep learning-based power usage effectiveness optimization for iot-enabled data center
the proliferation of data centers is driving increased energy consumption, leading to environmentally unacceptable carbon emissions. as the use of internet-of-things (iot) techniques for extensive data collection in data centers continues to grow, deep learning-based solutions have emerged as attractive alternatives to suboptimal traditional methods. however, existing approaches suffer from unsatisfactory performance, unrealistic assumptions, and an inability to address practical data center optimization. in this paper, we focus on power usage effectiveness (pue) optimization in iot-enabled data centers using deep learning algorithms. we first develop a deep learning-based pue optimization framework tailored to iot-enabled data centers. we then formulate the general pue optimization problem, simplifying and specifying it for the minimization of long-term energy consumption in chiller cooling systems. additionally, we introduce a transformer-based prediction network designed for energy consumption forecasting. subsequently, we transform this formulation into a markov decision process (mdp) and present the branching double dueling deep q-network. this approach effectively tackles the challenges posed by enormous action spaces within mdp by branching actions into sub-actions. extensive experiments conducted on real-world datasets demonstrate the exceptional performance of our algorithms, excelling in prediction precision, optimization convergence, and optimality while effectively managing a substantial number of actions on the order of \(10^{13}\).
[ "the proliferation", "data centers", "increased energy consumption", "environmentally unacceptable carbon emissions", "the use", "things", "iot", "extensive data collection", "data centers", "deep learning-based solutions", "attractive alternatives", "suboptimal traditional methods", "existing approaches", "unsatisfactory performance", "unrealistic assumptions", "an inability", "practical data center optimization", "this paper", "we", "power usage effectiveness", "(pue) optimization", "iot-enabled data centers", "deep learning algorithms", "we", "a deep learning-based pue optimization framework", "iot-enabled data centers", "we", "the general pue optimization problem", "it", "the minimization", "long-term energy consumption", "chiller cooling systems", "we", "a transformer-based prediction network", "energy consumption forecasting", "we", "this formulation", "a markov decision process", "mdp", "the branching", "deep q-network", "this approach", "the challenges", "enormous action spaces", "mdp", "actions", "sub", "-", "actions", "extensive experiments", "real-world datasets", "the exceptional performance", "our algorithms", "prediction precision", "optimization convergence", "optimality", "a substantial number", "actions", "the order", "\\(10^{13}\\", "first" ]
Deep reinforcement learning imbalanced credit risk of SMEs in supply chain finance
[ "Wen Zhang", "Shaoshan Yan", "Jian Li", "Rui Peng", "Xin Tian" ]
It is crucial to predict the credit risk of small and medium-sized enterprises (SMEs) accurately for the success of supply chain finance (SCF). However, most of the existing research ignores the fact that the data distribution is usually imbalanced, that is, the proportion of default SMEs is much smaller than that of non-default SMEs. To fill this research gap, we propose a novel approach called DRL-Risk to deal with the imbalanced credit risk prediction (ICRP) of SMEs in SCF with deep reinforcement learning (DRL). Specifically, we formulate the ICRP problem as a Markov decision process and suggest an instance-based reward function that incorporates financial loss, reflecting the actual loss caused by misclassification in the ICRP of SMEs. Then, we recommend a deep dueling neural network for the decision policy to predict the credit risk of SMEs. With deep reinforcement learning, the DRL-Risk approach can prioritize learning on the SMEs that would lead to great financial losses. Experimental results demonstrate that the DRL-Risk approach can significantly improve the performance of credit risk prediction of SMEs in SCF compared with the baseline methods in recall, G-mean, and financial loss. We have also identified management implications for the decision-makers participating in SCF.
10.1007/s10479-024-05921-w
deep reinforcement learning imbalanced credit risk of smes in supply chain finance
it is crucial to predict the credit risk of small and medium-sized enterprises (smes) accurately for the success of supply chain finance (scf). however, most of the existing research ignores the fact that the data distribution is usually imbalanced, that is, the proportion of default smes is much smaller than that of non-default smes. to fill this research gap, we propose a novel approach called drl-risk to deal with the imbalanced credit risk prediction (icrp) of smes in scf with deep reinforcement learning (drl). specifically, we formulate the icrp problem as a markov decision process and suggest an instance-based reward function that incorporates financial loss, reflecting the actual loss caused by misclassification in the icrp of smes. then, we recommend a deep dueling neural network for the decision policy to predict the credit risk of smes. with deep reinforcement learning, the drl-risk approach can prioritize learning on the smes that would lead to great financial losses. experimental results demonstrate that the drl-risk approach can significantly improve the performance of credit risk prediction of smes in scf compared with the baseline methods in recall, g-mean, and financial loss. we have also identified management implications for the decision-makers participating in scf.
[ "it", "the credit risk", "small and medium-sized enterprises", "smes", "the success", "supply chain finance", "scf", "the existing research", "the fact", "the data distribution", "the proportion", "default smes", "that", "non-default smes", "this research gap", "we", "a novel approach", "drl-risk", "the imbalanced credit risk prediction", "icrp", "smes", "scf", "deep reinforcement learning", "drl", "we", "the icrp problem", "a markov decision process", "an instance-based reward function", "financial loss", "the reward function", "consideration", "the actual loss", "misclassification", "the icrp", "smes", "we", "a deep dueling neural network", "decision policy", "the credit risk", "smes", "deep reinforcement learning", "the drl-risk approach", "the learning", "the smes", "that", "great financial losses", "experimental results", "the drl-risk approach", "the performance", "credit risk prediction", "smes", "scf", "the baseline methods", "recall, g-mean, and financial loss", "we", "management implications", "the decision-makers", "scf" ]
Deep learning with autoencoders and LSTM for ENSO forecasting
[ "Chibuike Chiedozie Ibebuchi", "Michael B. Richman" ]
El Niño Southern Oscillation (ENSO) is the prominent recurrent climatic pattern in the tropical Pacific Ocean with global impacts on regional climates. This study utilizes deep learning to predict the Niño 3.4 index by encoding non-linear sea surface temperature patterns in the tropical Pacific using an autoencoder neural network. The resulting encoded patterns identify crucial centers of action in the Pacific that serve as predictors of the ENSO mode. These patterns are utilized as predictors for forecasting the Niño 3.4 index with a lead time of at least 6 months using the Long Short-Term Memory (LSTM) deep learning model. The analysis uncovers multiple non-linear dipole patterns in the tropical Pacific, with anomalies that are both regionalized and latitudinally oriented, which should support a single inter-tropical convergence zone for modeling efforts. Leveraging these encoded patterns as predictors, the LSTM - trained on monthly data from 1950 to 2007 and tested from 2008 to 2022 - shows fidelity in predicting the Niño 3.4 index. The encoded patterns captured the annual cycle of ENSO with a 0.94 correlation between the actual and predicted Niño 3.4 index for lag 12 and 0.91 for lags 6 and 18. Additionally, the 6-month lag predictions excel in detecting extreme ENSO events, achieving an 85% hit rate, outperforming the 70% hit rate at lag 12 and the 55% hit rate at lag 18. The prediction accuracy peaks from November to March, with correlations ranging from 0.94 to 0.96. The average correlations in the boreal spring were as large as 0.84, indicating the method has the capability to decrease the spring predictability barrier.
10.1007/s00382-024-07180-8
deep learning with autoencoders and lstm for enso forecasting
el niño southern oscillation (enso) is the prominent recurrent climatic pattern in the tropical pacific ocean with global impacts on regional climates. this study utilizes deep learning to predict the niño 3.4 index by encoding non-linear sea surface temperature patterns in the tropical pacific using an autoencoder neural network. the resulting encoded patterns identify crucial centers of action in the pacific that serve as predictors of the enso mode. these patterns are utilized as predictors for forecasting the niño 3.4 index with a lead time of at least 6 months using the long short-term memory (lstm) deep learning model. the analysis uncovers multiple non-linear dipole patterns in the tropical pacific, with anomalies that are both regionalized and latitudinally oriented, which should support a single inter-tropical convergence zone for modeling efforts. leveraging these encoded patterns as predictors, the lstm - trained on monthly data from 1950 to 2007 and tested from 2008 to 2022 - shows fidelity in predicting the niño 3.4 index. the encoded patterns captured the annual cycle of enso with a 0.94 correlation between the actual and predicted niño 3.4 index for lag 12 and 0.91 for lags 6 and 18. additionally, the 6-month lag predictions excel in detecting extreme enso events, achieving an 85% hit rate, outperforming the 70% hit rate at lag 12 and the 55% hit rate at lag 18. the prediction accuracy peaks from november to march, with correlations ranging from 0.94 to 0.96. the average correlations in the boreal spring were as large as 0.84, indicating the method has the capability to decrease the spring predictability barrier.
[ "el niño southern oscillation", "enso", "the prominent recurrent climatic pattern", "the tropical pacific ocean", "global impacts", "regional climates", "this study", "deep learning", "the niño 3.4 index", "non-linear sea surface temperature patterns", "the tropical pacific", "an autoencoder neural network", "the resulting encoded patterns", "crucial centers", "action", "the pacific", "that", "predictors", "the enso mode", "these patterns", "predictors", "the niño 3.4 index", "a lead time", "at least 6 months", "the long short-term memory", "lstm", "deep learning model", "the analysis uncovers", "non-linear dipole patterns", "the tropical pacific", "anomalies", "that", "that", "a single inter-tropical convergence zone", "modeling efforts", "these encoded patterns", "predictors", "monthly data", "fidelity", "the niño 3.4 index", "the encoded patterns", "the annual cycle", "enso", "a 0.94 correlation", "3.4 index", "lag", "lags", "the 6-month lag predictions", "extreme enso events", "an 85% hit rate", "the 70% hit rate", "lag", "hit rate", "lag", "the prediction accuracy peaks", "november", "march", "correlations", "the average correlations", "the boreal spring", "the method", "the capability", "the spring predictability barrier", "el niño southern oscillation", "pacific", "3.4", "non-linear", "3.4", "at least 6 months", "monthly", "1950", "2007", "2008", "2022", "3.4", "annual", "0.94", "3.4", "12", "0.91", "6", "6-month", "85%", "70%", "55%", "november to march", "0.94", "0.96", "as large as 0.84" ]
Deep multi-metric training: the need of multi-metric curve evaluation to avoid weak learning
[ "Michail Mamalakis", "Abhirup Banerjee", "Surajit Ray", "Craig Wilkie", "Richard H. Clayton", "Andrew J. Swift", "George Panoutsos", "Bart Vorselaars" ]
The development and application of artificial intelligence-based computer vision systems in medicine, environment, and industry are playing an increasingly prominent role. Hence, optimal and efficient hyperparameter tuning strategies are crucial for delivering the highest performance of deep learning networks on large and demanding datasets. In our study, we have developed and evaluated a new training methodology named deep multi-metric training (DMMT) for enhanced training performance. The DMMT delivers a state of robust learning for deep networks using a new important criterion of multi-metric performance evaluation. We have tested the DMMT methodology on multi-class (three, four, and ten), multi-vendor (different X-ray imaging devices), and multi-size (large, medium, and small) datasets. The validity of the DMMT methodology has been tested in three different classification problems: (i) medical disease classification, (ii) environmental classification, and (iii) ecological classification. For disease classification, we have used two large COVID-19 chest X-ray datasets, namely the BIMCV COVID-19+ and Sheffield hospital datasets. The environmental application is related to the classification of weather images into cloudy, rainy, shine or sunrise conditions. The ecological classification task involves a classification of three animal species (cat, dog, wild) and a classification of ten animal and transportation vehicle categories (CIFAR-10). We have used the state-of-the-art networks DenseNet-121, ResNet-50, VGG-16, VGG-19, and DenResCov-19 (DenRes-131) to verify that our novel methodology is applicable to a variety of different deep learning networks. To the best of our knowledge, this is the first work that proposes a training methodology to deliver robust learning over a variety of deep learning networks and multi-field classification problems.
10.1007/s00521-024-10182-6
deep multi-metric training: the need of multi-metric curve evaluation to avoid weak learning
the development and application of artificial intelligence-based computer vision systems in medicine, environment, and industry are playing an increasingly prominent role. hence, optimal and efficient hyperparameter tuning strategies are crucial for delivering the highest performance of deep learning networks on large and demanding datasets. in our study, we have developed and evaluated a new training methodology named deep multi-metric training (dmmt) for enhanced training performance. the dmmt delivers a state of robust learning for deep networks using a new important criterion of multi-metric performance evaluation. we have tested the dmmt methodology on multi-class (three, four, and ten), multi-vendor (different x-ray imaging devices), and multi-size (large, medium, and small) datasets. the validity of the dmmt methodology has been tested in three different classification problems: (i) medical disease classification, (ii) environmental classification, and (iii) ecological classification. for disease classification, we have used two large covid-19 chest x-ray datasets, namely the bimcv covid-19+ and sheffield hospital datasets. the environmental application is related to the classification of weather images into cloudy, rainy, shine or sunrise conditions. the ecological classification task involves a classification of three animal species (cat, dog, wild) and a classification of ten animal and transportation vehicle categories (cifar-10). we have used the state-of-the-art networks densenet-121, resnet-50, vgg-16, vgg-19, and denrescov-19 (denres-131) to verify that our novel methodology is applicable to a variety of different deep learning networks. to the best of our knowledge, this is the first work that proposes a training methodology to deliver robust learning over a variety of deep learning networks and multi-field classification problems.
[ "the development", "application", "artificial intelligence-based computer vision systems", "medicine", "environment", "industry", "an increasingly prominent role", "the need", "optimal and efficient hyperparameter tuning strategies", "the highest performance", "the deep learning networks", "large and demanding datasets", "our study", "we", "a new training methodology", "deep multi-metric training", "dmmt", "enhanced training performance", "the dmmt", "a state", "robust learning", "deep networks", "a new important criterion", "multi-metric performance evaluation", "we", "the dmmt methodology", "-", "-", "vendors (different x-ray imaging devices", "multi-size (large, medium, and small) datasets", "the validity", "the dmmt methodology", "three different classification problems", "(i) medical disease classification", "(ii) environmental classification", "(iii) ecological classification", "disease classification", "we", "two large covid-19 chest x-rays datasets", "namely the bimcv covid-19", "hospital datasets", "the environmental application", "the classification", "weather images", "shine", "sunrise conditions", "the ecological classification task", "a classification", "three animal species", "a classification", "ten animals", "transportation vehicles categories", "cifar-10", "we", "the-art", "resnet-50", "vgg-16", "vgg-19", "denrescov-19", "(denres-131", "our novel methodology", "a variety", "different deep learning networks", "our knowledge", "this", "the first work", "that", "a training methodology", "robust learning", "a variety", "deep learning networks", "multi-field classification problems", "three", "four", "ten", "three", "two", "covid-19", "covid-19+", "three", "ten", "cifar-10", "densenet-121, resnet-50", "vgg-16", "vgg-19", "denrescov-19", "first" ]
HCR-Net: a deep learning based script independent handwritten character recognition network
[ "Vinod Kumar Chauhan", "Sukhdeep Singh", "Anuj Sharma" ]
Handwritten character recognition (HCR) remains a challenging pattern recognition problem despite decades of research, and the field lacks research on script-independent recognition techniques. This is mainly because of similar character structures, different handwriting styles, diverse scripts, handcrafted feature extraction techniques, unavailability of data and code, and the development of script-specific deep learning techniques. To address these limitations, we have proposed a script-independent deep learning network for HCR research, called HCR-Net, that sets a new research direction for the field. HCR-Net is based on a novel transfer learning approach for HCR, which partly utilizes feature extraction layers of a pre-trained network. Due to transfer learning and image augmentation, HCR-Net provides faster and more computationally efficient training, better performance and generalization, and can work with small datasets. HCR-Net is extensively evaluated on 40 publicly available datasets of Bangla, Punjabi, Hindi, English, Swedish, Urdu, Farsi, Tibetan, Kannada, Malayalam, Telugu, Marathi, Nepali and Arabic languages, and established 26 new benchmark results while performing close to the best results in the remaining cases. HCR-Net showed performance improvements of up to 11% against the existing results and achieved a fast convergence rate, reaching up to 99% of final performance in the very first epoch. HCR-Net significantly outperformed the state-of-the-art transfer learning techniques and also reduced the number of trainable parameters by 34% compared with the corresponding pre-trained network. To facilitate reproducibility and further advancements of HCR research, the complete code is publicly released at https://github.com/jmdvinodjmd/HCR-Net.
10.1007/s11042-024-18655-5
hcr-net: a deep learning based script independent handwritten character recognition network
handwritten character recognition (hcr) remains a challenging pattern recognition problem despite decades of research, and the field lacks research on script-independent recognition techniques. this is mainly because of similar character structures, different handwriting styles, diverse scripts, handcrafted feature extraction techniques, unavailability of data and code, and the development of script-specific deep learning techniques. to address these limitations, we have proposed a script-independent deep learning network for hcr research, called hcr-net, that sets a new research direction for the field. hcr-net is based on a novel transfer learning approach for hcr, which partly utilizes feature extraction layers of a pre-trained network. due to transfer learning and image augmentation, hcr-net provides faster and more computationally efficient training, better performance and generalization, and can work with small datasets. hcr-net is extensively evaluated on 40 publicly available datasets of bangla, punjabi, hindi, english, swedish, urdu, farsi, tibetan, kannada, malayalam, telugu, marathi, nepali and arabic languages, and established 26 new benchmark results while performing close to the best results in the remaining cases. hcr-net showed performance improvements of up to 11% against the existing results and achieved a fast convergence rate, reaching up to 99% of final performance in the very first epoch. hcr-net significantly outperformed the state-of-the-art transfer learning techniques and also reduced the number of trainable parameters by 34% compared with the corresponding pre-trained network. to facilitate reproducibility and further advancements of hcr research, the complete code is publicly released at https://github.com/jmdvinodjmd/hcr-net.
[ "handwritten character recognition", "hcr", "a challenging pattern recognition problem", "decades", "research", "research", "script independent recognition techniques", "this", "similar character structures", "different handwriting styles", "diverse scripts", "handcrafted feature extraction techniques", "unavailability", "data", "code", "the development", "script-specific deep learning techniques", "these limitations", "we", "a script independent deep learning network", "hcr research", "hcr", "-", "net", "that", "a new research direction", "the field", "hcr", "net", "a novel transfer learning approach", "hcr", "which", "feature extraction layers", "a pre-trained network", "learning", "image augmentation", "hcr-net", "faster and computationally efficient training", "better performance", "generalizations", "small datasets", "hcr", "net", "40 publicly available datasets", "bangla", "punjabi", "hindi", "english", "urdu", "farsi", "tibetan", "kannada", "malayalam", "telugu", "marathi", "nepali", "arabic languages", "26 new benchmark results", "the best results", "the rest cases", "net", "performance improvements", "the existing results", "a fast convergence rate", "up to 99%", "final performance", "the very first epoch", "hcr", "net", "the-art", "techniques", "the number", "trainable parameters", "34%", "the corresponding pre-trained network", "reproducibility", "further advancements", "hcr research", "the complete code", "https://github.com/jmdvinodjmd/hcr-net", "decades", "40", "english", "swedish", "tibetan", "kannada", "malayalam", "arabic", "26", "11%", "up to 99%", "first", "34%" ]
Prediction of non-muscle invasive bladder cancer recurrence using deep learning of pathology image
[ "Guang-Yue Wang", "Jing-Fei Zhu", "Qi-Chao Wang", "Jia-Xin Qin", "Xin-Lei Wang", "Xing Liu", "Xin-Yu Liu", "Jun-Zhi Chen", "Jie-Fei Zhu", "Shi-Chao Zhuo", "Di Wu", "Na Li", "Liu Chao", "Fan-Lai Meng", "Hao Lu", "Zhen-Duo Shi", "Zhi-Gang Jia", "Cong-Hui Han" ]
In this work, we aimed to build a deep learning-based pathomics model to predict the early recurrence of non-muscle-infiltrating bladder cancer (NMIBC). A total of 147 patients from Xuzhou Central Hospital were enrolled as the training cohort, and 63 patients from Suqian Affiliated Hospital of Xuzhou Medical University were enrolled as the test cohort. Based on two consecutive phases of patch-level prediction and WSI-level prediction, we built a pathomics model: the initial model was developed in the training cohort and adapted through transfer learning, and its generalization was then validated in the test cohort. The features extracted from the visualization model were used for model interpretation. After transfer learning, the area under the receiver operating characteristic curve for the deep learning-based pathomics model in the test cohort was 0.860 (95% CI 0.752–0.969), with good agreement between the training cohort and the test cohort in predicting recurrence; the predicted values matched the observed values well, with p values of 0.667766 and 0.140233 for the Hosmer–Lemeshow test, respectively. Good clinical applicability was observed using a decision curve analysis method. The developed deep learning-based pathomics model showed promising performance in predicting recurrence within one year in NMIBC patients. Ten pathology features predictive of NMIBC recurrence were visualized, which may be used to facilitate personalized management of NMIBC patients and avoid ineffective or unnecessary treatment for the benefit of patients.
10.1038/s41598-024-66870-9
prediction of non-muscle invasive bladder cancer recurrence using deep learning of pathology image
in this work, we aimed to build a deep learning-based pathomics model to predict the early recurrence of non-muscle-infiltrating bladder cancer (nmibc). a total of 147 patients from xuzhou central hospital were enrolled as the training cohort, and 63 patients from suqian affiliated hospital of xuzhou medical university were enrolled as the test cohort. based on two consecutive phases of patch-level prediction and wsi-level prediction, we built a pathomics model: the initial model was developed in the training cohort and adapted through transfer learning, and its generalization was then validated in the test cohort. the features extracted from the visualization model were used for model interpretation. after transfer learning, the area under the receiver operating characteristic curve for the deep learning-based pathomics model in the test cohort was 0.860 (95% ci 0.752–0.969), with good agreement between the training cohort and the test cohort in predicting recurrence; the predicted values matched the observed values well, with p values of 0.667766 and 0.140233 for the hosmer–lemeshow test, respectively. good clinical applicability was observed using a decision curve analysis method. the developed deep learning-based pathomics model showed promising performance in predicting recurrence within one year in nmibc patients. ten pathology features predictive of nmibc recurrence were visualized, which may be used to facilitate personalized management of nmibc patients and avoid ineffective or unnecessary treatment for the benefit of patients.
[ "we", "a deep learning-based pathomics model", "the early recurrence", "non-muscle-infiltrating bladder cancer", "nmibc", "this work", "a total", "147 patients", "xuzhou central hospital", "the training cohort", "63 patients", "suqian affiliated hospital", "xuzhou medical university", "the test cohort", "two consecutive phases", "patch level prediction", "wsi-level predictione", "we", "a pathomics model", "the initial model", "the training cohort", "learning", "the test cohort", "generalization", "the features", "the visualization model", "model interpretation", "migration learning", "the area", "the receiver operating characteristic curve", "the deep learning-based pathomics model", "the test cohort", "(95%", "good agreement", "the migration training cohort", "the test cohort", "recurrence", "the predicted values", "the observed values", "p values", "the hosmer", "lemeshow test", "the good clinical application", "a decision curve analysis method", "we", "a deep learning-based pathomics model", "promising performance", "recurrence", "one year", "nmibc patients", "10 state prediction nmibc recurrence group pathology features", "which", "personalized management", "nmibc patients", "ineffective or unnecessary treatment", "the benefit", "patients", "147", "xuzhou", "63", "xuzhou medical university", "two", "0.860", "95%", "0.667766", "0.140233", "one year", "10" ]
A predictive analytics framework for sensor data using time series and deep learning techniques
[ "Hend A. Selmy", "Hoda K. Mohamed", "Walaa Medhat" ]
IoT devices convert billions of objects into data-generating entities, enabling them to report status and interact with their surroundings. This data comes in various formats, like structured, semi-structured, or unstructured. In addition, it can be collected in batches or in real time. The problem now is how to benefit from all of this data gathered by sensing and monitoring changes like temperature, light, and position. In this paper, we propose a predictive analytics framework constructed on top of open-source technologies such as Apache Spark and Kafka. The framework focuses on forecasting temperature time series data using traditional and deep learning predictive analytics methods. The analysis and prediction tasks were performed using Autoregressive Integrated Moving Average (ARIMA), Seasonal Autoregressive Integrated Moving Average (SARIMA), Long Short-Term Memory (LSTM), and a novel hybrid model based on Convolution Neural Network (CNN) and LSTM. The purpose of this paper is to determine whether and how recently developed deep learning-based models outperform traditional algorithms in the prediction of time series data. The empirical studies conducted and reported in this paper demonstrate that deep learning-based models, specifically LSTM and CNN-LSTM, exhibit superior performance compared to traditional-based algorithms, ARIMA and SARIMA. More specifically, the average reduction in error rates obtained by LSTM and CNN-LSTM models were substantial when compared to other models indicating the superiority of deep learning. Moreover, the CNN-LSTM-based deep learning model exhibits a higher degree of closeness to the actual values when compared to the LSTM-based model.
10.1007/s00521-023-09398-9
a predictive analytics framework for sensor data using time series and deep learning techniques
iot devices convert billions of objects into data-generating entities, enabling them to report status and interact with their surroundings. this data comes in various formats, like structured, semi-structured, or unstructured. in addition, it can be collected in batches or in real time. the problem now is how to benefit from all of this data gathered by sensing and monitoring changes like temperature, light, and position. in this paper, we propose a predictive analytics framework constructed on top of open-source technologies such as apache spark and kafka. the framework focuses on forecasting temperature time series data using traditional and deep learning predictive analytics methods. the analysis and prediction tasks were performed using autoregressive integrated moving average (arima), seasonal autoregressive integrated moving average (sarima), long short-term memory (lstm), and a novel hybrid model based on convolution neural network (cnn) and lstm. the purpose of this paper is to determine whether and how recently developed deep learning-based models outperform traditional algorithms in the prediction of time series data. the empirical studies conducted and reported in this paper demonstrate that deep learning-based models, specifically lstm and cnn-lstm, exhibit superior performance compared to traditional-based algorithms, arima and sarima. more specifically, the average reduction in error rates obtained by lstm and cnn-lstm models were substantial when compared to other models indicating the superiority of deep learning. moreover, the cnn-lstm-based deep learning model exhibits a higher degree of closeness to the actual values when compared to the lstm-based model.
[ "iot devices", "billions", "objects", "data-generating entities", "them", "status", "their surroundings", "this data", "various formats", "addition", "it", "batches", "real time", "the problem", "all", "this data", "changes", "temperature", "light", "position", "this paper", "we", "a predictive analytics framework", "top", "open-source technologies", "apache spark", "the framework", "temperature time series data", "traditional and deep learning predictive analytics methods", "the analysis and prediction tasks", "arima", "sarima", "lstm", "convolution neural network", "cnn", "lstm", "the purpose", "this paper", "how recently developed deep learning-based models", "traditional algorithms", "the prediction", "time series data", "the empirical studies", "this paper demonstrate", "deep learning-based models", "specifically lstm", "cnn-lstm", "superior performance", "traditional-based algorithms", "arima", "sarima", "the average reduction", "error rates", "lstm", "cnn-lstm models", "other models", "the superiority", "deep learning", "the cnn-lstm-based deep learning model", "a higher degree", "closeness", "the actual values", "the lstm-based model", "billions", "kafka", "cnn", "cnn-lstm", "cnn", "cnn" ]
Deep learning-based comprehensive review on pulmonary tuberculosis
[ "Twinkle Bansal", "Sheifali Gupta", "Neeru Jindal" ]
In areas with high tuberculosis (TB) prevalence, the mortality rate has significantly increased over the past few decades. Even though tuberculosis can be treated, areas with a high disease burden continue to have insufficient screening tools, leading to diagnostic delays and incorrect diagnoses. As a result of these challenges, a computer-aided diagnostics (CAD) system has been developed that can automatically detect tuberculosis. There are a few different methods that can be used to screen for tuberculosis; however, chest X-ray (CXR) is the most commonly used and strongly suggested because it is so effective in identifying lung irregularities. Over the past ten years, we have seen a meteoric rise in the amount of research conducted into the application of machine learning strategies to the examination of chest X-ray images for screening for pulmonary abnormalities. In particular, we have also noticed significant interest in testing for TB. This attention has increased in tandem with the phenomenal progress that has been made in deep learning (DL), which is predominantly founded on convolutional neural networks (CNNs). Because of these advancements, significant research contributions have been made in the field of DL techniques for TB screening utilizing CXR images. The main focus of this paper is to emphasize favorable methods and methodological contributions, identify data collections, and identify open challenges.
10.1007/s00521-023-09381-4
deep learning-based comprehensive review on pulmonary tuberculosis
in areas with high tuberculosis (tb) prevalence, the mortality rate has significantly increased over the past few decades. even though tuberculosis can be treated, areas with a high disease burden continue to have insufficient screening tools, leading to diagnostic delays and incorrect diagnoses. as a result of these challenges, a computer-aided diagnostics (cad) system has been developed that can automatically detect tuberculosis. there are a few different methods that can be used to screen for tuberculosis; however, chest x-ray (cxr) is the most commonly used and strongly suggested because it is so effective in identifying lung irregularities. over the past ten years, we have seen a meteoric rise in the amount of research conducted into the application of machine learning strategies to the examination of chest x-ray images for screening for pulmonary abnormalities. in particular, we have also noticed significant interest in testing for tb. this attention has increased in tandem with the phenomenal progress that has been made in deep learning (dl), which is predominantly founded on convolutional neural networks (cnns). because of these advancements, significant research contributions have been made in the field of dl techniques for tb screening utilizing cxr images. the main focus of this paper is to emphasize favorable methods and methodological contributions, identify data collections, and identify open challenges.
[ "areas", "high tuberculosis", "tb", "prevalence", "high mortality rate", "the past few decades", "tuberculosis", "areas", "high disease burden", "insufficient screening tools", "diagnostic delays", "incorrect diagnoses", "a result", "these challenges", "a computer-aided diagnostics", "cad) system", "that", "tuberculosis", "few different methods", "that", "tuberculosis", "chest x", "-", "(cxr", "it", "lung irregularities", "past ten years", "we", "a meteoric rise", "amount", "research", "application", "machine learning strategies", "examination", "chest x-ray images", "pulmonary abnormalities", "we", "significant interest", "testing", "tb", "this attentiveness", "tandem", "phenomenal progress", "that", "deep learning", "dl", "which", "convolutional neural networks", "cnns", "these advancements", "significant research contributions", "field", "dl techniques", "tb", "cxr images", "the main focus", "this paper", "favorable methods", "data collection", "methodological contributions", "data collections", "challenges", "the past few decades" ]
Adversarial defence by learning differentiated feature representation in deep ensemble
[ "Xi Chen", "Wei Huang", "Wei Guo", "Fan Zhang", "Jiayu Du", "Zhizhong Zhou" ]
Deep learning models have been shown to be vulnerable to critical attacks under adversarial conditions. Attackers are able to generate powerful adversarial examples by searching for adversarial perturbations, without interfering with model training or directly modifying the model. This phenomenon indicates an endogenous problem in existing deep learning frameworks. Therefore, optimizing individual models for defense offers only limited protection and can always be defeated by new attack methods. Ensemble defense has been shown to be effective in defending against adversarial attacks by combining diverse models. However, the problem of insufficient differentiation among existing models persists. Active defense in cyberspace security has successfully defended against unknown vulnerabilities by integrating subsystems with multiple different implementations to achieve a unified mission objective. Inspired by this, we propose exploring the feasibility of achieving model differentiation by changing the data features used in training individual models, as they are the core factor of functional implementation. We utilize several feature extraction methods to preprocess the data and train differentiated models based on these features. By generating adversarial perturbations to attack different models, we demonstrate that the feature representation of the data is highly resistant to adversarial perturbations. The entire ensemble is able to operate normally in an error-bearing environment.
10.1007/s00138-024-01571-x
adversarial defence by learning differentiated feature representation in deep ensemble
deep learning models have been shown to be vulnerable to critical attacks under adversarial conditions. attackers are able to generate powerful adversarial examples by searching for adversarial perturbations, without interfering with model training or directly modifying the model. this phenomenon indicates an endogenous problem in existing deep learning frameworks. therefore, optimizing individual models for defense offers only limited protection and can always be defeated by new attack methods. ensemble defense has been shown to be effective in defending against adversarial attacks by combining diverse models. however, the problem of insufficient differentiation among existing models persists. active defense in cyberspace security has successfully defended against unknown vulnerabilities by integrating subsystems with multiple different implementations to achieve a unified mission objective. inspired by this, we propose exploring the feasibility of achieving model differentiation by changing the data features used in training individual models, as they are the core factor of functional implementation. we utilize several feature extraction methods to preprocess the data and train differentiated models based on these features. by generating adversarial perturbations to attack different models, we demonstrate that the feature representation of the data is highly resistant to adversarial perturbations. the entire ensemble is able to operate normally in an error-bearing environment.
[ "deep learning models", "critical attacks", "adversarial conditions", "attackers", "powerful adversarial examples", "adversarial perturbations", "model training", "the model", "this phenomenon", "an endogenous problem", "existing deep learning frameworks", "individual models", "defense", "new attack methods", "ensemble defense", "adversarial attacks", "diverse models", "the problem", "insufficient differentiation", "existing models", "active defense", "cyberspace security", "unknown vulnerabilities", "subsystems", "multiple different implementations", "a unified mission objective", "this", "we", "the feasibility", "model differentiation", "the data features", "individual models", "they", "the core factor", "functional implementation", "we", "several feature extraction methods", "the data", "differentiated models", "these features", "adversarial perturbations", "different models", "we", "the feature representation", "the data", "adversarial perturbations", "the entire ensemble", "an error-bearing environment" ]
Scaling deep learning for materials discovery
[ "Amil Merchant", "Simon Batzner", "Samuel S. Schoenholz", "Muratahan Aykol", "Gowoon Cheon", "Ekin Dogus Cubuk" ]
Novel functional materials enable fundamental breakthroughs across technological applications from clean energy to information processing [1–11]. From microchips to batteries and photovoltaics, discovery of inorganic crystals has been bottlenecked by expensive trial-and-error approaches. Concurrently, deep-learning models for language, vision and biology have showcased emergent predictive capabilities with increasing data and computation [12–14]. Here we show that graph networks trained at scale can reach unprecedented levels of generalization, improving the efficiency of materials discovery by an order of magnitude. Building on 48,000 stable crystals identified in continuing studies [15–17], improved efficiency enables the discovery of 2.2 million structures below the current convex hull, many of which escaped previous human chemical intuition. Our work represents an order-of-magnitude expansion in stable materials known to humanity. Stable discoveries that are on the final convex hull will be made available to screen for technological applications, as we demonstrate for layered materials and solid-electrolyte candidates. Of the stable structures, 736 have already been independently experimentally realized. The scale and diversity of hundreds of millions of first-principles calculations also unlock modelling capabilities for downstream applications, leading in particular to highly accurate and robust learned interatomic potentials that can be used in condensed-phase molecular-dynamics simulations and high-fidelity zero-shot prediction of ionic conductivity.
10.1038/s41586-023-06735-9
scaling deep learning for materials discovery
novel functional materials enable fundamental breakthroughs across technological applications from clean energy to information processing [1–11]. from microchips to batteries and photovoltaics, discovery of inorganic crystals has been bottlenecked by expensive trial-and-error approaches. concurrently, deep-learning models for language, vision and biology have showcased emergent predictive capabilities with increasing data and computation [12–14]. here we show that graph networks trained at scale can reach unprecedented levels of generalization, improving the efficiency of materials discovery by an order of magnitude. building on 48,000 stable crystals identified in continuing studies [15–17], improved efficiency enables the discovery of 2.2 million structures below the current convex hull, many of which escaped previous human chemical intuition. our work represents an order-of-magnitude expansion in stable materials known to humanity. stable discoveries that are on the final convex hull will be made available to screen for technological applications, as we demonstrate for layered materials and solid-electrolyte candidates. of the stable structures, 736 have already been independently experimentally realized. the scale and diversity of hundreds of millions of first-principles calculations also unlock modelling capabilities for downstream applications, leading in particular to highly accurate and robust learned interatomic potentials that can be used in condensed-phase molecular-dynamics simulations and high-fidelity zero-shot prediction of ionic conductivity.
[ "novel functional materials", "fundamental breakthroughs", "technological applications", "clean energy", "information processing1,2,3,4,5,6,7,8,9,10,11", "microchips", "batteries", "photovoltaics", "discovery", "inorganic crystals", "expensive trial-and-error approaches", "deep-learning models", "language", "vision", "biology", "emergent predictive capabilities", "increasing data", "computation12,13,14", "we", "graph networks", "scale", "unprecedented levels", "generalization", "the efficiency", "materials discovery", "an order", "magnitude", "48,000 stable crystals", "continuing studies15,16,17", "improved efficiency", "the discovery", "2.2 million structures", "the current convex hull", "which", "previous human chemical intuition", "our work", "magnitude", "stable materials", "humanity", "stable discoveries", "that", "the final convex hull", "technological applications", "we", "layered materials", "solid-electrolyte candidates", "the stable structures", "the scale", "diversity", "hundreds of millions", "first-principles calculations", "modelling capabilities", "downstream applications", "learned interatomic potentials", "that", "condensed-phase molecular-dynamics simulations", "high-fidelity zero-shot prediction", "ionic conductivity", "48,000", "2.2 million", "736", "hundreds of millions", "first", "zero" ]
Deep learning-aided 3D proxy-bridged region-growing framework for multi-organ segmentation
[ "Zhihong Chen", "Lisha Yao", "Yue Liu", "Xiaorui Han", "Zhengze Gong", "Jichao Luo", "Jietong Zhao", "Gang Fang" ]
Accurate multi-organ segmentation in 3D CT images is imperative for enhancing computer-aided diagnosis and radiotherapy planning. However, current deep learning-based methods for 3D multi-organ segmentation face challenges such as the need for labor-intensive manual pixel-level annotations and high hardware resource demands, especially regarding GPU resources. To address these issues, we propose a 3D proxy-bridged region-growing framework specifically designed for the segmentation of the liver and spleen. Specifically, a key slice is selected from each 3D volume according to the corresponding intensity histogram. Subsequently, a deep learning model is employed to pinpoint the semantic central patch on this key slice, from which the growing seed is calculated. To counteract the impact of noise, segmentation of the liver and spleen is conducted on superpixel images created through a proxy-bridging strategy. The segmentation process is then extended to adjacent slices by applying the same methodology iteratively, culminating in the comprehensive segmentation results. Experimental results demonstrate that the proposed framework accomplishes segmentation of the liver and spleen with an average Dice Similarity Coefficient of approximately 0.93 and a Jaccard Similarity Coefficient of around 0.88. These outcomes substantiate the framework's capability to achieve performance on par with that of deep learning methods, albeit requiring less guidance information and lower GPU resources.
10.1038/s41598-024-60668-5
deep learning-aided 3d proxy-bridged region-growing framework for multi-organ segmentation
accurate multi-organ segmentation in 3d ct images is imperative for enhancing computer-aided diagnosis and radiotherapy planning. however, current deep learning-based methods for 3d multi-organ segmentation face challenges such as the need for labor-intensive manual pixel-level annotations and high hardware resource demands, especially regarding gpu resources. to address these issues, we propose a 3d proxy-bridged region-growing framework specifically designed for the segmentation of the liver and spleen. specifically, a key slice is selected from each 3d volume according to the corresponding intensity histogram. subsequently, a deep learning model is employed to pinpoint the semantic central patch on this key slice, from which the growing seed is calculated. to counteract the impact of noise, segmentation of the liver and spleen is conducted on superpixel images created through a proxy-bridging strategy. the segmentation process is then extended to adjacent slices by applying the same methodology iteratively, culminating in the comprehensive segmentation results. experimental results demonstrate that the proposed framework accomplishes segmentation of the liver and spleen with an average dice similarity coefficient of approximately 0.93 and a jaccard similarity coefficient of around 0.88. these outcomes substantiate the framework's capability to achieve performance on par with that of deep learning methods, albeit requiring less guidance information and lower gpu resources.
[ "accurate multi-organ segmentation", "3d ct images", "computer-aided diagnosis", "radiotherapy", "planning", "however, current deep learning-based methods", "3d multi-organ segmentation face challenges", "the need", "labor-intensive manual pixel-level annotations", "high hardware resource demands", "gpu resources", "these issues", "we", "a 3d proxy-bridged region-growing framework", "the segmentation", "the liver", "spleen", "a key slice", "each 3d volume", "the corresponding intensity histogram", "a deep learning model", "the semantic central patch", "this key slice", "the growing seed", "the impact", "noise", "segmentation", "the liver", "spleen", "superpixel images", "proxy-bridging strategy", "the segmentation process", "adjacent slices", "the same methodology", "the comprehensive segmentation results", "experimental results", "the proposed framework", "segmentation", "the liver", "an average dice similarity", "a jaccard similarity", "these outcomes", "the framework's capability", "performance", "par", "that", "deep learning methods", "less guidance information", "lower gpu resources", "3d", "3d", "3d", "3d", "approximately 0.93", "around 0.88" ]
Biometrics recognition using deep learning: a survey
[ "Shervin Minaee", "Amirali Abdolrashidi", "Hang Su", "Mohammed Bennamoun", "David Zhang" ]
In the past few years, deep learning-based models have been very successful in achieving state-of-the-art results in many tasks in computer vision, speech recognition, and natural language processing. These models seem to be a natural fit for handling the ever-increasing scale of biometric recognition problems, from cellphone authentication to airport security systems. Deep learning-based models have increasingly been leveraged to improve the accuracy of different biometric recognition systems in recent years. In this work, we provide a comprehensive survey of more than 150 promising works on biometric recognition (including face, fingerprint, iris, palmprint, ear, voice, signature, and gait recognition), which deploy deep learning models, and show their strengths and potentials in different applications. For each biometric, we first introduce the available datasets that are widely used in the literature and their characteristics. We will then talk about several promising deep learning works developed for that biometric, and show their performance on popular public benchmarks. We will also discuss some of the main challenges while using these models for biometric recognition, and possible future directions to which research in this area is headed.
10.1007/s10462-022-10237-x
biometrics recognition using deep learning: a survey
in the past few years, deep learning-based models have been very successful in achieving state-of-the-art results in many tasks in computer vision, speech recognition, and natural language processing. these models seem to be a natural fit for handling the ever-increasing scale of biometric recognition problems, from cellphone authentication to airport security systems. deep learning-based models have increasingly been leveraged to improve the accuracy of different biometric recognition systems in recent years. in this work, we provide a comprehensive survey of more than 150 promising works on biometric recognition (including face, fingerprint, iris, palmprint, ear, voice, signature, and gait recognition), which deploy deep learning models, and show their strengths and potentials in different applications. for each biometric, we first introduce the available datasets that are widely used in the literature and their characteristics. we will then talk about several promising deep learning works developed for that biometric, and show their performance on popular public benchmarks. we will also discuss some of the main challenges while using these models for biometric recognition, and possible future directions to which research in this area is headed.
[ "the past few years", "deep learning-based models", "the-art", "many tasks", "computer vision", "speech recognition", "natural language processing", "these models", "a natural fit", "the ever-increasing scale", "biometric recognition problems", "cellphone authentication", "security systems", "deep learning-based models", "the accuracy", "different biometric recognition systems", "recent years", "this work", "we", "a comprehensive survey", "more than 150 promising works", "biometric recognition", "face", "fingerprint", "iris", "ear", "voice", "signature", "gait recognition", "which", "deep learning models", "their strengths", "potentials", "different applications", "each biometric", "we", "the available datasets", "that", "the literature", "their characteristics", "we", "several promising deep learning works", "their performance", "popular public benchmarks", "we", "some", "the main challenges", "these models", "biometric recognition", "possible future directions", "which research", "this area", "the past few years", "recent years", "more than 150", "iris", "first" ]
Harnessing the Power of 6G Connectivity for Advanced Big Data Analytics with Deep Learning
[ "Maojin Sun", "Luyi Sun" ]
The worldwide development of smart applications demands ultra-reliable data communication to ensure data richness and timely processing. These smart applications create massive amounts of data to be processed in 6G networks with advanced technologies. 6G big data analytics becomes the demand for next-generation data communication and smart city applications. Traditional data analytics algorithms lag in efficiency while processing big data due to its huge volume, data dependencies, and the need for timely processing. A deep learning model called reinforcement learning is promising for processing big data in smart applications. The proposed study, Advanced Big Data Analytics using Deep Learning (ABDAS-DL), presents a pioneering approach that combines a Deep Reinforcement Learning (DRL)-based Deep Q Network (DQN) with Long Short-Term Memory (LSTM) to harness the vast capacity of 6G connectivity within the domain of advanced big data analytics. This study utilises smart transport-based data for taxi route optimisation by analysing climatic and surrounding factors. The advent of 6G connectivity promises extremely high data transmission speeds and tremendously low latency, opening new horizons for managing large datasets in real time. The performance of the proposed model is measured in terms of processing time, network performance, reliability and scalability. The proposed model takes 30 s to process the data and determine the taxi route, while traditional models consume more than an hour.
10.1007/s11277-024-11044-z
harnessing the power of 6g connectivity for advanced big data analytics with deep learning
the worldwide development of smart applications demands ultra-reliable data communication to ensure data richness and timely processing. these smart applications create massive amounts of data to be processed in 6g networks with advanced technologies. 6g big data analytics becomes the demand for next-generation data communication and smart city applications. traditional data analytics algorithms lag in efficiency while processing big data due to its huge volume, data dependencies, and the need for timely processing. a deep learning model called reinforcement learning is promising for processing big data in smart applications. the proposed study, advanced big data analytics using deep learning (abdas-dl), presents a pioneering approach that combines a deep reinforcement learning (drl)-based deep q network (dqn) with long short-term memory (lstm) to harness the vast capacity of 6g connectivity within the domain of advanced big data analytics. this study utilises smart transport-based data for taxi route optimisation by analysing climatic and surrounding factors. the advent of 6g connectivity promises extremely high data transmission speeds and tremendously low latency, opening new horizons for managing large datasets in real time. the performance of the proposed model is measured in terms of processing time, network performance, reliability and scalability. the proposed model takes 30 s to process the data and determine the taxi route, while traditional models consume more than an hour.
[ "the smart applications development", "ultra-reliable data communication", "the richness", "data", "processing", "time", "these smart applications", "massive amounts", "data", "6g networks", "advanced technologies", "6g big data analytics", "the demand", "next-generation data communication", "smart city applications", "efficiency", "big data", "huge volume", "data dependency", "timely processing", "a deep learning model", "reinforcement learning", "big data", "smart applications", "the proposed study", "advanced big data analytics", "deep learning", "abdas", "dl", "a pioneering approach", "that", "deep reinforcement learning", "drl", "deep q network", "dqn", "long-term, short-term memory", "lstm", "the vast capacity", "6g connectivity", "the domain", "advanced big data analytics", "this study", "smart transport-based data", "taxi route optimisation", "climatic and surrounding factors", "the look", "6g connectivity", "incredible facts", "data transmission speeds", "tremendously low latency", "new horizons", "large datasets", "real time", "the performance", "the proposed model", "terms", "processing time", "network", "reliability", "scalability", "the proposed model", "30 s", "the data", "the taxi route", "another traditional model", "an hour", "6", "6", "6", "6", "30", "more than an hour" ]
Assessments of Data-Driven Deep Learning Models on One-Month Predictions of Pan-Arctic Sea Ice Thickness
[ "Chentao Song", "Jiang Zhu", "Xichen Li" ]
In recent years, deep learning methods have gradually been applied to prediction tasks related to Arctic sea ice concentration, but relatively little research has been conducted for larger spatial and temporal scales, mainly due to the limited time coverage of observations and reanalysis data. Meanwhile, deep learning predictions of sea ice thickness (SIT) have yet to receive ample attention. In this study, two data-driven deep learning (DL) models are built based on the ConvLSTM and fully convolutional U-net (FC-Unet) algorithms and trained using CMIP6 historical simulations for transfer learning and fine-tuned using reanalysis/observations. These models enable monthly predictions of Arctic SIT without considering the complex physical processes involved. Through comprehensive assessments of prediction skills by season and region, the results suggest that using a broader set of CMIP6 data for transfer learning, as well as incorporating multiple climate variables as predictors, contributes to better prediction results, although both DL models can effectively predict the spatiotemporal features of SIT anomalies. Regarding the predicted SIT anomalies of the FC-Unet model, the spatial correlations with reanalysis reach an average level of 89% over all months, while the temporal anomaly correlation coefficients are close to unity in most cases. The models also demonstrate robust performances in predicting SIT and sea ice extent (SIE) during extreme events. The effectiveness and reliability of the proposed deep transfer learning models in predicting Arctic SIT can facilitate more accurate pan-Arctic predictions, aiding climate change research and real-time business applications.
10.1007/s00376-023-3259-3
assessments of data-driven deep learning models on one-month predictions of pan-arctic sea ice thickness
in recent years, deep learning methods have gradually been applied to prediction tasks related to arctic sea ice concentration, but relatively little research has been conducted for larger spatial and temporal scales, mainly due to the limited time coverage of observations and reanalysis data. meanwhile, deep learning predictions of sea ice thickness (sit) have yet to receive ample attention. in this study, two data-driven deep learning (dl) models are built based on the convlstm and fully convolutional u-net (fc-unet) algorithms and trained using cmip6 historical simulations for transfer learning and fine-tuned using reanalysis/observations. these models enable monthly predictions of arctic sit without considering the complex physical processes involved. through comprehensive assessments of prediction skills by season and region, the results suggest that using a broader set of cmip6 data for transfer learning, as well as incorporating multiple climate variables as predictors, contribute to better prediction results, although both dl models can effectively predict the spatiotemporal features of sit anomalies. regarding the predicted sit anomalies of the fc-unet model, the spatial correlations with reanalysis reach an average level of 89% over all months, while the temporal anomaly correlation coefficients are close to unity in most cases. the models also demonstrate robust performances in predicting sit and sea ice extent (sie) during extreme events. the effectiveness and reliability of the proposed deep transfer learning models in predicting arctic sit can facilitate more accurate pan-arctic predictions, aiding climate change research and real-time business applications.
[ "recent years", "deep learning methods", "prediction tasks", "arctic sea ice concentration", "relatively little research", "larger spatial and temporal scales", "the limited time coverage", "observations", "reanalysis data", "deep learning predictions", "sea ice thickness", "ample attention", "this study", "two data-driven deep learning (dl) models", "the convlstm", "fully convolutional u", "net", "fc-unet", "algorithms", "cmip6 historical simulations", "transfer learning", "reanalysis/observations", "these models", "monthly predictions", "the complex physical processes", "comprehensive assessments", "prediction skills", "season", "region", "the results", "a broader set", "cmip6 data", "transfer learning", "multiple climate variables", "predictors", "better prediction results", "both dl models", "the spatiotemporal features", "sit anomalies", "the predicted sit anomalies", "the fc-unet model", "the spatial correlations", "reanalysis", "an average level", "89%", "all months", "the temporal anomaly correlation coefficients", "unity", "most cases", "the models", "robust performances", "sit", "sie", "extreme events", "the effectiveness", "reliability", "the proposed deep transfer learning models", "arctic sit", "more accurate pan-arctic predictions", "climate change research", "real-time business applications", "recent years", "arctic sea ice concentration", "two", "cmip6", "monthly", "arctic", "season", "cmip6", "89%", "all months", "pan-arctic" ]
Early betel leaf disease detection using vision transformer and deep learning algorithms
[ "S. Kusuma", "K. R. Jothi" ]
The benefits of betel leaves include their high content of antioxidants, which can help protect against oxidative stress and promote overall health. Additionally, these leaves are known for their potential anti-inflammatory properties, making them valuable in traditional medicine practices. The biggest threat to the security of the food supply is posed by plant diseases, and it is difficult to detect them early enough to prevent potential economic harm. This crop loss not only affects the economy but also poses a threat to food security, as betel leaves are widely used in traditional cuisines and herbal remedies. By analysing large datasets of plant images, deep learning algorithms can quickly identify specific patterns and symptoms associated with various diseases. In this study, we evaluated how well four deep learning models—VGG19, DenseNet201, ResNet152V2, and a Vision Transformer (ViT) model—performed at detecting diseases that affect betel leaves. Both the ResNet152V2 and ViT models attained high levels of accuracy, with testing accuracies of 98.42% and 97.83%, respectively. However, the VGG19 model had slightly lower accuracy, with a testing accuracy of 91%. Overall, these deep learning models showed promising results in detecting diseases affecting betel leaves, with the DenseNet201 model performing the best with a testing accuracy of 98.77%.
10.1007/s41870-023-01647-3
early betel leaf disease detection using vision transformer and deep learning algorithms
the benefits of betel leaves include their high content of antioxidants, which can help protect against oxidative stress and promote overall health. additionally, these leaves are known for their potential anti-inflammatory properties, making them valuable in traditional medicine practices. the biggest threat to the security of the food supply is posed by plant diseases, and it is difficult to detect them early enough to prevent potential economic harm. this crop loss not only affects the economy but also poses a threat to food security, as betel leaves are widely used in traditional cuisines and herbal remedies. by analysing large datasets of plant images, deep learning algorithms can quickly identify specific patterns and symptoms associated with various diseases. in this study, we evaluated how well four deep learning models—vgg19, densenet201, resnet152v2, and a vision transformer (vit) model—performed at detecting diseases that affect betel leaves. both the resnet152v2 and vit models attained high levels of accuracy, with testing accuracies of 98.42% and 97.83%, respectively. however, the vgg19 model had slightly lower accuracy, with a testing accuracy of 91%. overall, these deep learning models showed promising results in detecting diseases affecting betel leaves, with the densenet201 model performing the best with a testing accuracy of 98.77%.
[ "betel", "benefits", "their high content", "antioxidants", "which", "oxidative stress", "overall health", "these leaves", "their potential anti-inflammatory properties", "them", "traditional medicine practises", "the biggest threat", "the security", "the food supply", "plant diseases", "it", "them", "potential economic harm", "this crop loss", "the economy", "a threat", "food security", "betel leaves", "traditional cuisines", "herbal remedies", "large datasets", "plant images", "deep learning algorithms", "specific patterns", "symptoms", "various diseases", "this study", "we", "how well four deep learning models", "vgg19", "a vision transform model", "diseases", "that", "betel leaves", "both the resnet152v2 and vit models", "levels", "accuracy", "testing", "accuracies", "98.42%", "97.83%", "the vgg19 model", "slightly lower accuracy", "a testing accuracy", "91%", "these deep learning models", "promising results", "diseases", "betel leaves", "the densenet201 model", "a testing accuracy", "98.77%", "betel", "four", "vit", "98.42%", "97.83%", "91%", "98.77%" ]
Exploring the deep learning of artificial intelligence in nursing: a concept analysis with Walker and Avant’s approach
[ "Supichaya Wangpitipanit", "Lininger Jiraporn", "Nick Anderson" ]
Background: In recent years, increased attention has been given to using deep learning (DL) of artificial intelligence (AI) in healthcare to address nursing challenges. The adoption of new technologies in nursing needs to be improved, and AI in nursing is still in its early stages. However, the current literature needs more clarity, which affects clinical practice, research, and theory development. This study aimed to clarify the meaning of deep learning and identify the defining attributes of artificial intelligence within nursing. Methods: We conducted a concept analysis of the deep learning of AI in nursing care using Walker and Avant’s 8-step approach. Our search strategy employed Boolean techniques and MeSH terms across databases, including BMC, CINAHL, ClinicalKey for Nursing, Embase, Ovid, Scopus, SpringerLink and Springer Nature, ProQuest, PubMed, and Web of Science. By focusing on relevant keywords in titles and abstracts from articles published between 2018 and 2024, we initially found 571 sources. Results: Thirty-seven articles that met the inclusion criteria were analyzed in this study. The attributes of evidence included four themes: focus and immersion, coding and understanding, arranging layers and algorithms, and implementing within the process of use cases to modify recommendations. Antecedents, unclear systems and communication, insufficient data management knowledge and support, and compound challenges can lead to suffering and risky caregiving tasks. Applying deep learning techniques enables nurses to simulate scenarios, predict outcomes, and plan care more precisely. Embracing deep learning equipment allows nurses to make better decisions. It empowers them with enhanced knowledge while ensuring adequate support and resources essential for caregiver and patient well-being. Access to necessary equipment is vital for high-quality home healthcare. Conclusion: This study provides a clearer understanding of the use of deep learning in nursing and its implications for nursing practice. Future research should focus on exploring the impact of deep learning on healthcare operations management through quantitative and qualitative studies. Additionally, developing a framework to guide the integration of deep learning into nursing practice is recommended to facilitate its adoption and implementation.
10.1186/s12912-024-02170-x
exploring the deep learning of artificial intelligence in nursing: a concept analysis with walker and avant’s approach
background: in recent years, increased attention has been given to using deep learning (dl) of artificial intelligence (ai) in healthcare to address nursing challenges. the adoption of new technologies in nursing needs to be improved, and ai in nursing is still in its early stages. however, the current literature needs more clarity, which affects clinical practice, research, and theory development. this study aimed to clarify the meaning of deep learning and identify the defining attributes of artificial intelligence within nursing. methods: we conducted a concept analysis of the deep learning of ai in nursing care using walker and avant’s 8-step approach. our search strategy employed boolean techniques and mesh terms across databases, including bmc, cinahl, clinicalkey for nursing, embase, ovid, scopus, springerlink and springer nature, proquest, pubmed, and web of science. by focusing on relevant keywords in titles and abstracts from articles published between 2018 and 2024, we initially found 571 sources. results: thirty-seven articles that met the inclusion criteria were analyzed in this study. the attributes of evidence included four themes: focus and immersion, coding and understanding, arranging layers and algorithms, and implementing within the process of use cases to modify recommendations. antecedents, unclear systems and communication, insufficient data management knowledge and support, and compound challenges can lead to suffering and risky caregiving tasks. applying deep learning techniques enables nurses to simulate scenarios, predict outcomes, and plan care more precisely. embracing deep learning equipment allows nurses to make better decisions. it empowers them with enhanced knowledge while ensuring adequate support and resources essential for caregiver and patient well-being. access to necessary equipment is vital for high-quality home healthcare. conclusion: this study provides a clearer understanding of the use of deep learning in nursing and its implications for nursing practice. future research should focus on exploring the impact of deep learning on healthcare operations management through quantitative and qualitative studies. additionally, developing a framework to guide the integration of deep learning into nursing practice is recommended to facilitate its adoption and implementation.
[ "increased attention", "deep learning", "dl", "artificial intelligence", "healthcare", "nursing challenges", "the adoption", "new technologies", "nursing needs", "nursing", "its early stages", "the current literature", "more clarity", "which", "clinical practice", "research", "theory development", "this study", "the meaning", "deep learning", "the defining attributes", "artificial intelligence", "a concept analysis", "the deep learning", "ai", "nursing care", "walker and avant’s 8-step approach", "our search strategy", "boolean techniques", "mesh", "terms", "databases", "bmc", "cinahl", "clinicalkey", "nursing", "embase", "ovid", "scopus", "springerlink", "spinger nature", "web", "science", "relevant keywords", "titles", "abstracts", "articles", "we", "571 sources.resultsthirty-seven articles", "that", "the inclusion criteria", "this study", "the attributes", "evidence", "four themes", "immersion", "understanding", "layers", "algorithms", "the process", "use cases", "recommendations", "antecedents", "unclear systems", "communication", "insufficient data management knowledge", "support", "compound challenges", "suffering and risky caregiving tasks", "deep learning techniques", "nurses", "scenarios", "outcomes", "plan", "deep learning equipment", "nurses", "better decisions", "it", "them", "enhanced knowledge", "adequate support", "resources", "caregiver", "well-being", "access", "necessary equipment", "high-quality home", "healthcare.conclusionthis study", "a clearer understanding", "the use", "deep learning", "nursing", "its implications", "nursing practice", "future research", "the impact", "deep learning", "healthcare operations management", "quantitative and qualitative studies", "a framework", "the integration", "deep learning", "nursing practice", "its adoption", "implementation", "backgroundin recent years", "8", "between 2018 and 2024", "571", "four" ]
Automatic retinoblastoma screening and surveillance using deep learning
[ "Ruiheng Zhang", "Li Dong", "Ruyue Li", "Kai Zhang", "Yitong Li", "Hongshu Zhao", "Jitong Shi", "Xin Ge", "Xiaolin Xu", "Libin Jiang", "Xuhan Shi", "Chuan Zhang", "Wenda Zhou", "Liangyuan Xu", "Haotian Wu", "Heyan Li", "Chuyao Yu", "Jing Li", "Jianmin Ma", "Wenbin Wei" ]
Background: Retinoblastoma is the most common intraocular malignancy in childhood. With the advanced management strategy, the globe salvage and overall survival have significantly improved, which proposes subsequent challenges regarding long-term surveillance and offspring screening. This study aimed to apply a deep learning algorithm to reduce the burden of follow-up and offspring screening. Methods: This cohort study includes retinoblastoma patients who visited Beijing Tongren Hospital from March 2018 to January 2022 for deep learning algorithm development. Clinical-suspected and treated retinoblastoma patients from February 2022 to June 2022 were prospectively collected for prospective validation. Images from the posterior pole and peripheral retina were collected, and reference standards were made according to the consensus of the multidisciplinary management team. A deep learning algorithm was trained to identify “normal fundus”, “stable retinoblastoma” in which specific treatment is not required, and “active retinoblastoma” in which specific treatment is required. The performance metrics for each classifier included sensitivity, specificity, accuracy, and cost-utility. Results: A total of 36,623 images were included for developing the Deep Learning Assistant for Retinoblastoma Monitoring (DLA-RB) algorithm. In internal fivefold cross-validation, DLA-RB achieved an area under curve (AUC) of 0.998 (95% confidence interval [CI] 0.986–1.000) in distinguishing normal fundus and active retinoblastoma, and 0.940 (95% CI 0.851–0.996) in distinguishing stable and active retinoblastoma. From February 2022 to June 2022, 139 eyes of 103 patients were prospectively collected. In identifying active retinoblastoma tumours from all clinical-suspected patients and active retinoblastoma from all treated retinoblastoma patients, the AUC of DLA-RB reached 0.991 (95% CI 0.970–1.000) and 0.962 (95% CI 0.915–1.000), respectively. The combination between ophthalmologists and DLA-RB significantly improved the accuracy of competent ophthalmologists and residents regarding both binary tasks. Cost-utility analysis revealed that the DLA-RB-based diagnosis mode is cost-effective in both retinoblastoma diagnosis and active retinoblastoma identification. Conclusions: DLA-RB achieved high accuracy and sensitivity in identifying active retinoblastoma from the normal and stable retinoblastoma fundus. It can be used to surveil the activity of retinoblastoma during follow-up and screen high-risk offspring. Compared with referral procedures to ophthalmologic centres, DLA-RB-based screening and surveillance is cost-effective and can be incorporated within telemedicine programs. Clinical Trial Registration: This study was registered on ClinicalTrials.gov (NCT05308043).
10.1038/s41416-023-02320-z
automatic retinoblastoma screening and surveillance using deep learning
background: retinoblastoma is the most common intraocular malignancy in childhood. with the advanced management strategy, the globe salvage and overall survival have significantly improved, which proposes subsequent challenges regarding long-term surveillance and offspring screening. this study aimed to apply a deep learning algorithm to reduce the burden of follow-up and offspring screening. methods: this cohort study includes retinoblastoma patients who visited beijing tongren hospital from march 2018 to january 2022 for deep learning algorithm development. clinical-suspected and treated retinoblastoma patients from february 2022 to june 2022 were prospectively collected for prospective validation. images from the posterior pole and peripheral retina were collected, and reference standards were made according to the consensus of the multidisciplinary management team. a deep learning algorithm was trained to identify “normal fundus”, “stable retinoblastoma” in which specific treatment is not required, and “active retinoblastoma” in which specific treatment is required. the performance metrics for each classifier included sensitivity, specificity, accuracy, and cost-utility. results: a total of 36,623 images were included for developing the deep learning assistant for retinoblastoma monitoring (dla-rb) algorithm. in internal fivefold cross-validation, dla-rb achieved an area under curve (auc) of 0.998 (95% confidence interval [ci] 0.986–1.000) in distinguishing normal fundus and active retinoblastoma, and 0.940 (95% ci 0.851–0.996) in distinguishing stable and active retinoblastoma. from february 2022 to june 2022, 139 eyes of 103 patients were prospectively collected. in identifying active retinoblastoma tumours from all clinical-suspected patients and active retinoblastoma from all treated retinoblastoma patients, the auc of dla-rb reached 0.991 (95% ci 0.970–1.000) and 0.962 (95% ci 0.915–1.000), respectively. the combination between ophthalmologists and dla-rb significantly improved the accuracy of competent ophthalmologists and residents regarding both binary tasks. cost-utility analysis revealed that the dla-rb-based diagnosis mode is cost-effective in both retinoblastoma diagnosis and active retinoblastoma identification. conclusions: dla-rb achieved high accuracy and sensitivity in identifying active retinoblastoma from the normal and stable retinoblastoma fundus. it can be used to surveil the activity of retinoblastoma during follow-up and screen high-risk offspring. compared with referral procedures to ophthalmologic centres, dla-rb-based screening and surveillance is cost-effective and can be incorporated within telemedicine programs. clinical trial registration: this study was registered on clinicaltrials.gov (nct05308043).
[ "backgroundretinoblastoma", "the most common intraocular malignancy", "childhood", "the advanced management strategy", "the globe salvage", "overall survival", "which", "subsequent challenges", "long-term surveillance", "offspring screening", "this study", "a deep learning algorithm", "the burden", "follow-up", "screening.methodsthis cohort study", "retinoblastoma patients", "who", "beijing tongren hospital", "march", "january", "deep learning algorism development", "clinical-suspected and treated retinoblastoma patients", "february", "june", "prospective validation", "images", "the posterior pole", "peripheral retina", "reference standards", "the consensus", "the multidisciplinary management team", "a deep learning algorithm", "normal fundus”, “stable retinoblastoma", "which", "specific treatment", "“active retinoblastoma", "which", "specific treatment", "the performance", "each classifier", "sensitivity", "specificity", "accuracy", "cost-utility.resultsa total", "36,623 images", "the deep learning assistant", "retinoblastoma monitoring", "dla-rb", "-", "dla-rb", "an area", "curve", "auc", "(95% confidence interval", "normal fundus", "active retinoblastoma", "0.940 (95%", "ci 0.851–0.996", "stable and active retinoblastoma", "february", "june", "139 eyes", "103 patients", "active retinoblastoma tumours", "all clinical-suspected patients", "active retinoblastoma", "all treated retinoblastoma patients", "the auc", "dla-rb", "0.991 (95%", "ci 0.970–1.000", "ci 0.915–1.000", "the combination", "ophthalmologists", "dla-rb", "the accuracy", "competent ophthalmologists", "residents", "both binary tasks", "cost-utility analysis", "dla-rb-based diagnosis mode", "both retinoblastoma diagnosis", "active retinoblastoma", "high accuracy", "sensitivity", "active retinoblastoma", "the normal and stable retinoblastoma fundus", "it", "the activity", "retinoblastoma", "follow-up and screen high-risk offspring", "referral procedures", "ophthalmologic centres", "dla-rb-based screening", "surveillance", "telemedicine programs.clinical trial registrationthis study", "nct05308043", "march 2018 to", "january 2022", "february 2022 to june 2022", "36,623", "0.998", "95%", "0.940", "95%", "0.851–0.996", "february 2022 to june 2022", "139", "103", "0.991", "95%", "0.962", "95%", "0.915–1.000" ]
OSS reliability assessment method based on deep learning and independent Wiener data preprocessing
[ "Yoshinobu Tamura", "Shoichiro Miyamoto", "Lei Zhou", "Adarsh Anand", "P. K. Kapur", "Shigeru Yamada" ]
The fault big data sets of many open source software (OSS) projects are recorded on bug tracking systems. In the past, we proposed an effort assessment method under the assumption that the fault detection phenomenon depends on the maintenance effort, because the number of software faults is influenced by the effort expenditure. Past research on the effort assessment method for OSS is based on effort data sets. On the other hand, we propose a deep learning approach to OSS fault big data. Previously, the existing method without the Wiener process could estimate only within the range of the existing data. The proposed method assumes that the fault detection process follows a Wiener process with properties such as imperfect debugging and the Markov property. Thereby, the proposed method can estimate values exceeding the existing data by adding white noise based on the Wiener process. The proposed method thus makes it possible for OSS managers to assess values exceeding the existing data. We then show several reliability assessment measures based on the fault modification time estimated by deep learning. Moreover, several numerical illustrations based on the proposed deep learning model are shown in this paper.
10.1007/s13198-024-02288-w
oss reliability assessment method based on deep learning and independent wiener data preprocessing
the fault big data sets of many open source software (oss) projects are recorded on bug tracking systems. in the past, we proposed an effort assessment method under the assumption that the fault detection phenomenon depends on the maintenance effort, because the number of software faults is influenced by the effort expenditure. past research on the effort assessment method for oss is based on effort data sets. on the other hand, we propose a deep learning approach to oss fault big data. previously, the existing method without the wiener process could estimate only within the range of the existing data. the proposed method assumes that the fault detection process follows a wiener process with properties such as imperfect debugging and the markov property. thereby, the proposed method can estimate values exceeding the existing data by adding white noise based on the wiener process. the proposed method thus makes it possible for oss managers to assess values exceeding the existing data. we then show several reliability assessment measures based on the fault modification time estimated by deep learning. moreover, several numerical illustrations based on the proposed deep learning model are shown in this paper.
[ "the fault big data sets", "many open source software", "oss", "the bug tracking systems", "the past", "we", "the effort assessment method", "the assumption", "the fault detection phenomenon", "the maintenance effort", "the number", "software fault", "the effort expenditure", "the past research", "terms", "the effort assessment method", "oss", "the effort data sets", "the other hand", "we", "the deep learning approach", "the oss", "big data", "the past", "the existing method", "wiener process", "the range", "existing data", "the proposed method", "the fault detection process", "the wiener process", "the imperfect debugging", "markov property", "the proposed method", "the exceeding values", "the white noise", "the wiener process", "the proposed method", "it", "the oss managers", "the values", "the existing data", "we", "several reliability assessment measures", "the fault modification time", "the deep learning", "several numerical illustrations", "the proposed deep learning model", "this paper" ]
A comparative study: prediction of parkinson’s disease using machine learning, deep learning and nature inspired algorithm
[ "Pankaj Kumar Keserwani", "Suman Das", "Nairita Sarkar" ]
Parkinson’s Disease (PD) is a degenerative and progressive neurological disorder that worsens over time. This disease initially affects people over 55 years old. Patients with PD often exhibit a variety of non-motor and motor symptoms and are diagnosed based on those motor and non-motor symptoms as well as numerous clinical indicators. Advancement in medical science has produced medicines for many diseases, but to date no significant remedy has been discovered for Parkinson’s disease. It is essential to detect PD at an early stage so that precautions can be taken to reduce its harmful impact and considerably improve the patient’s lifestyle. Addressing this imperative, researchers have recently turned their focus toward Artificial Intelligence (AI) as a promising avenue: AI’s capacity to manage vast datasets and generate precise statistical predictions makes it an invaluable tool for PD detection. This article aims to provide a comprehensive survey and in-depth analysis of various AI-based approaches. Leveraging machine learning (ML), deep learning (DL), and meta-heuristic algorithms, these approaches contribute to the prediction of PD. Additionally, the article delves into current research directions. As the pursuit of advancements continues, the integration of AI holds promise in revolutionizing early detection methods and subsequently improving the lives of individuals grappling with Parkinson’s disease.
10.1007/s11042-024-18186-z
a comparative study: prediction of parkinson’s disease using machine learning, deep learning and nature inspired algorithm
parkinson’s disease (pd) is a degenerative and progressive neurological disorder that worsens over time. this disease initially affects people over 55 years old. patients with pd often exhibit a variety of non-motor and motor symptoms and are diagnosed based on those motor and non-motor symptoms as well as numerous clinical indicators. advancement in medical science has produced medicines for many diseases, but to date no significant remedy has been discovered for parkinson’s disease. it is essential to detect pd at an early stage so that precautions can be taken to reduce its harmful impact and considerably improve the patient’s lifestyle. addressing this imperative, researchers have recently turned their focus toward artificial intelligence (ai) as a promising avenue: ai’s capacity to manage vast datasets and generate precise statistical predictions makes it an invaluable tool for pd detection. this article aims to provide a comprehensive survey and in-depth analysis of various ai-based approaches. leveraging machine learning (ml), deep learning (dl), and meta-heuristic algorithms, these approaches contribute to the prediction of pd. additionally, the article delves into current research directions. as the pursuit of advancements continues, the integration of ai holds promise in revolutionizing early detection methods and subsequently improving the lives of individuals grappling with parkinson’s disease.
[ "parkinson’s disease", "pd", "a degenerative and progressive neurological disorder", "time", "this disease", "people", "patients", "pd", "a variety", "non-motor and motor symptoms", "those motor and non-motor symptoms", "numerous clinical indicators", "advancement", "medical science", "medicines", "many diseases", "no significant remedies", "parkinson disease", "it", "pd", "early phase", "precautions", "its harmful impact", "the patient’s life style", "a considerable level", "this direction", "artificial intelligence", "ai) based approaches", "many researchers", "ai", "vast amounts", "data", "accurate statistical predictions", "this imperative", "researchers", "their focus", "artificial intelligence", "(ai", "a promising avenue", "ai’s capacity", "vast datasets", "precise statistical predictions", "it", "pd detection", "this article", "a comprehensive survey", "-depth", "various ai-based approaches", "machine learning", "ml", "deep learning", "dl", "meta-heuristic algorithms", "these approaches", "the prediction", "pd", "the article", "current research directions", "the pursuit", "advancements", "the integration", "ai", "promise", "early detection methods", "the lives", "individuals", "parkinson’s disease", "55 years old" ]
A hybrid approach to detecting Parkinson's disease using spectrogram and deep learning CNN-LSTM network
[ "V. Shibina", "T. M. Thasleema" ]
Parkinson’s disease (PD) is a common illness that affects brain neurons. Medical practitioners and caregivers face challenges in detecting Parkinson's disease promptly, whether in its early or late stages. There is an urgent need for non-invasive PD diagnostic technologies because timely diagnosis substantially impacts patient outcomes. This research aims to provide an efficient way of identifying Parkinson's disease by transforming voice inputs into spectrograms using the Short-Time Fourier Transform and applying deep learning algorithms. The identification of Parkinson's disease can be done by leveraging deep learning architectures such as Convolutional Neural Networks and Long Short-Term Memory networks. The experiment produced positive findings, with 95.67% accuracy, 97.62% precision, 94.67% recall, and an F1-score of 95.91%. The outcomes indicate that the suggested deep learning method is more successful in PD identification, surpassing the results of traditional classification methods.
10.1007/s10772-024-10128-2
a hybrid approach to detecting parkinson's disease using spectrogram and deep learning cnn-lstm network
parkinson’s disease (pd) is a common illness that affects brain neurons. medical practitioners and caregivers face challenges in detecting parkinson's disease promptly, whether in its early or late stages. there is an urgent need for non-invasive pd diagnostic technologies because timely diagnosis substantially impacts patient outcomes. this research aims to provide an efficient way of identifying parkinson's disease by transforming voice inputs into spectrograms using the short-time fourier transform and applying deep learning algorithms. the identification of parkinson's disease can be done by leveraging deep learning architectures such as convolutional neural networks and long short-term memory networks. the experiment produced positive findings, with 95.67% accuracy, 97.62% precision, 94.67% recall, and an f1-score of 95.91%. the outcomes indicate that the suggested deep learning method is more successful in pd identification, surpassing the results of traditional classification methods.
[ "parkinson’s disease", "pd", "a common illness", "that", "brain neurons", "medical practitioners", "caregivers", "challenges", "parkinson's disease", "its early or late stages", "an urgent need", "non-invasive pd diagnostic technologies", "timely diagnosis", "patient outcomes", "this research", "an efficient way", "parkinson's disease", "voice inputs", "spectrograms", "short term fourier transform", "deep learning algorithms", "the identification", "parkinson's disease", "the deep learning architectures", "convolutional neural networks", "long short-term memory networks", "the experiment", "positive findings", "95.67% accuracy", "97.62% precision", "94.67% recall", "an f1-score", "95.91%", "the outcomes", "the suggested deep learning method", "pd identification", "the results", "traditional classification methods", "95.67%", "97.62%", "94.67%", "95.91%" ]
A comprehensive review of deep learning approaches for group activity analysis
[ "Gang Zhang", "Yang Geng", "Zhao G. Gong" ]
The study of group activity analysis has garnered significant attention. Group activity offers a unique perspective on the relationships between individuals, providing insights that individual and crowd activities may not reveal. This paper aims to contribute to the existing body of knowledge by providing a comprehensive review of the methods employed in utilizing deep learning for the analysis of group activity. The review encompasses an overview of various methodologies for group detection, segmentation, feature extraction, and description. Additionally, it delves into the classification, recognition, and prediction of group activities, including crowd trajectory prediction. The representation of crowd activity patterns and labels, along with an exploration of datasets for crowd activity analysis, is also included. Ultimately, the paper concludes with a discussion of potential future research directions in the field. By offering a comprehensive review of the advancements in group activity analysis through the lens of deep learning, this paper aims to provide researchers with a better understanding of the field's progress, thereby contributing to the continued development of this area of study.
10.1007/s00371-024-03479-z
a comprehensive review of deep learning approaches for group activity analysis
the study of group activity analysis has garnered significant attention. group activity offers a unique perspective on the relationships between individuals, providing insights that individual and crowd activities may not reveal. this paper aims to contribute to the existing body of knowledge by providing a comprehensive review of the methods employed in utilizing deep learning for the analysis of group activity. the review encompasses an overview of various methodologies for group detection, segmentation, feature extraction, and description. additionally, it delves into the classification, recognition, and prediction of group activities, including crowd trajectory prediction. the representation of crowd activity patterns and labels, along with an exploration of datasets for crowd activity analysis, is also included. ultimately, the paper concludes with a discussion of potential future research directions in the field. by offering a comprehensive review of the advancements in group activity analysis through the lens of deep learning, this paper aims to provide researchers with a better understanding of the field's progress, thereby contributing to the continued development of this area of study.
[ "the study", "group activity analysis", "significant attention", "group activity", "a unique perspective", "the relationships", "individuals", "insights", "that", "individual and crowd activities", "this paper", "the existing body", "knowledge", "a comprehensive review", "the methods", "deep learning", "the analysis", "group activity", "the review", "an overview", "various methodologies", "group detection", "segmentation", "feature extraction", "description", "it", "the classification", "recognition", "prediction", "group activities", "crowd", "trajectory prediction", "the representation", "crowd activity patterns", "labels", "an exploration", "datasets", "crowd activity analysis", "the paper", "a discussion", "potential future research directions", "the field", "a comprehensive review", "the advancements", "group activity analysis", "the lens", "deep learning", "this paper", "researchers", "a better understanding", "the field's progress", "the continued development", "this area", "study" ]
Curriculum learning and evolutionary optimization into deep learning for text classification
[ "Alfredo Arturo Elías-Miranda", "Daniel Vallejo-Aldana", "Fernando Sánchez-Vega", "A. Pastor López-Monroy", "Alejandro Rosales-Pérez", "Victor Muñiz-Sanchez" ]
The exponential growth of social networks has given rise to a wide variety of content. Some social content violates the integrity and dignity of users; classifying such content has therefore become a challenging task. It requires dealing with short texts, poorly written language, unbalanced classes, and non-thematic aspects. These can lead to overfitting in deep neural network (DNN) models used for classification tasks. Empirical evidence in previous studies indicates that some of these problems can be overcome by improving the optimization process of the DNN weights to avoid overfitting. Moreover, a well-defined learning process over the input examples could improve the order of the patterns learned throughout the optimization process. In this paper, we propose four Curriculum Learning strategies and a new Hybrid Genetic–Gradient Algorithm that proved to improve the performance of DNN models detecting the class of interest even in highly imbalanced datasets.
10.1007/s00521-023-08632-8
curriculum learning and evolutionary optimization into deep learning for text classification
the exponential growth of social networks has given rise to a wide variety of content. some social content violates the integrity and dignity of users; classifying such content has therefore become a challenging task. it requires dealing with short texts, poorly written language, unbalanced classes, and non-thematic aspects. these can lead to overfitting in deep neural network (dnn) models used for classification tasks. empirical evidence in previous studies indicates that some of these problems can be overcome by improving the optimization process of the dnn weights to avoid overfitting. moreover, a well-defined learning process over the input examples could improve the order of the patterns learned throughout the optimization process. in this paper, we propose four curriculum learning strategies and a new hybrid genetic–gradient algorithm that proved to improve the performance of dnn models detecting the class of interest even in highly imbalanced datasets.
[ "the exponential growth", "social networks", "rise", "a wide variety", "content", "some social content", "the integrity", "dignity", "users", "this task", "the need", "short texts", "poorly written language", "unbalanced classes", "non-thematic aspects", "these", "deep neural network (dnn) models", "classification tasks", "empirical evidence", "previous studies", "some", "these problems", "the optimization process", "the dnn weights", "a well-defined learning process", "the input examples", "the order", "the patterns", "the optimization process", "this paper", "we", "four curriculum", "strategies", "a new hybrid genetic–gradient algorithm", "that", "the performance", "dnn models", "the class", "interest", "highly imbalanced datasets", "four" ]
Deep learning model for detection of hotspots using infrared thermographic images of electrical installations
[ "Ezechukwu Kalu Ukiwe", "Steve A. Adeshina", "Tsado Jacob", "Bukola Babatunde Adetokun" ]
Hotspots in electrical power equipment or installations are a major issue whenever they occur within the power system. Many factors are responsible for this phenomenon; sometimes they are inter-related, and at other times they are isolated. Electrical hotspots caused by poor connections are common. Deep learning models have become popular for diagnosing anomalies in physical and biological systems, by the instrumentality of feature extraction of images in convolutional neural networks. In this work, a VGG-16 deep neural network model is applied for identifying electrical hotspots by means of transfer learning. This model was achieved by first augmenting the acquired infrared thermographic images, then using the pre-trained ImageNet weights of the VGG-16 algorithm with additional global average pooling in place of conventional fully connected layers and a softmax layer at the output. With the categorical cross-entropy loss function, the model was implemented using the Adam optimizer at a learning rate of 0.0001 as well as some variants of the Adam optimization algorithm. On evaluation with a test IRT image dataset, and in comparison with similar works, the research achieved a better accuracy of 99.98% in the identification of electrical hotspots. The model shows good scores in performance metrics like accuracy, precision, recall, and F1-score. The obtained results proved the potential of deep learning using computer vision parameters for infrared thermographic identification of electrical hotspots in power system installations. Also, careful selection of the IR sensor’s thermal range during image acquisition is needed, and a suitable choice of color palette makes for easy hotspot isolation, reduces the pixel-to-pixel temperature differential across the images, and easily highlights the critical region of interest with high pixel values. However, this makes edge detection difficult for human visual perception, a limitation that a computer vision-based deep learning model can overcome.
10.1186/s43067-024-00148-y
deep learning model for detection of hotspots using infrared thermographic images of electrical installations
hotspots in electrical power equipment or installations are a major issue whenever they occur within the power system. many factors are responsible for this phenomenon; sometimes they are inter-related, and at other times they are isolated. electrical hotspots caused by poor connections are common. deep learning models have become popular for diagnosing anomalies in physical and biological systems, by the instrumentality of feature extraction of images in convolutional neural networks. in this work, a vgg-16 deep neural network model is applied for identifying electrical hotspots by means of transfer learning. this model was achieved by first augmenting the acquired infrared thermographic images, then using the pre-trained imagenet weights of the vgg-16 algorithm with additional global average pooling in place of conventional fully connected layers and a softmax layer at the output. with the categorical cross-entropy loss function, the model was implemented using the adam optimizer at a learning rate of 0.0001 as well as some variants of the adam optimization algorithm. on evaluation with a test irt image dataset, and in comparison with similar works, the research achieved a better accuracy of 99.98% in the identification of electrical hotspots. the model shows good scores in performance metrics like accuracy, precision, recall, and f1-score. the obtained results proved the potential of deep learning using computer vision parameters for infrared thermographic identification of electrical hotspots in power system installations. also, careful selection of the ir sensor’s thermal range during image acquisition is needed, and a suitable choice of color palette makes for easy hotspot isolation, reduces the pixel-to-pixel temperature differential across the images, and easily highlights the critical region of interest with high pixel values. however, this makes edge detection difficult for human visual perception, a limitation that a computer vision-based deep learning model can overcome.
[ "hotspots", "electrical power equipment", "installations", "a major issue", "it", "the power system", "factors", "this phenomenon", "many, sometimes inter-related and other times", "they", "electrical hotspots", "poor connections", "deep learning models", "anomalies", "physical and biological systems", "the instrumentality", "feature extraction", "images", "convolutional neural networks", "this work", "a vgg-16 deep neural network model", "electrical hotspots", "means", "transfer learning", "this model", "the acquired infrared thermographic images", "the pre-trained imagenet weights", "the vgg-16 algorithm", "additional global average pooling", "place", "conventional fully connected layers", "a softmax layer", "the output", "the categorical cross-entropy loss function", "the model", "the adam optimizer", "rate", "some variants", "the adam optimization algorithm", "evaluation", "a test irt image dataset", "a comparison", "similar works", "the research", "a better accuracy", "99.98%", "identification", "electrical hotspots", "the model", "good score", "performance metrics", "accuracy", "precision", "recall", "f1-score", "the obtained results", "the potential", "deep learning", "computer vision parameters", "infrared thermographic identification", "electrical hotspots", "power system installations", "need", "careful selection", "the ir sensor’s thermal range", "image acquisition", "suitable choice", "color palette", "easy hotspot isolation", "the pixel", "temperature differential", "any", "the images", "the critical region", "interest", "high pixel values", "it", "edge detection", "human visual perception", "which computer vision-based deep learning model", "first", "vgg-16", "0.0001", "99.98%" ]
Fire Hawks Optimizer with hybrid deep learning driven fall detection on multimodal sensor data
[ "K. Durga Bhavani", "M. Ferni Ukrit" ]
Falls are the main factor contributing to nonfatal and fatal injuries among older persons. Fall detection on sensor data uses different sensors to identify when an individual has fallen. The data from these sensors can be examined to determine if a fall event has occurred. This technology is highly capable of improving the well-being and safety of persons, particularly older adults, by automatically identifying falls and alerting emergency services or caregivers. Deep learning-based fall detection on sensor data is a significant area of research and development that mainly focuses on utilizing deep learning methods for identifying and predicting falls based on data gathered from different sensors. This study presents a novel approach for fall detection utilizing the Fire Hawks Optimizer with hybrid deep learning technique on multimodal sensor data. The proposed methodology integrates deep learning algorithms with the Fire Hawks Optimizer metaheuristic to enhance fall detection accuracy. The technique preprocesses input data using standard scaling and employs a convolutional-recurrent Hopfield neural network model for fall detection and classification. Hyperparameter tuning using the Fire Hawks Optimizer further improves detection outcomes. Experimental evaluation conducted on the KFall dataset demonstrates the effectiveness of the proposed approach. Key metrics including accuracy, precision, recall, F-score, and Matthews correlation coefficient are utilized for evaluation. Results indicate superior performance of the method compared to existing fall detection approaches, with accuracies exceeding 99% on both training and testing sets. Visual representations such as confusion matrices, precision-recall curves, and ROC curves further validate the method's robustness. The proposed model offers significant advancements in fall detection accuracy, making it a promising solution for ensuring the safety and well-being of individuals, particularly older adults.
10.1007/s11042-024-19970-7
fire hawks optimizer with hybrid deep learning driven fall detection on multimodal sensor data
falls are the main factor contributing to nonfatal and fatal injuries among older persons. fall detection on sensor data uses different sensors to identify when an individual has fallen. the data from these sensors can be examined to determine if a fall event has occurred. this technology is highly capable of improving the well-being and safety of persons, particularly older adults, by automatically identifying falls and alerting emergency services or caregivers. deep learning-based fall detection on sensor data is a significant area of research and development that mainly focuses on utilizing deep learning methods for identifying and predicting falls based on data gathered from different sensors. this study presents a novel approach for fall detection utilizing the fire hawks optimizer with hybrid deep learning technique on multimodal sensor data. the proposed methodology integrates deep learning algorithms with the fire hawks optimizer metaheuristic to enhance fall detection accuracy. the technique preprocesses input data using standard scaling and employs a convolutional-recurrent hopfield neural network model for fall detection and classification. hyperparameter tuning using the fire hawks optimizer further improves detection outcomes. experimental evaluation conducted on the kfall dataset demonstrates the effectiveness of the proposed approach. key metrics including accuracy, precision, recall, f-score, and matthews correlation coefficient are utilized for evaluation. results indicate superior performance of the method compared to existing fall detection approaches, with accuracies exceeding 99% on both training and testing sets. visual representations such as confusion matrices, precision-recall curves, and roc curves further validate the method's robustness. the proposed model offers significant advancements in fall detection accuracy, making it a promising solution for ensuring the safety and well-being of individuals, particularly older adults.
[ "falls", "the main factor", "nonfatal and fatal injuries", "older persons", "fall detection", "sensor data", "different sensors", "an individual", "the data", "these sensors", "a fall event", "this technology", "the well-being", "safety", "persons", "particularly older adults", "falls", "emergency services", "caregivers", "deep learning-based fall detection", "sensor data", "a significant area", "research", "development", "that", "deep learning methods", "falls", "data", "different sensors", "this study", "a novel approach", "fall detection", "the fire hawks optimizer", "hybrid deep learning technique", "multimodal sensor data", "the proposed methodology", "algorithms", "the fire hawks optimizer metaheuristic", "fall detection accuracy", "the technique preprocesses", "standard scaling", "a convolutional-recurrent hopfield neural network model", "fall detection", "classification", "hyperparameter", "the fire hawks optimizer", "detection outcomes", "experimental evaluation", "the kfall dataset", "the effectiveness", "the proposed approach", "key metrics", "accuracy", "precision", "recall", "f-score", "matthews correlation coefficient", "evaluation", "results", "superior performance", "the method", "existing fall detection approaches", "accuracies", "99%", "both training and testing sets", "visual representations", "confusion matrices", "precision-recall curves", "the method's robustness", "the proposed model", "significant advancements", "fall detection accuracy", "it", "the safety", "well-being", "individuals", "particularly older adults", "hopfield neural", "99%", "roc" ]
Anthropogenic fingerprints in daily precipitation revealed by deep learning
[ "Yoo-Geun Ham", "Jeong-Hwan Kim", "Seung-Ki Min", "Daehyun Kim", "Tim Li", "Axel Timmermann", "Malte F. Stuecker" ]
According to twenty-first century climate-model projections, greenhouse warming will intensify rainfall variability and extremes across the globe [1-4]. However, verifying this prediction using observations has remained a substantial challenge owing to large natural rainfall fluctuations at regional scales [3,4]. Here we show that deep learning successfully detects the emerging climate-change signals in daily precipitation fields during the observed record. We trained a convolutional neural network (CNN) [5] with daily precipitation fields and annual global mean surface air temperature data obtained from an ensemble of present-day and future climate-model simulations [6]. After applying the algorithm to the observational record, we found that the daily precipitation data represented an excellent predictor for the observed planetary warming, as they showed a clear deviation from natural variability since the mid-2010s. Furthermore, we analysed the deep-learning model with an explainable framework and observed that the precipitation variability of the weather timescale (period less than 10 days) over the tropical eastern Pacific and mid-latitude storm-track regions was most sensitive to anthropogenic warming. Our results highlight that, although the long-term shifts in annual mean precipitation remain indiscernible from the natural background variability, the impact of global warming on daily hydrological fluctuations has already emerged.
10.1038/s41586-023-06474-x
anthropogenic fingerprints in daily precipitation revealed by deep learning
according to twenty-first century climate-model projections, greenhouse warming will intensify rainfall variability and extremes across the globe [1-4]. however, verifying this prediction using observations has remained a substantial challenge owing to large natural rainfall fluctuations at regional scales [3,4]. here we show that deep learning successfully detects the emerging climate-change signals in daily precipitation fields during the observed record. we trained a convolutional neural network (cnn) [5] with daily precipitation fields and annual global mean surface air temperature data obtained from an ensemble of present-day and future climate-model simulations [6]. after applying the algorithm to the observational record, we found that the daily precipitation data represented an excellent predictor for the observed planetary warming, as they showed a clear deviation from natural variability since the mid-2010s. furthermore, we analysed the deep-learning model with an explainable framework and observed that the precipitation variability of the weather timescale (period less than 10 days) over the tropical eastern pacific and mid-latitude storm-track regions was most sensitive to anthropogenic warming. our results highlight that, although the long-term shifts in annual mean precipitation remain indiscernible from the natural background variability, the impact of global warming on daily hydrological fluctuations has already emerged.
[ "twenty-first century climate-model projections", "greenhouse warming", "rainfall variability", "extremes", "the globe1,2,3,4", "this prediction", "observations", "a substantial challenge", "large natural rainfall fluctuations", "regional scales3,4", "we", "deep learning", "the emerging climate-change signals", "daily precipitation fields", "the observed record", "we", "a convolutional neural network", "cnn)5", "daily precipitation fields", "annual global mean surface air temperature data", "an ensemble", "present-day and future climate-model simulations6", "the algorithm", "the observational record", "we", "the daily precipitation data", "an excellent predictor", "the observed planetary warming", "they", "a clear deviation", "natural variability", "we", "the deep-learning model", "an explainable framework", "the precipitation variability", "the weather timescale", "period", "the tropical eastern pacific and mid-latitude storm-track regions", "anthropogenic warming", "our results", "the long-term shifts", "annual mean precipitation", "the natural background variability", "the impact", "global warming", "daily hydrological fluctuations", "twenty-first century", "daily", "cnn)5", "daily", "present-day", "daily", "the mid-2010s", "less than 10 days", "annual", "daily" ]
Optimization of deep learning models: benchmark and analysis
[ "Rasheed Ahmad", "Izzat Alsmadi", "Mohammad Al-Ramahi" ]
Model optimization in deep learning (DL) and neural networks is concerned with how and why the model can be successfully trained towards one or more objective functions. The evolutionary learning or training process continuously considers the dynamic parameters of the model. Many researchers propose a deep learning-based solution by randomly selecting a single classifier model architecture. Such approaches generally overlook the hidden and complex nature of the model’s internal working, producing biased results. Larger and deeper NN models bring many complexities and logistical challenges while building and deploying them. To obtain high-quality performance results, an optimal model generally depends on the appropriate architectural settings, such as the number of hidden layers and the number of neurons at each layer. Selecting and testing various combinations of these settings manually is a challenging and time-consuming task. This paper presents an extensive empirical analysis of various deep learning algorithms trained recursively using permuted settings to establish benchmarks and find an optimal model. The paper analyzed the Stack Overflow dataset to predict the quality of posted questions. The extensive empirical analysis revealed that some well-known deep learning algorithms, such as CNN, are the least effective at solving this problem compared to the multilayer perceptron (MLP), which provides efficient computing and the best results in terms of prediction accuracy. The analysis also shows that manipulating the number of neurons alone at each layer in a network does not influence model optimization. This paper’s findings highlight that future models should be built by considering a vast range of model architectural settings for an optimal solution.
10.1007/s43674-023-00055-1
optimization of deep learning models: benchmark and analysis
model optimization in deep learning (dl) and neural networks is concerned with how and why the model can be successfully trained towards one or more objective functions. the evolutionary learning or training process continuously considers the dynamic parameters of the model. many researchers propose a deep learning-based solution by randomly selecting a single classifier model architecture. such approaches generally overlook the hidden and complex nature of the model’s internal working, producing biased results. larger and deeper nn models bring many complexities and logistical challenges while building and deploying them. to obtain high-quality performance results, an optimal model generally depends on the appropriate architectural settings, such as the number of hidden layers and the number of neurons at each layer. selecting and testing various combinations of these settings manually is a challenging and time-consuming task. this paper presents an extensive empirical analysis of various deep learning algorithms trained recursively using permuted settings to establish benchmarks and find an optimal model. the paper analyzed the stack overflow dataset to predict the quality of posted questions. the extensive empirical analysis revealed that some well-known deep learning algorithms, such as cnn, are the least effective at solving this problem compared to the multilayer perceptron (mlp), which provides efficient computing and the best results in terms of prediction accuracy. the analysis also shows that manipulating the number of neurons alone at each layer in a network does not influence model optimization. this paper’s findings highlight that future models should be built by considering a vast range of model architectural settings for an optimal solution.
[ "model optimization", "deep learning", "dl", "neural networks", "the model", "one or more objective functions", "the evolutionary learning or training process", "the dynamic parameters", "the model", "many researchers", "a deep learning-based solution", "a single classifier model architecture", "such approaches", "the hidden and complex nature", "the model’s internal working", "biased results", "models", "many complexities", "logistic challenges", "them", "high-quality performance results", "an optimal model", "the appropriate architectural settings", "the number", "hidden layers", "the number", "neurons", "each layer", "a challenging and time-consuming task", "various combinations", "these settings", "this paper", "an extensive empirical analysis", "various deep learning algorithms", "permutated settings", "benchmarks", "an optimal model", "the paper", "the stack overflow", "the quality", "posted questions", "the extensive empirical analysis", "some famous deep learning algorithms", "cnn", "the least effective algorithm", "this problem", "multilayer perceptron", "mlp", "which", "efficient computing", "the best results", "terms", "prediction accuracy", "the analysis", "the number", "neurons", "each layer", "a network", "model optimization", "this paper’s findings", "the fact", "future models", "a vast range", "model architectural settings", "an optimal solution", "one", "cnn" ]
Classification and detection of natural disasters using machine learning and deep learning techniques: A review
[ "Kibitok Abraham", "Moataz Abdelwahab", "Mohammed Abo-Zahhad" ]
For efficient disaster management, it is essential to identify and categorize natural disasters. This review article discusses the classical approaches and current technological advancements for identifying, categorizing, and reducing the harmful effects of natural catastrophes. They include human observation and reporting, satellite images, seismology, radar, infrared imagery, and sonar. The article explores the challenges and harmful effects of natural disasters and their mitigation measures. The article explains the benefits and drawbacks of published approaches and emphasizes how they may be used to identify many kinds of natural catastrophes, including earthquakes, floods, wildfires, and hurricanes. Discussions of current technological advancements, including machine learning and deep learning applications that can potentially increase the precision and efficiency of natural disaster detection and classification, are presented. Overall, the review article emphasizes the significance of continuing research and improving current techniques to increase communities’ and countries’ resilience and preparedness for natural disasters. Moreover, future directions and suggestions to stakeholders in disaster management are highlighted.
10.1007/s12145-023-01205-2
classification and detection of natural disasters using machine learning and deep learning techniques: a review
for efficient disaster management, it is essential to identify and categorize natural disasters. this review article discusses the classical approaches and current technological advancements for identifying, categorizing, and reducing the harmful effects of natural catastrophes. they include human observation and reporting, satellite images, seismology, radar, infrared imagery, and sonar. the article explores the challenges and harmful effects of natural disasters and their mitigation measures. the article explains the benefits and drawbacks of published approaches and emphasizes how they may be used to identify many kinds of natural catastrophes, including earthquakes, floods, wildfires, and hurricanes. discussions of current technological advancements, including machine learning and deep learning applications that can potentially increase the precision and efficiency of natural disaster detection and classification, are presented. overall, the review article emphasizes the significance of continuing research and improving current techniques to increase communities’ and countries’ resilience and preparedness for natural disasters. moreover, future directions and suggestions to stakeholders in disaster management are highlighted.
[ "efficient disaster management", "it", "natural disasters", "the classical approaches", "current technological advancements", "the harmful effects", "natural catastrophes", "this review article", "they", "human observation", "reporting", "satellite images", "seismology", "radar", "infrared imagery", "sonar", "the article", "natural disasters’ challenges", "harmful effects", "their mitigation measures", "the article", "the benefits", "drawbacks", "published approaches", "they", "many kinds", "natural catastrophes", "earthquakes", "floods", "wildfires", "hurricanes", "discussions", "current technological advancements", "machine", "deep learning applications", "that", "the precision", "efficiency", "natural disaster detection", "classification", "the review article", "the significance", "continuing research", "current techniques", "communities", "countries’ resilience", "preparedness", "natural disasters", "future directions", "suggestions", "stakeholders", "disaster management" ]
A comprehensive review of deep learning approaches for group activity analysis
[ "Gang Zhang", "Yang Geng", "Zhao G. Gong" ]
The study of group activity analysis has garnered significant attention. Group activity offers a unique perspective on the relationships between individuals, providing insights that individual and crowd activities may not reveal. This paper aims to contribute to the existing body of knowledge by providing a comprehensive review of the methods employed in utilizing deep learning for the analysis of group activity. The review encompasses an overview of various methodologies for group detection, segmentation, feature extraction, and description. Additionally, it delves into the classification, recognition, and prediction of group activities, including crowd trajectory prediction. The representation of crowd activity patterns and labels, along with an exploration of datasets for crowd activity analysis, is also included. Ultimately, the paper concludes with a discussion of potential future research directions in the field. By offering a comprehensive review of the advancements in group activity analysis through the lens of deep learning, this paper aims to provide researchers with a better understanding of the field's progress, thereby contributing to the continued development of this area of study.
10.1007/s00371-024-03479-z
a comprehensive review of deep learning approaches for group activity analysis
the study of group activity analysis has garnered significant attention. group activity offers a unique perspective on the relationships between individuals, providing insights that individual and crowd activities may not reveal. this paper aims to contribute to the existing body of knowledge by providing a comprehensive review of the methods employed in utilizing deep learning for the analysis of group activity. the review encompasses an overview of various methodologies for group detection, segmentation, feature extraction, and description. additionally, it delves into the classification, recognition, and prediction of group activities, including crowd trajectory prediction. the representation of crowd activity patterns and labels, along with an exploration of datasets for crowd activity analysis, is also included. ultimately, the paper concludes with a discussion of potential future research directions in the field. by offering a comprehensive review of the advancements in group activity analysis through the lens of deep learning, this paper aims to provide researchers with a better understanding of the field's progress, thereby contributing to the continued development of this area of study.
[ "the study", "group activity analysis", "significant attention", "group activity", "a unique perspective", "the relationships", "individuals", "insights", "that", "individual and crowd activities", "this paper", "the existing body", "knowledge", "a comprehensive review", "the methods", "deep learning", "the analysis", "group activity", "the review", "an overview", "various methodologies", "group detection", "segmentation", "feature extraction", "description", "it", "the classification", "recognition", "prediction", "group activities", "crowd", "trajectory prediction", "the representation", "crowd activity patterns", "labels", "an exploration", "datasets", "crowd activity analysis", "the paper", "a discussion", "potential future research directions", "the field", "a comprehensive review", "the advancements", "group activity analysis", "the lens", "deep learning", "this paper", "researchers", "a better understanding", "the field's progress", "the continued development", "this area", "study" ]
Fire Hawks Optimizer with hybrid deep learning driven fall detection on multimodal sensor data
[ "K. Durga Bhavani", "M. Ferni Ukrit" ]
Falls are the main factor contributing to nonfatal and fatal injuries among older persons. Fall detection on sensor data uses different sensors to identify when an individual has fallen. The data from these sensors can be examined to determine if a fall event has occurred. This technology is highly capable of improving the well-being and safety of persons, particularly older adults, by automatically identifying falls and alerting emergency services or caregivers. Deep learning-based fall detection on sensor data is a significant area of research and development that mainly focuses on utilizing deep learning methods for identifying and predicting falls based on data gathered from different sensors. This study presents a novel approach for fall detection utilizing the Fire Hawks Optimizer with a hybrid deep learning technique on multimodal sensor data. The proposed methodology integrates deep learning algorithms with the Fire Hawks Optimizer metaheuristic to enhance fall detection accuracy. The technique preprocesses input data using standard scaling and employs a convolutional-recurrent Hopfield neural network model for fall detection and classification. Hyperparameter tuning using the Fire Hawks Optimizer further improves detection outcomes. Experimental evaluation conducted on the KFall dataset demonstrates the effectiveness of the proposed approach. Key metrics including accuracy, precision, recall, F-score, and Matthews correlation coefficient are utilized for evaluation. Results indicate superior performance of the method compared to existing fall detection approaches, with accuracies exceeding 99% on both training and testing sets. Visual representations such as confusion matrices, precision-recall curves, and ROC curves further validate the method's robustness. The proposed model offers significant advancements in fall detection accuracy, making it a promising solution for ensuring the safety and well-being of individuals, particularly older adults.
10.1007/s11042-024-19970-7
fire hawks optimizer with hybrid deep learning driven fall detection on multimodal sensor data
falls are the main factor contributing to nonfatal and fatal injuries among older persons. fall detection on sensor data uses different sensors to identify when an individual has fallen. the data from these sensors can be examined to determine if a fall event has occurred. this technology is highly capable of improving the well-being and safety of persons, particularly older adults, by automatically identifying falls and alerting emergency services or caregivers. deep learning-based fall detection on sensor data is a significant area of research and development that mainly focuses on utilizing deep learning methods for identifying and predicting falls based on data gathered from different sensors. this study presents a novel approach for fall detection utilizing the fire hawks optimizer with a hybrid deep learning technique on multimodal sensor data. the proposed methodology integrates deep learning algorithms with the fire hawks optimizer metaheuristic to enhance fall detection accuracy. the technique preprocesses input data using standard scaling and employs a convolutional-recurrent hopfield neural network model for fall detection and classification. hyperparameter tuning using the fire hawks optimizer further improves detection outcomes. experimental evaluation conducted on the kfall dataset demonstrates the effectiveness of the proposed approach. key metrics including accuracy, precision, recall, f-score, and matthews correlation coefficient are utilized for evaluation. results indicate superior performance of the method compared to existing fall detection approaches, with accuracies exceeding 99% on both training and testing sets. visual representations such as confusion matrices, precision-recall curves, and roc curves further validate the method's robustness. the proposed model offers significant advancements in fall detection accuracy, making it a promising solution for ensuring the safety and well-being of individuals, particularly older adults.
[ "falls", "the main factor", "nonfatal and fatal injuries", "older persons", "fall detection", "sensor data", "different sensors", "an individual", "the data", "these sensors", "a fall event", "this technology", "the well-being", "safety", "persons", "particularly older adults", "falls", "emergency services", "caregivers", "deep learning-based fall detection", "sensor data", "a significant area", "research", "development", "that", "deep learning methods", "falls", "data", "different sensors", "this study", "a novel approach", "fall detection", "the fire hawks optimizer", "hybrid deep learning technique", "multimodal sensor data", "the proposed methodology", "algorithms", "the fire hawks optimizer metaheuristic", "fall detection accuracy", "the technique preprocesses", "standard scaling", "a convolutional-recurrent hopfield neural network model", "fall detection", "classification", "hyperparameter", "the fire hawks optimizer", "detection outcomes", "experimental evaluation", "the kfall dataset", "the effectiveness", "the proposed approach", "key metrics", "accuracy", "precision", "recall", "f-score", "matthews correlation coefficient", "evaluation", "results", "superior performance", "the method", "existing fall detection approaches", "accuracies", "99%", "both training and testing sets", "visual representations", "confusion matrices", "precision-recall curves", "the method's robustness", "the proposed model", "significant advancements", "fall detection accuracy", "it", "the safety", "well-being", "individuals", "particularly older adults", "hopfield neural", "99%", "roc" ]
Literature survey on deep learning methods for liver segmentation from CT images: a comprehensive review
[ "Kumar S. S.", "Vinod Kumar R. S." ]
Segmentation of the liver from computed tomography (CT) images is an essential and critical task in medical image analysis, with significant implications for liver disease diagnosis and treatment. Deep learning techniques have emerged as a powerful tool in this domain, offering unprecedented accuracy and robustness. This literature survey paper provides a comprehensive overview of deep learning techniques for segmentation of the liver from CT images, aiming to synthesize recent advancements, identify key contributions, and address challenges in this rapidly evolving field. The survey covers various deep learning architectures, including convolutional neural networks, U-Net, attention mechanisms, generative adversarial networks, and transformer models, highlighting their strengths and weaknesses. Evaluation metrics and benchmark datasets commonly used for performance assessment are discussed in this survey. Furthermore, the survey delves into the challenges and limitations of deep learning methods, including interpretability, model robustness, and ethical considerations. The survey concludes by summarizing key findings, highlighting advancements, and outlining future research directions, such as interpretable models, ethical considerations, and bridging the gap between research and clinical implementation. This literature survey serves as a valuable reference for researchers, healthcare professionals, and developers in their pursuit of accurate liver segmentation and advancing medical image analysis.
10.1007/s11042-024-18388-5
literature survey on deep learning methods for liver segmentation from ct images: a comprehensive review
segmentation of the liver from computed tomography (ct) images is an essential and critical task in medical image analysis, with significant implications for liver disease diagnosis and treatment. deep learning techniques have emerged as a powerful tool in this domain, offering unprecedented accuracy and robustness. this literature survey paper provides a comprehensive overview of deep learning techniques for segmentation of the liver from ct images, aiming to synthesize recent advancements, identify key contributions, and address challenges in this rapidly evolving field. the survey covers various deep learning architectures, including convolutional neural networks, u-net, attention mechanisms, generative adversarial networks, and transformer models, highlighting their strengths and weaknesses. evaluation metrics and benchmark datasets commonly used for performance assessment are discussed in this survey. furthermore, the survey delves into the challenges and limitations of deep learning methods, including interpretability, model robustness, and ethical considerations. the survey concludes by summarizing key findings, highlighting advancements, and outlining future research directions, such as interpretable models, ethical considerations, and bridging the gap between research and clinical implementation. this literature survey serves as a valuable reference for researchers, healthcare professionals, and developers in their pursuit of accurate liver segmentation and advancing medical image analysis.
[ "segmentation", "the liver", "computed tomography images", "an essential and critical task", "medical image analysis", "significant implications", "liver disease diagnosis", "treatment", "deep learning techniques", "a powerful tool", "this domain", "unprecedented accuracy", "robustness", "this literature survey paper", "a comprehensive overview", "deep learning techniques", "segmentation", "liver", "ct images", "recent advancements", "key contributions", "address challenges", "this rapidly evolving field", "the survey", "various deep learning architectures", "convolution neural networks", "u", "net", "attention mechanisms", "generative adversarial neural networks", "transformer models", "their strengths", "weaknesses", "evaluation metrics", "benchmark datasets", "performance assessment", "this survey", "the survey", "the challenges", "limitations", "deep learning methods", "interpretability", "model robustness", "ethical considerations", "the survey", "key findings", "advancements", "future research directions", "interpretable models", "ethical considerations", "the gap", "research and clinical implementation", "this literature survey", "a valuable reference", "researchers", "healthcare professionals", "developers", "their pursuit", "accurate liver segmentation", "medical image analysis" ]
Evaluating the impact of reinforcement learning on automatic deep brain stimulation planning
[ "Anja Pantovic", "Caroline Essert" ]
Purpose: Traditional techniques for automating the planning of brain electrode placement based on multi-objective optimization involving many parameters are subject to limitations, especially in terms of sensitivity to local optima, and tend to be replaced by machine learning approaches. This paper explores the feasibility of using deep reinforcement learning (DRL) in this context, starting with the single-electrode use case of deep brain stimulation (DBS). Methods: We propose a DRL approach based on deep Q-learning where the states represent the electrode trajectory and associated information, and actions are the possible motions. Deep neural networks make it possible to navigate the complex state space derived from MRI data. The chosen reward function emphasizes safety and accuracy in reaching the target structure. The results were compared with a reference (segmented electrode) and a conventional technique. Results: The DRL approach excelled in navigating the complex anatomy, consistently providing safer and more precise electrode placements than the reference. Compared to conventional techniques, it showed an improvement in accuracy of 2.3% in average proximity to obstacles and 19.4% in average orientation angle. Expectedly, computation times rose significantly, from 2 to 18 min. Conclusion: Our investigation into DRL for DBS electrode trajectory planning has showcased its promising potential. Despite only delivering modest accuracy gains compared to traditional methods in the single-electrode case, its relevance for problems with high-dimensional state and action spaces and its resilience against local optima highlight its promising role for complex scenarios. This preliminary study constitutes a first step toward the more challenging problem of multiple-electrode planning.
10.1007/s11548-024-03078-2
evaluating the impact of reinforcement learning on automatic deep brain stimulation planning
purpose: traditional techniques for automating the planning of brain electrode placement based on multi-objective optimization involving many parameters are subject to limitations, especially in terms of sensitivity to local optima, and tend to be replaced by machine learning approaches. this paper explores the feasibility of using deep reinforcement learning (drl) in this context, starting with the single-electrode use case of deep brain stimulation (dbs). methods: we propose a drl approach based on deep q-learning where the states represent the electrode trajectory and associated information, and actions are the possible motions. deep neural networks make it possible to navigate the complex state space derived from mri data. the chosen reward function emphasizes safety and accuracy in reaching the target structure. the results were compared with a reference (segmented electrode) and a conventional technique. results: the drl approach excelled in navigating the complex anatomy, consistently providing safer and more precise electrode placements than the reference. compared to conventional techniques, it showed an improvement in accuracy of 2.3% in average proximity to obstacles and 19.4% in average orientation angle. expectedly, computation times rose significantly, from 2 to 18 min. conclusion: our investigation into drl for dbs electrode trajectory planning has showcased its promising potential. despite only delivering modest accuracy gains compared to traditional methods in the single-electrode case, its relevance for problems with high-dimensional state and action spaces and its resilience against local optima highlight its promising role for complex scenarios. this preliminary study constitutes a first step toward the more challenging problem of multiple-electrode planning.
[ "purposetraditional techniques", "the planning", "brain electrode placement", "multi-objective optimization", "many parameters", "limitations", "terms", "sensitivity", "local optima", "machine learning approaches", "this paper", "the feasibility", "deep reinforcement learning", "drl", "this context", "the single-electrode use-case", "deep brain stimulation", "dbs).methodswe", "a drl approach", "deep q-learning", "the states", "the electrode trajectory", "associated information", "actions", "the possible motions", "deep neural networks", "the complex state space", "mri data", "the chosen reward function", "safety", "accuracy", "the target structure", "the results", "a reference", "a conventional technique.resultsthe drl approach", "the complex anatomy", "safer and more precise electrode placements", "the reference", "conventional techniques", "it", "an improvement", "accuracy", "2.3%", "average proximity", "obstacles", "19.4%", "average orientation angle", "computation times", "2 to 18 min.conclusionour investigation", "drl", "dbs electrode trajectory planning", "its promising potential", "modest accuracy gains", "traditional methods", "the single-electrode case", "its relevance", "problems", "high-dimensional state and action spaces", "its resilience", "local optima", "its promising role", "complex scenarios", "this preliminary study", "a first step", "the more challenging problem", "multiple-electrodes planning", "the chosen reward function", "2.3%", "19.4%", "2", "first" ]
Deep learning for determining the difficulty of endodontic treatment: a pilot study
[ "Hamed Karkehabadi", "Elham Khoshbin", "Nikoo Ghasemi", "Amal Mahavi", "Hossein Mohammad-Rahimi", "Soroush Sadr" ]
Background: To develop and validate a deep learning model for automated assessment of endodontic case difficulty from periapical radiographs. Methods: A dataset of 1,386 periapical radiographs was compiled from two clinical sites. Two dentists and two endodontists annotated the radiographs for difficulty using the “simple assessment” criteria from the American Association of Endodontists’ case difficulty assessment form in the Endocase application. A classification task labeled cases as “easy” or “hard”, while regression predicted overall difficulty scores. Convolutional neural networks (i.e., VGG16, ResNet18, ResNet50, ResNeXt50, and Inception v2) were used, with a baseline model trained via transfer learning from ImageNet weights. Other models were pre-trained using self-supervised contrastive learning (i.e., BYOL, SimCLR, MoCo, and DINO) on 20,295 unlabeled dental radiographs to learn representations without manual labels. Both models were evaluated using 10-fold cross-validation, with performance compared to seven human examiners (three general dentists and four endodontists) on a hold-out test set. Results: The baseline VGG16 model attained 87.62% accuracy in classifying difficulty. Self-supervised pretraining did not improve performance. Regression predicted scores with a ±3.21 score error. All models outperformed human raters, who showed poor inter-examiner reliability. Conclusion: This pilot study demonstrated the feasibility of automated endodontic difficulty assessment via deep learning models.
10.1186/s12903-024-04235-4
deep learning for determining the difficulty of endodontic treatment: a pilot study
background: to develop and validate a deep learning model for automated assessment of endodontic case difficulty from periapical radiographs. methods: a dataset of 1,386 periapical radiographs was compiled from two clinical sites. two dentists and two endodontists annotated the radiographs for difficulty using the “simple assessment” criteria from the american association of endodontists’ case difficulty assessment form in the endocase application. a classification task labeled cases as “easy” or “hard”, while regression predicted overall difficulty scores. convolutional neural networks (i.e., vgg16, resnet18, resnet50, resnext50, and inception v2) were used, with a baseline model trained via transfer learning from imagenet weights. other models were pre-trained using self-supervised contrastive learning (i.e., byol, simclr, moco, and dino) on 20,295 unlabeled dental radiographs to learn representations without manual labels. both models were evaluated using 10-fold cross-validation, with performance compared to seven human examiners (three general dentists and four endodontists) on a hold-out test set. results: the baseline vgg16 model attained 87.62% accuracy in classifying difficulty. self-supervised pretraining did not improve performance. regression predicted scores with a ±3.21 score error. all models outperformed human raters, who showed poor inter-examiner reliability. conclusion: this pilot study demonstrated the feasibility of automated endodontic difficulty assessment via deep learning models.
[ "backgroundto", "a deep learning model", "automated assessment", "endodontic case difficulty", "periapical radiographs.methodsa dataset", "1,386 periapical radiographs", "two clinical sites", "two dentists", "two endodontists", "the radiographs", "difficulty", "the “simple assessment” criteria", "the american association", "endodontists’ case difficulty assessment form", "the endocase application", "a classification task", "cases", "regression", "overall difficulty scores", "convolutional neural networks", "i.e. vgg16", "resnet18", "resnet50", "resnext50", "inception v2", "a baseline model", "imagenet weights", "other models", "self-supervised contrastive learning", "i.e. byol", "moco", "dino", "20,295 unlabeled dental radiographs", "representation", "manual labels", "both models", "10-fold cross", "-", "validation", "performance", "seven human examiners", "three general dentists", "four endodontists", "a hold-out test set.resultsthe baseline vgg16 model", "87.62% accuracy", "difficulty", "self-supervised pretraining", "performance", "regression", "scores", "± 3.21 score error", "all models", "human raters", "poor inter-examiner reliability.conclusionthis pilot study", "the feasibility", "automated endodontic difficulty assessment", "deep learning models", "backgroundto", "radiographs.methodsa", "1,386", "two", "two", "two", "the american association of endodontists’", "resnet18", "resnet50", "resnext50", "20,295", "10-fold", "seven", "three", "four", "87.62%", "3.21" ]
Application Level Resource Scheduling for Deep Learning Acceleration on MPSoC
[ "Cong Gao", "Sangeet Saha", "Xuqi Zhu", "Hongyuan Jing", "Klaus D. McDonald-Maier", "Xiaojun Zhai" ]
Deep Neural Networks (DNNs) have been widely used in many applications, such as self-driving cars, natural language processing (NLP), image classification, visual object recognition, and so on. Field-programmable gate array (FPGA) based Multiprocessor System on a Chip (MPSoC) has recently been considered one of the popular choices for deploying DNN models. However, the limited resource capacity of MPSoC imposes a challenge for such practical implementation. Recent studies revealed the trade-off between the “resources consumed” and the “performance achieved”. Taking a cue from these findings, we address the problem of efficient implementation of deep learning into the resource-constrained MPSoC in this paper, where each deep learning network is run with a different service level based on resource usage (a higher service level implies higher performance with increased resource consumption). To this end, we propose a heuristic-based strategy, Application Wise Level Selector (AWLS), for selecting service levels to maximize the overall performance subject to a given resource bound. AWLS can achieve higher performance within a constrained resource budget under various simulation scenarios. Further, we verify the proposed strategy using an AMD-Xilinx Zynq UltraScale+ XCZU9EG SoC. Using a framework designed to deploy multiple DNNs on multiple DPUs (Deep Learning Units), it is shown that the algorithm achieves an optimal solution, obtaining the highest performance (frames per second) within the same resource budget.
10.1007/s11265-023-01881-9
application level resource scheduling for deep learning acceleration on mpsoc
deep neural networks (dnns) have been widely used in many applications, such as self-driving cars, natural language processing (nlp), image classification, visual object recognition, and so on. field-programmable gate array (fpga) based multiprocessor system on a chip (mpsoc) has recently been considered one of the popular choices for deploying dnn models. however, the limited resource capacity of mpsoc imposes a challenge for such practical implementation. recent studies revealed the trade-off between the “resources consumed” and the “performance achieved”. taking a cue from these findings, we address the problem of efficient implementation of deep learning into the resource-constrained mpsoc in this paper, where each deep learning network is run with a different service level based on resource usage (a higher service level implies higher performance with increased resource consumption). to this end, we propose a heuristic-based strategy, application wise level selector (awls), for selecting service levels to maximize the overall performance subject to a given resource bound. awls can achieve higher performance within a constrained resource budget under various simulation scenarios. further, we verify the proposed strategy using an amd-xilinx zynq ultrascale+ xczu9eg soc. using a framework designed to deploy multiple dnns on multiple dpus (deep learning units), it is shown that the algorithm achieves an optimal solution, obtaining the highest performance (frames per second) within the same resource budget.
[ "deep neutral networks", "dnns", "many applications", "self-driving cars", "natural language processing", "nlp", "image classification", "visual object recognition", "field-programmable gate array", "fpga", "a chip (mpsoc", "the popular choices", "dnn models", "the limited resource capacity", "mpsoc", "a challenge", "such practical implementation", "recent studies", "the trade-off", "the “resources", "the “performance", "a cue", "these findings", "we", "the problem", "efficient implementation", "deep learning", "the resource-constrained mpsoc", "this paper", "each deep learning network", "different service levels", "resource usage", "a higher service level", "higher performance", "increased resource consumption", "this end", "we", "a heuristic-based strategy", "application wise level selector", "(awls", "service levels", "the overall performance", "a given resource", "awls", "higher performance", "a constrained resource budget", "various simulation scenarios", "we", "the proposed strategy", "an amd-xilinx zynq", "xczu9eg soc", "a framework", "-", "dnn", "multi-dpus (deep learning units", "it", "an optimal solution", "the algorithm", "which", "the highest performance", "frames", "the same resource budget", "xczu9eg", "second" ]
Radiomic and deep learning analysis of dermoscopic images for skin lesion pattern decoding
[ "Zheng Wang", "Chong Wang", "Li Peng", "Kaibin Lin", "Yang Xue", "Xiao Chen", "Linlin Bao", "Chao Liu", "Jianglin Zhang", "Yang Xie" ]
This study aims to explore the efficacy of a hybrid deep learning and radiomics approach, supplemented with patient metadata, in the noninvasive dermoscopic imaging-based diagnosis of skin lesions. We analyzed dermoscopic images from the International Skin Imaging Collaboration (ISIC) dataset, spanning 2016–2020, encompassing a variety of skin lesions. Our approach integrates deep learning with a comprehensive radiomics analysis, utilizing a vast array of quantitative image features to precisely quantify skin lesion patterns. The dataset includes cases of three, four, and eight different skin lesion types. Our methodology was benchmarked against seven classification methods from the ISIC 2020 challenge and prior research using a binary decision framework. The proposed hybrid model demonstrated superior performance in distinguishing benign from malignant lesions, achieving area under the receiver operating characteristic curve (AUROC) scores of 99%, 95%, and 96%, and multiclass decoding AUROCs of 98.5%, 94.9%, and 96.4%, with sensitivities of 97.6%, 93.9%, and 96.0% and specificities of 98.4%, 96.7%, and 96.9% in the internal ISIC 2018 challenge, as well as in the external Jinan and Longhua datasets, respectively. Our findings suggest that the integration of radiomics and deep learning, utilizing dermoscopic images, effectively captures the heterogeneity and pattern expression of skin lesions.
10.1038/s41598-024-70231-x
radiomic and deep learning analysis of dermoscopic images for skin lesion pattern decoding
this study aims to explore the efficacy of a hybrid deep learning and radiomics approach, supplemented with patient metadata, in the noninvasive dermoscopic imaging-based diagnosis of skin lesions. we analyzed dermoscopic images from the international skin imaging collaboration (isic) dataset, spanning 2016–2020, encompassing a variety of skin lesions. our approach integrates deep learning with a comprehensive radiomics analysis, utilizing a vast array of quantitative image features to precisely quantify skin lesion patterns. the dataset includes cases of three, four, and eight different skin lesion types. our methodology was benchmarked against seven classification methods from the isic 2020 challenge and prior research using a binary decision framework. the proposed hybrid model demonstrated superior performance in distinguishing benign from malignant lesions, achieving area under the receiver operating characteristic curve (auroc) scores of 99%, 95%, and 96%, and multiclass decoding aurocs of 98.5%, 94.9%, and 96.4%, with sensitivities of 97.6%, 93.9%, and 96.0% and specificities of 98.4%, 96.7%, and 96.9% in the internal isic 2018 challenge, as well as in the external jinan and longhua datasets, respectively. our findings suggest that the integration of radiomics and deep learning, utilizing dermoscopic images, effectively captures the heterogeneity and pattern expression of skin lesions.
[ "this study", "the efficacy", "a hybrid deep learning and radiomics approach", "patient metadata", "the noninvasive dermoscopic imaging-based diagnosis", "skin lesions", "we", "dermoscopic images", "the international skin imaging collaboration", "isic) dataset", "a variety", "skin lesions", "our approach", "a comprehensive radiomics analysis", "a vast array", "quantitative image features", "skin lesion patterns", "the dataset", "cases", "eight different skin lesion types", "our methodology", "seven classification methods", "the isic 2020 challenge", "prior research", "a binary decision framework", "the proposed hybrid model", "superior performance", "malignant lesions", "area", "the receiver operating characteristic curve", "auroc) scores", "99%", "95%", "96%", "multiclass decoding aurocs", "98.5%", "94.9%", "96.4%", "sensitivities", "97.6%", "93.9%", "96.0%", "specificities", "98.4%", "96.7%", "96.9%", "the internal isic 2018 challenge", "the external jinan", "longhua datasets", "our findings", "the integration", "radiomics", "deep learning", "dermoscopic images", "the heterogeneity and pattern expression", "skin lesions", "metadata", "2016–2020", "three", "four", "eight", "seven", "2020", "99%", "95%", "96%", "98.5%", "94.9%", "96.4%", "97.6%", "93.9%", "96.0%", "98.4%", "96.7%", "96.9%", "2018", "longhua datasets" ]
Fusing deep learning features for parameter identification of a stochastic airfoil system
[ "Jing Feng", "Xiaolong Wang", "Qi Liu", "Yong Xu", "Jürgen Kurths" ]
This work proposes a data-driven parameter identification approach for a two-degree-of-freedom airfoil system with cubic nonlinearity and stochasticity, where the random turbulent flow is quantified by non-Gaussian Lévy colored noise. The joint identification of the parameters controlling the flow velocity, airfoil geometry and structural stiffness is shaped as a unified machine learning task that includes three stages. (1) The first stage extracts local deep learning features from measurement data. (2) Next, the local features are fused to construct fixed-length global features representing the whole sample trajectory. (3) The global features are mapped to the parameter estimates and the accuracy indicators for uncertainty quantification. The numerical studies show that the obtained parameter estimation neural network (PENN) can identify the system parameters from a sample trajectory with partially observed state measurements; namely, the system parameters can be fully identified even if only one or two of the pitch and plunge degrees of freedom are available. The intermediate deep features extracted by the PENN are compact representations of the stochastic system, as they carry key information about the system parameters. Suitable rules for information fusion are further designed, adapting the PENN to identify the system parameters from multiple short trajectories or time-varying parameters from a sample trajectory. The results suggest that the proposed deep learning approach is a flexible and versatile computation device for information extraction and fusion from limited data of stochastic nonlinear systems.
10.1007/s11071-024-10152-6
fusing deep learning features for parameter identification of a stochastic airfoil system
this work proposes a data-driven parameter identification approach for a two-degree-of-freedom airfoil system with cubic nonlinearity and stochasticity, where the random turbulent flow is quantified by non-gaussian lévy colored noise. the joint identification of the parameters controlling the flow velocity, airfoil geometry and structural stiffness is shaped as a unified machine learning task that includes three stages. (1) the first stage extracts local deep learning features from measurement data. (2) next, the local features are fused to construct fixed-length global features representing the whole sample trajectory. (3) the global features are mapped to the parameter estimates and the accuracy indicators for uncertainty quantification. the numerical studies show that the obtained parameter estimation neural network (penn) can identify the system parameters from a sample trajectory with partially observed state measurements; namely, the system parameters can be fully identified even if only one or two of the pitch and plunge degrees of freedom are available. the intermediate deep features extracted by the penn are compact representations of the stochastic system, as they carry key information about the system parameters. suitable rules for information fusion are further designed, adapting the penn to identify the system parameters from multiple short trajectories or time-varying parameters from a sample trajectory. the results suggest that the proposed deep learning approach is a flexible and versatile computation device for information extraction and fusion from limited data of stochastic nonlinear systems.
[ "this work", "a data-driven parameter identification approach", "freedom", "cubic nonlinearity", "stochasticity", "the random turbulent flow", "non-gaussian lévy colored noise", "the joint identification", "the parameters", "the flow velocity", "airfoil geometry", "structural stiffness", "a unified machine learning task", "that", "three stages", "the first stage", "local deep learning features", "measurement data", "the local features", "fixed-length global features", "the whole sample trajectory", "the global features", "the parameter estimates", "the accuracy indicators", "uncertainty quantification", "the numerical studies", "the obtained parameter estimation neural network", "the system parameters", "a sample trajectory", "partially observed state measurements", "system parameters", "the pitch and plunge degrees", "freedom", "the intermediate deep features", "the penn", "compact representations", "the stochastic system", "they", "key information", "the system parameters", "suitable rules", "information fusion", "the penn", "the system parameters", "multiple short trajectories", "time-varying parameters", "a sample trajectory", "the results", "the proposed deep learning approach", "a flexible and versatile computation device", "information extraction", "fusion", "limited data", "stochastic nonlinear systems", "two-degree", "non-gaussian", "three", "1", "first", "2", "3", "only one", "two" ]
Biologically informed deep learning for explainable epigenetic clocks
[ "Aurel Prosz", "Orsolya Pipek", "Judit Börcsök", "Gergely Palla", "Zoltan Szallasi", "Sandor Spisak", "István Csabai" ]
Ageing is often characterised by the progressive accumulation of damage, and it is one of the most important risk factors for chronic disease development. Epigenetic mechanisms including DNA methylation could functionally contribute to organismal ageing; however, the key functions and biological processes that may govern ageing are still not understood. Although age predictors called epigenetic clocks can accurately estimate the biological age of an individual based on cellular DNA methylation, their models have limited ability to explain the prediction algorithm and the underlying key biological processes controlling ageing. Here we present XAI-AGE, a biologically informed, explainable deep neural network model for accurate biological age prediction across multiple tissue types. We show that XAI-AGE outperforms first-generation age predictors and achieves similar results to deep learning-based models, while opening up the possibility of inferring biologically meaningful insights into the activity of pathways and other abstract biological processes directly from the model.
10.1038/s41598-023-50495-5
biologically informed deep learning for explainable epigenetic clocks
ageing is often characterised by the progressive accumulation of damage, and it is one of the most important risk factors for chronic disease development. epigenetic mechanisms including dna methylation could functionally contribute to organismal ageing; however, the key functions and biological processes that may govern ageing are still not understood. although age predictors called epigenetic clocks can accurately estimate the biological age of an individual based on cellular dna methylation, their models have limited ability to explain the prediction algorithm and the underlying key biological processes controlling ageing. here we present xai-age, a biologically informed, explainable deep neural network model for accurate biological age prediction across multiple tissue types. we show that xai-age outperforms first-generation age predictors and achieves similar results to deep learning-based models, while opening up the possibility of inferring biologically meaningful insights into the activity of pathways and other abstract biological processes directly from the model.
[ "ageing", "progressive accumulation", "damage", "it", "the most important risk factors", "chronic disease development", "epigenetic mechanisms", "dna methylation", "organismal aging", "the key functions", "biological processes", "ageing", "age predictors", "epigenetic clocks", "the biological age", "an individual", "cellular dna methylation", "their models", "limited ability", "the prediction algorithm", "key biological processes", "ageing", "we", "xai-age", "a biologically informed, explainable deep neural network model", "accurate biological age prediction", "multiple tissue types", "we", "xai-age", "the first-generation age predictors", "similar results", "deep learning-based models", "the possibility", "biologically meaningful insights", "the activity", "pathways", "other abstract biological processes", "the model", "first" ]
Remote Cardiac System Monitoring Using 6G-IoT Communication and Deep Learning
[ "Abdulbasid S. Banga", "Mohammed M. Alenazi", "Nisreen Innab", "Mansor Alohali", "Fahad M. Alhomayani", "Mohammad H. Algarni", "Taoufik Saidani" ]
Remote patient monitoring has recently been popularised due to advanced technological innovations. The advent of Sixth Generation Internet of Things (6G-IoT) communication technology, combined with deep learning algorithms, presents a groundbreaking opportunity for enhancing remote cardiac system monitoring. This paper proposes an innovative framework leveraging the ultra-reliable, low-latency communication capabilities of 6G-IoT to transmit real-time cardiac data from wearable devices directly to healthcare providers. Integrating deep learning models facilitates the accurate analysis and prediction of cardiac anomalies, significantly improving on traditional monitoring systems. Our methodology involves the deployment of cutting-edge wearable sensors capable of capturing high-fidelity cardiac signals. These signals are transmitted via 6G-IoT networks, ensuring minimal delay and maximum reliability. Upon receiving the data, a densely connected deep neural network with an optimised swish activation function, designed explicitly for cardiac anomaly detection, is employed to analyse the data in real time. These algorithms are trained on vast datasets to recognise patterns indicative of potential cardiac issues, allowing immediate intervention when necessary. The proposed system’s efficacy is validated through extensive testing in simulated environments, demonstrating its ability to accurately detect and swiftly predict a wide range of cardiac conditions. Moreover, implementing 6G-IoT communication ensures the system’s scalability and adaptability to future technological advancements.
10.1007/s11277-024-11217-w
remote cardiac system monitoring using 6g-iot communication and deep learning
remote patient monitoring has recently been popularised due to advanced technological innovations. the advent of the sixth generation internet of things (6g-iot) communication technology, combined with deep learning algorithms, presents a groundbreaking opportunity for enhancing remote cardiac system monitoring. this paper proposes an innovative framework leveraging the ultra-reliable, low-latency communication capabilities of 6g-iot to transmit real-time cardiac data from wearable devices directly to healthcare providers. integrating deep learning models facilitates the accurate analysis and prediction of cardiac anomalies, significantly improving on traditional monitoring systems. our methodology involves the deployment of cutting-edge wearable sensors capable of capturing high-fidelity cardiac signals. these signals are transmitted via 6g-iot networks, ensuring minimal delay and maximum reliability. upon receiving the data, a densely connected deep neural network with an optimised swish activation function, designed explicitly for cardiac anomaly detection, is employed to analyse the data in real time. these algorithms are trained on vast datasets to recognise patterns indicative of potential cardiac issues, allowing immediate intervention when necessary. the proposed system’s efficacy is validated through extensive testing in simulated environments, demonstrating its ability to accurately detect and swiftly predict a wide range of cardiac conditions. moreover, implementing 6g-iot communication ensures the system's scalability and adaptability to future technological advancements.
[ "remote patient monitoring", "advanced technological innovations", "the advent", "the sixth generation internet", "things", "6g-iot) communication technology", "deep learning algorithms", "a groundbreaking opportunity", "remote cardiac system monitoring", "this paper", "an innovative framework", "the ultra-reliable, low-latency communication capabilities", "6g-iot", "real-time cardiac data", "wearable devices", "providers", "deep learning models", "the accurate analysis", "prediction", "cardiac anomalies", "traditional monitoring systems", "our methodology", "the deployment", "cutting-edge wearable sensors", "high-fidelity cardiac signals", "these signals", "6g-iot networks", "minimal delay", "maximum reliability", "the data", "a densely connected deep neural network", "an optimised swish activation function", "cardiac anomaly detection", "the data", "real-time", "these algorithms", "vast datasets", "patterns", "potential cardiac issues", "immediate intervention", "the proposed system’s efficacy", "extensive testing", "simulated environments", "its ability", "a wide range", "cardiac conditions", "6g-iot communication", "the system's scalability", "adaptability", "future technological advancements", "sixth", "6g-iot", "6g-iot", "6g", "6g-iot" ]
A deep learning approach for host-based cryptojacking malware detection
[ "Olanrewaju Sanda", "Michalis Pavlidis", "Nikolaos Polatidis" ]
With the continued growth and popularity of blockchain-based cryptocurrencies, there is a parallel growth in illegal mining to earn cryptocurrency. Since mining for cryptocurrencies requires high computational resources, malicious actors have resorted to using malicious file downloads and other methods to illegally use a victim’s system to mine for cryptocurrency without their knowledge. This process is known as host-based cryptojacking and is gradually becoming one of the most popular cyberthreats in recent years. There are some proposed traditional machine learning methods to detect host-based cryptojacking, but only a few have proposed using deep-learning models for detection. This paper presents a novel approach, dubbed CryptoJackingModel. This approach is a deep-learning host-based cryptojacking detection model that will effectively detect evolving host-based cryptojacking techniques and reduce false positives and false negatives. The approach has an overall accuracy of 98% on a dataset of 129,380 samples and a low performance overhead, making it highly scalable. This approach will be an improvement on current countermeasures for detecting, mitigating, and preventing cryptojacking.
10.1007/s12530-023-09534-9
a deep learning approach for host-based cryptojacking malware detection
with the continued growth and popularity of blockchain-based cryptocurrencies, there is a parallel growth in illegal mining to earn cryptocurrency. since mining for cryptocurrencies requires high computational resources, malicious actors have resorted to using malicious file downloads and other methods to illegally use a victim’s system to mine for cryptocurrency without their knowledge. this process is known as host-based cryptojacking and is gradually becoming one of the most popular cyberthreats in recent years. there are some proposed traditional machine learning methods to detect host-based cryptojacking, but only a few have proposed using deep-learning models for detection. this paper presents a novel approach, dubbed cryptojackingmodel. this approach is a deep-learning host-based cryptojacking detection model that will effectively detect evolving host-based cryptojacking techniques and reduce false positives and false negatives. the approach has an overall accuracy of 98% on a dataset of 129,380 samples and a low performance overhead, making it highly scalable. this approach will be an improvement on current countermeasures for detecting, mitigating, and preventing cryptojacking.
[ "the continued growth", "popularity", "blockchain-based cryptocurrencies", "a parallel growth", "illegal mining", "cryptocurrency", "mining", "cryptocurrencies", "high computational resource", "malicious actors", "malicious file downloads", "other methods", "a victim’s system", "mine", "cryptocurrency", "them", "this process", "host-based cryptojacking", "the most popular cyberthreats", "recent years", "some proposed traditional machine learning methods", "host-based cryptojacking", "deep-learning models", "detection", "this paper", "a novel approach", "cryptojackingmodel", "this approach", "a deep-learning host-based cryptojacking detection model", "that", "evolving host-based cryptojacking techniques", "false positives", "false negatives", "the approach", "an overall accuracy", "98%", "a dataset", "129,380 samples", "a low performance", "it", "this approach", "an improvement", "current countermeasures", "mitigating", "cryptojacking", "recent years", "98%", "129,380" ]
Pneumonia detection based on RSNA dataset and anchor-free deep learning detector
[ "Linghua Wu", "Jing Zhang", "Yilin Wang", "Rong Ding", "Yueqin Cao", "Guiqin Liu", "Changsheng Liufu", "Baowei Xie", "Shanping Kang", "Rui Liu", "Wenle Li", "Furen Guan" ]
Pneumonia is a highly lethal disease, and research on its treatment and early screening tools has received extensive attention from researchers. Due to the maturity and cost reduction of chest X-ray technology, and with the development of artificial intelligence technology, pneumonia identification based on deep learning and chest X-rays has attracted attention from all over the world. Although the feature extraction capability of deep learning is strong, existing deep learning object detection frameworks are based on pre-defined anchors, which require a lot of tuning and experience to guarantee good results on new applications or data. To avoid the influence of anchor settings in pneumonia detection, this paper proposes an anchor-free object detection framework for pneumonia detection based on the RSNA dataset. First, a data enhancement scheme is used to preprocess the chest X-ray images; second, an anchor-free object detection framework is used for pneumonia detection, which contains a feature pyramid, a two-branch detection head, and focal loss. The average precision of 51.5, obtained by Intersection over Union (IoU) calculation, shows that the pneumonia detection results obtained in this paper can surpass those of existing classical object detection frameworks, providing an idea for future research and exploration.
10.1038/s41598-024-52156-7
pneumonia detection based on rsna dataset and anchor-free deep learning detector
pneumonia is a highly lethal disease, and research on its treatment and early screening tools has received extensive attention from researchers. due to the maturity and cost reduction of chest x-ray technology, and with the development of artificial intelligence technology, pneumonia identification based on deep learning and chest x-rays has attracted attention from all over the world. although the feature extraction capability of deep learning is strong, existing deep learning object detection frameworks are based on pre-defined anchors, which require a lot of tuning and experience to guarantee good results on new applications or data. to avoid the influence of anchor settings in pneumonia detection, this paper proposes an anchor-free object detection framework for pneumonia detection based on the rsna dataset. first, a data enhancement scheme is used to preprocess the chest x-ray images; second, an anchor-free object detection framework is used for pneumonia detection, which contains a feature pyramid, a two-branch detection head, and focal loss. the average precision of 51.5, obtained by intersection over union (iou) calculation, shows that the pneumonia detection results obtained in this paper can surpass those of existing classical object detection frameworks, providing an idea for future research and exploration.
[ "pneumonia", "a highly lethal disease", "research", "its treatment", "early screening tools", "extensive attention", "researchers", "the maturity and cost reduction", "chest x-ray technology", "the development", "artificial intelligence technology", "pneumonia identification", "deep learning", "chest x", "-", "ray", "attention", "the world", "the feature extraction capability", "deep learning", "existing deep learning object detection frameworks", "pre-defined anchors", "which", "a lot", "tuning", "their excellent results", "the face", "new applications", "data", "the influence", "anchor settings", "pneumonia detection", "this paper", "an anchor-free object detection framework", "rsna dataset", "pneumonia detection", "a data enhancement scheme", "the chest x-ray images", "an anchor-free object detection framework", "pneumonia detection", "which", "a feature pyramid", "two-branch detection head", "focal loss", "the average precision", "intersection", "union (iou) calculation", "the pneumonia detection results", "this paper", "the existing classical object detection framework", "an idea", "future research", "exploration", "first", "second", "two", "51.5" ]
Comparative Study for Optimized Deep Learning-Based Road Accidents Severity Prediction Models
[ "Hussam Hijazi", "Karim Sattar", "Hassan M. Al-Ahmadi", "Sami El-Ferik" ]
Road traffic accidents remain a major cause of fatalities and injuries worldwide. Effective classification of accident type and severity is crucial for prompt post-accident protocols and the development of comprehensive road safety policies. This study explores the application of deep learning techniques for predicting crash injury severity in the Eastern Province of Saudi Arabia. Five deep learning models were trained and evaluated, including various variants of the feedforward multilayer perceptron, a back-propagated artificial neural network (ANN), an ANN with radial basis function (RBF), and a tabular data learning network (TabNet). The models were optimized using Bayesian optimization (BO) and employed the synthetic minority oversampling technique (SMOTE) for oversampling the training dataset. While SMOTE enhanced balanced accuracy for the ANN with RBF and TabNet, it compromised precision and increased recall. The results indicated that oversampling techniques did not consistently improve model performance. Additionally, significant features were identified using least absolute shrinkage and selection operator (LASSO) regularization, feature importance, and permutation importance. The study's findings emphasize the consistent significance of the 'Number of Injuries Major' feature as a vital predictor in deep learning models, regardless of the selection techniques employed. These results shed light on the pivotal role played by the count of individuals with major injuries in influencing the severity of crash injuries, highlighting its potential relevance in shaping road safety policy development.
10.1007/s13369-023-08510-4
comparative study for optimized deep learning-based road accidents severity prediction models
road traffic accidents remain a major cause of fatalities and injuries worldwide. effective classification of accident type and severity is crucial for prompt post-accident protocols and the development of comprehensive road safety policies. this study explores the application of deep learning techniques for predicting crash injury severity in the eastern province of saudi arabia. five deep learning models were trained and evaluated, including various variants of the feedforward multilayer perceptron, a back-propagated artificial neural network (ann), an ann with radial basis function (rbf), and a tabular data learning network (tabnet). the models were optimized using bayesian optimization (bo) and employed the synthetic minority oversampling technique (smote) for oversampling the training dataset. while smote enhanced balanced accuracy for the ann with rbf and tabnet, it compromised precision and increased recall. the results indicated that oversampling techniques did not consistently improve model performance. additionally, significant features were identified using least absolute shrinkage and selection operator (lasso) regularization, feature importance, and permutation importance. the study's findings emphasize the consistent significance of the 'number of injuries major' feature as a vital predictor in deep learning models, regardless of the selection techniques employed. these results shed light on the pivotal role played by the count of individuals with major injuries in influencing the severity of crash injuries, highlighting its potential relevance in shaping road safety policy development.
[ "road traffic accidents", "a major cause", "fatalities", "injuries", "effective classification", "accident type", "severity", "prompt post-accident protocols", "the development", "comprehensive road safety policies", "this study", "the application", "deep learning techniques", "crash injury severity", "the eastern province", "saudi arabia", "five deep learning models", "various variants", "feedforward multilayer perceptron", "a back-propagated artificial neural network", "ann", "radial basis function", "rpf", "data learning network", "tabnet", "the models", "bayesian optimization", "bo", "the synthetic minority oversampling technique", "smote", "the training dataset", "smote", "balanced accuracy", "ann", "rbf", "tabnet", "it", "precision", "increased recall", "the results", "techniques", "model performance", "significant features", "lasso", "feature importance", "permutation importance", "the results", "techniques", "model performance", "smote", "balanced accuracy", "ann", "rbf", "tabnet", "it", "precision", "increased recall", "the study's findings", "the consistent significance", "the 'number", "injuries major' feature", "a vital predictor", "deep learning models", "the selection techniques", "these results", "light", "the pivotal role", "the count", "individuals", "major injuries", "the severity", "crash injuries", "its potential relevance", "road safety policy development", "saudi arabia", "five" ]
Blockchain-based multi-diagnosis deep learning application for various diseases classification
[ "Hakima Rym Rahal", "Sihem Slatnia", "Okba Kazar", "Ezedin Barka", "Saad Harous" ]
Misdiagnosis is a critical issue in healthcare, which can lead to severe consequences for patients, including delayed or inappropriate treatment, unnecessary procedures, psychological distress, financial burden, and legal implications. To mitigate this issue, we propose using deep learning algorithms to improve diagnostic accuracy. However, building accurate deep learning models for medical diagnosis requires substantial amounts of high-quality data, which can be challenging for individual healthcare sectors or organizations to acquire. Therefore, combining data from multiple sources to create a diverse dataset for efficient training is needed. However, sharing medical data between different healthcare sectors can be problematic from a security standpoint due to sensitive information and privacy laws. To address these challenges, we propose using blockchain technology to provide a secure, decentralized, and privacy-respecting way to share locally trained deep learning models instead of the data itself. Our proposed method of model ensembling, which combines the weights of several local deep learning models to build a single global model, enables accurate diagnosis of complex medical conditions across multiple locations while preserving patient privacy and data security. Our research demonstrates the effectiveness of this approach in accurately diagnosing three diseases (breast cancer, lung cancer, and diabetes) with high accuracy rates, surpassing the accuracy of local models and building a multi-diagnosis application.
10.1007/s10207-023-00733-8
blockchain-based multi-diagnosis deep learning application for various diseases classification
misdiagnosis is a critical issue in healthcare, which can lead to severe consequences for patients, including delayed or inappropriate treatment, unnecessary procedures, psychological distress, financial burden, and legal implications. to mitigate this issue, we propose using deep learning algorithms to improve diagnostic accuracy. however, building accurate deep learning models for medical diagnosis requires substantial amounts of high-quality data, which can be challenging for individual healthcare sectors or organizations to acquire. therefore, combining data from multiple sources to create a diverse dataset for efficient training is needed. however, sharing medical data between different healthcare sectors can be problematic from a security standpoint due to sensitive information and privacy laws. to address these challenges, we propose using blockchain technology to provide a secure, decentralized, and privacy-respecting way to share locally trained deep learning models instead of the data itself. our proposed method of model ensembling, which combines the weights of several local deep learning models to build a single global model, enables accurate diagnosis of complex medical conditions across multiple locations while preserving patient privacy and data security. our research demonstrates the effectiveness of this approach in accurately diagnosing three diseases (breast cancer, lung cancer, and diabetes) with high accuracy rates, surpassing the accuracy of local models and building a multi-diagnosis application.
[ "misdiagnosis", "a critical issue", "healthcare", "which", "severe consequences", "patients", "delayed or inappropriate treatment", "unnecessary procedures", "psychological distress", "financial burden", "legal implications", "this issue", "we", "deep learning algorithms", "diagnostic accuracy", "accurate deep learning models", "medical diagnosis", "substantial amounts", "high-quality data", "which", "individual healthcare sectors", "organizations", "data", "multiple sources", "a diverse dataset", "efficient training", "medical data", "different healthcare sectors", "a security standpoint", "sensitive information", "privacy laws", "these challenges", "we", "blockchain technology", "a secure, decentralized, and privacy-respecting way", "locally trained deep learning models", "the data", "itself", "our proposed method", "model ensembling", "which", "the weights", "several local deep learning models", "a single global model", "that", "accurate diagnosis", "complex medical conditions", "multiple locations", "patient privacy", "data security", "our research", "the effectiveness", "this approach", "three diseases", "breast cancer", "lung cancer", "high accuracy rates", "the accuracy", "local models", "a multi-diagnosis application", "three" ]
Deep learning-based question answering: a survey
[ "Heba Abdel-Nabi", "Arafat Awajan", "Mostafa Z. Ali" ]
Question Answering is a crucial natural language processing task. This field of research has attracted a substantial amount of interest lately, due mainly to the integration of deep learning models into Question Answering Systems, which has consequently powered many advancements and improvements. This survey aims to explore and shed light upon the recent and most powerful deep learning-based Question Answering Systems, classifying them based on the deep learning model used and stating the details of the word representations, datasets, and evaluation metrics used. It aims to highlight and discuss the currently used models and give insights that direct future research to enhance this rapidly growing field.
10.1007/s10115-022-01783-5
deep learning-based question answering: a survey
question answering is a crucial natural language processing task. this field of research has attracted a substantial amount of interest lately, due mainly to the integration of deep learning models into question answering systems, which has consequently powered many advancements and improvements. this survey aims to explore and shed light upon the recent and most powerful deep learning-based question answering systems, classifying them based on the deep learning model used and stating the details of the word representations, datasets, and evaluation metrics used. it aims to highlight and discuss the currently used models and give insights that direct future research to enhance this rapidly growing field.
[ "a crucial natural language processing task", "this field", "research", "a sudden amount", "interest", "the integration", "the deep learning models", "the question", "systems", "which", "many advancements", "improvements", "this survey", "light", "the recent and most powerful deep learning-based question", "systems", "them", "the deep learning model", "the details", "the used word representation", "datasets", "evaluation metrics", "it", "the currently used models", "insights", "that", "future research", "this increasingly growing field" ]
Deep Learning Architecture for Computer Vision-based Structural Defect Detection
[ "Ruoyu Yang", "Shubhendu Kumar Singh", "Mostafa Tavakkoli", "M. Amin Karami", "Rahul Rai" ]
Structural health monitoring (SHM) refers to the implementation of a damage detection strategy for structures. Fault occurrence in these structural systems during operation is inevitable. Efficient, fast, and precise health monitoring methods are required to proactively perform the necessary repairs and maintenance on time before it is too late. The current structural health monitoring methods involve physically attached sensors or non-contact vision-based vibration measurements. However, these methods have significant drawbacks due to the low spatial resolution, weight influence on the lightweight structure, and time/labor consumption. Recently, computer-vision-based deep learning methods like the convolutional neural network (CNN) and fully convolutional neural network (FCN) have been applied for defect detection and localization, which address the aforementioned problems and obtain high accuracy. This paper proposes a novel hybrid deep learning architecture comprising CNN and temporal convolutional networks (CNN-TCN) for the computer vision-based defect detection task. Various beam samples, consisting of five different materials and various structural defects, were used to evaluate the proposed deep learning algorithms’ performance. The proposed deep learning methods treat each pixel of the video frame like a sensor to extract valuable features for defect detection. Through empirical results, we demonstrate that this 'pixel-sensor' approach is more efficient and accurate and can achieve a better defect detection performance on different beam samples compared with the current state-of-the-art approaches, including CNN-long short-term memory (LSTM), CNN-bidirectional long short-term memory (BiLSTM), multi-scale CNN-LSTM, and CNN-gated recurrent unit (GRU) methods.
10.1007/s10489-023-04654-w
deep learning architecture for computer vision-based structural defect detection
structural health monitoring (shm) refers to the implementation of a damage detection strategy for structures. fault occurrence in these structural systems during operation is inevitable. efficient, fast, and precise health monitoring methods are required to proactively perform the necessary repairs and maintenance on time before it is too late. the current structural health monitoring methods involve physically attached sensors or non-contact vision-based vibration measurements. however, these methods have significant drawbacks due to the low spatial resolution, weight influence on the lightweight structure, and time/labor consumption. recently, computer-vision-based deep learning methods like the convolutional neural network (cnn) and fully convolutional neural network (fcn) have been applied for defect detection and localization, which address the aforementioned problems and obtain high accuracy. this paper proposes a novel hybrid deep learning architecture comprising cnn and temporal convolutional networks (cnn-tcn) for the computer vision-based defect detection task. various beam samples, consisting of five different materials and various structural defects, were used to evaluate the proposed deep learning algorithms’ performance. the proposed deep learning methods treat each pixel of the video frame like a sensor to extract valuable features for defect detection. through empirical results, we demonstrate that this 'pixel-sensor' approach is more efficient and accurate and can achieve a better defect detection performance on different beam samples compared with the current state-of-the-art approaches, including cnn-long short-term memory (lstm), cnn-bidirectional long short-term memory (bilstm), multi-scale cnn-lstm, and cnn-gated recurrent unit (gru) methods.
[ "structural health monitoring", "shm", "the implementation", "a damage detection strategy", "structures", "fault occurrence", "these structural systems", "the operation", "precise health monitoring methods", "the necessary repairs", "maintenance", "time", "it", "the current structural health monitoring methods", "physically attached sensors", "non-contact vision-based vibration measurements", "these methods", "significant drawbacks", "the low spatial resolution", "weight influence", "the lightweight structure", "time/labor consumption", "computer-vison-based deep learning methods", "convolutional neural network", "cnn", "fully convolutional neural network", "fcn", "defect detection", "localization", "which", "the aforementioned problems", "high accuracy", "this paper", "a novel hybrid deep learning architecture", "cnn", "temporal convolutional networks", "cnn-tcn", "the computer vision-based defect detection task", "various beam samples", "five different materials", "various structural defects", "the proposed deep learning algorithms’ performance", "the proposed deep learning methods", "each pixel", "the video frame", "a sensor", "valuable features", "defect detection", "empirical results", "we", "this ’pixel-sensor’ approach", "a better defect detection performance", "different beam samples", "the-art", "cnn-long short-term memory", "lstm", "cnn-bidirectional long short-term memory", "bilstm", "multi-scale cnn-lstm", "cnn-gated recurrent unit(gru) methods", "cnn", "cnn", "cnn-tcn", "five", "cnn", "cnn", "cnn", "cnn" ]
Using machine learning and deep learning algorithms for downtime minimization in manufacturing systems: an early failure detection diagnostic service
[ "Mohammad Shahin", "F. Frank Chen", "Ali Hosseinzadeh", "Neda Zand" ]
Accurate detection of possible machine failure allows manufacturers to identify potential fault situations in processes to avoid downtimes caused by unexpected tool wear or unacceptable workpiece quality. This paper reports a study of more than 20 fault detection models using machine learning (ML), deep learning (DL), and deep hybrid learning (DHL). Predicting how the system could fail based on certain features or system settings (input variables) can help avoid future breakdowns and minimize downtime. The effectiveness of the proposed algorithms was evaluated on a synthetic predictive maintenance dataset published by the School of Engineering of the University of Applied Sciences in Berlin, Germany. The fidelity of these algorithms was evaluated using performance measurement values such as accuracy, precision, recall, and the F-score. Final results demonstrated that deep forest and gradient boosting algorithms showed very high levels of average accuracy (exceeding 90%). Additionally, the multinomial logistic regression and long short-term memory-based algorithms showed satisfactory average accuracy (above 80%). Further analysis of the models suggests that some models outperformed others. The research concluded that, through various ML, DL, and DHL algorithms, operational data analytics, and health monitoring systems, engineers could optimize maintenance and reduce reliability risks.
10.1007/s00170-023-12020-w
using machine learning and deep learning algorithms for downtime minimization in manufacturing systems: an early failure detection diagnostic service
accurate detection of possible machine failure allows manufacturers to identify potential fault situations in processes to avoid downtimes caused by unexpected tool wear or unacceptable workpiece quality. this paper reports a study of more than 20 fault detection models using machine learning (ml), deep learning (dl), and deep hybrid learning (dhl). predicting how the system could fail based on certain features or system settings (input variables) can help avoid future breakdowns and minimize downtime. the effectiveness of the proposed algorithms was evaluated on a synthetic predictive maintenance dataset published by the school of engineering of the university of applied sciences in berlin, germany. the fidelity of these algorithms was evaluated using performance measurement values such as accuracy, precision, recall, and the f-score. final results demonstrated that deep forest and gradient boosting algorithms showed very high levels of average accuracy (exceeding 90%). additionally, the multinomial logistic regression and long short-term memory-based algorithms showed satisfactory average accuracy (above 80%). further analysis of the models suggests that some models outperformed others. the research concluded that, through various ml, dl, and dhl algorithms, operational data analytics, and health monitoring systems, engineers could optimize maintenance and reduce reliability risks.
[ "accurate detection", "possible machine failure", "manufacturers", "potential fault situations", "processes", "downtimes", "unexpected tool", "this paper", "the study", "more than 20 fault detection models", "machine learning", "ml", "deep learning", "dl", "deep hybrid learning", "dhl", "the system", "certain features", "system settings", "input variables", "future breakdowns", "downtime", "the effectiveness", "the proposed algorithms", "a synthetic predictive maintenance dataset", "the school", "engineering", "the university", "applied sciences", "berlin", "germany", "the fidelity", "these algorithms", "performance measurement values", "accuracy", "precision", "recall", "the f-score", "final results", "deep forest and gradient boosting algorithms", "very high levels", "average accuracy", "the multinomial logistic regression", "long short-term memory-based algorithms", "satisfactory average accuracy", "80%", "further analysis", "models", "some models", "others", "the research", "various ml", "dl", "dhl algorithms", "operational data analytics", "health monitoring system", "engineers", "maintenance", "reliability risks", "more than 20", "the university of applied sciences", "berlin", "germany", "90%", "80%" ]
Secure Communications with THz Reconfigurable Intelligent Surfaces and Deep Learning in 6G Systems
[ "Ajmeera Kiran", "Abhilash Sonker", "Sachin Jadhav", "Makarand Mohan Jadhav", "Janjhyam Venkata Naga Ramesh", "Elangovan Muniyandy" ]
In anticipation of the 6G era, this paper explores the integration of terahertz (THz) communications with Reconfigurable Intelligent Surfaces (RIS) and deep learning to establish a secure wireless network capable of ultra-high data rates. Addressing the non-convex challenge of maximizing secure energy efficiency, we introduce a novel deep learning framework that employs a variety of neural network architectures for optimizing RIS reflection and beamforming. Our simulations, set against scenarios with varying eavesdropper cooperation, confirm the efficacy of the proposed solution, achieving 97% of the optimal performance benchmarked against a genie-aided model. This research underlines a significant advancement in 6G network security, potentially influencing future standards and laying the groundwork for practical deployment, thereby marking a milestone in the convergence of THz technology, intelligent surfaces, and AI for future-proof secure communications.
10.1007/s11277-024-11163-7
secure communications with thz reconfigurable intelligent surfaces and deep learning in 6g systems
in anticipation of the 6g era, this paper explores the integration of terahertz (thz) communications with reconfigurable intelligent surfaces (ris) and deep learning to establish a secure wireless network capable of ultra-high data rates. addressing the non-convex challenge of maximizing secure energy efficiency, we introduce a novel deep learning framework that employs a variety of neural network architectures for optimizing ris reflection and beamforming. our simulations, set against scenarios with varying eavesdropper cooperation, confirm the efficacy of the proposed solution, achieving 97% of the optimal performance benchmarked against a genie-aided model. this research underlines a significant advancement in 6g network security, potentially influencing future standards and laying the groundwork for practical deployment, thereby marking a milestone in the convergence of thz technology, intelligent surfaces, and ai for future-proof secure communications.
[ "anticipation", "the 6g era", "this paper", "the integration", "thz", "reconfigurable intelligent surfaces", "ris", "deep learning", "a secure wireless network", "ultra-high data rates", "the non-convex challenge", "secure energy efficiency", "we", "a novel deep learning framework", "that", "a variety", "neural network", "ris reflection", "beamforming", "our simulations", "scenarios", "varying eavesdropper cooperation", "the efficacy", "the proposed solution", "97%", "the optimal performance", "a genie-aided model", "this research", "a significant advancement", "6g network security", "future standards", "the groundwork", "practical deployment", "a milestone", "the convergence", "thz technology", "intelligent surfaces", "future-proof secure communications", "6", "97%", "genie", "6" ]
HCCNet Fusion: a synergistic approach for accurate hepatocellular carcinoma staging using deep learning paradigm
[ "Devi Rajeev", "S. Remya", "Anand Nayyar" ]
Hepatocellular carcinoma (HCC) stands as the second most prevalent cancer and a leading cause of cancer-related mortality globally, necessitating precise diagnostic and prognostic methodologies. The study introduces an innovative approach centered around the HCCNet Fusion model: a robust integration of advanced deep-learning techniques designed to elevate the accuracy of HCC stage recognition. Leveraging the synergies between the VGG16 and U-Net architectures and incorporating sophisticated data pre-processing methods such as Otsu’s binary thresholding and marker-based watershed segmentation, this approach aims to strengthen the precision of HCC stage identification. Furthermore, transfer learning plays a pivotal role in HCCNet Fusion, enabling the models to integrate knowledge from diverse medical image settings through pre-trained weights from the VGG16 and U-Net architectures. This strategic integration demonstrates the efficacy of advanced deep learning strategies in addressing intricate medical challenges, and outperforms conventional methods with a remarkable accuracy rate of 95%, underscoring the potential of cutting-edge deep learning techniques in medical diagnostics. The evaluation and validation of the proposed HCCNet Fusion model demonstrate its strong performance across many metrics, including AUC ROC, loss, accuracy, precision, recall, and F1. Additionally, a comparative study was conducted against well-known methods, including CNN, Inception ResNetV2, VGG16, Inception V3, EfficientNet-B0, and ResNet50, and the results show that the proposed system not only advances HCC detection but also sets a pattern for leveraging state-of-the-art methodologies in addressing complex medical issues.
10.1007/s11042-024-19446-8
hccnet fusion: a synergistic approach for accurate hepatocellular carcinoma staging using deep learning paradigm
hepatocellular carcinoma (hcc) stands as the second most prevalent cancer and a leading cause of cancer-related mortality globally, necessitating precise diagnostic and prognostic methodologies. the study introduces an innovative approach centered around the hccnet fusion model: a robust integration of advanced deep-learning techniques designed to elevate the accuracy of hcc stage recognition. leveraging the synergies between the vgg16 and u-net architectures and incorporating sophisticated data pre-processing methods such as otsu’s binary thresholding and marker-based watershed segmentation, this approach aims to strengthen the precision of hcc stage identification. furthermore, transfer learning plays a pivotal role in hccnet fusion, enabling the models to integrate knowledge from diverse medical image settings through pre-trained weights from the vgg16 and u-net architectures. this strategic integration demonstrates the efficacy of advanced deep learning strategies in addressing intricate medical challenges, and outperforms conventional methods with a remarkable accuracy rate of 95%, underscoring the potential of cutting-edge deep learning techniques in medical diagnostics. the evaluation and validation of the proposed hccnet fusion model demonstrate its strong performance across many metrics, including auc roc, loss, accuracy, precision, recall, and f1. additionally, a comparative study was conducted against well-known methods, including cnn, inception resnetv2, vgg16, inception v3, efficientnet-b0, and resnet50, and the results show that the proposed system not only advances hcc detection but also sets a pattern for leveraging state-of-the-art methodologies in addressing complex medical issues.
[ "hepatocellular carcinoma", "hcc", "the second most prevalent cancer", "a leading cause", "cancer-related mortality", "precise diagnostic and prognostic methodologies", "the study", "an innovative approach", "the hccnet fusion model", "a robust integration", "advanced deep-learning techniques", "the accuracy", "hcc stage recognition", "the synergies", "the vgg16 architecture", "u", "-", "net", "sophisticated data pre-processing methods", "otsu’s binary thresholding and marker-based watershed segmentation", "this approach", "the precision", "hcc stage identification", "transfer learning", "a pivotal role", "hccnet fusion", "the models", "knowledge", "diverse medical image settings", "pre-trained weights", "vgg16 and u-net architectures", "this strategic integration", "the efficacy", "advanced deep learning strategies", "intricate medical challenges", "conventional methods", "a remarkable accuracy rate", "95%", "the potential", "cutting-edge deep learning techniques", "medical diagnostics", "the evaluation", "validation", "the proposed hccnet fusion model", "its strong performance", "many metrics", "auc roc", "loss", "accuracy", "precision", "recall", "f1", "a comparative study", "well-known methods", "cnn", "inception resnetv2", "vgg16", "inception v3", "efficientnet-b0", "resnet50", "the results", "the proposed system", "hcc detection", "a pattern", "the-art", "complex medical issues", "second", "95%", "roc", "cnn", "resnetv2", "v3", "resnet50" ]
DVNE-DRL: dynamic virtual network embedding algorithm based on deep reinforcement learning
[ "Xiancui Xiao" ]
Virtual network embedding (VNE), as the key challenge of network resource management technology, lies in the contradiction between online embedding decision and pursuing long-term average revenue goals. Most of the previous work ignored the dynamics in Virtual Network (VN) modeling, or could not automatically detect the complex and time-varying network state to provide a reasonable network embedding scheme. In view of this, we model a network embedding framework where the topology and resource allocation change dynamically with the number of network users and workload, and then introduce a deep reinforcement learning method to solve the VNE problem. Further, a dynamic virtual network embedding algorithm based on Deep Reinforcement Learning (DRL), named DVNE-DRL, is proposed. In DVNE-DRL, VNE is modeled as a Markov Decision Process (MDP), and then deep learning is introduced to perceive the current network state through historical data and embedded knowledge, while utilizing reinforcement learning decision-making capabilities to implement the network embedding process. In addition, we improve the method of feature extraction and matrix optimization, and consider the characteristics of virtual network and physical network together to alleviate the problem of redundancy and slow convergence. The simulation results show that compared with the existing advanced algorithms, the acceptance rate and average revenue of DVNE-DRL are increased by about 25% and 35%, respectively.
10.1038/s41598-023-47195-5
dvne-drl: dynamic virtual network embedding algorithm based on deep reinforcement learning
virtual network embedding (vne), as the key challenge of network resource management technology, lies in the contradiction between online embedding decision and pursuing long-term average revenue goals. most of the previous work ignored the dynamics in virtual network (vn) modeling, or could not automatically detect the complex and time-varying network state to provide a reasonable network embedding scheme. in view of this, we model a network embedding framework where the topology and resource allocation change dynamically with the number of network users and workload, and then introduce a deep reinforcement learning method to solve the vne problem. further, a dynamic virtual network embedding algorithm based on deep reinforcement learning (drl), named dvne-drl, is proposed. in dvne-drl, vne is modeled as a markov decision process (mdp), and then deep learning is introduced to perceive the current network state through historical data and embedded knowledge, while utilizing reinforcement learning decision-making capabilities to implement the network embedding process. in addition, we improve the method of feature extraction and matrix optimization, and consider the characteristics of virtual network and physical network together to alleviate the problem of redundancy and slow convergence. the simulation results show that compared with the existing advanced algorithms, the acceptance rate and average revenue of dvne-drl are increased by about 25% and 35%, respectively.
[ "virtual network", "the key challenge", "network resource management technology", "the contradiction", "online embedding decision", "long-term average revenue goals", "the previous work", "the dynamics", "virtual network", "modeling", "the complex and time-varying network state", "a reasonable network", "scheme", "view", "this", "we", "a network", "framework", "the number", "network users", "workload", "a deep reinforcement learning method", "the vne problem", "a dynamic virtual network", "algorithm", "deep reinforcement learning", "drl", "dvne-drl", "dvne-drl", "vne", "a markov decision process", "mdp", "deep learning", "the current network state", "historical data", "embedded knowledge", "reinforcement learning decision-making capabilities", "the network embedding process", "addition", "we", "the method", "feature extraction and matrix optimization", "the characteristics", "virtual network", "physical network", "the problem", "redundancy", "slow convergence", "the simulation results", "the existing advanced algorithms", "the acceptance rate", "average revenue", "dvne-drl", "about 25%", "35%", "about 25% and 35%" ]
Deep learning-based 1-D magnetotelluric inversion: performance comparison of architectures
[ "Mehdi Rahmani Jevinani", "Banafsheh Habibian Dehkordi", "Ian J. Ferguson", "Mohammad Hossein Rohban" ]
The study compares three deep learning approaches and assesses their relative performance in solving the 1-D magnetotelluric (MT) inverse problem. MT data from a 1-D geothermal-type structure are used as an example to examine Variational Autoencoder (VAE), Residual Network (ResNet), and U-Net architectures adapted for 1-D MT inversion. Root Mean Square Error (RMSE) and the Pearson correlation coefficient are applied as the misfit measure and similarity criterion, and box plot tools are used to characterize individual model parameters. The results show that the U-Net provides the most successful recovery of the 1-D resistivity models, even though all three approaches can produce accurate inversions of MT data. To investigate the applicability of the results to real data sets, the models' performance is examined for the case of data containing noise. All three deep learning algorithms are robust with respect to data noise, although the U-Net is relatively superior. The study results provide a platform for more complex magnetotelluric inverse problems and ones involving real data sets.
10.1007/s12145-024-01233-6
deep learning-based 1-d magnetotelluric inversion: performance comparison of architectures
the study compares three deep learning approaches and assesses their relative performance in solving the 1-d magnetotelluric (mt) inverse problem. mt data from a 1-d geothermal-type structure are used as an example to examine variational autoencoder (vae), residual network (resnet), and u-net architectures adapted for 1-d mt inversion. root mean square error (rmse) and the pearson correlation coefficient are applied as the misfit measure and similarity criterion, and box plot tools are used to characterize individual model parameters. the results show that the u-net provides the most successful recovery of the 1-d resistivity models, even though all three approaches can produce accurate inversions of mt data. to investigate the applicability of the results to real data sets, the models' performance is examined for the case of data containing noise. all three deep learning algorithms are robust with respect to data noise, although the u-net is relatively superior. the study results provide a platform for more complex magnetotelluric inverse problems and ones involving real data sets.
[ "the study", "the three deep learning approaches", "their relative performance", "the 1-d magnetotellurics (mt) inverse problem", "mt data", "a 1-d geothermal-type structure", "an example", "variational autoencoder", "vae", "residual network", "res-net", "u-net architectures", "1-d mt inversion", "root mean square error", "rmse", "pearson correlation coefficient", "misfit measure", "similarity criterion", "box plot tools", "individual model parameters", "the results", "the u", "-", "net", "the most successful recovery", "the 1-d resistivity models", "all three approaches", "accurate inversions", "mt data", "applicability", "results", "real data sets", "the models performance", "the case", "data", "noise", "three deep learning algorithms", "respect", "data noise", "the u", "-", "net", "the study results", "a platform", "more complex magnetotelluric inverse problems", "ones", "real data sets", "three", "1-d", "1", "1-d mt inversion", "root", "1", "three", "three" ]
Automatic segmentation of inconstant fractured fragments for tibia/fibula from CT images using deep learning
[ "Hyeonjoo Kim", "Young Dae Jeon", "Ki Bong Park", "Hayeong Cha", "Moo-Sub Kim", "Juyeon You", "Se-Won Lee", "Seung-Han Shin", "Yang-Guk Chung", "Sung Bin Kang", "Won Seuk Jang", "Do-Kun Yoon" ]
Orthopaedic surgeons need to correctly identify bone fragments using 2D/3D CT images before trauma surgery. Advances in deep learning technology provide good insights into trauma surgery over manual diagnosis. This study demonstrates the application of a DeepLab v3+-based deep learning model for the automatic segmentation of fragments of the fractured tibia and fibula from CT images and reports an evaluation of the performance of the automatic segmentation. The deep learning model, which was trained using over 11 million images, showed good performance with a global accuracy of 98.92%, a weighted intersection over union of 0.9841, and a mean boundary F1 score of 0.8921. Moreover, deep learning performed 5–8 times faster than the comparatively inefficient manual recognition by experts, with almost the same significance. This study will play an important role in preoperative surgical planning for trauma surgery with convenience and speed.
10.1038/s41598-023-47706-4
automatic segmentation of inconstant fractured fragments for tibia/fibula from ct images using deep learning
orthopaedic surgeons need to correctly identify bone fragments using 2d/3d ct images before trauma surgery. advances in deep learning technology provide good insights into trauma surgery over manual diagnosis. this study demonstrates the application of a deeplab v3+-based deep learning model for the automatic segmentation of fragments of the fractured tibia and fibula from ct images and reports an evaluation of the performance of the automatic segmentation. the deep learning model, which was trained using over 11 million images, showed good performance with a global accuracy of 98.92%, a weighted intersection over union of 0.9841, and a mean boundary f1 score of 0.8921. moreover, deep learning performed 5–8 times faster than the comparatively inefficient manual recognition by experts, with almost the same significance. this study will play an important role in preoperative surgical planning for trauma surgery with convenience and speed.
[ "orthopaedic surgeons", "bone fragments", "2d/3d ct images", "trauma surgery", "advances", "deep learning technology", "good insights", "trauma surgery", "manual diagnosis", "this study", "the application", "the deeplab v3", "-based deep learning model", "the automatic segmentation", "fragments", "the fractured tibia", "ct images", "the results", "the evaluation", "the performance", "the automatic segmentation", "the deep learning model", "which", "over 11 million images", "good performance", "a global accuracy", "98.92%", "a weighted intersection", "the union", "a mean boundary f1 score", "deep learning", "the experts’ recognition", "which", "almost the same significance", "this study", "an important role", "preoperative surgical planning", "trauma surgery", "convenience", "speed", "2d/3d", "over 11 million", "98.92%", "0.9841", "0.8921", "5–8" ]
Smart city urban planning using an evolutionary deep learning model
[ "Mansoor Alghamdi" ]
Following the evolution of big data collection, storage, and manipulation techniques, deep learning has drawn the attention of numerous recent studies proposing solutions for smart cities. These solutions were focusing especially on energy consumption, pollution levels, public services, and traffic management issues. Predicting urban evolution and planning is another recent concern for smart cities. In this context, this paper introduces a hybrid model that incorporates evolutionary optimization algorithms, such as Teaching–learning-based optimization (TLBO), into the functioning process of neural deep learning models, such as recurrent neural network (RNN) networks. According to the achieved simulations, deep learning enhanced by evolutionary optimizers can be an effective and promising method for predicting urban evolution of future smart cities.
10.1007/s00500-023-08219-4
smart city urban planning using an evolutionary deep learning model
following the evolution of big data collection, storage, and manipulation techniques, deep learning has drawn the attention of numerous recent studies proposing solutions for smart cities. these solutions were focusing especially on energy consumption, pollution levels, public services, and traffic management issues. predicting urban evolution and planning is another recent concern for smart cities. in this context, this paper introduces a hybrid model that incorporates evolutionary optimization algorithms, such as teaching–learning-based optimization (tlbo), into the functioning process of neural deep learning models, such as recurrent neural network (rnn) networks. according to the achieved simulations, deep learning enhanced by evolutionary optimizers can be an effective and promising method for predicting urban evolution of future smart cities.
[ "the evolution", "big data collection", "storage", "manipulation techniques", "deep learning", "the attention", "numerous recent studies", "solutions", "smart cities", "these solutions", "energy consumption", "pollution levels", "public services", "traffic management issues", "urban evolution", "planning", "another recent concern", "smart cities", "this context", "this paper", "a hybrid model", "that", "evolutionary optimization algorithms", "teaching", "learning-based optimization", "tlbo", "the functioning process", "neural deep learning models", "recurrent neural network (rnn) networks", "the achieved simulations", "deep learning", "evolutionary optimizers", "an effective and promising method", "urban evolution", "future smart cities" ]
Recent advances in deep learning models: a systematic literature review
[ "Ruchika Malhotra", "Priya Singh" ]
In recent years, deep learning has evolved as a rapidly growing and stimulating field of machine learning and has redefined state-of-the-art performance in a variety of applications. There are multiple deep learning models that have distinct architectures and capabilities. Up to the present, a large number of novel variants of these baseline deep learning models has been proposed to address the shortcomings of the existing baseline models. This paper provides a comprehensive review of one hundred seven novel variants of six baseline deep learning models, viz. the Convolutional Neural Network, Recurrent Neural Network, Long Short-Term Memory, Generative Adversarial Network, Autoencoder, and Transformer Neural Network. The current review thoroughly examines the novel variants of each of the six baseline models to identify the advancements they adopt to address one or more limitations of the respective baseline model. This is achieved by critically reviewing the novel variants based on their improved approaches. It further provides the merits and demerits of incorporating the advancements of the novel variants compared to the baseline deep learning model. Additionally, it reports the domains, datasets, and performance measures exploited by the novel variants to make an overall judgment in terms of the improvements. This is because the performance of deep learning models is subject to the application domain and type of dataset and may also vary across different performance measures. The critical findings of the review will facilitate researchers and practitioners with the most recent progressions and advancements in the baseline deep learning models and guide them in selecting an appropriate novel variant of the baseline to solve deep learning-based tasks in a similar setting.
10.1007/s11042-023-15295-z
recent advances in deep learning models: a systematic literature review
in recent years, deep learning has evolved as a rapidly growing and stimulating field of machine learning and has redefined state-of-the-art performance in a variety of applications. there are multiple deep learning models that have distinct architectures and capabilities. up to the present, a large number of novel variants of these baseline deep learning models have been proposed to address the shortcomings of the existing baseline models. this paper provides a comprehensive review of one hundred seven novel variants of six baseline deep learning models, viz. convolutional neural network, recurrent neural network, long short-term memory, generative adversarial network, autoencoder and transformer neural network. the current review thoroughly examines the novel variants of each of the six baseline models to identify the advancements they adopt to address one or more limitations of the respective baseline model. this is achieved by critically reviewing the novel variants based on their improved approach. it further provides the merits and demerits of incorporating the advancements in novel variants compared to the baseline deep learning model. additionally, it reports the domains, datasets and performance measures used by the novel variants to make an overall judgment in terms of the improvements. this is because the performance of deep learning models is subject to the application domain and the type of dataset, and may also vary across different performance measures. the critical findings of the review would familiarize researchers and practitioners with the most recent progressions and advancements in the baseline deep learning models and guide them in selecting an appropriate novel variant of a baseline to solve deep learning based tasks in a similar setting.
[ "recent years", "deep learning", "a rapidly growing and stimulating field", "machine learning", "the-art", "a variety", "applications", "multiple deep learning models", "that", "distinct architectures", "capabilities", "the present", "a large number", "novel variants", "these baseline deep learning models", "the shortcomings", "the existing baseline models", "this paper", "a comprehensive review", "one hundred seven novel variants", "six baseline deep learning models", "convolutional neural network", "recurrent neural network", "long short term memory", "generative adversarial network", "autoencoder", "transformer neural network", "the current review", "the novel variants", "each", "the six baseline models", "the advancements", "them", "one or more limitations", "the respective baseline model", "it", "the novel variants", "their improved approach", "it", "the merits", "demerits", "the advancements", "novel variants", "the baseline deep learning model", "it", "the domain", "datasets", "performance measures", "the novel variants", "an overall judgment", "terms", "the improvements", "this", "the performance", "the deep learning models", "the application domain", "type", "datasets", "different performance measures", "the critical findings", "the review", "the researchers", "practitioners", "the most recent progressions", "advancements", "the baseline deep learning models", "them", "an appropriate novel variant", "the baseline", "deep learning based tasks", "a similar setting", "recent years", "one hundred seven", "six", "six", "one" ]
Deep Learning Based Alzheimer Disease Diagnosis: A Comprehensive Review
[ "S. Suganyadevi", "A. Shiny Pershiya", "K. Balasamy", "V. Seethalakshmi", "Saroj Bala", "Kumud Arora" ]
Dementia encompasses a range of cognitive disorders, with Alzheimer’s Disease (AD) being the most widespread and devastating. AD gradually erodes memory and daily functioning through the progressive deterioration of brain cells. It poses a significant global health challenge, necessitating early identification and intervention. Detecting AD at its onset holds immense potential to predict future health outcomes for individuals. By harnessing the power of artificial intelligence and leveraging MRI scans, we could utilize advanced technology to not only classify AD patients but also predict the likelihood of them developing this life-altering condition. This paper delves into the latest advancements in Deep Learning techniques and their role in medical image analysis. Its primary goals are to elucidate the intricacies of medical image processing and to summarize and implement key findings and recommendations from recent research.
10.1007/s42979-024-02743-2
deep learning based alzheimer disease diagnosis: a comprehensive review
dementia encompasses a range of cognitive disorders, with alzheimer’s disease (ad) being the most widespread and devastating. ad gradually erodes memory and daily functioning through the progressive deterioration of brain cells. it poses a significant global health challenge, necessitating early identification and intervention. detecting ad at its onset holds immense potential to predict future health outcomes for individuals. by harnessing the power of artificial intelligence and leveraging mri scans, we could utilize advanced technology to not only classify ad patients but also predict the likelihood of them developing this life-altering condition. this paper delves into the latest advancements in deep learning techniques and their role in medical image analysis. its primary goals are to elucidate the intricacies of medical image processing and to summarize and implement key findings and recommendations from recent research.
[ "dementia", "a range", "cognitive disorders", "alzheimer’s disease", "ad", "memory", "the progressive deterioration", "brain cells", "it", "a significant global health challenge", "early identification", "intervention", "ad", "its onset", "immense potential", "future health outcomes", "individuals", "the power", "artificial intelligence", "mri scans", "we", "advanced technology", "ad patients", "the likelihood", "them", "this life-altering condition", "this paper", "the latest advancements", "deep learning techniques", "their functions", "image analysis", "medical field", "its primary goals", "the intricacies", "medical image processing", "key findings", "recommendations", "recent research", "daily" ]
Can deep learning replace histopathological examinations in the differential diagnosis of cervical lymphadenopathy?
[ "Sermin Can", "Ömer Türk", "Muhammed Ayral", "Günay Kozan", "Hamza Arı", "Mehmet Akdağ", "Müzeyyen Yıldırım Baylan" ]
Introduction: We aimed to develop a diagnostic deep learning model using contrast-enhanced CT images and to investigate whether cervical lymphadenopathies can be diagnosed with these deep learning methods without radiologist interpretations and histopathological examinations. Material and method: A total of 400 patients who underwent surgery for lymphadenopathy in the neck between 2010 and 2022 were retrospectively analyzed. They were examined in four groups of 100 patients: the granulomatous diseases group, the lymphoma group, the squamous cell tumor group, and the reactive hyperplasia group. The diagnoses of the patients were confirmed histopathologically. Two CT images from all the patients in each group were used in the study. The CT images were classified using ResNet50, NASNetMobile, and DenseNet121 architectures. Results: The classification accuracies obtained with ResNet50, DenseNet121, and NASNetMobile were 92.5%, 90.62%, and 87.5%, respectively. Conclusion: Deep learning is a useful diagnostic tool in diagnosing cervical lymphadenopathy. In the near future, many diseases could be diagnosed with deep learning models without radiologist interpretations and invasive examinations such as histopathological examinations. However, further studies with much larger case series are needed to develop accurate deep-learning models.
10.1007/s00405-023-08181-9
can deep learning replace histopathological examinations in the differential diagnosis of cervical lymphadenopathy?
introduction: we aimed to develop a diagnostic deep learning model using contrast-enhanced ct images and to investigate whether cervical lymphadenopathies can be diagnosed with these deep learning methods without radiologist interpretations and histopathological examinations. material and method: a total of 400 patients who underwent surgery for lymphadenopathy in the neck between 2010 and 2022 were retrospectively analyzed. they were examined in four groups of 100 patients: the granulomatous diseases group, the lymphoma group, the squamous cell tumor group, and the reactive hyperplasia group. the diagnoses of the patients were confirmed histopathologically. two ct images from all the patients in each group were used in the study. the ct images were classified using resnet50, nasnetmobile, and densenet121 architectures. results: the classification accuracies obtained with resnet50, densenet121, and nasnetmobile were 92.5%, 90.62%, and 87.5%, respectively. conclusion: deep learning is a useful diagnostic tool in diagnosing cervical lymphadenopathy. in the near future, many diseases could be diagnosed with deep learning models without radiologist interpretations and invasive examinations such as histopathological examinations. however, further studies with much larger case series are needed to develop accurate deep-learning models.
[ "introductionwe", "a diagnostic deep learning model", "contrast-enhanced ct images", "cervical lymphadenopathies", "these deep learning methods", "radiologist interpretations", "histopathological examinations.material methoda total", "400 patients", "who", "surgery", "lymphadenopathy", "the neck", "they", "four groups", "100 patients", "the granulomatous diseases group", "the lymphoma group", "the squamous cell tumor group", "the reactive hyperplasia group", "the diagnoses", "the patients", "two ct images", "all the patients", "each group", "the study", "the ct images", "resnet50", "nasnetmobile", "architecture", "input.resultsthe classification accuracies", "resnet50", "densenet121", "92.5%", "respectively.conclusiondeep learning", "a useful diagnostic tool", "cervical lymphadenopathy", "the near future", "many diseases", "deep learning models", "radiologist interpretations", "invasive examinations", "histopathological examinations", "further studies", "much larger case series", "accurate deep-learning models", "400", "between 2010 and 2022", "four", "100", "two", "resnet50", "resnet50", "92.5%", "90.62", "87.5" ]
Deep Learning Models for Diagnosis of Schizophrenia Using EEG Signals: Emerging Trends, Challenges, and Prospects
[ "Rakesh Ranjan", "Bikash Chandra Sahana", "Ashish Kumar Bhandari" ]
Schizophrenia (ScZ) is a chronic neuropsychiatric disorder characterized by disruptions in cognitive, perceptual, social, emotional, and behavioral functions. In the traditional approach, the diagnosis of ScZ primarily relies on the subject’s response and the psychiatrist’s experience, making it highly subjective, prejudiced, and time-consuming. In recent medical research, incorporating deep learning (DL) into the diagnostic process improves performance by reducing inter-observer variation and providing qualitative and quantitative support for clinical decisions. Compared with other modalities, such as magnetic resonance images (MRI) or computed tomography (CT) scans, electroencephalogram (EEG) signals give better insights into the underlying neural mechanisms and brain biomarkers of ScZ. Deep learning models show promising results, but the utilization of EEG signals as an effective biomarker for ScZ is still under research. Numerous deep learning models have recently been developed for automated ScZ diagnosis with EEG signals exclusively, yet a comprehensive assessment of these approaches still does not exist in the literature. To fill this gap, we comprehensively review the current advancements in deep learning-based schizophrenia diagnosis using EEG signals. This review is intended to provide systematic details of prominent components: deep learning models, ScZ EEG datasets, data preprocessing approaches, input data formulations for DL, chronological DL methodology advancement in ScZ diagnosis, and design trends of DL architecture. Finally, a few challenges in both clinical and technical aspects that create hindrances in achieving the full potential of DL models in EEG-based ScZ diagnosis are expounded along with future outlooks.
10.1007/s11831-023-10047-6
deep learning models for diagnosis of schizophrenia using eeg signals: emerging trends, challenges, and prospects
schizophrenia (scz) is a chronic neuropsychiatric disorder characterized by disruptions in cognitive, perceptual, social, emotional, and behavioral functions. in the traditional approach, the diagnosis of scz primarily relies on the subject’s response and the psychiatrist’s experience, making it highly subjective, prejudiced, and time-consuming. in recent medical research, incorporating deep learning (dl) into the diagnostic process improves performance by reducing inter-observer variation and providing qualitative and quantitative support for clinical decisions. compared with other modalities, such as magnetic resonance images (mri) or computed tomography (ct) scans, electroencephalogram (eeg) signals give better insights into the underlying neural mechanisms and brain biomarkers of scz. deep learning models show promising results, but the utilization of eeg signals as an effective biomarker for scz is still under research. numerous deep learning models have recently been developed for automated scz diagnosis with eeg signals exclusively, yet a comprehensive assessment of these approaches still does not exist in the literature. to fill this gap, we comprehensively review the current advancements in deep learning-based schizophrenia diagnosis using eeg signals. this review is intended to provide systematic details of prominent components: deep learning models, scz eeg datasets, data preprocessing approaches, input data formulations for dl, chronological dl methodology advancement in scz diagnosis, and design trends of dl architecture. finally, a few challenges in both clinical and technical aspects that create hindrances in achieving the full potential of dl models in eeg-based scz diagnosis are expounded along with future outlooks.
[ "schizophrenia", "scz", "a chronic neuropsychiatric disorder", "disruptions", "cognitive, perceptual, social, emotional, and behavioral functions", "the traditional approach", "the diagnosis", "scz", "the subject’s response", "the psychiatrist’s experience", "it", "recent medical research", "deep learning", "dl", "the diagnostic process", "performance", "inter-observer variation", "qualitative and quantitative support", "clinical decisions", "other modalities", "magnetic resonance images", "mri", "tomography (ct) scans", "electroencephalogram (eeg) signals", "better insights", "the underlying neural mechanisms", "brain biomarkers", "scz", "deep learning models", "promising results", "the utilization", "eeg signals", "an effective biomarker", "scz", "research", "numerous deep learning models", "automated scz diagnosis", "eeg signals", "a comprehensive assessment", "these approaches", "the literature", "this gap", "we", "the current advancements", "deep learning-based schizophrenia diagnosis", "eeg signals", "this review", "systematic details", "prominent components", "deep learning models", "scz eeg datasets", "data", "approaches", "input data formulations", "dl, chronological dl methodology advancement", "scz diagnosis", "design trends", "dl architecture", "few challenges", "both clinical and technical aspects", "that", "hindrances", "the full potential", "dl models", "eeg-based scz diagnosis", "future outlooks" ]
GMPP-NN: a deep learning architecture for graph molecular property prediction
[ "Outhman Abbassi", "Soumia Ziti", "Meryam Belhiah", "Souad Najoua Lagmiri", "Yassine Zaoui Seghroucheni" ]
The pharmacy industry is highly focused on drug discovery and development for the identification and optimization of potential drug candidates. One of the key aspects of this process is the prediction of various molecular properties that justify their potential effectiveness in treating specific diseases. Recently, graph neural networks have gained significant attention, primarily due to their strong suitability for predicting the complex relationships that exist between atoms and other molecular structures. GNNs require significant depth to capture global features and to allow the network to iteratively aggregate and propagate information across the entire graph structure. In this research study, we present a deep learning architecture known as the graph molecular property prediction neural network (GMPP-NN), which combines MPNN feature extraction with a multilayer perceptron classifier. The deep learning architecture was evaluated on four benchmark datasets, and its performance was compared to the SMILES transformer, fingerprint to vector, deeper graph convolutional networks, geometry-enhanced molecular representation, and atom-bond transformer-based message-passing neural network models. The results showed that the architecture outperformed the other models on the area under the receiver operating characteristic curve (ROC-AUC) metric. These findings offer an exciting opportunity to enhance and improve molecular property prediction in drug discovery and development.
10.1007/s42452-024-05944-9
gmpp-nn: a deep learning architecture for graph molecular property prediction
the pharmacy industry is highly focused on drug discovery and development for the identification and optimization of potential drug candidates. one of the key aspects of this process is the prediction of various molecular properties that justify their potential effectiveness in treating specific diseases. recently, graph neural networks have gained significant attention, primarily due to their strong suitability for predicting the complex relationships that exist between atoms and other molecular structures. gnns require significant depth to capture global features and to allow the network to iteratively aggregate and propagate information across the entire graph structure. in this research study, we present a deep learning architecture known as the graph molecular property prediction neural network (gmpp-nn), which combines mpnn feature extraction with a multilayer perceptron classifier. the deep learning architecture was evaluated on four benchmark datasets, and its performance was compared to the smiles transformer, fingerprint to vector, deeper graph convolutional networks, geometry-enhanced molecular representation, and atom-bond transformer-based message-passing neural network models. the results showed that the architecture outperformed the other models on the area under the receiver operating characteristic curve (roc-auc) metric. these findings offer an exciting opportunity to enhance and improve molecular property prediction in drug discovery and development.
[ "the pharmacy industry", "drug discovery", "development", "the identification", "optimization", "potential drug candidates", "the key aspects", "this process", "the prediction", "various molecular properties", "that", "their potential effectiveness", "specific diseases", "graph neural networks", "significant attention", "their strong suitability", "complex relationships", "that", "atoms", "other molecular structures", "gnns", "significant depth", "global features", "the network", "information", "the entire graph structure", "this research study", "we", "a deep learning architecture", "a graph molecular property prediction neural network", "which", "mpnn feature extraction", "a multilayer perceptron classifier", "the deep learning architecture", "four benchmark datasets", "its performance", "the smiles transformer", "vector", "deeper graph convolutional networks", "atom-bond transformer-based message-passing neural network", "the results", "the architecture", "the other models", "the receiver operating characteristic area", "the curve metric", "these findings", "an exciting opportunity", "molecular property prediction", "drug discovery", "development", "one", "four" ]
High-dimensional stochastic control models for newsvendor problems and deep learning resolution
[ "Jingtang Ma", "Shan Yang" ]
This paper studies continuous-time models for newsvendor problems with dynamic replenishment, financial hedging and Stackelberg competition. These factors are considered simultaneously and the high-dimensional stochastic control models are established. High-dimensional Hamilton-Jacobi-Bellman (HJB) equations are derived for the value functions. To circumvent the curse of dimensionality, a deep learning algorithm is proposed to solve the HJB equations. A projection is introduced in the algorithm to avoid the gradient explosion during the training phase. The deep learning algorithm is implemented for HJB equations derived from the newsvendor models with dimensions up to six. Numerical outcomes validate the algorithm’s accuracy and demonstrate that the high-dimensional stochastic control models can successfully mitigate the risk.
10.1007/s10479-024-05872-2
high-dimensional stochastic control models for newsvendor problems and deep learning resolution
this paper studies continuous-time models for newsvendor problems with dynamic replenishment, financial hedging and stackelberg competition. these factors are considered simultaneously and the high-dimensional stochastic control models are established. high-dimensional hamilton-jacobi-bellman (hjb) equations are derived for the value functions. to circumvent the curse of dimensionality, a deep learning algorithm is proposed to solve the hjb equations. a projection is introduced in the algorithm to avoid the gradient explosion during the training phase. the deep learning algorithm is implemented for hjb equations derived from the newsvendor models with dimensions up to six. numerical outcomes validate the algorithm’s accuracy and demonstrate that the high-dimensional stochastic control models can successfully mitigate the risk.
[ "this paper", "newsvendor problems", "dynamic replenishment", "financial hedging", "stackelberg competition", "these factors", "the high-dimensional stochastic control models", "hjb", "the value functions", "the curse", "dimensionality", "a deep learning algorithm", "the hjb equations", "a projection", "the algorithm", "the gradient explosion", "the training phase", "the deep learning algorithm", "hjb equations", "the newsvendor models", "dimensions", "numerical outcomes", "the algorithm’s accuracy", "the high-dimensional stochastic control models", "the risk", "hamilton-jacobi-bellman", "six" ]
Integrated deep learning approach for automatic coronary artery segmentation and classification on computed tomographic coronary angiography
[ "Chitra Devi Muthusamy", "Ramaswami Murugesh" ]
The field of coronary artery disease (CAD) has seen rapid development in coronary computed tomography angiography (CCTA). However, manual coronary artery tree segmentation and reconstruction take time and effort. Deep learning algorithms have been created to effectively analyze large amounts of data for medical image analysis. The primary goal of this research is to create an automated CAD diagnostic model and a deep learning tool for automatic coronary artery reconstruction using a large, single-center retrospective CCTA cohort. In this research, we propose an integrated deep learning-based intelligent system for locating human heart blood vessels within heart coronary CT angiography images using a multi-class ensemble classification mechanism. The modified DenseNet201 segments the cardiac blood vessels in the proposed work. Then, the low-level features are extracted using the ResNet-152 model. Finally, the improved deep residual shrinkage network (IDRSN) model classifies heart blood vessels into four distinct classes: normal, blocked, narrowed, and blood flow-reduced. The CCTA image dataset is used for the experimental analysis. The integrated deep learning-based blood vessel segmentation and classification system (modified DenseNet201-IDRSN) was developed using the Python tool, and the system’s efficiency was determined using different performance metrics. The experimental results demonstrate that the proposed system is effective and efficient compared with related studies. Compared to the U-Net segmentation model, the segmentation result in the proposed research is smoother and closest to the segmentation result of the human expert. The proposed integrated deep learning intelligent system improves the efficiency of disease diagnosis, reduces dependence on medical personnel, reduces manual interaction in diagnosis, and offers auxiliary strategies for subsequent medical diagnosis systems based on cardiac coronary angiography.
10.1007/s13721-024-00473-2
integrated deep learning approach for automatic coronary artery segmentation and classification on computed tomographic coronary angiography
the field of coronary artery disease (cad) has seen rapid development in coronary computed tomography angiography (ccta). however, manual coronary artery tree segmentation and reconstruction take time and effort. deep learning algorithms have been created to effectively analyze large amounts of data for medical image analysis. the primary goal of this research is to create an automated cad diagnostic model and a deep learning tool for automatic coronary artery reconstruction using a large, single-center retrospective ccta cohort. in this research, we propose an integrated deep learning-based intelligent system for locating human heart blood vessels within heart coronary ct angiography images using a multi-class ensemble classification mechanism. the modified densenet201 segments the cardiac blood vessels in the proposed work. then, the low-level features are extracted using the resnet-152 model. finally, the improved deep residual shrinkage network (idrsn) model classifies heart blood vessels into four distinct classes: normal, blocked, narrowed, and blood flow-reduced. the ccta image dataset is used for the experimental analysis. the integrated deep learning-based blood vessel segmentation and classification system (modified densenet201-idrsn) was developed using the python tool, and the system’s efficiency was determined using different performance metrics. the experimental results demonstrate that the proposed system is effective and efficient compared with related studies. compared to the u-net segmentation model, the segmentation result in the proposed research is smoother and closest to the segmentation result of the human expert. the proposed integrated deep learning intelligent system improves the efficiency of disease diagnosis, reduces dependence on medical personnel, reduces manual interaction in diagnosis, and offers auxiliary strategies for subsequent medical diagnosis systems based on cardiac coronary angiography.
[ "the field", "coronary artery disease", "cad", "a rapid development", "coronary computed tomography angiography", "ccta", "manual coronary artery tree segmentation", "reconstruction", "time", "effort", "deep learning algorithms", "large amounts", "data", "medical image analysis", "the primary goal", "this research", "an automated cad diagnostic model", "a deep learning tool", "automatic coronary artery reconstruction", "a large, single-center retrospective ccta cohort", "we", "an integrated deep learning-based intelligent system", "human heart blood vessel position", "heart coronary ct angiography images", "a multi-class ensemble classification mechanism", "this research", "the modified densenet201", "the cardiac blood vessels", "the proposed work", "the low-level features", "the resnet-152 model", "the improved deep residual shrinkage network", "four distinct classes", "the ccta image dataset", "the experiment analysis", "the integrated deep learning-based blood vessel segmentation and classification system", "modified densenet201-idrsn", "the python tool", "the system’s efficiency", "different performance metrics", "the experimental results", "this research demonstrate", "the proposed system", "related studies", "the u-net segmentation model", "the segmentation", "the proposed research", "the segmentation result", "the human expert", "the proposed integrated deep learning intelligent system", "the efficiency", "disease diagnosis", "dependence", "medical personnel", "manual interaction", "diagnosis", "auxiliary strategies", "subsequent medical diagnosis systems", "cardiac coronary angiography", "resnet-152", "four" ]
Deep learning with autoencoders and LSTM for ENSO forecasting
[ "Chibuike Chiedozie Ibebuchi", "Michael B. Richman" ]
El Niño Southern Oscillation (ENSO) is the prominent recurrent climatic pattern in the tropical Pacific Ocean, with global impacts on regional climates. This study utilizes deep learning to predict the Niño 3.4 index by encoding non-linear sea surface temperature patterns in the tropical Pacific using an autoencoder neural network. The resulting encoded patterns identify crucial centers of action in the Pacific that serve as predictors of the ENSO mode. These patterns are utilized as predictors for forecasting the Niño 3.4 index with a lead time of at least 6 months using the Long Short-Term Memory (LSTM) deep learning model. The analysis uncovers multiple non-linear dipole patterns in the tropical Pacific, with anomalies that are both regionalized and latitudinally oriented and that should support a single inter-tropical convergence zone for modeling efforts. Leveraging these encoded patterns as predictors, the LSTM, trained on monthly data from 1950 to 2007 and tested from 2008 to 2022, shows fidelity in predicting the Niño 3.4 index. The encoded patterns captured the annual cycle of ENSO, with a 0.94 correlation between the actual and predicted Niño 3.4 index for lag 12 and 0.91 for lags 6 and 18. Additionally, the 6-month lag predictions excel in detecting extreme ENSO events, achieving an 85% hit rate, outperforming the 70% hit rate at lag 12 and the 55% hit rate at lag 18. The prediction accuracy peaks from November to March, with correlations ranging from 0.94 to 0.96. The average correlations in the boreal spring were as large as 0.84, indicating that the method has the capability to decrease the spring predictability barrier.
10.1007/s00382-024-07180-8
deep learning with autoencoders and lstm for enso forecasting
el niño southern oscillation (enso) is the prominent recurrent climatic pattern in the tropical pacific ocean, with global impacts on regional climates. this study utilizes deep learning to predict the niño 3.4 index by encoding non-linear sea surface temperature patterns in the tropical pacific using an autoencoder neural network. the resulting encoded patterns identify crucial centers of action in the pacific that serve as predictors of the enso mode. these patterns are utilized as predictors for forecasting the niño 3.4 index with a lead time of at least 6 months using the long short-term memory (lstm) deep learning model. the analysis uncovers multiple non-linear dipole patterns in the tropical pacific, with anomalies that are both regionalized and latitudinally oriented and that should support a single inter-tropical convergence zone for modeling efforts. leveraging these encoded patterns as predictors, the lstm, trained on monthly data from 1950 to 2007 and tested from 2008 to 2022, shows fidelity in predicting the niño 3.4 index. the encoded patterns captured the annual cycle of enso, with a 0.94 correlation between the actual and predicted niño 3.4 index for lag 12 and 0.91 for lags 6 and 18. additionally, the 6-month lag predictions excel in detecting extreme enso events, achieving an 85% hit rate, outperforming the 70% hit rate at lag 12 and the 55% hit rate at lag 18. the prediction accuracy peaks from november to march, with correlations ranging from 0.94 to 0.96. the average correlations in the boreal spring were as large as 0.84, indicating that the method has the capability to decrease the spring predictability barrier.
[ "el niño southern oscillation", "enso", "the prominent recurrent climatic pattern", "the tropical pacific ocean", "global impacts", "regional climates", "this study", "deep learning", "the niño 3.4 index", "non-linear sea surface temperature patterns", "the tropical pacific", "an autoencoder neural network", "the resulting encoded patterns", "crucial centers", "action", "the pacific", "that", "predictors", "the enso mode", "these patterns", "predictors", "the niño 3.4 index", "a lead time", "at least 6 months", "the long short-term memory", "lstm", "deep learning model", "the analysis uncovers", "non-linear dipole patterns", "the tropical pacific", "anomalies", "that", "that", "a single inter-tropical convergence zone", "modeling efforts", "these encoded patterns", "predictors", "monthly data", "fidelity", "the niño 3.4 index", "the encoded patterns", "the annual cycle", "enso", "a 0.94 correlation", "3.4 index", "lag", "lags", "the 6-month lag predictions", "extreme enso events", "an 85% hit rate", "the 70% hit rate", "lag", "hit rate", "lag", "the prediction accuracy peaks", "november", "march", "correlations", "the average correlations", "the boreal spring", "the method", "the capability", "the spring predictability barrier", "el niño southern oscillation", "pacific", "3.4", "non-linear", "3.4", "at least 6 months", "monthly", "1950", "2007", "2008", "2022", "3.4", "annual", "0.94", "3.4", "12", "0.91", "6", "6-month", "85%", "70%", "55%", "november to march", "0.94", "0.96", "as large as 0.84" ]
Survey on deep learning in multimodal medical imaging for cancer detection
[ "Yan Tian", "Zhaocheng Xu", "Yujun Ma", "Weiping Ding", "Ruili Wang", "Zhihong Gao", "Guohua Cheng", "Linyang He", "Xuran Zhao" ]
The task of multimodal cancer detection is to determine the locations and categories of lesions by using different imaging techniques, which is one of the key research methods for cancer diagnosis. Recently, deep learning-based object detection has made significant developments due to its strength in semantic feature extraction and nonlinear function fitting. However, multimodal cancer detection remains challenging due to morphological differences in lesions, interpatient variability, difficulty in annotation, and imaging artifacts. In this survey, we mainly investigate over 150 papers in recent years with respect to multimodal cancer detection using deep learning, with a focus on datasets and solutions to various challenges such as data annotation, variance between classes, small-scale lesions, and occlusion. We also provide an overview of the advantages and drawbacks of each approach. Finally, we discuss the current scope of work and provide directions for the future development of multimodal cancer detection.
10.1007/s00521-023-09214-4
survey on deep learning in multimodal medical imaging for cancer detection
the task of multimodal cancer detection is to determine the locations and categories of lesions by using different imaging techniques, which is one of the key research methods for cancer diagnosis. recently, deep learning-based object detection has made significant developments due to its strength in semantic feature extraction and nonlinear function fitting. however, multimodal cancer detection remains challenging due to morphological differences in lesions, interpatient variability, difficulty in annotation, and imaging artifacts. in this survey, we mainly investigate over 150 papers in recent years with respect to multimodal cancer detection using deep learning, with a focus on datasets and solutions to various challenges such as data annotation, variance between classes, small-scale lesions, and occlusion. we also provide an overview of the advantages and drawbacks of each approach. finally, we discuss the current scope of work and provide directions for the future development of multimodal cancer detection.
[ "the task", "multimodal cancer detection", "the locations", "categories", "lesions", "different imaging techniques", "which", "the key research methods", "cancer diagnosis", "deep learning-based object detection", "significant developments", "its strength", "semantic feature extraction and nonlinear function", "multimodal cancer detection", "morphological differences", "lesions", "interpatient variability", "difficulty", "annotation", "imaging artifacts", "this survey", "we", "over 150 papers", "recent years", "respect", "multimodal cancer detection", "deep learning", "a focus", "datasets", "solutions", "various challenges", "data annotation", "variance", "classes", "small-scale lesions", "occlusion", "we", "an overview", "the advantages", "drawbacks", "each approach", "we", "the current scope", "work", "directions", "the future development", "multimodal cancer detection", "150", "recent years" ]
A systematic review and meta-analysis of artificial neural network, machine learning, deep learning, and ensemble learning approaches in field of geotechnical engineering
[ "Elaheh Yaghoubi", "Elnaz Yaghoubi", "Ahmed Khamees", "Amir Hossein Vakili" ]
Artificial neural networks (ANN), machine learning (ML), deep learning (DL), and ensemble learning (EL) are four outstanding approaches that enable algorithms to extract information from data and make predictions or decisions autonomously without the need for direct instructions. ANN, ML, DL, and EL models have found extensive application in predicting geotechnical and geoenvironmental parameters. This research aims to provide a comprehensive assessment of the applications of ANN, ML, DL, and EL in addressing forecasting within the field related to geotechnical engineering, including soil mechanics, foundation engineering, rock mechanics, environmental geotechnics, and transportation geotechnics. Previous studies have not collectively examined all four algorithms (ANN, ML, DL, and EL) and have not explored their advantages and disadvantages in the field of geotechnical engineering. This research aims to categorize and address this gap in the existing literature systematically. An extensive dataset of relevant research studies was gathered from the Web of Science and subjected to an analysis based on their approach, primary focus and objectives, year of publication, geographical distribution, and results. Additionally, this study included a co-occurrence keyword analysis covering ANN, ML, DL, and EL techniques, systematic reviews, geotechnical engineering, and review articles; the data, sourced from the Scopus database through the Elsevier journal, were then visualized using VOSviewer for further examination. The results demonstrated that ANN is widely utilized despite the proven potential of ML, DL, and EL methods in geotechnical engineering, due to the need for real-world laboratory data that civil and geotechnical engineers often encounter. However, when it comes to predicting behavior in geotechnical scenarios, EL techniques outperform all three other methods. Additionally, the techniques discussed here assist geotechnical engineering in understanding the benefits and disadvantages of ANN, ML, DL, and EL within the geotechnics area. This understanding enables geotechnical practitioners to select the most suitable techniques for creating a reliable and resilient ecosystem.
10.1007/s00521-024-09893-7
a systematic review and meta-analysis of artificial neural network, machine learning, deep learning, and ensemble learning approaches in field of geotechnical engineering
artificial neural networks (ann), machine learning (ml), deep learning (dl), and ensemble learning (el) are four outstanding approaches that enable algorithms to extract information from data and make predictions or decisions autonomously without the need for direct instructions. ann, ml, dl, and el models have found extensive application in predicting geotechnical and geoenvironmental parameters. this research aims to provide a comprehensive assessment of the applications of ann, ml, dl, and el in addressing forecasting within the field related to geotechnical engineering, including soil mechanics, foundation engineering, rock mechanics, environmental geotechnics, and transportation geotechnics. previous studies have not collectively examined all four algorithms (ann, ml, dl, and el) and have not explored their advantages and disadvantages in the field of geotechnical engineering. this research aims to categorize and address this gap in the existing literature systematically. an extensive dataset of relevant research studies was gathered from the web of science and subjected to an analysis based on their approach, primary focus and objectives, year of publication, geographical distribution, and results. additionally, this study included a co-occurrence keyword analysis covering ann, ml, dl, and el techniques, systematic reviews, geotechnical engineering, and review articles; the data, sourced from the scopus database through the elsevier journal, were then visualized using vosviewer for further examination. the results demonstrated that ann is widely utilized despite the proven potential of ml, dl, and el methods in geotechnical engineering, due to the need for real-world laboratory data that civil and geotechnical engineers often encounter. however, when it comes to predicting behavior in geotechnical scenarios, el techniques outperform all three other methods. additionally, the techniques discussed here assist geotechnical engineering in understanding the benefits and disadvantages of ann, ml, dl, and el within the geotechnics area. this understanding enables geotechnical practitioners to select the most suitable techniques for creating a reliable and resilient ecosystem.
[ "artificial neural networks", "ann", "machine learning", "ml", "deep learning", "dl", "ensemble learning", "el", "four outstanding approaches", "that", "algorithms", "information", "data", "predictions", "decisions", "the need", "direct instructions", "ann", "el models", "extensive application", "geotechnical and geoenvironmental parameters", "this research", "a comprehensive assessment", "the applications", "ann", "ml", "dl", "el", "forecasting", "the field", "geotechnical engineering", "soil mechanics", "foundation engineering", "rock mechanics", "environmental geotechnics", "transportation geotechnics", "previous studies", "all four algorithms", "ann", "ml", "dl", "el", "their advantages", "disadvantages", "the field", "geotechnical engineering", "this research", "this gap", "the existing literature", "an extensive dataset", "relevant research studies", "the web", "science", "an analysis", "their approach", "primary focus", "objectives", "year", "publication", "geographical distribution", "results", "this study", "a co-occurrence keyword analysis", "that", "ann", "ml", "dl", "el techniques", "systematic reviews", "geotechnical engineering", "articles", "the data", "the scopus database", "the elsevier journal", "vos viewer", "further examination", "the results", "ann", "the proven potential", "ml", "dl", "el methods", "geotechnical engineering", "the need", "real-world laboratory data", "that", "civil and geotechnical engineers", "it", "behavior", "geotechnical scenarios", "el techniques", "all three other methods", "the techniques", "geotechnical engineering", "the benefits", "disadvantages", "ann", "ml", "dl", "el", "the geo techniques area", "this understanding", "geotechnical practitioners", "the most suitable techniques", "a certainty and resilient ecosystem", "four", "ann", "el models", "four", "el techniques", "el techniques", "three" ]
HCR-Net: a deep learning based script independent handwritten character recognition network
[ "Vinod Kumar Chauhan", "Sukhdeep Singh", "Anuj Sharma" ]
Handwritten character recognition (HCR) remains a challenging pattern recognition problem despite decades of research, and the field lacks research on script independent recognition techniques. This is mainly because of similar character structures, different handwriting styles, diverse scripts, handcrafted feature extraction techniques, unavailability of data and code, and the development of script-specific deep learning techniques. To address these limitations, we have proposed a script independent deep learning network for HCR research, called HCR-Net, that sets a new research direction for the field. HCR-Net is based on a novel transfer learning approach for HCR, which partly utilizes feature extraction layers of a pre-trained network. Due to transfer learning and image augmentation, HCR-Net provides faster, computationally efficient training, better performance and generalization, and can work with small datasets. HCR-Net was extensively evaluated on 40 publicly available datasets of Bangla, Punjabi, Hindi, English, Swedish, Urdu, Farsi, Tibetan, Kannada, Malayalam, Telugu, Marathi, Nepali and Arabic languages, and established 26 new benchmark results while performing close to the best results in the remaining cases. HCR-Net showed performance improvements of up to 11% over the existing results and achieved a fast convergence rate, reaching up to 99% of final performance in the very first epoch. HCR-Net significantly outperformed the state-of-the-art transfer learning techniques and also reduced the number of trainable parameters by 34% compared with the corresponding pre-trained network. To facilitate reproducibility and further advancements in HCR research, the complete code is publicly released at https://github.com/jmdvinodjmd/HCR-Net.
10.1007/s11042-024-18655-5
hcr-net: a deep learning based script independent handwritten character recognition network
handwritten character recognition (hcr) remains a challenging pattern recognition problem despite decades of research, and the field lacks research on script independent recognition techniques. this is mainly because of similar character structures, different handwriting styles, diverse scripts, handcrafted feature extraction techniques, unavailability of data and code, and the development of script-specific deep learning techniques. to address these limitations, we have proposed a script independent deep learning network for hcr research, called hcr-net, that sets a new research direction for the field. hcr-net is based on a novel transfer learning approach for hcr, which partly utilizes feature extraction layers of a pre-trained network. due to transfer learning and image augmentation, hcr-net provides faster, computationally efficient training, better performance and generalization, and can work with small datasets. hcr-net was extensively evaluated on 40 publicly available datasets of bangla, punjabi, hindi, english, swedish, urdu, farsi, tibetan, kannada, malayalam, telugu, marathi, nepali and arabic languages, and established 26 new benchmark results while performing close to the best results in the remaining cases. hcr-net showed performance improvements of up to 11% over the existing results and achieved a fast convergence rate, reaching up to 99% of final performance in the very first epoch. hcr-net significantly outperformed the state-of-the-art transfer learning techniques and also reduced the number of trainable parameters by 34% compared with the corresponding pre-trained network. to facilitate reproducibility and further advancements in hcr research, the complete code is publicly released at https://github.com/jmdvinodjmd/hcr-net.
[ "handwritten character recognition", "hcr", "a challenging pattern recognition problem", "decades", "research", "research", "script independent recognition techniques", "this", "similar character structures", "different handwriting styles", "diverse scripts", "handcrafted feature extraction techniques", "unavailability", "data", "code", "the development", "script-specific deep learning techniques", "these limitations", "we", "a script independent deep learning network", "hcr research", "hcr", "-", "net", "that", "a new research direction", "the field", "hcr", "net", "a novel transfer learning approach", "hcr", "which", "feature extraction layers", "a pre-trained network", "learning", "image augmentation", "hcr-net", "faster and computationally efficient training", "better performance", "generalizations", "small datasets", "hcr", "net", "40 publicly available datasets", "bangla", "punjabi", "hindi", "english", "urdu", "farsi", "tibetan", "kannada", "malayalam", "telugu", "marathi", "nepali", "arabic languages", "26 new benchmark results", "the best results", "the rest cases", "net", "performance improvements", "the existing results", "a fast convergence rate", "up to 99%", "final performance", "the very first epoch", "hcr", "net", "the-art", "techniques", "the number", "trainable parameters", "34%", "the corresponding pre-trained network", "reproducibility", "further advancements", "hcr research", "the complete code", "https://github.com/jmdvinodjmd/hcr-net", "decades", "40", "english", "swedish", "tibetan", "kannada", "malayalam", "arabic", "26", "11%", "up to 99%", "first", "34%" ]
A review of deep learning approaches in clinical and healthcare systems based on medical image analysis
[ "Hadeer A. Helaly", "Mahmoud Badawy", "Amira Y. Haikal" ]
Healthcare is a high-priority sector where people expect the highest levels of care and service, regardless of cost. That makes it distinct from other sectors. Due to the promising results of deep learning in other practical applications, many deep learning algorithms have been proposed for use in healthcare and to solve traditional artificial intelligence issues. The main objective of this study is to review and analyze current deep learning algorithms in healthcare systems. In addition, it highlights the contributions and limitations of recent research papers. It combines deep learning methods with the interpretability of human healthcare by providing insights into deep learning applications in healthcare solutions. It first provides an overview of several deep learning models and their most recent developments. It then briefly examines how these models are applied in several medical practices. Finally, it summarizes current trends and issues in the design and training of deep neural networks, along with future directions in this field.
10.1007/s11042-023-16605-1
a review of deep learning approaches in clinical and healthcare systems based on medical image analysis
healthcare is a high-priority sector where people expect the highest levels of care and service, regardless of cost. that makes it distinct from other sectors. due to the promising results of deep learning in other practical applications, many deep learning algorithms have been proposed for use in healthcare and to solve traditional artificial intelligence issues. the main objective of this study is to review and analyze current deep learning algorithms in healthcare systems. in addition, it highlights the contributions and limitations of recent research papers. it combines deep learning methods with the interpretability of human healthcare by providing insights into deep learning applications in healthcare solutions. it first provides an overview of several deep learning models and their most recent developments. it then briefly examines how these models are applied in several medical practices. finally, it summarizes current trends and issues in the design and training of deep neural networks, along with future directions in this field.
[ "healthcare", "a high-priority sector", "people", "the highest levels", "care", "service", "cost", "that", "it", "other sectors", "the promising results", "deep learning", "other practical applications", "many deep learning algorithms", "use", "healthcare", "traditional artificial intelligence issues", "the main objective", "this study", "current deep learning algorithms", "healthcare systems", "addition", "it", "the contributions", "limitations", "recent research papers", "it", "deep learning methods", "the interpretability", "human healthcare", "insights", "deep learning applications", "healthcare solutions", "it", "an overview", "several deep learning models", "their most recent developments", "it", "these models", "several medical practices", "it", "current trends", "issues", "the design", "training", "deep neural networks", "the future direction", "this field", "healthcare", "first" ]
Automated multifocus pollen detection using deep learning
[ "Ramón Gallardo", "Carlos J. García-Orellana", "Horacio M. González-Velasco", "Antonio García-Manso", "Rafael Tormo-Molina", "Miguel Macías-Macías", "Eugenio Abengózar" ]
Pollen-induced allergies affect a significant part of the population in developed countries. Current palynological analysis in Europe is a slow and laborious process which provides pollen information on a weekly-cycle basis. In this paper, we describe a system that allows the pollen grains present in standard glass microscope slides to be located and classified in a single step. Besides, processing the samples along the z-axis allows us to increase the probability of detecting grains compared to solutions based on one image per sample. Our system has been trained to recognise 11 pollen types, achieving a 97.6% success rate in locating grains, of which 96.3% are also correctly identified (0.956 macro-F1 score), with 2.4% of grains lost. Our results indicate that deep learning provides a robust framework to address automated identification of various pollen types, facilitating their daily measurement.
10.1007/s11042-024-18450-2
automated multifocus pollen detection using deep learning
pollen-induced allergies affect a significant part of the population in developed countries. current palynological analysis in europe is a slow and laborious process which provides pollen information on a weekly-cycle basis. in this paper, we describe a system that allows the pollen grains present in standard glass microscope slides to be located and classified in a single step. besides, processing the samples along the z-axis allows us to increase the probability of detecting grains compared to solutions based on one image per sample. our system has been trained to recognise 11 pollen types, achieving a 97.6% success rate in locating grains, of which 96.3% are also correctly identified (0.956 macro-f1 score), with 2.4% of grains lost. our results indicate that deep learning provides a robust framework to address automated identification of various pollen types, facilitating their daily measurement.
[ "pollen-induced allergies", "a significant part", "the population", "developed countries", "current palynological analysis", "europe", "a slow and laborious process", "which", "pollen information", "a weekly-cycle basis", "this paper", "we", "a system", "that", "a single step", "the pollen grains", "standard glass microscope", "the samples", "the z-axis", "us", "the probability", "grains", "solutions", "one image", "sample", "our system", "11 pollen types", "97.6 % success rate locating grains", "which", "96.3 %", "0.956 macro", "f1 score", "a 2.4 % grains", "our results", "deep learning", "a robust framework", "automated identification", "various pollen types", "their daily measurement", "europe", "weekly", "one", "11", "97.6 %", "96.3 %", "0.956", "2.4 %", "daily" ]
Cataract-1K Dataset for Deep-Learning-Assisted Analysis of Cataract Surgery Videos
[ "Negin Ghamsarian", "Yosuf El-Shabrawi", "Sahar Nasirihaghighi", "Doris Putzgruber-Adamitsch", "Martin Zinkernagel", "Sebastian Wolf", "Klaus Schoeffmann", "Raphael Sznitman" ]
In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons’ skills, operation room management, and overall surgical outcomes. However, the progression of deep-learning-powered surgical technologies is profoundly reliant on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition stand as pivotal pillars within the realm of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset that addresses diverse requisites for constructing computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. We validate the quality of annotations by benchmarking the performance of several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. Besides, we initiate the research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance in cataract surgery videos. The dataset and annotations are publicly available in Synapse.
10.1038/s41597-024-03193-4
cataract-1k dataset for deep-learning-assisted analysis of cataract surgery videos
in recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons’ skills, operation room management, and overall surgical outcomes. however, the progression of deep-learning-powered surgical technologies is profoundly reliant on large-scale datasets and annotations. in particular, surgical scene understanding and phase recognition stand as pivotal pillars within the realm of computer-assisted surgery and post-operative assessment of cataract surgery videos. in this context, we present the largest cataract surgery video dataset that addresses diverse requisites for constructing computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. we validate the quality of annotations by benchmarking the performance of several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. besides, we initiate the research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance in cataract surgery videos. the dataset and annotations are publicly available in synapse.
[ "recent years", "the landscape", "computer-assisted interventions", "post-operative surgical video analysis", "deep-learning techniques", "significant advancements", "surgeons’ skills", "operation room management", "overall surgical outcomes", "the progression", "deep-learning-powered surgical technologies", "large-scale datasets", "annotations", "surgical scene understanding", "phase recognition", "pivotal pillars", "the realm", "computer-assisted surgery", "post-operative assessment", "cataract surgery videos", "this context", "we", "the largest cataract surgery video dataset", "that", "diverse requisites", "computerized surgical workflow analysis", "post-operative irregularities", "cataract surgery", "we", "the quality", "annotations", "the performance", "the-art", "phase recognition", "surgical scene segmentation", "we", "the research", "domain adaptation", "instrument segmentation", "cataract surgery", "cross-domain instrument segmentation performance", "cataract surgery videos", "the dataset", "annotations", "synapse", "recent years" ]
A systematic review and meta-analysis of artificial neural network, machine learning, deep learning, and ensemble learning approaches in field of geotechnical engineering
[ "Elaheh Yaghoubi", "Elnaz Yaghoubi", "Ahmed Khamees", "Amir Hossein Vakili" ]
Artificial neural networks (ANN), machine learning (ML), deep learning (DL), and ensemble learning (EL) are four outstanding approaches that enable algorithms to extract information from data and make predictions or decisions autonomously, without the need for direct instructions. ANN, ML, DL, and EL models have found extensive application in predicting geotechnical and geoenvironmental parameters. This research aims to provide a comprehensive assessment of the applications of ANN, ML, DL, and EL to forecasting within fields related to geotechnical engineering, including soil mechanics, foundation engineering, rock mechanics, environmental geotechnics, and transportation geotechnics. Previous studies have not collectively examined all four algorithms (ANN, ML, DL, and EL) and have not explored their advantages and disadvantages in the field of geotechnical engineering. This research systematically categorizes and addresses this gap in the existing literature. An extensive dataset of relevant research studies was gathered from the Web of Science and analyzed with respect to approach, primary focus and objectives, year of publication, geographical distribution, and results. Additionally, this study included a co-occurrence keyword analysis covering ANN, ML, DL, and EL techniques, systematic reviews, geotechnical engineering, and review articles; the data, sourced from the Scopus database through Elsevier, were then visualized using VOS Viewer for further examination. The results demonstrated that ANN is still widely utilized, despite the proven potential of ML, DL, and EL methods in geotechnical engineering, owing to the real-world laboratory data that civil and geotechnical engineers often encounter. However, when it comes to predicting behavior in geotechnical scenarios, EL techniques outperform the other three methods. Additionally, the techniques discussed here help geotechnical engineers understand the benefits and disadvantages of ANN, ML, DL, and EL within the geotechnics area. This understanding enables geotechnical practitioners to select the most suitable techniques for creating a reliable and resilient ecosystem.
10.1007/s00521-024-09893-7
a systematic review and meta-analysis of artificial neural network, machine learning, deep learning, and ensemble learning approaches in field of geotechnical engineering
artificial neural networks (ann), machine learning (ml), deep learning (dl), and ensemble learning (el) are four outstanding approaches that enable algorithms to extract information from data and make predictions or decisions autonomously, without the need for direct instructions. ann, ml, dl, and el models have found extensive application in predicting geotechnical and geoenvironmental parameters. this research aims to provide a comprehensive assessment of the applications of ann, ml, dl, and el to forecasting within fields related to geotechnical engineering, including soil mechanics, foundation engineering, rock mechanics, environmental geotechnics, and transportation geotechnics. previous studies have not collectively examined all four algorithms (ann, ml, dl, and el) and have not explored their advantages and disadvantages in the field of geotechnical engineering. this research systematically categorizes and addresses this gap in the existing literature. an extensive dataset of relevant research studies was gathered from the web of science and analyzed with respect to approach, primary focus and objectives, year of publication, geographical distribution, and results. additionally, this study included a co-occurrence keyword analysis covering ann, ml, dl, and el techniques, systematic reviews, geotechnical engineering, and review articles; the data, sourced from the scopus database through elsevier, were then visualized using vos viewer for further examination. the results demonstrated that ann is still widely utilized, despite the proven potential of ml, dl, and el methods in geotechnical engineering, owing to the real-world laboratory data that civil and geotechnical engineers often encounter. however, when it comes to predicting behavior in geotechnical scenarios, el techniques outperform the other three methods. additionally, the techniques discussed here help geotechnical engineers understand the benefits and disadvantages of ann, ml, dl, and el within the geotechnics area. this understanding enables geotechnical practitioners to select the most suitable techniques for creating a reliable and resilient ecosystem.
[ "artificial neural networks", "ann", "machine learning", "ml", "deep learning", "dl", "ensemble learning", "el", "four outstanding approaches", "that", "algorithms", "information", "data", "predictions", "decisions", "the need", "direct instructions", "ann", "el models", "extensive application", "geotechnical and geoenvironmental parameters", "this research", "a comprehensive assessment", "the applications", "ann", "ml", "dl", "el", "forecasting", "the field", "geotechnical engineering", "soil mechanics", "foundation engineering", "rock mechanics", "environmental geotechnics", "transportation geotechnics", "previous studies", "all four algorithms", "ann", "ml", "dl", "el", "their advantages", "disadvantages", "the field", "geotechnical engineering", "this research", "this gap", "the existing literature", "an extensive dataset", "relevant research studies", "the web", "science", "an analysis", "their approach", "primary focus", "objectives", "year", "publication", "geographical distribution", "results", "this study", "a co-occurrence keyword analysis", "that", "ann", "ml", "dl", "el techniques", "systematic reviews", "geotechnical engineering", "articles", "the data", "the scopus database", "the elsevier journal", "vos viewer", "further examination", "the results", "ann", "the proven potential", "ml", "dl", "el methods", "geotechnical engineering", "the need", "real-world laboratory data", "that", "civil and geotechnical engineers", "it", "behavior", "geotechnical scenarios", "el techniques", "all three other methods", "the techniques", "geotechnical engineering", "the benefits", "disadvantages", "ann", "ml", "dl", "el", "the geo techniques area", "this understanding", "geotechnical practitioners", "the most suitable techniques", "a certainty and resilient ecosystem", "four", "ann", "el models", "four", "el techniques", "el techniques", "three" ]
Integrated deep learning approach for automatic coronary artery segmentation and classification on computed tomographic coronary angiography
[ "Chitra Devi Muthusamy", "Ramaswami Murugesh" ]
The field of coronary artery disease (CAD) has seen rapid development in coronary computed tomography angiography (CCTA). However, manual coronary artery tree segmentation and reconstruction take time and effort. Deep learning algorithms have been developed to analyze large amounts of data effectively for medical image analysis. The primary goal of this research is to create an automated CAD diagnostic model and a deep learning tool for automatic coronary artery reconstruction using a large, single-center retrospective CCTA cohort. In this research, we propose an integrated deep learning-based intelligent system that locates heart blood vessels within coronary CT angiography images using a multi-class ensemble classification mechanism. In the proposed work, a modified DenseNet201 segments the cardiac blood vessels. Then, low-level features are extracted using the ResNet-152 model. Finally, the improved deep residual shrinkage network (IDRSN) model classifies heart blood vessels into four distinct classes: normal, block, narrow, and blood flow-reduced. The CCTA image dataset is used for the experimental analysis. The integrated deep learning-based blood vessel segmentation and classification system (Modified DenseNet201-IDRSN) was developed using Python, and the system’s efficiency was determined using different performance metrics. The experimental results of this research demonstrate that the proposed system is effective and efficient compared with related studies. Compared to the U-Net segmentation model, the segmentation result in the proposed research is smoother and closer to the segmentation result of the human expert. The proposed integrated deep learning intelligent system improves the efficiency of disease diagnosis, reduces dependence on medical personnel, reduces manual interaction in diagnosis, and offers auxiliary strategies for subsequent medical diagnosis systems based on cardiac coronary angiography.
10.1007/s13721-024-00473-2
integrated deep learning approach for automatic coronary artery segmentation and classification on computed tomographic coronary angiography
the field of coronary artery disease (cad) has seen rapid development in coronary computed tomography angiography (ccta). however, manual coronary artery tree segmentation and reconstruction take time and effort. deep learning algorithms have been developed to analyze large amounts of data effectively for medical image analysis. the primary goal of this research is to create an automated cad diagnostic model and a deep learning tool for automatic coronary artery reconstruction using a large, single-center retrospective ccta cohort. in this research, we propose an integrated deep learning-based intelligent system that locates heart blood vessels within coronary ct angiography images using a multi-class ensemble classification mechanism. in the proposed work, a modified densenet201 segments the cardiac blood vessels. then, low-level features are extracted using the resnet-152 model. finally, the improved deep residual shrinkage network (idrsn) model classifies heart blood vessels into four distinct classes: normal, block, narrow, and blood flow-reduced. the ccta image dataset is used for the experimental analysis. the integrated deep learning-based blood vessel segmentation and classification system (modified densenet201-idrsn) was developed using python, and the system’s efficiency was determined using different performance metrics. the experimental results of this research demonstrate that the proposed system is effective and efficient compared with related studies. compared to the u-net segmentation model, the segmentation result in the proposed research is smoother and closer to the segmentation result of the human expert. the proposed integrated deep learning intelligent system improves the efficiency of disease diagnosis, reduces dependence on medical personnel, reduces manual interaction in diagnosis, and offers auxiliary strategies for subsequent medical diagnosis systems based on cardiac coronary angiography.
[ "the field", "coronary artery disease", "cad", "a rapid development", "coronary computed tomography angiography", "ccta", "manual coronary artery tree segmentation", "reconstruction", "time", "effort", "deep learning algorithms", "large amounts", "data", "medical image analysis", "the primary goal", "this research", "an automated cad diagnostic model", "a deep learning tool", "automatic coronary artery reconstruction", "a large, single-center retrospective ccta cohort", "we", "an integrated deep learning-based intelligent system", "human heart blood vessel position", "heart coronary ct angiography images", "a multi-class ensemble classification mechanism", "this research", "the modified densenet201", "the cardiac blood vessels", "the proposed work", "the low-level features", "the resnet-152 model", "the improved deep residual shrinkage network", "four distinct classes", "the ccta image dataset", "the experiment analysis", "the integrated deep learning-based blood vessel segmentation and classification system", "modified densenet201-idrsn", "the python tool", "the system’s efficiency", "different performance metrics", "the experimental results", "this research demonstrate", "the proposed system", "related studies", "the u-net segmentation model", "the segmentation", "the proposed research", "the segmentation result", "the human expert", "the proposed integrated deep learning intelligent system", "the efficiency", "disease diagnosis", "dependence", "medical personnel", "manual interaction", "diagnosis", "auxiliary strategies", "subsequent medical diagnosis systems", "cardiac coronary angiography", "resnet-152", "four" ]
Can deep learning replace histopathological examinations in the differential diagnosis of cervical lymphadenopathy?
[ "Sermin Can", "Ömer Türk", "Muhammed Ayral", "Günay Kozan", "Hamza Arı", "Mehmet Akdağ", "Müzeyyen Yıldırım Baylan" ]
Introduction: We aimed to develop a diagnostic deep learning model using contrast-enhanced CT images and to investigate whether cervical lymphadenopathies can be diagnosed with these deep learning methods, without radiologist interpretations and histopathological examinations. Material and method: A total of 400 patients who underwent surgery for lymphadenopathy in the neck between 2010 and 2022 were retrospectively analyzed. They were examined in four groups of 100 patients: the granulomatous diseases group, the lymphoma group, the squamous cell tumor group, and the reactive hyperplasia group. The diagnoses of the patients were confirmed histopathologically. Two CT images from each patient in every group were used in the study. The CT images were classified using the ResNet50, NASNetMobile, and DenseNet121 architectures. Results: The classification accuracies obtained with ResNet50, DenseNet121, and NASNetMobile were 92.5%, 90.62%, and 87.5%, respectively. Conclusion: Deep learning is a useful diagnostic tool for diagnosing cervical lymphadenopathy. In the near future, many diseases could be diagnosed with deep learning models, without radiologist interpretations and invasive examinations such as histopathological examinations. However, further studies with much larger case series are needed to develop accurate deep-learning models.
10.1007/s00405-023-08181-9
can deep learning replace histopathological examinations in the differential diagnosis of cervical lymphadenopathy?
introduction: we aimed to develop a diagnostic deep learning model using contrast-enhanced ct images and to investigate whether cervical lymphadenopathies can be diagnosed with these deep learning methods, without radiologist interpretations and histopathological examinations. material and method: a total of 400 patients who underwent surgery for lymphadenopathy in the neck between 2010 and 2022 were retrospectively analyzed. they were examined in four groups of 100 patients: the granulomatous diseases group, the lymphoma group, the squamous cell tumor group, and the reactive hyperplasia group. the diagnoses of the patients were confirmed histopathologically. two ct images from each patient in every group were used in the study. the ct images were classified using the resnet50, nasnetmobile, and densenet121 architectures. results: the classification accuracies obtained with resnet50, densenet121, and nasnetmobile were 92.5%, 90.62%, and 87.5%, respectively. conclusion: deep learning is a useful diagnostic tool for diagnosing cervical lymphadenopathy. in the near future, many diseases could be diagnosed with deep learning models, without radiologist interpretations and invasive examinations such as histopathological examinations. however, further studies with much larger case series are needed to develop accurate deep-learning models.
[ "introductionwe", "a diagnostic deep learning model", "contrast-enhanced ct images", "cervical lymphadenopathies", "these deep learning methods", "radiologist interpretations", "histopathological examinations.material methoda total", "400 patients", "who", "surgery", "lymphadenopathy", "the neck", "they", "four groups", "100 patients", "the granulomatous diseases group", "the lymphoma group", "the squamous cell tumor group", "the reactive hyperplasia group", "the diagnoses", "the patients", "two ct images", "all the patients", "each group", "the study", "the ct images", "resnet50", "nasnetmobile", "architecture", "input.resultsthe classification accuracies", "resnet50", "densenet121", "92.5%", "respectively.conclusiondeep learning", "a useful diagnostic tool", "cervical lymphadenopathy", "the near future", "many diseases", "deep learning models", "radiologist interpretations", "invasive examinations", "histopathological examinations", "further studies", "much larger case series", "accurate deep-learning models", "400", "between 2010 and 2022", "four", "100", "two", "resnet50", "resnet50", "92.5%", "90.62", "87.5" ]
Survey on deep learning in multimodal medical imaging for cancer detection
[ "Yan Tian", "Zhaocheng Xu", "Yujun Ma", "Weiping Ding", "Ruili Wang", "Zhihong Gao", "Guohua Cheng", "Linyang He", "Xuran Zhao" ]
The task of multimodal cancer detection is to determine the locations and categories of lesions by using different imaging techniques, which is one of the key research methods for cancer diagnosis. Recently, deep learning-based object detection has made significant developments due to its strength in semantic feature extraction and nonlinear function fitting. However, multimodal cancer detection remains challenging due to morphological differences in lesions, interpatient variability, difficulty in annotation, and imaging artifacts. In this survey, we mainly investigate over 150 papers in recent years with respect to multimodal cancer detection using deep learning, with a focus on datasets and solutions to various challenges such as data annotation, variance between classes, small-scale lesions, and occlusion. We also provide an overview of the advantages and drawbacks of each approach. Finally, we discuss the current scope of work and provide directions for the future development of multimodal cancer detection.
10.1007/s00521-023-09214-4
survey on deep learning in multimodal medical imaging for cancer detection
the task of multimodal cancer detection is to determine the locations and categories of lesions by using different imaging techniques, which is one of the key research methods for cancer diagnosis. recently, deep learning-based object detection has made significant developments due to its strength in semantic feature extraction and nonlinear function fitting. however, multimodal cancer detection remains challenging due to morphological differences in lesions, interpatient variability, difficulty in annotation, and imaging artifacts. in this survey, we mainly investigate over 150 papers in recent years with respect to multimodal cancer detection using deep learning, with a focus on datasets and solutions to various challenges such as data annotation, variance between classes, small-scale lesions, and occlusion. we also provide an overview of the advantages and drawbacks of each approach. finally, we discuss the current scope of work and provide directions for the future development of multimodal cancer detection.
[ "the task", "multimodal cancer detection", "the locations", "categories", "lesions", "different imaging techniques", "which", "the key research methods", "cancer diagnosis", "deep learning-based object detection", "significant developments", "its strength", "semantic feature extraction and nonlinear function", "multimodal cancer detection", "morphological differences", "lesions", "interpatient variability", "difficulty", "annotation", "imaging artifacts", "this survey", "we", "over 150 papers", "recent years", "respect", "multimodal cancer detection", "deep learning", "a focus", "datasets", "solutions", "various challenges", "data annotation", "variance", "classes", "small-scale lesions", "occlusion", "we", "an overview", "the advantages", "drawbacks", "each approach", "we", "the current scope", "work", "directions", "the future development", "multimodal cancer detection", "150", "recent years" ]
A real-time traffic sign detection in intelligent transportation system using YOLOv8-based deep learning approach
[ "Mingdeng Tang" ]
Intelligent transportation systems rely heavily on accurate traffic sign detection (TSD) to enhance road safety and traffic management. Various methods have been explored in the literature for this purpose, with deep learning methods consistently demonstrating superior accuracy. However, existing research highlights the persistent challenge of achieving high accuracy rates while maintaining non-destructive and real-time requirements. In this study, we propose a deep learning model based on the YOLOv8 architecture to address this challenge. The model is trained and evaluated using a custom dataset, and extensive experiments and performance analysis demonstrate its ability to achieve precise results, thus offering a promising solution to the current research challenge in deep learning-based TSD.
10.1007/s11760-024-03300-3
a real-time traffic sign detection in intelligent transportation system using yolov8-based deep learning approach
intelligent transportation systems rely heavily on accurate traffic sign detection (tsd) to enhance road safety and traffic management. various methods have been explored in the literature for this purpose, with deep learning methods consistently demonstrating superior accuracy. however, existing research highlights the persistent challenge of achieving high accuracy rates while maintaining non-destructive and real-time requirements. in this study, we propose a deep learning model based on the yolov8 architecture to address this challenge. the model is trained and evaluated using a custom dataset, and extensive experiments and performance analysis demonstrate its ability to achieve precise results, thus offering a promising solution to the current research challenge in deep learning-based tsd.
[ "intelligent transportation systems", "accurate traffic sign detection", "(tsd", "road safety and traffic management", "various methods", "the literature", "this purpose", "deep learning methods", "superior accuracy", "the persistent challenge", "high accuracy rates", "-destructive and real-time requirements", "this study", "we", "a deep learning model", "the yolov8 architecture", "this challenge", "the model", "a custom dataset", "extensive experiments", "performance analysis", "its ability", "precise results", "a promising solution", "the current research challenge", "deep learning-based tsd", "yolov8" ]
Analyzing and identifying predictable time range for stress prediction based on chaos theory and deep learning
[ "Ningyun Li", "Huijun Zhang", "Ling Feng", "Yang Ding", "Haichuan Li" ]
Purpose: Stress is a common problem globally. Predicting stress in advance could help people take effective measures to manage stress before bad consequences occur. Considering the chaotic features of human psychological states, in this study we integrate deep learning and chaos theory to address the stress prediction problem. Methods: Based on chaos theory, we embed one’s seemingly disordered stress sequence into a high-dimensional phase space so as to reveal the underlying dynamics and patterns of the stress system, and meanwhile are able to identify the stress-predictable time range. We then conduct deep learning with a two-layer (dimension and temporal) attention mechanism to simulate the nonlinear state of the embedded stress sequence for stress prediction. Results: We validate the effectiveness of the proposed method on the publicly available Tesserae dataset. The experimental results show that the proposed method outperforms both the pure deep learning method and the Chaos method in 2-label and 3-label stress prediction. Conclusion: Integrating deep learning and chaos theory for stress prediction is effective, and it improves the prediction accuracy by over 2% and 8% compared with the deep learning and Chaos methods, respectively. Implications and further possible improvements are also discussed at the end of the paper.
10.1007/s13755-024-00280-z
analyzing and identifying predictable time range for stress prediction based on chaos theory and deep learning
purpose: stress is a common problem globally. predicting stress in advance could help people take effective measures to manage stress before bad consequences occur. considering the chaotic features of human psychological states, in this study we integrate deep learning and chaos theory to address the stress prediction problem. methods: based on chaos theory, we embed one’s seemingly disordered stress sequence into a high-dimensional phase space so as to reveal the underlying dynamics and patterns of the stress system, and meanwhile are able to identify the stress-predictable time range. we then conduct deep learning with a two-layer (dimension and temporal) attention mechanism to simulate the nonlinear state of the embedded stress sequence for stress prediction. results: we validate the effectiveness of the proposed method on the publicly available tesserae dataset. the experimental results show that the proposed method outperforms both the pure deep learning method and the chaos method in 2-label and 3-label stress prediction. conclusion: integrating deep learning and chaos theory for stress prediction is effective, and it improves the prediction accuracy by over 2% and 8% compared with the deep learning and chaos methods, respectively. implications and further possible improvements are also discussed at the end of the paper.
[ "proposestress", "a common problem", "prediction", "stress", "advance", "people", "effective measures", "stress", "bad consequences", "the chaotic features", "human psychological states", "this study", "we", "deep learning", "chaos theory", "the stress prediction", "chaos theory", "we", "one", "stress sequence", "a high dimensional phase space", "the underlying dynamics", "patterns", "the stress system", "the stress predictable time range", "we", "deep learning", "dimension", "the nonlinear state", "the embedded stress sequence", "stress", "prediction.resultswe validate", "the effectiveness", "the proposed method", "the public available tesserae dataset", "the experimental results", "the proposed method", "the pure deep learning method", "chaos method", "both 2-label and 3-label stress", "deep learning", "chaos theory", "stress prediction", "the prediction accuracy", "2%", "those", "the deep learning", "the chaos", "implications", "further possible improvements", "the end", "the paper", "two", "2", "3", "over 2% and 8%" ]
Arithmetic Circuits, Structured Matrices and (not so) Deep Learning
[ "Atri Rudra" ]
This survey presents a necessarily incomplete (and biased) overview of results at the intersection of arithmetic circuit complexity, structured matrices and deep learning. Recently there has been some research activity in replacing unstructured weight matrices in neural networks by structured ones (with the aim of reducing the size of the corresponding deep learning models). Most of this work has been experimental and in this survey, we formalize the research question and show how a recent work that combines arithmetic circuit complexity, structured matrices and deep learning essentially answers this question. This survey is targeted at complexity theorists who might enjoy reading about how tools developed in arithmetic circuit complexity helped design (to the best of our knowledge) a new family of structured matrices, which in turn seem well-suited for applications in deep learning. However, we hope that folks primarily interested in deep learning would also appreciate the connections to complexity theory.
10.1007/s00224-022-10112-w
arithmetic circuits, structured matrices and (not so) deep learning
this survey presents a necessarily incomplete (and biased) overview of results at the intersection of arithmetic circuit complexity, structured matrices and deep learning. recently there has been some research activity in replacing unstructured weight matrices in neural networks by structured ones (with the aim of reducing the size of the corresponding deep learning models). most of this work has been experimental and in this survey, we formalize the research question and show how a recent work that combines arithmetic circuit complexity, structured matrices and deep learning essentially answers this question. this survey is targeted at complexity theorists who might enjoy reading about how tools developed in arithmetic circuit complexity helped design (to the best of our knowledge) a new family of structured matrices, which in turn seem well-suited for applications in deep learning. however, we hope that folks primarily interested in deep learning would also appreciate the connections to complexity theory.
[ "this survey", "a necessarily incomplete (and biased) overview", "results", "the intersection", "arithmetic circuit complexity", "structured matrices", "deep learning", "some research activity", "unstructured weight matrices", "neural networks", "structured ones", "the aim", "the size", "the corresponding deep learning models", "this work", "this survey", "we", "the research question", "a recent work", "that", "arithmetic circuit complexity", "structured matrices", "deep learning", "this question", "this survey", "complexity theorists", "who", "tools", "arithmetic circuit complexity", "our knowledge", "a new family", "structured matrices", "which", "turn", "applications", "deep learning", "we", "folks", "deep learning", "the connections", "complexity theory" ]
Geometric deep learning for molecular property predictions with chemical accuracy across chemical space
[ "Maarten R. Dobbelaere", "István Lengyel", "Christian V. Stevens", "Kevin M. Van Geem" ]
Chemical engineers heavily rely on precise knowledge of physicochemical properties to model chemical processes. Despite the growing popularity of deep learning, it is only rarely applied for property prediction due to data scarcity and limited accuracy for compounds in industrially-relevant areas of the chemical space. Herein, we present a geometric deep learning framework for predicting gas- and liquid-phase properties based on novel quantum chemical datasets comprising 124,000 molecules. Our findings reveal that the necessity for quantum-chemical information in deep learning models varies significantly depending on the modeled physicochemical property. Specifically, our top-performing geometric model meets the most stringent criteria for “chemically accurate” thermochemistry predictions. We also show that by carefully selecting the appropriate model featurization and evaluating prediction uncertainties, the reliability of the predictions can be strongly enhanced. These insights represent a crucial step towards establishing deep learning as the standard property prediction workflow in both industry and academia. Scientific contribution: We propose a flexible property prediction tool that can handle two-dimensional and three-dimensional molecular information. A thermochemistry prediction methodology that achieves high-level quantum chemistry accuracy for a broad application range is presented. Trained deep learning models and large novel molecular databases of real-world molecules are provided to offer a directly usable and fast property prediction solution to practitioners.
10.1186/s13321-024-00895-0
geometric deep learning for molecular property predictions with chemical accuracy across chemical space
chemical engineers heavily rely on precise knowledge of physicochemical properties to model chemical processes. despite the growing popularity of deep learning, it is only rarely applied for property prediction due to data scarcity and limited accuracy for compounds in industrially-relevant areas of the chemical space. herein, we present a geometric deep learning framework for predicting gas- and liquid-phase properties based on novel quantum chemical datasets comprising 124,000 molecules. our findings reveal that the necessity for quantum-chemical information in deep learning models varies significantly depending on the modeled physicochemical property. specifically, our top-performing geometric model meets the most stringent criteria for “chemically accurate” thermochemistry predictions. we also show that by carefully selecting the appropriate model featurization and evaluating prediction uncertainties, the reliability of the predictions can be strongly enhanced. these insights represent a crucial step towards establishing deep learning as the standard property prediction workflow in both industry and academia. scientific contribution: we propose a flexible property prediction tool that can handle two-dimensional and three-dimensional molecular information. a thermochemistry prediction methodology that achieves high-level quantum chemistry accuracy for a broad application range is presented. trained deep learning models and large novel molecular databases of real-world molecules are provided to offer a directly usable and fast property prediction solution to practitioners.
[ "chemical engineers", "precise knowledge", "physicochemical properties", "chemical processes", "the growing popularity", "deep learning", "it", "property prediction", "data scarcity", "limited accuracy", "compounds", "industrially-relevant areas", "the chemical space", "we", "a geometric deep learning framework", "gas- and liquid-phase properties", "novel quantum chemical datasets", "124,000 molecules", "our findings", "the necessity", "quantum-chemical information", "deep learning models", "the modeled physicochemical property", "our top-performing geometric model", "the most stringent criteria", "“chemically accurate” thermochemistry predictions", "we", "the appropriate model featurization", "prediction uncertainties", "the reliability", "the predictions", "these insights", "a crucial step", "deep learning", "the standard property prediction", "both industry", "academia.scientific contributionwe", "a flexible property prediction tool", "that", "two-dimensional and three-dimensional molecular information", "a thermochemistry prediction methodology", "that", "high-level quantum chemistry accuracy", "a broad application range", "trained deep learning models", "large novel molecular databases", "real-world molecules", "a directly usable and fast property prediction solution", "practitioners", "124,000", "two", "three" ]
DVNE-DRL: dynamic virtual network embedding algorithm based on deep reinforcement learning
[ "Xiancui Xiao" ]
Virtual network embedding (VNE), the key challenge of network resource management technology, lies in the contradiction between making online embedding decisions and pursuing long-term average revenue goals. Most of the previous work ignored the dynamics in Virtual Network (VN) modeling, or could not automatically detect the complex and time-varying network state to provide a reasonable network embedding scheme. In view of this, we model a network embedding framework in which the topology and resource allocation change dynamically with the number of network users and the workload, and then introduce a deep reinforcement learning method to solve the VNE problem. Further, a dynamic virtual network embedding algorithm based on Deep Reinforcement Learning (DRL), named DVNE-DRL, is proposed. In DVNE-DRL, VNE is modeled as a Markov Decision Process (MDP); deep learning is introduced to perceive the current network state through historical data and embedded knowledge, while reinforcement learning decision-making capabilities are utilized to implement the network embedding process. In addition, we improve the methods of feature extraction and matrix optimization, and consider the characteristics of the virtual and physical networks together to alleviate the problems of redundancy and slow convergence. The simulation results show that, compared with existing advanced algorithms, the acceptance rate and average revenue of DVNE-DRL are increased by about 25% and 35%, respectively.
10.1038/s41598-023-47195-5
dvne-drl: dynamic virtual network embedding algorithm based on deep reinforcement learning
virtual network embedding (vne), the key challenge of network resource management technology, lies in the contradiction between making online embedding decisions and pursuing long-term average revenue goals. most of the previous work ignored the dynamics in virtual network (vn) modeling, or could not automatically detect the complex and time-varying network state to provide a reasonable network embedding scheme. in view of this, we model a network embedding framework in which the topology and resource allocation change dynamically with the number of network users and the workload, and then introduce a deep reinforcement learning method to solve the vne problem. further, a dynamic virtual network embedding algorithm based on deep reinforcement learning (drl), named dvne-drl, is proposed. in dvne-drl, vne is modeled as a markov decision process (mdp); deep learning is introduced to perceive the current network state through historical data and embedded knowledge, while reinforcement learning decision-making capabilities are utilized to implement the network embedding process. in addition, we improve the methods of feature extraction and matrix optimization, and consider the characteristics of the virtual and physical networks together to alleviate the problems of redundancy and slow convergence. the simulation results show that, compared with existing advanced algorithms, the acceptance rate and average revenue of dvne-drl are increased by about 25% and 35%, respectively.
[ "virtual network", "the key challenge", "network resource management technology", "the contradiction", "online embedding decision", "long-term average revenue goals", "the previous work", "the dynamics", "virtual network", "modeling", "the complex and time-varying network state", "a reasonable network", "scheme", "view", "this", "we", "a network", "framework", "the number", "network users", "workload", "a deep reinforcement learning method", "the vne problem", "a dynamic virtual network", "algorithm", "deep reinforcement learning", "drl", "dvne-drl", "dvne-drl", "vne", "a markov decision process", "mdp", "deep learning", "the current network state", "historical data", "embedded knowledge", "reinforcement learning decision-making capabilities", "the network embedding process", "addition", "we", "the method", "feature extraction and matrix optimization", "the characteristics", "virtual network", "physical network", "the problem", "redundancy", "slow convergence", "the simulation results", "the existing advanced algorithms", "the acceptance rate", "average revenue", "dvne-drl", "about 25%", "35%", "about 25% and 35%" ]