Dataset schema (field: type, observed length range):

title: string (31-206 chars)
authors: sequence (1-85 items)
abstract: string (428-3.21k chars)
doi: string (21-31 chars)
cleaned_title: string (31-206 chars)
cleaned_abstract: string (428-3.21k chars)
key_phrases: sequence (19-150 items)
A comparative analysis of deep learning and deep transfer learning approaches for identification of rice varieties
[ "Komal Sharma", "Ganesh Kumar Sethi", "Rajesh Kumar Bawa" ]
Rice is an essential staple food for human nutrition. Rice varieties are planted, imported, and exported worldwide. During production and trading, different types of rice can be mixed. Due to rice impurities, rice importers and exporters may lose trust in each other, requiring the development of a rice variety identification system. India is a significant player in the global rice market, and this extensive study examines the importance of rice there. The study uses state-of-the-art deep learning and transfer learning (TL) classifiers to tackle the problems of rice variety detection. A large dataset consisting of more than 600,000 rice photographs divided into 22 different classes is presented in the study to improve classification accuracy. With a training accuracy of 96% and a testing accuracy of 80.5%, ResNet50 stands out among the deep learning models compared by the authors. These models include CNN, Deep CNN, AlexNet2, Xception, Inception V3, DenseNet121, and ResNet50. Finding the best classifiers for accurate variety identification is crucial, and this work highlights their possible uses in rice seed production. This paper lays the groundwork for future research on image-based rice categorization by suggesting areas for development and investigating ensemble strategies to improve performance.
10.1007/s11042-024-19126-7
a comparative analysis of deep learning and deep transfer learning approaches for identification of rice varieties
rice is an essential staple food for human nutrition. rice varieties worldwide have been planted, imported, and exported. during production and trading, different types of rice can be mixed. due to rice impurities, rice importers and exporters may lose trust in each other, requiring the development of a rice variety identification system. india is a significant player in the global rice market, and this extensive study delves into the importance of rice there. the study uses state-of-the-art deep learning and tl classifiers to tackle the problems of rice variety detection. an enormous dataset consisting of more than 600,000 rice photographs divided into 22 different classes is presented in the study to improve classification accuracy. with a training accuracy of 96% and a testing accuracy of 80.5%, resnet50 stands well among other deep learning models compared by the authors. these models include cnn, deep cnn, alexnet2, xception, inception v3, densenet121, and resnet50. finding the best classifiers to identify varieties accurately is crucial, and this work highlights their possible uses in rice seed production. this paper lays the groundwork for future research on image-based rice categorization by suggesting areas for development and investigating ensemble strategies to improve performance.
[ "rice", "an essential staple food", "human nutrition", "rice varieties", "production", "trading", "different types", "rice", "rice impurities", "rice importers", "exporters", "trust", "the development", "a rice variety identification system", "india", "a significant player", "the global rice market", "this extensive study", "the importance", "rice", "the study", "the-art", "tl classifiers", "the problems", "rice variety detection", "an enormous dataset", "more than 600,000 rice photographs", "22 different classes", "the study", "classification accuracy", "a training accuracy", "96%", "a testing accuracy", "80.5%", "resnet50", "other deep learning models", "the authors", "these models", "cnn", "deep cnn", "alexnet2", "xception", "inception v3", "densenet121", "resnet50", "the best classifiers", "varieties", "this work", "their possible uses", "rice seed production", "this paper", "the groundwork", "future research", "image-based rice categorization", "areas", "development", "ensemble strategies", "performance", "rice", "india", "more than 600,000", "22", "96%", "80.5%", "resnet50", "cnn", "cnn", "v3", "resnet50" ]
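The transfer-learning setup the abstract above compares (a pretrained backbone such as ResNet50 reused as a frozen feature extractor, with only a new classification head trained for the 22 rice classes) can be sketched in NumPy. This is a toy with synthetic data, not the authors' pipeline: the random ReLU projection stands in for a real pretrained network, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_samples, in_dim, emb_dim = 22, 2200, 128, 64

# Frozen "pretrained backbone" stand-in: a fixed random ReLU projection.
# (A real pipeline would use e.g. ResNet50 features; this is only a toy.)
W_frozen = rng.normal(size=(in_dim, emb_dim))

# Synthetic "photos": each of the 22 classes clusters around a template.
templates = rng.normal(size=(n_classes, in_dim))
labels = rng.integers(0, n_classes, n_samples)
x = templates[labels] + 0.3 * rng.normal(size=(n_samples, in_dim))

feats = np.maximum(x @ W_frozen, 0.0)                  # frozen features
feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-8)

# Transfer learning: train ONLY the new softmax head on frozen features.
W_head = np.zeros((emb_dim, n_classes))
onehot = np.eye(n_classes)[labels]
for _ in range(300):
    logits = feats @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W_head -= 0.05 * feats.T @ (p - onehot) / n_samples  # gradient step

train_acc = (np.argmax(feats @ W_head, axis=1) == labels).mean()
```

Because only the small head is trained, this approach needs far less labelled data than training a full network, which is the appeal of transfer learning for variety identification.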
Interpretable deep learning methods for multiview learning
[ "Hengkang Wang", "Han Lu", "Ju Sun", "Sandra E. Safo" ]
Background: Technological advances have enabled the generation of unique and complementary types of data or views (e.g. genomics, proteomics, metabolomics) and opened up a new era in multiview learning research with the potential to lead to new biomedical discoveries. Results: We propose iDeepViewLearn (Interpretable Deep Learning Method for Multiview Learning) to learn nonlinear relationships in data from multiple views while achieving feature selection. iDeepViewLearn combines deep learning flexibility with the statistical benefits of data- and knowledge-driven feature selection, giving interpretable results. Deep neural networks are used to learn view-independent low-dimensional embeddings through an optimization problem that minimizes the difference between observed and reconstructed data, while imposing a regularization penalty on the reconstructed data. The normalized Laplacian of a graph is used to model bilateral relationships between variables in each view, thereby encouraging selection of related variables. iDeepViewLearn is tested on simulated data and three real-world datasets for classification, clustering, and reconstruction tasks. For the classification tasks, iDeepViewLearn had competitive classification results with state-of-the-art methods in various settings. For the clustering task, we detected molecular clusters that differed in their 10-year survival rates for breast cancer. For the reconstruction task, we were able to reconstruct handwritten images using a few pixels while achieving competitive classification accuracy. The results of our real data application and simulations with small to moderate sample sizes suggest that iDeepViewLearn may be a useful method for small-sample-size problems compared to other deep learning methods for multiview learning. Conclusion: iDeepViewLearn is an innovative deep learning model capable of capturing nonlinear relationships between data from multiple views while achieving feature selection.
It is fully open source and is freely available at https://github.com/lasandrall/iDeepViewLearn.
10.1186/s12859-024-05679-9
interpretable deep learning methods for multiview learning
background: technological advances have enabled the generation of unique and complementary types of data or views (e.g. genomics, proteomics, metabolomics) and opened up a new era in multiview learning research with the potential to lead to new biomedical discoveries. results: we propose ideepviewlearn (interpretable deep learning method for multiview learning) to learn nonlinear relationships in data from multiple views while achieving feature selection. ideepviewlearn combines deep learning flexibility with the statistical benefits of data and knowledge-driven feature selection, giving interpretable results. deep neural networks are used to learn view-independent low-dimensional embedding through an optimization problem that minimizes the difference between observed and reconstructed data, while imposing a regularization penalty on the reconstructed data. the normalized laplacian of a graph is used to model bilateral relationships between variables in each view, therefore, encouraging selection of related variables. ideepviewlearn is tested on simulated and three real-world data for classification, clustering, and reconstruction tasks. for the classification tasks, ideepviewlearn had competitive classification results with state-of-the-art methods in various settings. for the clustering task, we detected molecular clusters that differed in their 10-year survival rates for breast cancer. for the reconstruction task, we were able to reconstruct handwritten images using a few pixels while achieving competitive classification accuracy. the results of our real data application and simulations with small to moderate sample sizes suggest that ideepviewlearn may be a useful method for small-sample-size problems compared to other deep learning methods for multiview learning. conclusion: ideepviewlearn is an innovative deep learning model capable of capturing nonlinear relationships between data from multiple views while achieving feature selection.
it is fully open source and is freely available at https://github.com/lasandrall/ideepviewlearn.
[ "backgroundtechnological advances", "the generation", "unique and complementary types", "data", "views", "e.g. genomics", "proteomics", "metabolomics", "a new era", "multiview learning research", "the potential", "new biomedical discoveries.resultswe propose ideepviewlearn", "(interpretable deep learning method", "multiview learning", "nonlinear relationships", "data", "multiple views", "feature selection", "ideepviewlearn", "deep learning flexibility", "the statistical benefits", "data", "knowledge-driven feature selection", "interpretable results", "deep neural networks", "an optimization problem", "that", "the difference", "observed and reconstructed data", "a regularization penalty", "the reconstructed data", "the normalized laplacian", "a graph", "bilateral relationships", "variables", "each view", "selection", "related variables", "ideepviewlearn", "simulated and three real-world data", "classification", "clustering", "reconstruction tasks", "the classification tasks", "ideepviewlearn", "competitive classification results", "the-art", "various settings", "the clustering task", "we", "molecular clusters", "that", "their 10-year survival rates", "breast cancer", "the reconstruction task", "we", "handwritten images", "a few pixels", "competitive classification accuracy", "the results", "our real data application", "simulations", "small to moderate sample sizes", "that ideepviewlearn", "a useful method", "small-sample-size problems", "other deep learning methods", "multiview learning.conclusionideepviewlearn", "an innovative deep learning model", "nonlinear relationships", "data", "multiple views", "feature selection", "it", "fully open source", "https://github.com/lasandrall/ideepviewlearn", "ideepviewlearn", "three", "10-year", "https://github.com/lasandrall/ideepviewlearn" ]
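The normalized graph Laplacian that iDeepViewLearn uses to encourage selection of related variables is a standard construction, L = I - D^{-1/2} A D^{-1/2}, for an adjacency matrix A and degree matrix D. A minimal NumPy sketch (the toy variable graph is hypothetical, not from the paper):

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)                                  # node degrees
    safe_d = np.where(d > 0, d, 1.0)                   # guard isolated nodes
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(safe_d), 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Toy graph over four variables: a path 1-2-3-4 of "related" variables.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = normalized_laplacian(A)
```

Its useful properties for a regularizer: L is symmetric positive semidefinite with eigenvalues in [0, 2], and a quadratic penalty x.T @ L @ x is small when connected (related) variables take similar values, which is what pushes the model to select related variables together.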
Deep-learning-based image reconstruction with limited data: generating synthetic raw data using deep learning
[ "Frank Zijlstra", "Peter Thomas While" ]
Object: Deep learning has shown great promise for fast reconstruction of accelerated MRI acquisitions by learning from large amounts of raw data. However, raw data is not always available in sufficient quantities. This study investigates synthetic data generation to complement small datasets and improve reconstruction quality. Materials and methods: An adversarial auto-encoder was trained to generate phase and coil sensitivity maps from magnitude images, which were combined into synthetic raw data. On a fourfold accelerated MR reconstruction task, deep-learning-based reconstruction networks were trained with varying amounts of training data (20 to 160 scans). Test set performance was compared between baseline experiments and experiments that incorporated synthetic training data. Results: Training with synthetic raw data showed decreasing reconstruction errors with increasing amounts of training data, but importantly this was magnitude-only data, rather than real raw data. For small training sets, training with synthetic data decreased the mean absolute error (MAE) by up to 7.5%, whereas for larger training sets the MAE increased by up to 2.6%. Discussion: Synthetic raw data generation improved reconstruction quality in scenarios with limited training data. A major advantage of synthetic data generation is that it allows for the reuse of magnitude-only datasets, which are more readily available than raw datasets.
10.1007/s10334-024-01193-4
deep-learning-based image reconstruction with limited data: generating synthetic raw data using deep learning
object: deep learning has shown great promise for fast reconstruction of accelerated mri acquisitions by learning from large amounts of raw data. however, raw data is not always available in sufficient quantities. this study investigates synthetic data generation to complement small datasets and improve reconstruction quality. materials and methods: an adversarial auto-encoder was trained to generate phase and coil sensitivity maps from magnitude images, which were combined into synthetic raw data. on a fourfold accelerated mr reconstruction task, deep-learning-based reconstruction networks were trained with varying amounts of training data (20 to 160 scans). test set performance was compared between baseline experiments and experiments that incorporated synthetic training data. results: training with synthetic raw data showed decreasing reconstruction errors with increasing amounts of training data, but importantly this was magnitude-only data, rather than real raw data. for small training sets, training with synthetic data decreased the mean absolute error (mae) by up to 7.5%, whereas for larger training sets the mae increased by up to 2.6%. discussion: synthetic raw data generation improved reconstruction quality in scenarios with limited training data. a major advantage of synthetic data generation is that it allows for the reuse of magnitude-only datasets, which are more readily available than raw datasets.
[ "objectdeep learning", "great promise", "fast reconstruction", "accelerated mri acquisitions", "large amounts", "raw data", "raw data", "sufficient quantities", "this study", "synthetic data generation", "small datasets", "reconstruction quality.materials", "methodsan adversarial auto-encoder", "phase and coil sensitivity maps", "magnitude images", "which", "mr reconstruction task", "deep-learning-based reconstruction networks", "varying amounts", "training data", "20 to 160 scans", "test set performance", "baseline experiments", "experiments", "that", "synthetic raw data", "reconstruction errors", "increasing amounts", "training data", "this", "magnitude-only data", "real raw data", "small training sets", "synthetic data", "the mean absolute error", "mae", "up to 7.5%", "larger training sets", "the mae", "2.6%.discussionsynthetic raw data generation", "improved reconstruction quality", "scenarios", "limited training data", "a major advantage", "synthetic data generation", "it", "the reuse", "magnitude-only datasets", "which", "raw datasets", "20", "160", "up to 7.5%" ]
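The core idea above, turning a magnitude-only image into synthetic multi-coil raw (k-space) data by attaching a phase map and coil sensitivity maps and then Fourier transforming, can be sketched in NumPy. The smooth quadratic phase and Gaussian coil maps below are illustrative stand-ins for the maps the paper's adversarial auto-encoder would generate:

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 64, 64

# Magnitude-only image: the readily available data the paper wants to reuse.
magnitude = rng.random((ny, nx))

# Hypothetical generated maps: a smooth phase map, and four Gaussian coil
# sensitivities normalised so that sum_c |S_c|^2 = 1 at every pixel.
yy, xx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx),
                     indexing="ij")
phase = np.exp(1j * np.pi * (yy**2 + xx**2))
coils = np.stack([np.exp(-((yy - cy)**2 + (xx - cx)**2))
                  for cy, cx in [(-1, -1), (-1, 1), (1, -1), (1, 1)]])
coils = coils / np.sqrt((np.abs(coils)**2).sum(axis=0, keepdims=True))

# Combine magnitude, phase, and sensitivities into synthetic k-space data.
kspace = np.fft.fft2(magnitude * phase * coils, axes=(-2, -1))

# Sanity check: a sum-of-squares reconstruction recovers the magnitude.
images = np.fft.ifft2(kspace, axes=(-2, -1))
sos = np.sqrt((np.abs(images)**2).sum(axis=0))
```

With the sensitivities normalised this way, the round trip is exact, so the synthetic raw data is consistent with the original magnitude image by construction; an acceleration step would then subsample `kspace` before training a reconstruction network.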
Topic- and learning-related predictors of deep-level learning strategies
[ "Eve Kikas", "Gintautas Silinskas", "Eliis Härma" ]
The aim of this study was to examine which topic- and learning-related knowledge and motivational beliefs predict the use of specific deep-level learning strategies during an independent learning task. Participants included 335 Estonian fourth- and sixth-grade students who were asked to read about light processes and seasonal changes. The study was completed electronically. Topic-related knowledge was assessed via an open question about seasonal changes, and learning-related knowledge was assessed via scenario-based tasks. Expectancies, interest, and utility values related to learning astronomy and using deep-level learning strategies were assessed via questions based on the Situated Expectancy-Value Theory. Deep-level learning strategies (using drawings in addition to reading and self-testing) were assessed while completing the reading task. Among topic-related variables, prior knowledge and utility value—but not interest or expectancy in learning astronomy—were related to using deep-level learning strategies. Among learning-related variables, interest and utility value of effective learning—but not metacognitive knowledge of learning strategies or expectancy in using deep-level learning strategies—were related to using deep-level learning strategies. This study confirms that it is not enough to examine students’ knowledge and skills in using learning strategies with general or hypothetical questions; instead, it is of crucial importance to study students in real learning situations.
10.1007/s10212-023-00766-6
topic- and learning-related predictors of deep-level learning strategies
the aim of this study was to examine which topic- and learning-related knowledge and motivational beliefs predict the use of specific deep-level learning strategies during an independent learning task. participants included 335 estonian fourth- and sixth-grade students who were asked to read about light processes and seasonal changes. the study was completed electronically. topic-related knowledge was assessed via an open question about seasonal changes, and learning-related knowledge was assessed via scenario-based tasks. expectancies, interest, and utility values related to learning astronomy and using deep-level learning strategies were assessed via questions based on the situated expectancy-value theory. deep-level learning strategies (using drawings in addition to reading and self-testing) were assessed while completing the reading task. among topic-related variables, prior knowledge and utility value—but not interest or expectancy in learning astronomy—were related to using deep-level learning strategies. among learning-related variables, interest and utility value of effective learning—but not metacognitive knowledge of learning strategies or expectancy in using deep-level learning strategies—were related to using deep-level learning strategies. this study confirms that it is not enough to examine students’ knowledge and skills in using learning strategies with general or hypothetical questions, instead, it is of crucial importance to study students in real learning situations.
[ "the aim", "this study", "which", "the use", "specific deep-level learning strategies", "an independent learning task", "participants", "335 estonian fourth- and sixth-grade students", "who", "light processes", "seasonal changes", "the study", "topic-related knowledge", "an open question", "seasonal changes", "learning-related knowledge", "scenario-based tasks", "expectancies", "interest", "utility values", "astronomy", "deep-level learning strategies", "questions", "the situated expectancy-value theory", "deep-level learning strategies", "drawings", "addition", "reading", "self-testing", "the reading task", "topic-related variables", "prior knowledge", "utility value", "not interest", "expectancy", "astronomy", "deep-level learning strategies", "learning-related variables", "interest", "utility value", "effective learning", "not metacognitive knowledge", "strategies", "expectancy", "deep-level learning strategies", "deep-level learning strategies", "this study", "it", "students’ knowledge", "skills", "learning strategies", "general or hypothetical questions", "it", "crucial importance", "students", "real learning situations", "335", "sixth" ]
Urban traffic signal control optimization through Deep Q Learning and double Deep Q Learning: a novel approach for efficient traffic management
[ "Qazi Umer Jamil", "Karam Dad Kallu", "Muhammad Jawad Khan", "Muhammad Safdar", "Amad Zafar", "Muhammad Umair Ali" ]
Traffic congestion remains a persistent challenge in urban areas, necessitating efficient traffic control strategies. This research explores the application of advanced reinforcement learning (RL) techniques, specifically Deep Q-Learning (DQN) and Double Deep Q-Learning (DDQN), to address this issue at a four-way traffic intersection. The RL agents are trained using a reward function based on minimizing waiting times, enabling them to learn effective traffic signal control policies. The study focuses on comparing the performance of a simple non-reinforcement learning (Non RL) agent, a Deep Q-Network (DQN) agent, and an improved Double Deep Q-Learning (DDQN) agent in different traffic scenarios. The Non RL agent, which follows a fixed order of traffic phases, demonstrates limitations in both low and high traffic situations, leading to inefficiencies and imbalanced queue lengths. On the other hand, the DQN agent exhibits promising results in low traffic conditions but struggles in high traffic due to its greedy behavior. The DDQN agent, with an extended green light base time, outperforms both the Non RL agent and the original DQN agent in high traffic scenarios, making it more suitable for real-world traffic conditions. However, it shows some inefficiencies in low traffic scenarios. Future research is recommended to address multi-agent deep reinforcement learning challenges, incorporate attention mechanisms and hierarchical reinforcement learning, explore graph theory applications, and develop efficient communication protocols among agents to further enhance traffic control solutions.
10.1007/s11042-024-20060-x
urban traffic signal control optimization through deep q learning and double deep q learning: a novel approach for efficient traffic management
traffic congestion remains a persistent challenge in urban areas, necessitating efficient traffic control strategies. this research explores the application of advanced reinforcement learning techniques, specifically deep q-learning (dqn) and double deep q-learning (ddqn), to address this issue at a four-way traffic intersection. the rl agents are trained using a reward function based on minimizing waiting times, enabling them to learn effective traffic signal control policies. the study focuses on comparing the performance of a simple non-reinforcement learning (non rl) agent, a deep q-network (dqn) agent, and an improved double deep q-learning (ddqn) agent in different traffic scenarios. the non rl agent, which follows a fixed order of traffic phases, demonstrates limitations in both low and high traffic situations, leading to inefficiencies and imbalanced queue lengths. on the other hand, the dqn agent exhibits promising results in low traffic conditions but struggles in high traffic due to its greedy behavior. the ddqn agent, with an extended green light base time, outperforms both the non rl agent and the original dqn agent in high traffic scenarios, making it more suitable for real-world traffic conditions. however, it shows some inefficiencies in low traffic scenarios. future research is recommended to address multi-agent deep reinforcement learning challenges, incorporate attention mechanisms and hierarchical reinforcement learning, explore graph theory applications, and develop efficient communication protocols among agents to further enhance traffic control solutions.
[ "traffic congestion", "a persistent challenge", "urban areas", "efficient traffic control strategies", "this research", "the application", "advanced reinforcement learning techniques", "specifically deep q-learning (dqn", "double deep q-learning", "(ddqn", "this issue", "a four-way traffic intersection", "the rl agents", "a reward function", "waiting times", "them", "effective traffic signal control policies", "the study", "the performance", "a simple non-reinforcement learning", "(non rl) agent", "a deep q-network (dqn) agent", "an improved double deep q-learning (ddqn) agent", "different traffic scenarios", "the non rl agent", "which", "a fixed order", "traffic phases", "limitations", "both low and high traffic situations", "inefficiencies", "queue lengths", "the other hand", "the dqn agent", "promising results", "low traffic conditions", "struggles", "high traffic", "its greedy behavior", "the ddqn agent", "an extended green light base time", "both the non rl agent", "the original dqn agent", "high traffic scenarios", "it", "real-world traffic conditions", "it", "some inefficiencies", "low traffic scenarios", "future research", "multi-agent deep reinforcement learning challenges", "attention mechanisms", "hierarchical reinforcement learning", "graph theory applications", "efficient communication protocols", "agents", "traffic control solutions", "four" ]
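The difference between the DQN and Double DQN update targets described above fits in a few lines: DQN takes the max over the target network's value estimates, while DDQN lets the online network select the action and the target network evaluate it. The DDQN target can never exceed the DQN target, which is how it curbs the overestimation behind DQN's "greedy" failure mode. The toy values below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_actions, gamma = 4, 0.99
reward = -1.0  # e.g. a negative waiting-time penalty, as in the paper

# Noisy action-value estimates for the next state from two networks.
q_online = rng.normal(size=n_actions)   # online (behaviour) network
q_target = rng.normal(size=n_actions)   # periodically-copied target network

# DQN target: max over the target network -> prone to overestimation,
# because the max operator picks up positive estimation noise.
dqn_target = reward + gamma * q_target.max()

# Double DQN target: online network SELECTS the action,
# target network EVALUATES it -> decouples selection from evaluation.
ddqn_target = reward + gamma * q_target[np.argmax(q_online)]
```

Since `q_target[argmax(q_online)] <= q_target.max()` for any estimates, the DDQN target is always less than or equal to the DQN target, giving systematically more conservative value updates.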
Research trends in deep learning and machine learning for cloud computing security
[ "Yehia Ibrahim Alzoubi", "Alok Mishra", "Ahmet Ercan Topcu" ]
Deep learning and machine learning show effectiveness in identifying and addressing cloud security threats. Despite the large number of articles published in this field, there remains a dearth of comprehensive reviews that synthesize the techniques, trends, and challenges of using deep learning and machine learning for cloud computing security. Accordingly, this paper aims to provide the most up-to-date statistics on development and research in cloud computing security utilizing deep learning and machine learning. By mid-December 2023, 4051 publications had been identified through a search of the Scopus database. This paper highlights key trend solutions for cloud computing security utilizing machine learning and deep learning, such as anomaly detection, security automation, and the role of emerging technologies. However, challenges such as data privacy, scalability, and explainability, among others, are also identified as challenges of using machine learning and deep learning for cloud security. The findings of this paper reveal that deep learning and machine learning for cloud computing security are emerging research areas. Future research directions may include addressing these challenges when utilizing machine learning and deep learning for cloud security. Additionally, exploring the development of algorithms and techniques that comply with relevant laws and regulations is essential for effective implementation in this domain.
10.1007/s10462-024-10776-5
research trends in deep learning and machine learning for cloud computing security
deep learning and machine learning show effectiveness in identifying and addressing cloud security threats. despite the large number of articles published in this field, there remains a dearth of comprehensive reviews that synthesize the techniques, trends, and challenges of using deep learning and machine learning for cloud computing security. accordingly, this paper aims to provide the most updated statistics on the development and research in cloud computing security utilizing deep learning and machine learning. up to the middle of december 2023, 4051 publications were identified after we searched the scopus database. this paper highlights key trend solutions for cloud computing security utilizing machine learning and deep learning, such as anomaly detection, security automation, and emerging technology's role. however, challenges such as data privacy, scalability, and explainability, among others, are also identified as challenges of using machine learning and deep learning for cloud security. the findings of this paper reveal that deep learning and machine learning for cloud computing security are emerging research areas. future research directions may include addressing these challenges when utilizing machine learning and deep learning for cloud security. additionally, exploring the development of algorithms and techniques that comply with relevant laws and regulations is essential for effective implementation in this domain.
[ "deep learning and machine learning show effectiveness", "cloud security threats", "the large number", "articles", "this field", "a dearth", "comprehensive reviews", "that", "the techniques", "trends", "challenges", "deep learning", "machine learning", "cloud computing security", "this paper", "the most updated statistics", "the development", "research", "cloud computing security", "deep learning", "machine learning", "the middle", "december", "4051 publications", "we", "the scopus database", "this paper", "key trend solutions", "cloud computing security", "machine learning", "deep learning", "anomaly detection", "security automation", "emerging technology's role", "challenges", "data privacy", "scalability", "explainability", "others", "challenges", "machine learning", "deep learning", "cloud security", "the findings", "this paper", "deep learning and machine learning", "cloud computing security", "research areas", "future research directions", "these challenges", "machine learning", "deep learning", "cloud security", "the development", "algorithms", "techniques", "that", "relevant laws", "regulations", "effective implementation", "this domain", "the middle of december 2023", "4051", "anomaly detection" ]
Distributed Deep Reinforcement Learning: A Survey and a Multi-player Multi-agent Learning Toolbox
[ "Qiyue Yin", "Tongtong Yu", "Shengqi Shen", "Jun Yang", "Meijing Zhao", "Wancheng Ni", "Kaiqi Huang", "Bin Liang", "Liang Wang" ]
With the breakthrough of AlphaGo, deep reinforcement learning has become a recognized technique for solving sequential decision-making problems. Despite its reputation, data inefficiency caused by its trial-and-error learning mechanism makes deep reinforcement learning difficult to apply in a wide range of areas. Many methods have been developed for sample-efficient deep reinforcement learning, such as environment modelling, experience transfer, and distributed modifications, among which distributed deep reinforcement learning has shown its potential in various applications, such as human-computer gaming and intelligent transportation. In this paper, we summarize the state of this exciting field by comparing the classical distributed deep reinforcement learning methods and studying important components to achieve efficient distributed learning, covering the spectrum from single-player, single-agent distributed deep reinforcement learning to the most complex multi-player, multi-agent distributed deep reinforcement learning. Furthermore, we review recently released toolboxes that help to realize distributed deep reinforcement learning without many modifications of their non-distributed versions. After analysing their strengths and weaknesses, we develop and release a multi-player multi-agent distributed deep reinforcement learning toolbox, which is further validated on Wargame, a complex environment, demonstrating the usability of the proposed toolbox for multi-player, multi-agent distributed deep reinforcement learning under complex games. Finally, we try to point out challenges and future trends, hoping that this brief review can provide a guide or a spark for researchers who are interested in distributed deep reinforcement learning.
10.1007/s11633-023-1454-4
distributed deep reinforcement learning: a survey and a multi-player multi-agent learning toolbox
with the breakthrough of alphago, deep reinforcement learning has become a recognized technique for solving sequential decision-making problems. despite its reputation, data inefficiency caused by its trial and error learning mechanism makes deep reinforcement learning difficult to apply in a wide range of areas. many methods have been developed for sample efficient deep reinforcement learning, such as environment modelling, experience transfer, and distributed modifications, among which distributed deep reinforcement learning has shown its potential in various applications, such as human-computer gaming and intelligent transportation. in this paper, we conclude the state of this exciting field, by comparing the classical distributed deep reinforcement learning methods and studying important components to achieve efficient distributed learning, covering single player single agent distributed deep reinforcement learning to the most complex multiple players multiple agents distributed deep reinforcement learning. furthermore, we review recently released toolboxes that help to realize distributed deep reinforcement learning without many modifications of their non-distributed versions. by analysing their strengths and weaknesses, a multi-player multi-agent distributed deep reinforcement learning toolbox is developed and released, which is further validated on wargame, a complex environment, showing the usability of the proposed toolbox for multiple players and multiple agents distributed deep reinforcement learning under complex games. finally, we try to point out challenges and future trends, hoping that this brief review can provide a guide or a spark for researchers who are interested in distributed deep reinforcement learning.
[ "the breakthrough", "alphago", "deep reinforcement learning", "a recognized technique", "sequential decision-making problems", "its reputation", "data inefficiency", "its trial and error learning mechanism", "deep reinforcement learning", "a wide range", "areas", "many methods", "sample efficient deep reinforcement learning", "environment modelling", "experience transfer", "modifications", "which", "deep reinforcement learning", "its potential", "various applications", "human-computer gaming", "intelligent transportation", "this paper", "we", "the state", "this exciting field", "the classical distributed deep reinforcement learning methods", "important components", "efficient distributed learning", "single player single agent", "deep reinforcement learning", "the most complex multiple players", "multiple agents", "deep reinforcement learning", "we", "toolboxes", "that", "distributed deep reinforcement learning", "many modifications", "their non-distributed versions", "their strengths", "weaknesses", "-", "agent", "which", "wargame", "a complex environment", "the usability", "the proposed toolbox", "multiple players", "multiple agents", "deep reinforcement learning", "complex games", "we", "challenges", "future trends", "this brief review", "a guide", "a spark", "researchers", "who", "distributed deep reinforcement learning" ]
Deep learning in rheumatological image interpretation
[ "Berend C. Stoel", "Marius Staring", "Monique Reijnierse", "Annette H. M. van der Helm-van Mil" ]
Artificial intelligence techniques, specifically deep learning, have already affected daily life in a wide range of areas. Likewise, initial applications have been explored in rheumatology. Deep learning might not easily surpass the accuracy of classic techniques when performing classification or regression on low-dimensional numerical data. With images as input, however, deep learning has become so successful that it has already outperformed the majority of conventional image-processing techniques developed during the past 50 years. As with any new imaging technology, rheumatologists and radiologists need to consider adapting their arsenal of diagnostic, prognostic and monitoring tools, and even their clinical role and collaborations. This adaptation requires a basic understanding of the technical background of deep learning, to efficiently utilize its benefits but also to recognize its drawbacks and pitfalls, as blindly relying on deep learning might be at odds with its capabilities. To facilitate such an understanding, it is necessary to provide an overview of deep-learning techniques for automatic image analysis in detecting, quantifying, predicting and monitoring rheumatic diseases, and of currently published deep-learning applications in radiological imaging for rheumatology, with critical assessment of possible limitations, errors and confounders, and conceivable consequences for rheumatologists and radiologists in clinical practice.
10.1038/s41584-023-01074-5
deep learning in rheumatological image interpretation
artificial intelligence techniques, specifically deep learning, have already affected daily life in a wide range of areas. likewise, initial applications have been explored in rheumatology. deep learning might not easily surpass the accuracy of classic techniques when performing classification or regression on low-dimensional numerical data. with images as input, however, deep learning has become so successful that it has already outperformed the majority of conventional image-processing techniques developed during the past 50 years. as with any new imaging technology, rheumatologists and radiologists need to consider adapting their arsenal of diagnostic, prognostic and monitoring tools, and even their clinical role and collaborations. this adaptation requires a basic understanding of the technical background of deep learning, to efficiently utilize its benefits but also to recognize its drawbacks and pitfalls, as blindly relying on deep learning might be at odds with its capabilities. to facilitate such an understanding, it is necessary to provide an overview of deep-learning techniques for automatic image analysis in detecting, quantifying, predicting and monitoring rheumatic diseases, and of currently published deep-learning applications in radiological imaging for rheumatology, with critical assessment of possible limitations, errors and confounders, and conceivable consequences for rheumatologists and radiologists in clinical practice.
[ "artificial intelligence techniques", "specifically deep learning", "daily life", "a wide range", "areas", "initial applications", "rheumatology", "deep learning", "the accuracy", "classic techniques", "classification", "regression", "low-dimensional numerical data", "images", "input", "deep learning", "it", "the majority", "conventional image-processing techniques", "the past 50 years", "any new imaging technology", "rheumatologists", "radiologists", "their arsenal", "diagnostic, prognostic and monitoring tools", "even their clinical role", "collaborations", "this adaptation", "a basic understanding", "the technical background", "deep learning", "its benefits", "its drawbacks", "pitfalls", "deep learning", "odds", "its capabilities", "such an understanding", "it", "an overview", "deep-learning techniques", "automatic image analysis", "rheumatic diseases", "currently published deep-learning applications", "radiological imaging", "rheumatology", "critical assessment", "possible limitations", "errors", "confounders", "conceivable consequences", "rheumatologists", "radiologists", "clinical practice", "the past 50 years" ]
Loss of plasticity in deep continual learning
[ "Shibhansh Dohare", "J. Fernando Hernandez-Garcia", "Qingfeng Lan", "Parash Rahman", "A. Rupam Mahmood", "Richard S. Sutton" ]
Artificial neural networks, deep-learning methods and the backpropagation algorithm1 form the foundation of modern machine learning and artificial intelligence. These methods are almost always used in two phases, one in which the weights of the network are updated and one in which the weights are held constant while the network is used or evaluated. This contrasts with natural learning and many applications, which require continual learning. It has been unclear whether or not deep learning methods work in continual learning settings. Here we show that they do not—that standard deep-learning methods gradually lose plasticity in continual-learning settings until they learn no better than a shallow network. We show such loss of plasticity using the classic ImageNet dataset and reinforcement-learning problems across a wide range of variations in the network and the learning algorithm. Plasticity is maintained indefinitely only by algorithms that continually inject diversity into the network, such as our continual backpropagation algorithm, a variation of backpropagation in which a small fraction of less-used units are continually and randomly reinitialized. Our results indicate that methods based on gradient descent are not enough—that sustained deep learning requires a random, non-gradient component to maintain variability and plasticity.
10.1038/s41586-024-07711-7
loss of plasticity in deep continual learning
artificial neural networks, deep-learning methods and the backpropagation algorithm1 form the foundation of modern machine learning and artificial intelligence. these methods are almost always used in two phases, one in which the weights of the network are updated and one in which the weights are held constant while the network is used or evaluated. this contrasts with natural learning and many applications, which require continual learning. it has been unclear whether or not deep learning methods work in continual learning settings. here we show that they do not—that standard deep-learning methods gradually lose plasticity in continual-learning settings until they learn no better than a shallow network. we show such loss of plasticity using the classic imagenet dataset and reinforcement-learning problems across a wide range of variations in the network and the learning algorithm. plasticity is maintained indefinitely only by algorithms that continually inject diversity into the network, such as our continual backpropagation algorithm, a variation of backpropagation in which a small fraction of less-used units are continually and randomly reinitialized. our results indicate that methods based on gradient descent are not enough—that sustained deep learning requires a random, non-gradient component to maintain variability and plasticity.
[ "artificial neural networks", "deep-learning methods", "the backpropagation algorithm1 form", "the foundation", "modern machine learning", "artificial intelligence", "these methods", "two phases", "which", "the weights", "the network", "which", "the weights", "the network", "this", "natural learning", "many applications", "which", "continual learning", "it", "not deep learning methods", "continual learning settings", "we", "they", "that standard deep-learning methods", "plasticity", "continual-learning settings", "they", "a shallow network", "we", "such loss", "plasticity", "the classic imagenet dataset and reinforcement-learning problems", "a wide range", "variations", "the network", "the learning algorithm", "plasticity", "algorithms", "that", "diversity", "the network", "our continual backpropagation algorithm", "a variation", "backpropagation", "which", "a small fraction", "less-used units", "our results", "methods", "gradient descent", "that", "deep learning", "a random, non-gradient component", "variability", "plasticity", "algorithm1", "two", "one" ]
Comparative approach on crop detection using machine learning and deep learning techniques
[ "V. Nithya", "M. S. Josephine", "V. Jeyabalaraja" ]
Agriculture is an expanding area of study. Crop prediction in agriculture is highly dependent on soil and environmental factors, such as rainfall, humidity, and temperature. Previously, farmers had the authority to select the crop to be farmed, oversee its development, and ascertain the optimal harvest time. The farming community is facing challenges in sustaining its practices due to the swift alterations in climatic conditions. Therefore, machine learning algorithms have replaced traditional methods in predicting agricultural productivity in recent years. To guarantee optimal precision, the authors do not limit their approach to a specific machine learning technique but extend it with deep learning techniques. We use machine and deep learning algorithms to predict crop outcomes accurately. In this proposed model, we utilise machine learning algorithms such as Naive Bayes, decision tree, and KNN. It is worth noting that the decision tree algorithm demonstrates superior performance compared to the other algorithms, achieving an accuracy rate of 83%. In order to enhance the precision, we have suggested implementing a deep learning technique, specifically a convolutional neural network, to identify the crops. Achieving an accuracy of 93.54% was made possible by implementing this advanced deep-learning model.
10.1007/s13198-024-02483-9
comparative approach on crop detection using machine learning and deep learning techniques
agriculture is an expanding area of study. crop prediction in agriculture is highly dependent on soil and environmental factors, such as rainfall, humidity, and temperature. previously, farmers had the authority to select the crop to be farmed, oversee its development, and ascertain the optimal harvest time. the farming community is facing challenges in sustaining its practices due to the swift alterations in climatic conditions. therefore, machine learning algorithms have replaced traditional methods in predicting agricultural productivity in recent years. to guarantee optimal precision, the authors do not limit their approach to a specific machine learning technique but extend it with deep learning techniques. we use machine and deep learning algorithms to predict crop outcomes accurately. in this proposed model, we utilise machine learning algorithms such as naive bayes, decision tree, and knn. it is worth noting that the decision tree algorithm demonstrates superior performance compared to the other algorithms, achieving an accuracy rate of 83%. in order to enhance the precision, we have suggested implementing a deep learning technique, specifically a convolutional neural network, to identify the crops. achieving an accuracy of 93.54% was made possible by implementing this advanced deep-learning model.
[ "agriculture", "an expanding area", "study", "crop prediction", "agriculture", "soil and environmental factors", "rainfall", "humidity", "temperature", "farmers", "the authority", "the crop", "its development", "the optimal harvest time", "the farming community", "challenges", "its practices", "the swift alterations", "climatic conditions", "machine learning algorithms", "traditional methods", "agricultural productivity", "recent years", "optimal precision", "a specific machine learning approach", "authors", "their approach", "machine learning", "deep learning techniques", "we", "machine", "deep learning", "algorithms", "crop outcomes", "this proposed model", "we", "machine learning algorithms", "naive bayes", "decision tree", "knn", "it", "the decision tree", "algorithm", "superior performance", "the other algorithms", "an accuracy rate", "83%", "order", "the precision", "we", "a deep learning technique", "specifically a convolutional neural network", "the crops", "an accuracy", "93.54%", "this advanced deep-learning model", "recent years", "83%", "93.54%" ]
A systematic review on machine learning and deep learning techniques in the effective diagnosis of Alzheimer’s disease
[ "Akhilesh Deep Arya", "Sourabh Singh Verma", "Prasun Chakarabarti", "Tulika Chakrabarti", "Ahmed A. Elngar", "Ali-Mohammad Kamali", "Mohammad Nami" ]
Alzheimer’s disease (AD) is a brain-related disease in which the condition of the patient gets worse with time. AD is not a curable disease by any medication. It is impossible to halt the death of brain cells, but with the help of medication, the effects of AD can be delayed. As not all MCI patients will suffer from AD, it is required to accurately diagnose whether a mild cognitive impaired (MCI) patient will convert to AD (namely MCI converter MCI-C) or not (namely MCI non-converter MCI-NC), during early diagnosis. There are two modalities, positron emission tomography (PET) and magnetic resonance image (MRI), used by a physician for the diagnosis of Alzheimer’s disease. Machine learning and deep learning perform exceptionally well in the field of computer vision where there is a requirement to extract information from high-dimensional data. Researchers use deep learning models in the field of medicine for diagnosis, prognosis, and even to predict the future health of the patient under medication. This study is a systematic review of publications using machine learning and deep learning methods for early classification of normal cognitive (NC) and Alzheimer’s disease (AD). This study is an effort to provide the details of the two most commonly used modalities, PET and MRI, for the identification of AD, and to evaluate the performance of both modalities while working with different classifiers.
10.1186/s40708-023-00195-7
a systematic review on machine learning and deep learning techniques in the effective diagnosis of alzheimer’s disease
alzheimer’s disease (ad) is a brain-related disease in which the condition of the patient gets worse with time. ad is not a curable disease by any medication. it is impossible to halt the death of brain cells, but with the help of medication, the effects of ad can be delayed. as not all mci patients will suffer from ad, it is required to accurately diagnose whether a mild cognitive impaired (mci) patient will convert to ad (namely mci converter mci-c) or not (namely mci non-converter mci-nc), during early diagnosis. there are two modalities, positron emission tomography (pet) and magnetic resonance image (mri), used by a physician for the diagnosis of alzheimer’s disease. machine learning and deep learning perform exceptionally well in the field of computer vision where there is a requirement to extract information from high-dimensional data. researchers use deep learning models in the field of medicine for diagnosis, prognosis, and even to predict the future health of the patient under medication. this study is a systematic review of publications using machine learning and deep learning methods for early classification of normal cognitive (nc) and alzheimer’s disease (ad). this study is an effort to provide the details of the two most commonly used modalities, pet and mri, for the identification of ad, and to evaluate the performance of both modalities while working with different classifiers.
[ "alzheimer’s disease", "ad", "a brain-related disease", "which", "the condition", "the patient", "time", "ad", "a curable disease", "any medication", "it", "the death", "brain cells", "the help", "medication", "the effects", "ad", "not all mci patients", "ad", "it", "a mild cognitive impaired (mci) patient", "ad", "namely mci converter mci-c", "namely mci non-converter mci-nc", "early diagnosis", "two modalities", "positron emission tomography", "pet", "magnetic resonance image", "mri", "a physician", "the diagnosis", "alzheimer’s disease", "machine learning", "deep learning", "the field", "computer vision", "a requirement", "information", "high-dimensional data", "researchers", "deep learning models", "the field", "medicine", "diagnosis", "prognosis", "the future health", "the patient", "medication", "this study", "a systematic review", "publications", "machine learning", "deep learning methods", "early classification", "(nc", "ad).this study", "an effort", "the details", "the two most commonly used modalities", "the identification", "ad", "the performance", "both modalities", "different classifiers", "mci", "mci", "mci", "mci", "mci", "mci-nc", "two", "two" ]
Exploring the connection between deep learning and learning assessments: a cross-disciplinary engineering education perspective
[ "Sabrina Fawzia", "Azharul Karim" ]
It is widely accepted that student learning is significantly affected by assessment methods, but a concrete relationship has not been established in the context of multidisciplinary engineering education. Students make a physiological investment and internalize learning (deep learning) if they see high value in their learning. They persist despite challenges and take delight in accomplishing their work. As student deep learning is affected by the assessment system, it is important to explore the relationship between assessment systems and factors affecting deep learning. This study identifies the factors associated with deep learning and examines the relationships between different assessment systems and those factors. A conceptual model is proposed, and a structured questionnaire was designed and directed to 600 Queensland University of Technology (QUT) multidisciplinary engineering students, with 243 responses received. The gathered data were analyzed using both SPSS and SEM. Exploratory factor analysis revealed that deep learning is strongly associated with learning environment and course design and content. A strong influence of both summative and formative assessment on learning was established in this study. Engineering educators can facilitate deep learning by adopting both assessment types simultaneously to make the learning process more effective. The proposed theoretical model related to the deep learning concept can support the key practices and modern learning methodologies currently adopted to enhance the learning and teaching process.
10.1057/s41599-023-02542-9
exploring the connection between deep learning and learning assessments: a cross-disciplinary engineering education perspective
it is widely accepted that student learning is significantly affected by assessment methods, but a concrete relationship has not been established in the context of multidisciplinary engineering education. students make a physiological investment and internalize learning (deep learning) if they see high value in their learning. they persist despite challenges and take delight in accomplishing their work. as student deep learning is affected by the assessment system, it is important to explore the relationship between assessment systems and factors affecting deep learning. this study identifies the factors associated with deep learning and examines the relationships between different assessment systems and those factors. a conceptual model is proposed, and a structured questionnaire was designed and directed to 600 queensland university of technology (qut) multidisciplinary engineering students, with 243 responses received. the gathered data were analyzed using both spss and sem. exploratory factor analysis revealed that deep learning is strongly associated with learning environment and course design and content. a strong influence of both summative and formative assessment on learning was established in this study. engineering educators can facilitate deep learning by adopting both assessment types simultaneously to make the learning process more effective. the proposed theoretical model related to the deep learning concept can support the key practices and modern learning methodologies currently adopted to enhance the learning and teaching process.
[ "it", "that student learning", "assessment methods", "a concrete relationship", "the context", "multidisciplinary engineering education", "students", "a physiological investment", "they", "high value", "their learning", "they", "challenges", "delight", "their work", "student deep learning", "the assessment system", "it", "the relationship", "assessment systems", "factors", "deep learning", "this study", "the factors", "deep learning", "the relationships", "those factors", "a conceptual model", "a structured questionnaire", "technology", "243 responses", "the gathered data", "sem", "exploratory factor analysis", "deep learning", "environment", "course design", "content", "strong influence", "both summative and formative assessment", "learning", "this study", "engineering educators", "deep learning", "both assessment types", "the learning process", "the proposed theoretical model", "the deep learning concept", "the key practices", "modern learning methodologies", "the learning and teaching process", "600 queensland university of technology", "243", "spss" ]
Integrating QSAR modelling and deep learning in drug discovery: the emergence of deep QSAR
[ "Alexander Tropsha", "Olexandr Isayev", "Alexandre Varnek", "Gisbert Schneider", "Artem Cherkasov" ]
Quantitative structure–activity relationship (QSAR) modelling, an approach that was introduced 60 years ago, is widely used in computer-aided drug design. In recent years, progress in artificial intelligence techniques, such as deep learning, the rapid growth of databases of molecules for virtual screening and dramatic improvements in computational power have supported the emergence of a new field of QSAR applications that we term ‘deep QSAR’. Marking a decade from the pioneering applications of deep QSAR to tasks involved in small-molecule drug discovery, we herein describe key advances in the field, including deep generative and reinforcement learning approaches in molecular design, deep learning models for synthetic planning and the application of deep QSAR models in structure-based virtual screening. We also reflect on the emergence of quantum computing, which promises to further accelerate deep QSAR applications and the need for open-source and democratized resources to support computer-aided drug design.
10.1038/s41573-023-00832-0
integrating qsar modelling and deep learning in drug discovery: the emergence of deep qsar
quantitative structure–activity relationship (qsar) modelling, an approach that was introduced 60 years ago, is widely used in computer-aided drug design. in recent years, progress in artificial intelligence techniques, such as deep learning, the rapid growth of databases of molecules for virtual screening and dramatic improvements in computational power have supported the emergence of a new field of qsar applications that we term ‘deep qsar’. marking a decade from the pioneering applications of deep qsar to tasks involved in small-molecule drug discovery, we herein describe key advances in the field, including deep generative and reinforcement learning approaches in molecular design, deep learning models for synthetic planning and the application of deep qsar models in structure-based virtual screening. we also reflect on the emergence of quantum computing, which promises to further accelerate deep qsar applications and the need for open-source and democratized resources to support computer-aided drug design.
[ "quantitative structure", "activity relationship", "qsar", "modelling", "an approach", "that", "computer-aided drug design", "recent years", "progress", "artificial intelligence techniques", "deep learning", "the rapid growth", "databases", "molecules", "virtual screening", "dramatic improvements", "computational power", "the emergence", "a new field", "qsar applications", "we", "a decade", "the pioneering applications", "deep qsar", "tasks", "small-molecule drug discovery", "we", "key advances", "the field", "deep generative and reinforcement learning approaches", "molecular design", "deep learning models", "synthetic planning", "the application", "deep qsar models", "structure-based virtual screening", "we", "the emergence", "quantum computing", "which", "deep qsar applications", "the need", "open-source", "democratized resources", "computer-aided drug design", "60 years ago", "recent years", "quantum" ]
Deep learning for water quality
[ "Wei Zhi", "Alison P. Appling", "Heather E. Golden", "Joel Podgorski", "Li Li" ]
Understanding and predicting the quality of inland waters are challenging, particularly in the context of intensifying climate extremes expected in the future. These challenges arise partly due to complex processes that regulate water quality, and arduous and expensive data collection that exacerbate the issue of data scarcity. Traditional process-based and statistical models often fall short in predicting water quality. In this Review, we posit that deep learning represents an underutilized yet promising approach that can unravel intricate structures and relationships in high-dimensional data. We demonstrate that deep learning methods can help address data scarcity by filling temporal and spatial gaps and aid in formulating and testing hypotheses via identifying influential drivers of water quality. This Review highlights the strengths and limitations of deep learning methods relative to traditional approaches, and underscores its potential as an emerging and indispensable approach in overcoming challenges and discovering new knowledge in water-quality sciences.
10.1038/s44221-024-00202-z
deep learning for water quality
understanding and predicting the quality of inland waters are challenging, particularly in the context of intensifying climate extremes expected in the future. these challenges arise partly due to complex processes that regulate water quality, and arduous and expensive data collection that exacerbate the issue of data scarcity. traditional process-based and statistical models often fall short in predicting water quality. in this review, we posit that deep learning represents an underutilized yet promising approach that can unravel intricate structures and relationships in high-dimensional data. we demonstrate that deep learning methods can help address data scarcity by filling temporal and spatial gaps and aid in formulating and testing hypotheses via identifying influential drivers of water quality. this review highlights the strengths and limitations of deep learning methods relative to traditional approaches, and underscores its potential as an emerging and indispensable approach in overcoming challenges and discovering new knowledge in water-quality sciences.
[ "understanding", "the quality", "inland waters", "the context", "climate extremes", "the future", "these challenges", "complex processes", "that", "water quality", "arduous and expensive data collection", "that", "the issue", "data scarcity", "traditional process-based and statistical models", "water quality", "this review", "we", "deep learning", "an underutilized yet promising approach", "that", "intricate structures", "relationships", "high-dimensional data", "we", "deep learning methods", "data scarcity", "temporal and spatial gaps", "aid", "formulating and testing hypotheses", "influential drivers", "water quality", "this review", "the strengths", "limitations", "deep learning methods", "traditional approaches", "its potential", "an emerging and indispensable approach", "challenges", "new knowledge", "water-quality sciences" ]
Comparative Analysis of Machine Learning, Ensemble Learning and Deep Learning Classifiers for Parkinson’s Disease Detection
[ "Palak Goyal", "Rinkle Rani" ]
A progressive neurodegenerative ailment called Parkinson's disease (PD) is marked by the death of dopamine-producing cells in the substantia nigra area of the brain. The exact etiology of PD remains elusive, but it is believed to involve the presence of Lewy bodies, abnormal protein aggregates, in affected brain regions, leading to the motor symptoms of PD. Hence, as the management of PD continues to evolve, there is a growing demand for the establishment of a descriptive system that enables the early detection of PD. In this study, we conducted an extensive analysis using machine learning, ensemble learning, and deep learning models with different hyperparameters to develop accurate classification models for PD prediction. To enhance classifier performance and address overfitting, we employed principal component analysis (PCA) for feature selection along with various preprocessing techniques. The dataset used consisted of voice samples, comprising 188 PD patients and 64 normal individuals. Our results demonstrated that the Random Forest (RF) model with accuracy of 82.37% outperformed the other base classifiers. Among the ensemble classifiers, the LGBM model exhibited the highest accuracy of 85.90% when compared to both base and ensemble classifiers. Notably, the deep learning model has 91.33% training accuracy and 85.02% testing accuracy, suggesting that deep learning models perform comparably on small datasets compared to machine learning classifiers. Overall, our findings underscore the effectiveness of machine learning, ensemble techniques and deep learning models in accurately predicting PD.
10.1007/s42979-023-02368-x
comparative analysis of machine learning, ensemble learning and deep learning classifiers for parkinson’s disease detection
a progressive neurodegenerative ailment called parkinson's disease (pd) is marked by the death of dopamine-producing cells in the substantia nigra area of the brain. the exact etiology of pd remains elusive, but it is believed to involve the presence of lewy bodies, abnormal protein aggregates, in affected brain regions, leading to the motor symptoms of pd. hence, as the management of pd continues to evolve, there is a growing demand for the establishment of a descriptive system that enables the early detection of pd. in this study, we conducted an extensive analysis using machine learning, ensemble learning, and deep learning models with different hyperparameters to develop accurate classification models for pd prediction. to enhance classifier performance and address overfitting, we employed principal component analysis (pca) for feature selection along with various preprocessing techniques. the dataset used consisted of voice samples, comprising 188 pd patients and 64 normal individuals. our results demonstrated that the random forest (rf) model with accuracy of 82.37% outperformed the other base classifiers. among the ensemble classifiers, the lgbm model exhibited the highest accuracy of 85.90% when compared to both base and ensemble classifiers. notably, the deep learning model has 91.33% training accuracy and 85.02% testing accuracy, suggesting that deep learning models perform comparably on small datasets compared to machine learning classifiers. overall, our findings underscore the effectiveness of machine learning, ensemble techniques and deep learning models in accurately predicting pd.
[ "a progressive neurodegenerative ailment", "parkinson's disease", "pd", "the death", "dopamine-producing cells", "the substantia nigra area", "the brain", "the exact etiology", "pd", "it", "the presence", "lewy bodies", "abnormal protein aggregates", "affected brain regions", "the mobile symptoms", "pd", "the management", "pd", "a growing demand", "the establishment", "a descriptive system", "that", "the early detection", "pd", "this study", "we", "an extensive analysis", "machine learning", "ensemble learning", "deep learning models", "different hyperparameters", "accurate classification models", "pd prediction", "classifier performance", "address overfitting", "we", "principal component analysis", "pca", "feature selection", "various preprocessing techniques", "the dataset", "voice samples", "188 pd patients", "64 normal individuals", "our results", "the random forest", "(rf) model", "accuracy", "82.37%", "the other base classifiers", "the ensemble classifiers", "the lgbm model", "the highest accuracy", "85.90%", "both base and ensemble classifiers", "the deep learning model", "91.33% training accuracy", "85.02% testing accuracy", "deep learning models", "small datasets", "machine learning classifiers", "our findings", "the effectiveness", "machine learning", "ensemble techniques", "deep learning models", "pd", "188", "64", "82.37%", "85.90%", "91.33%", "85.02%" ]
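The PCA-plus-classifier pipeline this abstract describes can be sketched with scikit-learn. Everything below is an illustrative assumption, not the paper's actual configuration: the voice features are random placeholders, and the component and tree counts are arbitrary; only the 188/64 class split mirrors the abstract.

```python
# Sketch of a PCA -> Random Forest pipeline for tabular voice features.
# All features and hyperparameters here are illustrative placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 50))          # 252 samples: 188 PD + 64 controls
y = np.array([1] * 188 + [0] * 64)      # 1 = PD, 0 = healthy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = Pipeline([
    ("pca", PCA(n_components=10)),      # dimensionality reduction step
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
```

Chaining PCA and the classifier in one `Pipeline` keeps the projection fitted only on training folds, which is what makes this setup useful against overfitting.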
Employing deep learning and transfer learning for accurate brain tumor detection
[ "Sandeep Kumar Mathivanan", "Sridevi Sonaimuthu", "Sankar Murugesan", "Hariharan Rajadurai", "Basu Dev Shivahare", "Mohd Asif Shah" ]
Artificial intelligence-powered deep learning methods are being used to diagnose brain tumors with high accuracy, owing to their ability to process large amounts of data. Magnetic resonance imaging stands as the gold standard for brain tumor diagnosis using machine vision, surpassing computed tomography, ultrasound, and X-ray imaging in its effectiveness. Despite this, brain tumor diagnosis remains a challenging endeavour due to the intricate structure of the brain. This study delves into the potential of deep transfer learning architectures to elevate the accuracy of brain tumor diagnosis. Transfer learning is a machine learning technique that allows us to repurpose pre-trained models on new tasks. This can be particularly useful for medical imaging tasks, where labelled data is often scarce. Four distinct transfer learning architectures were assessed in this study: ResNet152, VGG19, DenseNet169, and MobileNetv3. The models were trained and validated on a dataset from benchmark database: Kaggle. Five-fold cross validation was adopted for training and testing. To enhance the balance of the dataset and improve the performance of the models, image enhancement techniques were applied to the data for the four categories: pituitary, normal, meningioma, and glioma. MobileNetv3 achieved the highest accuracy of 99.75%, significantly outperforming other existing methods. This demonstrates the potential of deep transfer learning architectures to revolutionize the field of brain tumor diagnosis.
10.1038/s41598-024-57970-7
employing deep learning and transfer learning for accurate brain tumor detection
artificial intelligence-powered deep learning methods are being used to diagnose brain tumors with high accuracy, owing to their ability to process large amounts of data. magnetic resonance imaging stands as the gold standard for brain tumor diagnosis using machine vision, surpassing computed tomography, ultrasound, and x-ray imaging in its effectiveness. despite this, brain tumor diagnosis remains a challenging endeavour due to the intricate structure of the brain. this study delves into the potential of deep transfer learning architectures to elevate the accuracy of brain tumor diagnosis. transfer learning is a machine learning technique that allows us to repurpose pre-trained models on new tasks. this can be particularly useful for medical imaging tasks, where labelled data is often scarce. four distinct transfer learning architectures were assessed in this study: resnet152, vgg19, densenet169, and mobilenetv3. the models were trained and validated on a dataset from benchmark database: kaggle. five-fold cross validation was adopted for training and testing. to enhance the balance of the dataset and improve the performance of the models, image enhancement techniques were applied to the data for the four categories: pituitary, normal, meningioma, and glioma. mobilenetv3 achieved the highest accuracy of 99.75%, significantly outperforming other existing methods. this demonstrates the potential of deep transfer learning architectures to revolutionize the field of brain tumor diagnosis.
[ "artificial intelligence-powered deep learning methods", "brain tumors", "high accuracy", "their ability", "large amounts", "data", "magnetic resonance imaging", "the gold standard", "brain tumor diagnosis", "machine vision", "computed tomography", "ultrasound", "x", "-ray imaging", "its effectiveness", "this", "brain tumor diagnosis", "a challenging endeavour", "the intricate structure", "the brain", "this study", "the potential", "architectures", "the accuracy", "brain tumor diagnosis", "transfer learning", "a machine learning technique", "that", "us", "pre-trained models", "new tasks", "this", "medical imaging tasks", "labelled data", "four distinct transfer learning architectures", "this study", "resnet152", "vgg19", "densenet169", "mobilenetv3", "the models", "a dataset", "benchmark database", "kaggle", "five-fold cross validation", "training", "testing", "the balance", "the dataset", "the performance", "the models", "image enhancement techniques", "the data", "the four categories", "glioma", "mobilenetv3", "the highest accuracy", "99.75%", "other existing methods", "this", "the potential", "architectures", "the field", "brain tumor diagnosis", "four", "mobilenetv3", "five-fold", "four", "mobilenetv3", "99.75%" ]
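The five-fold cross-validation protocol mentioned above can be sketched with scikit-learn's `StratifiedKFold`. The stand-in classifier, synthetic features, and four-class labels below are assumptions for illustration; the study's actual models are CNN transfer-learning architectures.

```python
# Sketch of five-fold stratified cross-validation over four tumor classes.
# A logistic-regression stand-in replaces the paper's CNN architectures.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # placeholder image features
y = rng.integers(0, 4, size=200)      # pituitary/normal/meningioma/glioma

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_scores = []
for train_idx, test_idx in skf.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    fold_scores.append(clf.score(X[test_idx], y[test_idx]))
mean_accuracy = float(np.mean(fold_scores))
```

Stratification keeps each fold's class proportions close to the full dataset's, which matters when the four tumor categories are imbalanced.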
Detecting Suicidality in Arabic Tweets Using Machine Learning and Deep Learning Techniques
[ "Asma Abdulsalam", "Areej Alhothali", "Saleh Al-Ghamdi" ]
Social media platforms have revolutionized traditional communication techniques by allowing people to connect instantaneously, openly, and frequently. As people use social media to share personal stories and express their opinions, negative emotions such as thoughts of death, self-harm, and hardship are commonly expressed, particularly among younger generations. Accordingly, the use of social media to detect suicidality may help provide proper intervention that will ultimately deter the spread of self-harm and suicidal ideation on social media. To investigate the automated detection of suicidal thoughts in Arabic tweets, we developed a novel Arabic suicidal tweet dataset, examined several machine learning models trained on word frequency and embedding features, and investigated the performance of pre-trained deep learning models in identifying suicidal sentiment. The results indicate that the support vector machine trained on character n-gram features yields the best performance among conventional machine learning models, with an accuracy of 86% and F1 score of 79%. In the subsequent deep learning experiment, AraBert outperformed all other machine and deep learning models with an accuracy of 91% and F1-score of 88%, significantly improving the detection of suicidal ideation in the dataset. To the best of our knowledge, this study represents the first attempt to compile an Arabic suicidality detection dataset from Twitter and to use deep learning to detect suicidal sentiment in Arabic posts.
10.1007/s13369-024-08767-3
detecting suicidality in arabic tweets using machine learning and deep learning techniques
social media platforms have revolutionized traditional communication techniques by allowing people to connect instantaneously, openly, and frequently. as people use social media to share personal stories and express their opinions, negative emotions such as thoughts of death, self-harm, and hardship are commonly expressed, particularly among younger generations. accordingly, the use of social media to detect suicidality may help provide proper intervention that will ultimately deter the spread of self-harm and suicidal ideation on social media. to investigate the automated detection of suicidal thoughts in arabic tweets, we developed a novel arabic suicidal tweet dataset, examined several machine learning models trained on word frequency and embedding features, and investigated the performance of pre-trained deep learning models in identifying suicidal sentiment. the results indicate that the support vector machine trained on character n-gram features yields the best performance among conventional machine learning models, with an accuracy of 86% and f1 score of 79%. in the subsequent deep learning experiment, arabert outperformed all other machine and deep learning models with an accuracy of 91% and f1-score of 88%, significantly improving the detection of suicidal ideation in the dataset. to the best of our knowledge, this study represents the first attempt to compile an arabic suicidality detection dataset from twitter and to use deep learning to detect suicidal sentiment in arabic posts.
[ "social media platforms", "traditional communication techniques", "people", "people", "social media", "personal stories", "their opinions", "negative emotions", "thoughts", "death", "self-harm", "hardship", "younger generations", "the use", "social media", "suicidality", "proper intervention", "that", "the spread", "self-harm", "suicidal ideation", "social media", "the automated detection", "suicidal thoughts", "arabic tweets", "we", "a novel arabic suicidal tweet dataset", "several machine learning models", "word frequency", "features", "the performance", "pre-trained deep learning models", "suicidal sentiment", "the results", "the support vector machine", "character n-gram features", "the best performance", "conventional machine learning models", "an accuracy", "86%", "f1 score", "79%", "the subsequent deep learning experiment", "arabert", "all other machine", "deep learning models", "an accuracy", "91%", "f1-score", "88%", "the detection", "suicidal ideation", "the dataset", "our knowledge", "this study", "the first attempt", "an arabic suicidality detection", "twitter", "deep learning", "suicidal sentiment", "arabic posts", "arabic", "arabic", "86%", "79%", "91%", "88%", "first", "arabic", "arabic" ]
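The best conventional model in this abstract, an SVM over character n-grams, can be sketched with scikit-learn. The toy English sentences and labels below are invented placeholders standing in for the paper's Arabic tweet dataset.

```python
# Sketch: character n-gram TF-IDF features feeding a linear SVM.
# The toy sentences stand in for the paper's Arabic tweets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = ["i feel hopeless and alone", "what a lovely day outside",
         "i cannot go on anymore", "excited about the weekend trip"]
labels = [1, 0, 1, 0]                 # 1 = suicidal ideation, 0 = neutral

model = Pipeline([
    # char_wb builds n-grams within word boundaries, robust to morphology
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ("svm", LinearSVC()),
])
model.fit(texts, labels)
pred = model.predict(["i feel so alone and hopeless"])[0]
```

Character n-grams are a common choice for morphologically rich languages such as Arabic, since they capture sub-word patterns without a language-specific tokenizer.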
Deep learning for code generation: a survey
[ "Huangzhao Zhang", "Kechi Zhang", "Zhuo Li", "Jia Li", "Jia Li", "Yongmin Li", "Yunfei Zhao", "Yuqi Zhu", "Fang Liu", "Ge Li", "Zhi Jin" ]
In the past decade, thanks to the power of deep-learning techniques, we have witnessed a whole new era of automated code generation. To sort out developments, we have conducted a comprehensive review of solutions to deep learning-based code generation. In this survey, we generally formalize the pipeline and procedure of code generation and categorize existing solutions according to a taxonomy built from the perspectives of architecture, model-agnostic enhancing strategy, metrics, and tasks. In addition, we outline the challenges faced by current dominant large models and list several plausible directions for future research. We hope that this survey may provide handy guidance for understanding, utilizing, and developing deep learning-based code-generation techniques for researchers and practitioners.
10.1007/s11432-023-3956-3
deep learning for code generation: a survey
in the past decade, thanks to the power of deep-learning techniques, we have witnessed a whole new era of automated code generation. to sort out developments, we have conducted a comprehensive review of solutions to deep learning-based code generation. in this survey, we generally formalize the pipeline and procedure of code generation and categorize existing solutions according to a taxonomy built from the perspectives of architecture, model-agnostic enhancing strategy, metrics, and tasks. in addition, we outline the challenges faced by current dominant large models and list several plausible directions for future research. we hope that this survey may provide handy guidance for understanding, utilizing, and developing deep learning-based code-generation techniques for researchers and practitioners.
[ "the past decade", "the powerfulness", "deep-learning techniques", "we", "a whole new era", "automated code generation", "developments", "we", "a comprehensive review", "solutions", "deep learning-based code generation", "this survey", "we", "the pipeline", "procedure", "code generation", "existing solutions", "taxonomy", "perspectives", "architecture", "model-agnostic enhancing strategy", "metrics", "tasks", "addition", "we", "the challenges", "current dominant large models", "several plausible directions", "future research", "we", "this survey", "handy guidance", "understanding", "deep learning-based code-generation techniques", "researchers", "practitioners", "the past decade" ]
Privacy enhanced course recommendations through deep learning in Federated Learning environments
[ "Chandra Sekhar Kolli", "Sreenivasu Seelamanthula", "Venkata Krishna Reddy V", "Padamata Ramesh Babu", "Mule Rama Krishna Reddy", "Babu Rao Gumpina" ]
The increasing concerns around data security and privacy among users have significantly pushed the interest of the research community towards developing privacy-preserving recommendation systems. Amidst this backdrop, our study introduces a novel course recommendation methodology leveraging Federated Learning (FL) coupled with advanced Deep Learning techniques. This method executes the recommendation process across local nodes through several stages, including agglomerative matrix formulation, course clustering, bi-level matching, identification of learner-preferred courses, and ultimately, course recommendation. Notably, course clustering is achieved through Deep Fuzzy Clustering (DFC), while Deep Convolutional Neural Networks (DCNN) are employed for the recommendation phase. The efficacy of our DFC-DCNN-FL approach is rigorously evaluated based on several metrics: accuracy, False Positive Rate (FPR), loss function, Mean Square Error (MSE), Root MSE (RMSE), and Mean Average Precision (MAP). The results demonstrate remarkable performance with scores of 0.909, 0.116, 0.126, 0.291, 0.539, and 0.925, respectively.
10.1007/s41870-024-02087-3
privacy enhanced course recommendations through deep learning in federated learning environments
the increasing concerns around data security and privacy among users have significantly pushed the interest of the research community towards developing privacy-preserving recommendation systems. amidst this backdrop, our study introduces a novel course recommendation methodology leveraging federated learning (fl) coupled with advanced deep learning techniques. this method executes the recommendation process across local nodes through several stages, including agglomerative matrix formulation, course clustering, bi-level matching, identification of learner-preferred courses, and ultimately, course recommendation. notably, course clustering is achieved through deep fuzzy clustering (dfc), while deep convolutional neural networks (dcnn) are employed for the recommendation phase. the efficacy of our dfc-dcnn-fl approach is rigorously evaluated based on several metrics: accuracy, false positive rate (fpr), loss function, mean square error (mse), root mse (rmse), and mean average precision (map). the results demonstrate remarkable performance with scores of 0.909, 0.116, 0.126, 0.291, 0.539, and 0.925, respectively.
[ "the increasing concerns", "data security", "privacy", "users", "the interest", "the research community", "privacy-preserving recommendation systems", "this backdrop", "our study", "a novel course recommendation methodology", "federated learning", "advanced deep learning techniques", "this method", "the recommendation process", "local nodes", "several stages", "agglomerative matrix formulation", "bi-level matching", "identification", "learner-preferred courses", "ultimately, course recommendation", "course clustering", "deep fuzzy clustering", "dfc", "deep convolutional neural networks", "dcnn", "the recommendation phase", "the efficacy", "our dfc-dcnn-fl approach", "several metrics", "accuracy", "false positive rate", "fpr", "loss function", "square error", "mse", "root mse", "rmse", "average precision", "(map", "the results", "remarkable performance", "scores", "rmse", "0.909", "0.116", "0.126", "0.291", "0.539", "0.925" ]
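Two of the metrics reported above, MSE and RMSE, are related by a square root, which is a quick consistency check on any reported pair. A short numpy sketch with made-up predictions (not the paper's outputs):

```python
# Sketch of the regression-style error metrics listed in the abstract.
# The prediction arrays are made-up values, not the paper's outputs.
import numpy as np

y_true = np.array([4.0, 3.5, 5.0, 2.0])   # hypothetical course ratings
y_pred = np.array([3.8, 3.9, 4.6, 2.3])   # hypothetical model predictions

mse = float(np.mean((y_true - y_pred) ** 2))
rmse = float(np.sqrt(mse))                # RMSE is the square root of MSE
```

Applied to the reported figures, sqrt(0.291) ≈ 0.539, so the paper's MSE and RMSE values are indeed consistent.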
DEEP-squared: deep learning powered De-scattering with Excitation Patterning
[ "Navodini Wijethilake", "Mithunjha Anandakumar", "Cheng Zheng", "Peter T. C. So", "Murat Yildirim", "Dushan N. Wadduwage" ]
Limited throughput is a key challenge in in vivo deep tissue imaging using nonlinear optical microscopy. Point scanning multiphoton microscopy, the current gold standard, is slow especially compared to the widefield imaging modalities used for optically cleared or thin specimens. We recently introduced “De-scattering with Excitation Patterning” or “DEEP” as a widefield alternative to point-scanning geometries. Using patterned multiphoton excitation, DEEP encodes spatial information inside tissue before scattering. However, to de-scatter at typical depths, hundreds of such patterned excitations were needed. In this work, we present DEEP2, a deep learning-based model that can de-scatter images from just tens of patterned excitations instead of hundreds. Consequently, we improve DEEP’s throughput by almost an order of magnitude. We demonstrate our method in multiple numerical and experimental imaging studies, including in vivo cortical vasculature imaging up to 4 scattering lengths deep in live mice.
10.1038/s41377-023-01248-6
deep-squared: deep learning powered de-scattering with excitation patterning
limited throughput is a key challenge in in vivo deep tissue imaging using nonlinear optical microscopy. point scanning multiphoton microscopy, the current gold standard, is slow especially compared to the widefield imaging modalities used for optically cleared or thin specimens. we recently introduced “de-scattering with excitation patterning” or “deep” as a widefield alternative to point-scanning geometries. using patterned multiphoton excitation, deep encodes spatial information inside tissue before scattering. however, to de-scatter at typical depths, hundreds of such patterned excitations were needed. in this work, we present deep2, a deep learning-based model that can de-scatter images from just tens of patterned excitations instead of hundreds. consequently, we improve deep’s throughput by almost an order of magnitude. we demonstrate our method in multiple numerical and experimental imaging studies, including in vivo cortical vasculature imaging up to 4 scattering lengths deep in live mice.
[ "limited throughput", "a key challenge", "vivo deep tissue", "nonlinear optical microscopy", "point", "multiphoton microscopy", "the current gold standard", "the widefield imaging modalities", "optically cleared or thin specimens", "we", "de", "a widefield alternative", "point-scanning geometries", "patterned multiphoton excitation", "tissue", "de", "-", "scatter", "typical depths", "hundreds", "such patterned excitations", "this work", "we", "deep2", "a deep learning-based model", "that", "de-scatter images", "just tens", "patterned excitations", "hundreds", "we", "deep’s throughput", "almost an order", "magnitude", "we", "our method", "multiple numerical and experimental imaging studies", "vivo cortical vasculature", "scattering lengths", "live mice", "multiphoton", "hundreds", "deep2", "just tens", "hundreds", "4" ]
Harnessing deep learning for population genetic inference
[ "Xin Huang", "Aigerim Rymbekova", "Olga Dolgova", "Oscar Lao", "Martin Kuhlwilm" ]
In population genetics, the emergence of large-scale genomic data for various species and populations has provided new opportunities to understand the evolutionary forces that drive genetic diversity using statistical inference. However, the era of population genomics presents new challenges in analysing the massive amounts of genomes and variants. Deep learning has demonstrated state-of-the-art performance for numerous applications involving large-scale data. Recently, deep learning approaches have gained popularity in population genetics; facilitated by the advent of massive genomic data sets, powerful computational hardware and complex deep learning architectures, they have been used to identify population structure, infer demographic history and investigate natural selection. Here, we introduce common deep learning architectures and provide comprehensive guidelines for implementing deep learning models for population genetic inference. We also discuss current challenges and future directions for applying deep learning in population genetics, focusing on efficiency, robustness and interpretability.
10.1038/s41576-023-00636-3
harnessing deep learning for population genetic inference
in population genetics, the emergence of large-scale genomic data for various species and populations has provided new opportunities to understand the evolutionary forces that drive genetic diversity using statistical inference. however, the era of population genomics presents new challenges in analysing the massive amounts of genomes and variants. deep learning has demonstrated state-of-the-art performance for numerous applications involving large-scale data. recently, deep learning approaches have gained popularity in population genetics; facilitated by the advent of massive genomic data sets, powerful computational hardware and complex deep learning architectures, they have been used to identify population structure, infer demographic history and investigate natural selection. here, we introduce common deep learning architectures and provide comprehensive guidelines for implementing deep learning models for population genetic inference. we also discuss current challenges and future directions for applying deep learning in population genetics, focusing on efficiency, robustness and interpretability.
[ "population genetics", "the emergence", "large-scale genomic data", "various species", "populations", "new opportunities", "the evolutionary forces", "that", "genetic diversity", "statistical inference", "the era", "population genomics", "new challenges", "the massive amounts", "genomes", "variants", "deep learning", "the-art", "numerous applications", "large-scale data", "deep learning approaches", "popularity", "population genetics", "the advent", "massive genomic data sets", "powerful computational hardware", "complex deep learning architectures", "they", "population structure", "demographic history", "natural selection", "we", "common deep learning architectures", "comprehensive guidelines", "deep learning models", "population genetic inference", "we", "current challenges", "future directions", "deep learning", "population genetics", "efficiency", "robustness", "interpretability" ]
An enhanced deep learning method for multi-class brain tumor classification using deep transfer learning
[ "Sohaib Asif", "Ming Zhao", "Fengxiao Tang", "Yusen Zhu" ]
Multi-class brain tumor classification is an important area of research in the field of medical imaging because of the different tumor characteristics. One such challenging problem is the multiclass classification of brain tumors using MR images. Since accuracy is critical in classification, computer vision researchers are introducing a number of techniques; however, achieving high accuracy remains challenging when classifying brain images. Early diagnosis of brain tumor types can enable timely treatment, thereby improving the patient’s chances of survival. In recent years, deep learning models have achieved promising results, especially in classifying brain tumors to help neurologists. This work proposes a deep transfer learning model that accelerates brain tumor detection using MR imaging. In this paper, five popular deep learning architectures are utilized to develop a system for diagnosing brain tumors. The architectures used in this paper are Xception, DenseNet201, DenseNet121, ResNet152V2, and InceptionResNetV2. The final layer of these architectures has been modified with our deep dense block and softmax layer as the output layer to improve the classification accuracy. This article presents two main experiments to assess the effectiveness of the proposed model. First, three-class results using images from patients with glioma, meningioma, and pituitary are discussed. Second, the results of four classes are discussed using images of glioma, meningioma, pituitary and healthy patients. The results show that the proposed model based on Xception architecture is the most suitable deep learning model for detecting brain tumors. It achieves a classification accuracy of 99.67% on the 3-class dataset and 95.87% on the 4-class dataset, which is better than the state-of-the-art methods. In conclusion, the proposed model can provide radiologists with an automated medical diagnostic system to make fast and accurate decisions.
10.1007/s11042-023-14828-w
an enhanced deep learning method for multi-class brain tumor classification using deep transfer learning
multi-class brain tumor classification is an important area of research in the field of medical imaging because of the different tumor characteristics. one such challenging problem is the multiclass classification of brain tumors using mr images. since accuracy is critical in classification, computer vision researchers are introducing a number of techniques; however, achieving high accuracy remains challenging when classifying brain images. early diagnosis of brain tumor types can enable timely treatment, thereby improving the patient’s chances of survival. in recent years, deep learning models have achieved promising results, especially in classifying brain tumors to help neurologists. this work proposes a deep transfer learning model that accelerates brain tumor detection using mr imaging. in this paper, five popular deep learning architectures are utilized to develop a system for diagnosing brain tumors. the architectures used in this paper are xception, densenet201, densenet121, resnet152v2, and inceptionresnetv2. the final layer of these architectures has been modified with our deep dense block and softmax layer as the output layer to improve the classification accuracy. this article presents two main experiments to assess the effectiveness of the proposed model. first, three-class results using images from patients with glioma, meningioma, and pituitary are discussed. second, the results of four classes are discussed using images of glioma, meningioma, pituitary and healthy patients. the results show that the proposed model based on xception architecture is the most suitable deep learning model for detecting brain tumors. it achieves a classification accuracy of 99.67% on the 3-class dataset and 95.87% on the 4-class dataset, which is better than the state-of-the-art methods. in conclusion, the proposed model can provide radiologists with an automated medical diagnostic system to make fast and accurate decisions.
[ "multi-class brain tumor classification", "an important area", "research", "the field", "medical imaging", "the different tumor characteristics", "one such challenging problem", "the multiclass classification", "brain tumors", "mr images", "accuracy", "classification", "computer vision researchers", "a number", "techniques", "high accuracy", "brain images", "early diagnosis", "brain tumor types", "timely treatment", "the patient’s chances", "survival", "recent years", "deep learning models", "promising results", "brain tumors", "neurologists", "this work", "a deep transfer learning model", "that", "brain tumor detection", "mr imaging", "this paper", "five popular deep learning architectures", "a system", "brain tumors", "the architectures", "this paper", "xception", "densenet201", "densenet121", "resnet152v2", "the final layer", "these architectures", "our deep dense block", "softmax layer", "the output layer", "the classification accuracy", "this article", "two main experiments", "the effectiveness", "the proposed model", "first, three-class results", "images", "patients", "glioma", "pituitary", "the results", "four classes", "images", "glioma", "patients", "the results", "the proposed model", "xception architecture", "the most suitable deep learning model", "brain tumors", "it", "a classification accuracy", "99.67%", "the 3-class dataset", "95.87%", "the 4-class dataset", "which", "the-art", "conclusion", "the proposed model", "radiologists", "an automated medical diagnostic system", "fast and accurate decisions", "one", "recent years", "five", "inceptionresnetv2", "two", "first", "three", "glioma", "second", "four", "glioma", "99.67%", "3", "95.87%", "4" ]
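Replacing only the final layer of a pre-trained backbone, as this abstract describes, amounts to training a fresh softmax head on frozen features. A scikit-learn sketch under stated assumptions: the backbone features here are random placeholders (not real Xception outputs), and the 2048-dimensional feature size and three-class setting are illustrative.

```python
# Sketch: train a new softmax classification head on features assumed to
# come from a frozen pre-trained backbone. Features are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 2048))   # stand-in backbone outputs
labels = rng.integers(0, 3, size=300)     # glioma/meningioma/pituitary

# Multinomial logistic regression plays the role of a softmax output layer.
head = LogisticRegression(max_iter=500)
head.fit(features, labels)
probs = head.predict_proba(features[:5])
```

Keeping the backbone frozen and retraining only the head is what makes transfer learning practical when labelled medical images are scarce.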
A Procedural Constructive Learning Mechanism with Deep Reinforcement Learning for Cognitive Agents
[ "Leonardo de Lellis Rossi", "Eric Rohmer", "Paula Dornhofer Paro Costa", "Esther Luna Colombini", "Alexandre da Silva Simões", "Ricardo Ribeiro Gudwin" ]
Recent advancements in AI and deep learning have created a growing demand for artificial agents capable of performing tasks within increasingly complex environments. To address the challenges associated with continuous learning constraints and knowledge capacity in this context, cognitive architectures inspired by human cognition have gained significance. This study contributes to existing research by introducing a cognitive-attentional system employing a constructive neural network-based learning approach for continuous acquisition of procedural knowledge. We replace an incremental tabular Reinforcement Learning algorithm with a constructive neural network deep reinforcement learning mechanism for continuous sensorimotor knowledge acquisition, thereby enhancing the overall learning capacity. This modification primarily centers on optimizing memory utilization and reducing training time. Our study presents a learning strategy that amalgamates deep reinforcement learning with procedural learning, mirroring the incremental learning process observed in human sensorimotor development. This approach is embedded within the CONAIM cognitive-attentional architecture, leveraging the cognitive tools of CST. The proposed learning mechanism allows the model to dynamically create and modify elements in its procedural memory, facilitating the reuse of previously acquired functions and procedures. Additionally, it equips the model with the capability to combine learned elements to effectively adapt to complex scenarios. A constructive neural network was employed, initiating with an initial hidden layer comprising one neuron. However, it possesses the capacity to adapt its internal architecture in response to its performance in procedural and sensorimotor learning tasks, inserting new hidden layers or neurons.
Experimentation conducted through simulations involving a humanoid robot demonstrates the successful resolution of tasks that were previously unsolved through incremental knowledge acquisition. Throughout the training phase, the constructive agent achieved a minimum of 40% greater rewards and executed 8% more actions when compared to other agents. In the subsequent testing phase, the constructive agent exhibited a 15% increase in the number of actions performed in contrast to its counterparts.
10.1007/s10846-024-02064-9
a procedural constructive learning mechanism with deep reinforcement learning for cognitive agents
recent advancements in ai and deep learning have created a growing demand for artificial agents capable of performing tasks within increasingly complex environments. to address the challenges associated with continuous learning constraints and knowledge capacity in this context, cognitive architectures inspired by human cognition have gained significance. this study contributes to existing research by introducing a cognitive-attentional system employing a constructive neural network-based learning approach for continuous acquisition of procedural knowledge. we replace an incremental tabular reinforcement learning algorithm with a constructive neural network deep reinforcement learning mechanism for continuous sensorimotor knowledge acquisition, thereby enhancing the overall learning capacity. this modification primarily centers on optimizing memory utilization and reducing training time. our study presents a learning strategy that amalgamates deep reinforcement learning with procedural learning, mirroring the incremental learning process observed in human sensorimotor development. this approach is embedded within the conaim cognitive-attentional architecture, leveraging the cognitive tools of cst. the proposed learning mechanism allows the model to dynamically create and modify elements in its procedural memory, facilitating the reuse of previously acquired functions and procedures. additionally, it equips the model with the capability to combine learned elements to effectively adapt to complex scenarios. a constructive neural network was employed, initiating with an initial hidden layer comprising one neuron. however, it possesses the capacity to adapt its internal architecture in response to its performance in procedural and sensorimotor learning tasks, inserting new hidden layers or neurons.
experimentation conducted through simulations involving a humanoid robot demonstrates the successful resolution of tasks that were previously unsolved through incremental knowledge acquisition. throughout the training phase, the constructive agent achieved a minimum of 40% greater rewards and executed 8% more actions when compared to other agents. in the subsequent testing phase, the constructive agent exhibited a 15% increase in the number of actions performed in contrast to its counterparts.
[ "recent advancements", "ai", "deep learning", "a growing demand", "artificial agents", "tasks", "increasingly complex environments", "the challenges", "continuous learning constraints", "knowledge capacity", "this context", "cognitive architectures", "human cognition", "significance", "this study", "existing research", "a cognitive-attentional system", "a constructive neural network-based learning approach", "continuous acquisition", "procedural knowledge", "we", "an incremental tabular reinforcement learning algorithm", "a constructive neural network deep reinforcement learning mechanism", "continuous sensorimotor knowledge acquisition", "the overall learning capacity", "the primary emphasis", "this modification centers", "memory utilization", "training time", "our study", "a learning strategy", "that", "deep reinforcement learning", "procedural learning", "the incremental learning process", "human sensorimotor development", "this approach", "the conaim cognitive-attentional architecture", "the cognitive tools", "cst", "the proposed learning mechanism", "the model", "elements", "its procedural memory", "the reuse", "previously acquired functions", "procedures", "it", "the model", "the capability", "elements", "complex scenarios", "a constructive neural network", "an initial hidden layer", "one neuron", "it", "the capacity", "its internal architecture", "response", "its performance", "procedural and sensorimotor learning tasks", "new hidden layers", "neurons", "experimentation", "simulations", "a humanoid robot", "the successful resolution", "tasks", "that", "incremental knowledge acquisition", "the training phase", "the constructive agent", "a minimum", "40% greater rewards", "8% more actions", "other agents", "the subsequent testing phase", "the constructive agent", "a 15% increase", "the number", "actions", "contrast", "its counterparts", "one", "40%", "8%", "15%" ]
Classification of Different Plant Species Using Deep Learning and Machine Learning Algorithms
[ "Siddharth Singh Chouhan", "Uday Pratap Singh", "Utkarsh Sharma", "Sanjeev Jain" ]
In the present situation, a lot of research has been directed towards the potency of plants. These natural resources contain characteristics valuable in the fight against a number of diseases. But due to a lack of familiarity with these plants among human beings, an appropriate advantage of their significance cannot be drawn. Plants also share certain similar characteristics of leaves, like color, texture, shape, or size, making them hard to classify among others. So, to eradicate this problem, a deep learning model has been used for the classification of different plant species captured in real time using Internet of Things practice. Six different plants, namely Ashwagandha, Black Pepper, Garlic, Ginger, Basil, and Turmeric, have been selected for this purpose. Our proposed convolutional neural network (CNN) model achieved higher performance, with an accuracy of 99%, when compared with other benchmark deep learning models. Also, to analyze the performance of deep learning versus machine learning models such as logistic regression, decision tree, random forest, Gaussian naïve Bayes, and support vector machine, results were evaluated; when compared, CNN outperformed all machine learning models. The future study will be directed towards automated plant growth estimation.
10.1007/s11277-024-11374-y
classification of different plant species using deep learning and machine learning algorithms
in the present situation, a lot of research has been directed towards the potency of plants. these natural resources contain characteristics valuable in the fight against a number of diseases. but due to a lack of familiarity with these plants among human beings, an appropriate advantage of their significance cannot be drawn. plants also share certain similar characteristics of leaves, like color, texture, shape, or size, making them hard to classify among others. so, to eradicate this problem, a deep learning model has been used for the classification of different plant species captured in real time using internet of things practice. six different plants, namely ashwagandha, black pepper, garlic, ginger, basil, and turmeric, have been selected for this purpose. our proposed convolutional neural network (cnn) model achieved higher performance, with an accuracy of 99%, when compared with other benchmark deep learning models. also, to analyze the performance of deep learning versus machine learning models such as logistic regression, decision tree, random forest, gaussian naïve bayes, and support vector machine, results were evaluated; when compared, cnn outperformed all machine learning models. the future study will be directed towards automated plant growth estimation.
[ "the present situation", "a lot", "research", "the potency", "plants", "these natural resources", "characteristics", "combat", "a number", "diseases", "lack", "familiarity", "these plants", "human beings", "an appropriate advantage", "their significance", "plants", "the certain similar characteristics", "leaves", "color", "texture", "shape", "size", "them", "them", "others", "this problem", "a deep learning model", "the purpose", "classification", "different plants species", "real-time", "internet", "six different plants", "namely ashwagandha", "black pepper", "garlic", "ginger", "basil", "turmeric", "this purpose", "our proposed convolutional neural network (cnn) model", "higher performance", "an accuracy", "99%", "other benchmark deep learning models", "the performance", "deep learning", "machine learning models", "logistic regression", "decision tree", "random forest", "gaussian naïve bayes", "support vector machine results", "cnn", "outperforms", "all machine learning models", "the future study", "the automated plant growth estimation", "six", "basil", "cnn", "99%", "gaussian naïve bayes", "cnn" ]
A deep learning model for anti-inflammatory peptides identification based on deep variational autoencoder and contrastive learning
[ "Yujie Xu", "Shengli Zhang", "Feng Zhu", "Yunyun Liang" ]
As a class of biologically active molecules with significant immunomodulatory and anti-inflammatory effects, anti-inflammatory peptides have important application value in the medical and biotechnology fields due to their unique biological functions. Research on the identification of anti-inflammatory peptides provides important theoretical foundations and practical value for a deeper understanding of the biological mechanisms of inflammation and immune regulation, as well as for the development of new drugs and biotechnological applications. Therefore, it is necessary to develop more advanced computational models for identifying anti-inflammatory peptides. In this study, we propose a deep learning model named DAC-AIPs based on variational autoencoder and contrastive learning for accurate identification of anti-inflammatory peptides. In the sequence encoding part, the incorporation of multi-hot encoding helps capture richer sequence information. The autoencoder, composed of convolutional layers and linear layers, can learn latent features and reconstruct features, with variational inference enhancing the representation capability of latent features. Additionally, the introduction of contrastive learning aims to improve the model's classification ability. Through cross-validation and independent dataset testing experiments, DAC-AIPs achieves superior performance compared to existing state-of-the-art models. In cross-validation, the classification accuracy of DAC-AIPs reached around 88%, which is 7% higher than previous models. Furthermore, various ablation experiments and interpretability experiments validate the effectiveness of DAC-AIPs. Finally, a user-friendly online predictor is designed to enhance the practicality of the model, and the server is freely accessible at http://dac-aips.online.
10.1038/s41598-024-69419-y
a deep learning model for anti-inflammatory peptides identification based on deep variational autoencoder and contrastive learning
as a class of biologically active molecules with significant immunomodulatory and anti-inflammatory effects, anti-inflammatory peptides have important application value in the medical and biotechnology fields due to their unique biological functions. research on the identification of anti-inflammatory peptides provides important theoretical foundations and practical value for a deeper understanding of the biological mechanisms of inflammation and immune regulation, as well as for the development of new drugs and biotechnological applications. therefore, it is necessary to develop more advanced computational models for identifying anti-inflammatory peptides. in this study, we propose a deep learning model named dac-aips based on variational autoencoder and contrastive learning for accurate identification of anti-inflammatory peptides. in the sequence encoding part, the incorporation of multi-hot encoding helps capture richer sequence information. the autoencoder, composed of convolutional layers and linear layers, can learn latent features and reconstruct features, with variational inference enhancing the representation capability of latent features. additionally, the introduction of contrastive learning aims to improve the model's classification ability. through cross-validation and independent dataset testing experiments, dac-aips achieves superior performance compared to existing state-of-the-art models. in cross-validation, the classification accuracy of dac-aips reached around 88%, which is 7% higher than previous models. furthermore, various ablation experiments and interpretability experiments validate the effectiveness of dac-aips. finally, a user-friendly online predictor is designed to enhance the practicality of the model, and the server is freely accessible at http://dac-aips.online.
[ "a class", "biologically active molecules", "anti-inflammatory effects", "anti-inflammatory peptides", "important application value", "the medical and biotechnology fields", "their unique biological functions", "research", "the identification", "anti-inflammatory peptides", "important theoretical foundations", "practical value", "a deeper understanding", "the biological mechanisms", "inflammation", "immune regulation", "the development", "new drugs", "biotechnological applications", "it", "more advanced computational models", "anti-inflammatory peptides", "this study", "we", "a deep learning model", "dac-aips", "variational autoencoder", "contrastive learning", "accurate identification", "anti-inflammatory peptides", "the sequence", "part", "the incorporation", "multi-hot encoding", "richer sequence information", "the autoencoder", "convolutional layers", "linear layers", "latent features", "features", "variational inference", "the representation capability", "latent features", "the introduction", "contrastive learning", "the model's classification ability", "cross-validation and independent dataset testing experiments", "dac-aips", "superior performance", "the-art", "-", "validation", "the classification accuracy", "dac-aips", "around 88%", "which", "previous models", "various ablation experiments", "interpretability experiments", "the effectiveness", "dac-aips", "a user-friendly online predictor", "the practicality", "the model", "the server", "http://dac-aips.online", "linear", "around 88%", "7%" ]
Fraud Detection Using Machine Learning and Deep Learning
[ "Akash Gandhar", "Kapil Gupta", "Aman Kumar Pandey", "Dharm Raj" ]
Detecting fraudulent activities is a major worry for businesses and financial organizations because they can result in significant financial losses and reputational harm. Traditional fraud detection methods frequently depend on preset rules and patterns that skilled scammers can easily circumvent. Machine learning and deep learning algorithms have surfaced as promising methods for detecting fraud in order to handle this problem. The authors present a thorough overview of the most recent ML and DL techniques for fraud identification in this article. These approaches are classified based on their fundamental tactics, which include supervised learning, unsupervised learning, and reinforcement learning. We review recent developments in each area, as well as their strengths and weaknesses. Additionally, we draw attention to some of the major problems with imbalanced datasets, adversarial attacks, and the interpretability of models, as well as other important research tasks and difficulties in fraud detection. We also stress the value of feature engineering and data pre-processing techniques in enhancing the effectiveness of scam detection systems. Finally, we show a case study on the use of DL and ML techniques in the financial sector for fraud detection. The authors show how these algorithms can successfully identify fraudulent transactions, minimize false positives, and maintain high precision and scalability. The overall aim of this article is to provide a comprehensive evaluation of the most cutting-edge ML and DL techniques for fraud identification and to shed light on potential future paths for this field of study.
10.1007/s42979-024-02772-x
fraud detection using machine learning and deep learning
detecting fraudulent activities is a major worry for businesses and financial organizations because they can result in significant financial losses and reputational harm. traditional fraud detection methods frequently depend on preset rules and patterns that skilled scammers can easily circumvent. machine learning and deep learning algorithms have surfaced as promising methods for detecting fraud in order to handle this problem. the authors present a thorough overview of the most recent ml and dl techniques for fraud identification in this article. these approaches are classified based on their fundamental tactics, which include supervised learning, unsupervised learning, and reinforcement learning. we review recent developments in each area, as well as their strengths and weaknesses. additionally, we draw attention to some of the major problems with imbalanced datasets, adversarial attacks, and the interpretability of models, as well as other important research tasks and difficulties in fraud detection. we also stress the value of feature engineering and data pre-processing techniques in enhancing the effectiveness of scam detection systems. finally, we show a case study on the use of dl and ml techniques in the financial sector for fraud detection. the authors show how these algorithms can successfully identify fraudulent transactions, minimize false positives, and maintain high precision and scalability. the overall aim of this article is to provide a comprehensive evaluation of the most cutting-edge ml and dl techniques for fraud identification and to shed light on potential future paths for this field of study.
[ "fraudulent activities", "a major worry", "businesses", "financial organizations", "they", "significant financial losses", "reputational harm", "traditional fraud detection", "a method", "present rules", "patterns", "skilled scammer", "machine learning", "deep learning algorithms", "promising methods", "fraud", "order", "this problem", "authors", "a thorough overview", "the most recent ml and dl techniques", "fraud identification", "this article", "these approaches", "their fundamental tactics", "which", "supervised learning", "unsupervised learning", "reinforcement learning", "we", "recent developments", "each area", "their strengths", "weaknesses", "we", "attention", "some", "the major problems", "imbalanced datasets", "adversarial assaults", "the interpretability", "models", "other important research tasks", "difficulties", "fraud detection", "we", "the value", "feature science and data pre-processing techniques", "the effectiveness", "scam detection systems", "we", "a case study", "the use", "dl and ml techniques", "the financial sector", "fraud detection", "authors", "these algorithms", "fraudulent transactions", "false positives", "high precision", "scalability", "the overall aim", "this article", "a comprehensive evaluation", "the most cutting-edge ml and dl techniques", "fraud identification", "light", "potential future paths", "this field", "study" ]
Robot autonomous grasping and assembly skill learning based on deep reinforcement learning
[ "Chengjun Chen", "Hao Zhang", "Yong Pan", "Dongnian Li" ]
This paper proposes a deep reinforcement learning-based framework for robot autonomous grasping and assembly skill learning. Meanwhile, a deep Q-learning-based robot grasping skill learning algorithm and a PPO-based robot assembly skill learning algorithm are presented, where a priori knowledge information is introduced to optimize the grasping action and reduce the training time and interaction data needed by the assembly strategy learning algorithm. Besides, a grasping constraint reward function and an assembly constraint reward function are designed to evaluate the robot grasping and assembly quality effectively. Finally, the effectiveness of the proposed framework and algorithms was verified in both simulated and real environments, and the average success rate of grasping in both environments was up to 90%. Under a peg-in-hole assembly tolerance of 3 mm, the assembly success rate was 86.7% and 73.3% in the simulated environment and the physical environment, respectively.
10.1007/s00170-024-13004-0
robot autonomous grasping and assembly skill learning based on deep reinforcement learning
this paper proposes a deep reinforcement learning-based framework for robot autonomous grasping and assembly skill learning. meanwhile, a deep q-learning-based robot grasping skill learning algorithm and a ppo-based robot assembly skill learning algorithm are presented, where a priori knowledge information is introduced to optimize the grasping action and reduce the training time and interaction data needed by the assembly strategy learning algorithm. besides, a grasping constraint reward function and an assembly constraint reward function are designed to evaluate the robot grasping and assembly quality effectively. finally, the effectiveness of the proposed framework and algorithms was verified in both simulated and real environments, and the average success rate of grasping in both environments was up to 90%. under a peg-in-hole assembly tolerance of 3 mm, the assembly success rate was 86.7% and 73.3% in the simulated environment and the physical environment, respectively.
[ "this paper", "a deep reinforcement learning-based framework", "robot autonomous grasping and assembly skill learning", "a deep q-learning-based robot grasping skill", "algorithm", "a ppo-based robot assembly skill learning algorithm", "a priori knowledge information", "the grasping action", "the training time", "interaction data", "the assembly strategy", "a grasping constraint reward function", "an assembly constraint reward function", "the robot grasping and assembly quality", "the effectiveness", "the proposed framework", "algorithms", "both simulated and real environments", "the average success rate", "both environments", "up to 90%", "hole", "the assembly success rate", "86.7%", "73.3%", "the simulated environment", "the physical environment", "up to 90%", "3 mm", "86.7%", "73.3%" ]
Enabling business sustainability for stock market data using machine learning and deep learning approaches
[ "S. Divyashree", "Christy Jackson Joshua", "Abdul Quadir Md", "Senthilkumar Mohan", "A. Sheik Abdullah", "Ummul Hanan Mohamad", "Nisreen Innab", "Ali Ahmadian" ]
This paper introduces AlphaVision, an innovative decision support model designed for stock price prediction by seamlessly integrating real-time news updates and Return on Investment (ROI) values, utilizing various machine learning and deep learning approaches. The research investigates the application of these techniques to enhance the effectiveness of stock trading and investment decisions by accurately anticipating stock prices and providing valuable insights to investors and businesses. The study begins by analyzing the complexities and challenges of stock market analysis, considering factors like political, macroeconomic, and legal issues that contribute to market volatility. To address these challenges, we proposed the methodology called AlphaVision, which incorporates various machine learning algorithms, including Decision Trees, Random Forest, Naïve Bayes, Boosting, K-Nearest Neighbors, and Support Vector Machine, alongside deep learning models such as Multi-layer Perceptron (MLP), Artificial Neural Networks, and Recurrent Neural Networks. The effectiveness of each model is evaluated based on their accuracy in predicting stock prices. Experimental results revealed that the MLP model achieved the highest accuracy of approximately 92%, outperforming other deep learning models. The Random Forest algorithm also demonstrated promising results with an accuracy of around 84.6%. These findings indicate the potential of machine learning and deep learning techniques in improving stock market analysis and prediction. The AlphaVision methodology presented in this research empowers investors and businesses with valuable tools to make informed investment decisions and navigate the complexities of the stock market. By accurately forecasting stock prices based on news updates and ROI values, the model contributes to better financial management and business sustainability. 
The integration of machine learning and deep learning approaches offers a promising solution for enhancing stock market analysis and prediction. Future research will focus on extracting more relevant financial features to further improve the model’s accuracy. By advancing decision support models for stock price prediction, researchers and practitioners can develop better investment strategies and foster economic growth. The proposed model holds potential to revolutionize stock trading and investment practices, enabling more informed and profitable decision-making in the financial sector.
10.1007/s10479-024-06118-x
enabling business sustainability for stock market data using machine learning and deep learning approaches
this paper introduces alphavision, an innovative decision support model designed for stock price prediction by seamlessly integrating real-time news updates and return on investment (roi) values, utilizing various machine learning and deep learning approaches. the research investigates the application of these techniques to enhance the effectiveness of stock trading and investment decisions by accurately anticipating stock prices and providing valuable insights to investors and businesses. the study begins by analyzing the complexities and challenges of stock market analysis, considering factors like political, macroeconomic, and legal issues that contribute to market volatility. to address these challenges, we proposed the methodology called alphavision, which incorporates various machine learning algorithms, including decision trees, random forest, naïve bayes, boosting, k-nearest neighbors, and support vector machine, alongside deep learning models such as multi-layer perceptron (mlp), artificial neural networks, and recurrent neural networks. the effectiveness of each model is evaluated based on their accuracy in predicting stock prices. experimental results revealed that the mlp model achieved the highest accuracy of approximately 92%, outperforming other deep learning models. the random forest algorithm also demonstrated promising results with an accuracy of around 84.6%. these findings indicate the potential of machine learning and deep learning techniques in improving stock market analysis and prediction. the alphavision methodology presented in this research empowers investors and businesses with valuable tools to make informed investment decisions and navigate the complexities of the stock market. by accurately forecasting stock prices based on news updates and roi values, the model contributes to better financial management and business sustainability. 
the integration of machine learning and deep learning approaches offers a promising solution for enhancing stock market analysis and prediction. future research will focus on extracting more relevant financial features to further improve the model’s accuracy. by advancing decision support models for stock price prediction, researchers and practitioners can develop better investment strategies and foster economic growth. the proposed model holds potential to revolutionize stock trading and investment practices, enabling more informed and profitable decision-making in the financial sector.
[ "this paper", "alphavision", "an innovative decision support model", "stock price prediction", "real-time news updates", "return", "roi", "various machine learning", "deep learning approaches", "the research", "the application", "these techniques", "the effectiveness", "stock trading and investment decisions", "stock prices", "valuable insights", "investors", "businesses", "the study", "the complexities", "challenges", "stock market analysis", "factors", "political, macroeconomic, and legal issues", "that", "market volatility", "these challenges", "we", "the methodology", "alphavision", "which", "various machine learning algorithms", "decision trees", "random forest", "naïve bayes", "k-nearest neighbors", "vector machine", "deep learning models", "multi-layer perceptron", "mlp", "artificial neural networks", "neural networks", "the effectiveness", "each model", "their accuracy", "stock prices", "experimental results", "the mlp model", "the highest accuracy", "approximately 92%", "other deep learning models", "the random forest algorithm", "promising results", "an accuracy", "around 84.6%", "these findings", "the potential", "machine learning", "deep learning techniques", "stock market analysis", "prediction", "the alphavision methodology", "this research", "investors", "businesses", "valuable tools", "informed investment decisions", "the complexities", "the stock market", "stock prices", "news updates", "roi values", "the model", "better financial management", "business sustainability", "the integration", "machine learning", "deep learning approaches", "a promising solution", "stock market analysis", "prediction", "future research", "more relevant financial features", "the model’s accuracy", "decision support models", "stock price prediction", "researchers", "practitioners", "better investment strategies", "foster economic growth", "the proposed model", "potential", "stock trading", "investment practices", "more informed and profitable decision-making", "the financial sector", "approximately 92%", "around 84.6%" ]
Deep learning for lungs cancer detection: a review
[ "Rabia Javed", "Tahir Abbas", "Ali Haider Khan", "Ali Daud", "Amal Bukhari", "Riad Alharbey" ]
Although lung cancer has been recognized to be the deadliest type of cancer, a good prognosis and efficient treatment depend on early detection. Medical practitioners’ burden is reduced by deep learning techniques, especially Deep Convolutional Neural Networks (DCNN), which are essential in automating the diagnosis and classification of diseases. In this study, we use a variety of medical imaging modalities, including X-rays, WSI, CT scans, and MRI, to thoroughly investigate the use of deep learning techniques in the field of lung cancer diagnosis and classification. This study conducts a comprehensive Systematic Literature Review (SLR) using deep learning techniques for lung cancer research, providing a comprehensive overview of the methodology, cutting-edge developments, quality assessments, and customized deep learning approaches. It presents data from reputable journals and concentrates on the years 2015–2024. Deep learning techniques solve the difficulty of manually identifying and selecting abstract features from lung cancer images. This study includes a wide range of deep learning methods for classifying lung cancer but focuses especially on the most popular method, the Convolutional Neural Network (CNN). CNN can achieve maximum accuracy because of its multi-layer structure, automatic learning of weights, and capacity to communicate local weights. Various algorithms are shown with performance measures like precision, accuracy, specificity, sensitivity, and AUC; CNN consistently shows the greatest accuracy. The findings highlight the important contributions of DCNN in improving lung cancer detection and classification, making them an invaluable resource for researchers looking to gain a greater knowledge of deep learning’s function in medical applications.
10.1007/s10462-024-10807-1
deep learning for lungs cancer detection: a review
although lung cancer has been recognized to be the deadliest type of cancer, a good prognosis and efficient treatment depend on early detection. medical practitioners’ burden is reduced by deep learning techniques, especially deep convolutional neural networks (dcnn), which are essential in automating the diagnosis and classification of diseases. in this study, we use a variety of medical imaging modalities, including x-rays, wsi, ct scans, and mri, to thoroughly investigate the use of deep learning techniques in the field of lung cancer diagnosis and classification. this study conducts a comprehensive systematic literature review (slr) using deep learning techniques for lung cancer research, providing a comprehensive overview of the methodology, cutting-edge developments, quality assessments, and customized deep learning approaches. it presents data from reputable journals and concentrates on the years 2015–2024. deep learning techniques solve the difficulty of manually identifying and selecting abstract features from lung cancer images. this study includes a wide range of deep learning methods for classifying lung cancer but focuses especially on the most popular method, the convolutional neural network (cnn). cnn can achieve maximum accuracy because of its multi-layer structure, automatic learning of weights, and capacity to communicate local weights. various algorithms are shown with performance measures like precision, accuracy, specificity, sensitivity, and auc; cnn consistently shows the greatest accuracy. the findings highlight the important contributions of dcnn in improving lung cancer detection and classification, making them an invaluable resource for researchers looking to gain a greater knowledge of deep learning’s function in medical applications.
[ "lung cancer", "the deadliest type", "cancer", "a good prognosis and efficient treatment", "early detection", "medical practitioners’ burden", "deep learning techniques", "especially deep convolutional neural networks", "dcnn", "which", "the diagnosis", "classification", "diseases", "this study", "we", "a variety", "medical imaging modalities", "x", "-", "rays", "wsi", "ct scans", "mri", "the use", "deep learning techniques", "the field", "lung cancer diagnosis", "classification", "this study", "a comprehensive systematic literature review", "slr", "deep learning techniques", "lung cancer research", "a comprehensive overview", "the methodology", "cutting-edge developments", "quality assessments", "customized deep learning approaches", "it", "data", "reputable journals", "concentrates", "the years", "deep learning techniques", "the difficulty", "abstract features", "lung cancer images", "this study", "a wide range", "deep learning methods", "lung cancer", "the most popular method", "the convolutional neural network", "cnn", "cnn", "maximum accuracy", "its multi-layer structure", "automatic learning", "weights", "capacity", "local weights", "various algorithms", "performance measures", "precision", "accuracy", "specificity", "sensitivity", "auc", "cnn", "the greatest accuracy", "the findings", "the important contributions", "dcnn", "lung cancer detection", "classification", "them", "researchers", "a greater knowledge", "deep learning’s function", "medical applications", "the years 2015–2024", "cnn", "cnn", "cnn" ]
A brief review of hypernetworks in deep learning
[ "Vinod Kumar Chauhan", "Jiandong Zhou", "Ping Lu", "Soheila Molaei", "David A. Clifton" ]
Hypernetworks, or hypernets for short, are neural networks that generate weights for another neural network, known as the target network. They have emerged as a powerful deep learning technique that allows for greater flexibility, adaptability, dynamism, faster training, information sharing, and model compression. Hypernets have shown promising results in a variety of deep learning problems, including continual learning, causal inference, transfer learning, weight pruning, uncertainty quantification, zero-shot learning, natural language processing, and reinforcement learning. Despite their success across different problem settings, there is currently no comprehensive review available to inform researchers about the latest developments and to assist in utilizing hypernets. To fill this gap, we review the progress in hypernets. We present an illustrative example of training deep neural networks using hypernets and propose categorizing hypernets based on five design criteria: inputs, outputs, variability of inputs and outputs, and the architecture of hypernets. We also review applications of hypernets across different deep learning problem settings, followed by a discussion of general scenarios where hypernets can be effectively employed. Finally, we discuss the challenges and future directions that remain underexplored in the field of hypernets. We believe that hypernetworks have the potential to revolutionize the field of deep learning. They offer a new way to design and train neural networks, and they have the potential to improve the performance of deep learning models on a variety of tasks. Through this review, we aim to inspire further advancements in deep learning through hypernetworks.
10.1007/s10462-024-10862-8
a brief review of hypernetworks in deep learning
hypernetworks, or hypernets for short, are neural networks that generate weights for another neural network, known as the target network. they have emerged as a powerful deep learning technique that allows for greater flexibility, adaptability, dynamism, faster training, information sharing, and model compression. hypernets have shown promising results in a variety of deep learning problems, including continual learning, causal inference, transfer learning, weight pruning, uncertainty quantification, zero-shot learning, natural language processing, and reinforcement learning. despite their success across different problem settings, there is currently no comprehensive review available to inform researchers about the latest developments and to assist in utilizing hypernets. to fill this gap, we review the progress in hypernets. we present an illustrative example of training deep neural networks using hypernets and propose categorizing hypernets based on five design criteria: inputs, outputs, variability of inputs and outputs, and the architecture of hypernets. we also review applications of hypernets across different deep learning problem settings, followed by a discussion of general scenarios where hypernets can be effectively employed. finally, we discuss the challenges and future directions that remain underexplored in the field of hypernets. we believe that hypernetworks have the potential to revolutionize the field of deep learning. they offer a new way to design and train neural networks, and they have the potential to improve the performance of deep learning models on a variety of tasks. through this review, we aim to inspire further advancements in deep learning through hypernetworks.
[ "hypernetworks", "hypernets", "neural networks", "that", "weights", "another neural network", "the target network", "they", "a powerful deep learning technique", "that", "greater flexibility", "adaptability", "dynamism", "faster training", "information sharing", "model compression", "hypernets", "promising results", "a variety", "deep learning problems", "continual learning", "causal inference", "transfer learning", "weight pruning", "uncertainty quantification", "zero-shot learning", "natural language processing", "reinforcement learning", "their success", "different problem settings", "no comprehensive review", "researchers", "the latest developments", "hypernets", "this gap", "we", "the progress", "hypernets", "we", "an illustrative example", "deep neural networks", "hypernets", "categorizing hypernets", "five design criteria", "inputs", "outputs", "variability", "inputs", "outputs", "the architecture", "hypernets", "we", "applications", "hypernets", "different deep learning problem settings", "a discussion", "general scenarios", "hypernets", "we", "the challenges", "future directions", "that", "the field", "hypernets", "we", "hypernetworks", "the potential", "the field", "deep learning", "they", "a new way", "neural networks", "they", "the potential", "the performance", "deep learning models", "a variety", "tasks", "this review", "we", "further advancements", "deep learning", "hypernetworks", "zero", "five" ]
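The core mechanism in the hypernetwork abstract above — one network emitting the weights of another — can be sketched in a few lines of plain Python. This is a generic illustration, not code from the reviewed paper: the sizes, the 2-d "task embedding", and the linear hypernet are all invented for the example.

```python
import random

random.seed(0)

# Target network: a single linear layer y = w.x + b with 3 inputs, 1 output.
TARGET_IN, TARGET_OUT = 3, 1
N_TARGET_PARAMS = TARGET_IN * TARGET_OUT + TARGET_OUT  # weights + bias = 4

# Hypernetwork: a linear map from a 2-d task embedding to the 4 target parameters.
EMB_DIM = 2
hyper_w = [[random.uniform(-0.5, 0.5) for _ in range(EMB_DIM)]
           for _ in range(N_TARGET_PARAMS)]

def hypernet(task_embedding):
    """Generate the target network's parameters from a task embedding."""
    return [sum(hw * e for hw, e in zip(row, task_embedding)) for row in hyper_w]

def target_forward(params, x):
    """Run the target network with externally supplied parameters."""
    w, b = params[:TARGET_IN], params[TARGET_IN]
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Two different task embeddings yield two different target networks
# from the same hypernetwork weights.
params_a = hypernet([1.0, 0.0])
params_b = hypernet([0.0, 1.0])
x = [1.0, 2.0, 3.0]
print(target_forward(params_a, x), target_forward(params_b, x))
```

In training, gradients would flow through the generated `params` back into `hyper_w`, which is what gives hypernets the information sharing and compression properties the abstract lists.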
Learning Dynamic Batch-Graph Representation for Deep Representation Learning
[ "Xixi Wang", "Bo Jiang", "Xiao Wang", "Bin Luo" ]
Recently, batch-based image data representation has been demonstrated to be effective for context-enhanced image representation. The core issue for this task is capturing the dependences of image samples within each mini-batch and conducting message communication among different samples. Existing approaches mainly adopt self-attention or local self-attention models (on the patch dimension) for this task, which fail to fully exploit the intrinsic relationships of samples within the mini-batch and are also sensitive to noise and outliers. To address this issue, in this paper, we propose a flexible Dynamic Batch-Graph Representation (DyBGR) model, to automatically explore the intrinsic relationship of samples for contextual sample representation. Specifically, DyBGR first represents the mini-batch with a graph (termed batch-graph) in which nodes represent image samples and edges encode the dependences of images. This graph is dynamically learned with the constraint of similarity, sparseness and semantic correlation. Upon this, DyBGR exchanges the sample (node) information on the batch-graph to update each node representation. Note that both batch-graph learning and information propagation are jointly optimized to boost their respective performance. Furthermore, in practice, the DyBGR model can be implemented via a simple plug-and-play block (named DyBGR block), which can thus be potentially integrated into any mini-batch based deep representation learning scheme. Extensive experiments on deep metric learning tasks demonstrate the effectiveness of DyBGR. We will release the code at https://github.com/SissiW/DyBGR.
10.1007/s11263-024-02175-8
learning dynamic batch-graph representation for deep representation learning
recently, batch-based image data representation has been demonstrated to be effective for context-enhanced image representation. the core issue for this task is capturing the dependences of image samples within each mini-batch and conducting message communication among different samples. existing approaches mainly adopt self-attention or local self-attention models (on the patch dimension) for this task, which fail to fully exploit the intrinsic relationships of samples within the mini-batch and are also sensitive to noise and outliers. to address this issue, in this paper, we propose a flexible dynamic batch-graph representation (dybgr) model, to automatically explore the intrinsic relationship of samples for contextual sample representation. specifically, dybgr first represents the mini-batch with a graph (termed batch-graph) in which nodes represent image samples and edges encode the dependences of images. this graph is dynamically learned with the constraint of similarity, sparseness and semantic correlation. upon this, dybgr exchanges the sample (node) information on the batch-graph to update each node representation. note that both batch-graph learning and information propagation are jointly optimized to boost their respective performance. furthermore, in practice, the dybgr model can be implemented via a simple plug-and-play block (named dybgr block), which can thus be potentially integrated into any mini-batch based deep representation learning scheme. extensive experiments on deep metric learning tasks demonstrate the effectiveness of dybgr. we will release the code at https://github.com/sissiw/dybgr.
[ "batch-based image data representation", "context-enhanced image representation", "the core issue", "this task", "the dependences", "image samples", "each mini", "-", "batch", "message communication", "different samples", "existing approaches", "self-attention", "local self-attention models", "patch dimension", "this task", "which", "the intrinsic relationships", "samples", "mini", "-", "batch", "noises", "outliers", "this issue", "this paper", "we", "a flexible dynamic batch-graph representation (dybgr) model", "the intrinsic relationship", "samples", "contextual sample representation", "the mini", "-", "batch", "a graph", "termed batch-graph", "which", "nodes", "image samples", "edges", "the dependences", "images", "this graph", "the constraint", "similarity", "sparseness", "semantic correlation", "this", "the batch-graph", "each node representation", "both batch-graph learning and information propagation", "their respective performance", "practical, dybgr model", "a simple plug-and-play block", "dybgr block", "which", "any mini-batch based deep representation learning schemes", "extensive experiments", "deep metric learning tasks", "the effectiveness", "dybgr", "we", "the code", "https://github.com/sissiw/dybgr", "first" ]
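The batch-graph idea in the DyBGR abstract can be caricatured without any learning: connect mini-batch samples whose features are similar, then let each node mix in its neighbours' features (one message-passing round). Note the paper learns this graph jointly with the constraints it lists; the cosine threshold, mixing weight, and toy features below are invented for the sketch.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def batch_graph(features, threshold=0.9):
    """Adjacency list: edge i-j when cosine similarity exceeds threshold."""
    n = len(features)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(features[i], features[j]) > threshold:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def propagate(features, adj, alpha=0.5):
    """One message-passing round: mix each node with its neighbours' mean."""
    out = []
    for i, f in enumerate(features):
        if not adj[i]:
            out.append(f)
            continue
        mean = [sum(features[j][d] for j in adj[i]) / len(adj[i])
                for d in range(len(f))]
        out.append([(1 - alpha) * a + alpha * m for a, m in zip(f, mean)])
    return out

# A toy mini-batch of four 2-d "image features": two near-duplicates and two others.
batch = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0], [-1.0, 0.1]]
adj = batch_graph(batch)
updated = propagate(batch, adj)
```

Only the near-duplicate pair gets an edge, so only those two representations change after propagation, which is the context-enhancement effect the abstract describes.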
Relay learning: a physically secure framework for clinical multi-site deep learning
[ "Zi-Hao Bo", "Yuchen Guo", "Jinhao Lyu", "Hengrui Liang", "Jianxing He", "Shijie Deng", "Feng Xu", "Xin Lou", "Qionghai Dai" ]
Big data serves as the cornerstone for constructing real-world deep learning systems across various domains. In medicine and healthcare, a single clinical site lacks sufficient data, thus necessitating the involvement of multiple sites. Unfortunately, concerns regarding data security and privacy hinder the sharing and reuse of data across sites. Existing approaches to multi-site clinical learning heavily depend on the security of the network firewall and system implementation. To address this issue, we propose Relay Learning, a secure deep-learning framework that physically isolates clinical data from external intruders while still leveraging the benefits of multi-site big data. We demonstrate the efficacy of Relay Learning in three medical tasks of different diseases and anatomical structures, including structure segmentation of retina fundus, mediastinum tumors diagnosis, and brain midline localization. We evaluate Relay Learning by comparing its performance to alternative solutions through multi-site validation and external validation. Incorporating a total of 41,038 medical images from 21 medical hosts, including 7 external hosts, with non-uniform distributions, we observe significant performance improvements with Relay Learning across all three tasks. Specifically, it achieves an average performance increase of 44.4%, 24.2%, and 36.7% for retinal fundus segmentation, mediastinum tumor diagnosis, and brain midline localization, respectively. Remarkably, Relay Learning even outperforms central learning on external test sets. Meanwhile, Relay Learning keeps data sovereignty locally without cross-site network connections. We anticipate that Relay Learning will revolutionize clinical multi-site collaboration and reshape the landscape of healthcare in the future.
10.1038/s41746-023-00934-4
relay learning: a physically secure framework for clinical multi-site deep learning
big data serves as the cornerstone for constructing real-world deep learning systems across various domains. in medicine and healthcare, a single clinical site lacks sufficient data, thus necessitating the involvement of multiple sites. unfortunately, concerns regarding data security and privacy hinder the sharing and reuse of data across sites. existing approaches to multi-site clinical learning heavily depend on the security of the network firewall and system implementation. to address this issue, we propose relay learning, a secure deep-learning framework that physically isolates clinical data from external intruders while still leveraging the benefits of multi-site big data. we demonstrate the efficacy of relay learning in three medical tasks of different diseases and anatomical structures, including structure segmentation of retina fundus, mediastinum tumors diagnosis, and brain midline localization. we evaluate relay learning by comparing its performance to alternative solutions through multi-site validation and external validation. incorporating a total of 41,038 medical images from 21 medical hosts, including 7 external hosts, with non-uniform distributions, we observe significant performance improvements with relay learning across all three tasks. specifically, it achieves an average performance increase of 44.4%, 24.2%, and 36.7% for retinal fundus segmentation, mediastinum tumor diagnosis, and brain midline localization, respectively. remarkably, relay learning even outperforms central learning on external test sets. meanwhile, relay learning keeps data sovereignty locally without cross-site network connections. we anticipate that relay learning will revolutionize clinical multi-site collaboration and reshape the landscape of healthcare in the future.
[ "big data", "the cornerstone", "real-world deep learning systems", "various domains", "medicine", "healthcare", "a single clinical site", "sufficient data", "the involvement", "multiple sites", "concerns", "data security", "privacy", "the sharing", "reuse", "data", "sites", "existing approaches", "multi-site clinical learning", "the security", "the network firewall and system implementation", "this issue", "we", "relay learning", "a secure deep-learning framework", "that", "clinical data", "external intruders", "the benefits", "multi-site big data", "we", "the efficacy", "three medical tasks", "different diseases", "anatomical structures", "structure segmentation", "retina fundus", "mediastinum tumors diagnosis", "brain midline localization", "we", "relay", "its performance", "solutions", "multi-site validation", "external validation", "a total", "41,038 medical images", "21 medical hosts", "7 external hosts", "non-uniform distributions", "we", "significant performance improvements", "relay", "all three tasks", "it", "an average performance increase", "44.4%", "24.2%", "36.7%", "retinal fundus segmentation", "mediastinum tumor diagnosis", "brain midline localization", "remarkably, relay", "central learning", "external test sets", "the meanwhile", "relay learning", "data sovereignty", "cross-site network connections", "we", "relay learning", "clinical multi-site collaboration", "the landscape", "healthcare", "the future", "three", "41,038", "21", "7", "three", "44.4%", "24.2%", "36.7%" ]
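Relay Learning's key property — the model, not the data, crosses site boundaries — can be caricatured as a sequential training loop: a shared model visits each site in turn, is updated only on that site's private data, and carries its learned state forward. Everything below (one scalar parameter, plain SGD, three synthetic "sites" whose data follow roughly y = 2x) is an invented stand-in for the paper's actual protocol and security machinery.

```python
# Each "site" holds private (x, y) pairs; the raw pairs never leave the site.
site_data = [
    [(1.0, 2.1), (2.0, 3.9)],   # site A
    [(0.5, 1.0), (3.0, 6.2)],   # site B
    [(1.5, 3.0), (2.5, 5.1)],   # site C
]

def local_update(w, data, lr=0.05, epochs=50):
    """Train the relayed model on one site's local data only."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of the squared error (w*x - y)^2
            w -= lr * grad
    return w

w = 0.0  # the model parameter that is relayed from site to site
for data in site_data:
    w = local_update(w, data)  # only the model crosses the site boundary

print(round(w, 2))  # close to the true slope of 2
```

The final parameter reflects all three sites' data even though no site ever saw another site's samples, which is the multi-site benefit the abstract claims while keeping data sovereignty local.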
Predicting Apple Plant Diseases in Orchards Using Machine Learning and Deep Learning Algorithms
[ "Imtiaz Ahmed", "Pramod Kumar Yadav" ]
Apple cultivation in the Kashmir Valley is a cornerstone of the region’s agriculture, contributing significantly to the economy through substantial annual apple exports. This study explores the application of machine learning and deep learning algorithms for predicting apple plant diseases in orchards. By leveraging advanced computational techniques, the research aims to enhance early detection and diagnosis of diseases, thereby enabling proactive disease management. The study utilizes a dataset comprising diverse environmental and plant health factors to train and validate the models. Key highlights include the comparative analysis of machine learning and deep learning approaches, the identification of optimal feature sets, and the assessment of model performance. The findings contribute to the development of efficient and accurate tools for precision agriculture, facilitating timely intervention and sustainable orchard management. The apple industry in Kashmir faces a significant challenge due to the prevalence of various diseases affecting apple trees. One prominent disease that adversely impacts apple yields in the region is apple scab, caused by the fungus Venturia inaequalis. Apple scab is characterized by dark, scaly lesions on leaves, fruit, and twigs, leading to defoliation and reduced fruit quality. The disease thrives in cool and humid conditions, which are prevalent in the Kashmir Valley. This study addresses the limitations of traditional, labor-intensive, and time-consuming laboratory methods for diagnosing apple plant diseases. The goal is to provide an accurate and efficient deep learning-based system for the prompt identification and prediction of foliar diseases in Kashmiri apple plants. Our study begins with the creation of an expert-annotated dataset containing approximately 10,000 high-quality RGB images that illustrate key symptoms associated with foliar diseases. In the next step, an approach to deep learning that utilizes convolutional neural networks (CNNs) was developed. A comparative analysis of five different deep learning algorithms, including Faster R-CNN, showed that the method was effective in detecting apple diseases in real time. The proposed framework, when tested, achieves state-of-the-art results with a remarkable 92% accuracy in identifying apple plant diseases. A new dataset is presented that includes samples of leaves from Kashmiri apple plants that have three different illnesses. The findings hold promise for revolutionizing orchard management practices, ultimately benefiting apple growers and sustaining the thriving apple industry in the Kashmir Valley.
10.1007/s42979-024-02959-2
predicting apple plant diseases in orchards using machine learning and deep learning algorithms
apple cultivation in the kashmir valley is a cornerstone of the region’s agriculture, contributing significantly to the economy through substantial annual apple exports. this study explores the application of machine learning and deep learning algorithms for predicting apple plant diseases in orchards. by leveraging advanced computational techniques, the research aims to enhance early detection and diagnosis of diseases, thereby enabling proactive disease management. the study utilizes a dataset comprising diverse environmental and plant health factors to train and validate the models. key highlights include the comparative analysis of machine learning and deep learning approaches, the identification of optimal feature sets, and the assessment of model performance. the findings contribute to the development of efficient and accurate tools for precision agriculture, facilitating timely intervention and sustainable orchard management. the apple industry in kashmir faces a significant challenge due to the prevalence of various diseases affecting apple trees. one prominent disease that adversely impacts apple yields in the region is apple scab, caused by the fungus venturia inaequalis. apple scab is characterized by dark, scaly lesions on leaves, fruit, and twigs, leading to defoliation and reduced fruit quality. the disease thrives in cool and humid conditions, which are prevalent in the kashmir valley. this study addresses the limitations of traditional, labor-intensive, and time-consuming laboratory methods for diagnosing apple plant diseases. the goal is to provide an accurate and efficient deep learning-based system for the prompt identification and prediction of foliar diseases in kashmiri apple plants. our study begins with the creation of an expert-annotated dataset containing approximately 10,000 high-quality rgb images that illustrate key symptoms associated with foliar diseases. in the next step, an approach to deep learning that utilizes convolutional neural networks (cnns) was developed. a comparative analysis of five different deep learning algorithms, including faster r-cnn, showed that the method was effective in detecting apple diseases in real time. the proposed framework, when tested, achieves state-of-the-art results with a remarkable 92% accuracy in identifying apple plant diseases. a new dataset is presented that includes samples of leaves from kashmiri apple plants that have three different illnesses. the findings hold promise for revolutionizing orchard management practices, ultimately benefiting apple growers and sustaining the thriving apple industry in the kashmir valley.
[ "apple cultivation", "the kashmir valley", "a cornerstone", "the region’s agriculture", "the economy", "substantial annual apple exports", "this study", "the application", "machine learning", "deep learning algorithms", "apple plant diseases", "orchards", "advanced computational techniques", "the research", "early detection", "diagnosis", "diseases", "proactive disease management", "the study", "a dataset", "diverse environmental and plant health factors", "the models", "key highlights", "the comparative analysis", "machine learning", "deep learning approaches", "the identification", "optimal feature sets", "the assessment", "model performance", "the findings", "the development", "efficient and accurate tools", "precision agriculture", "timely intervention", "sustainable orchard management", "the apple industry", "kashmir", "a significant challenge", "the prevalence", "various diseases", "apple trees", "one prominent disease", "that", "apple yields", "the region", "the apple scab", "the fungus venturia inaequalis", "apple scab", "dark", "scaly lesions", "leaves", "fruit", "twigs", "defoliation", "reduced fruit quality", "the disease", "cool and humid conditions", "which", "the kashmir valley", "this study", "the limitations", "traditional, labor-intensive, and time-consuming laboratory methods", "apple plant diseases", "the goal", "an accurate and efficient deep learning-based system", "the prompt identification", "prediction", "foliar diseases", "kashmiri apple plants", "our study", "the creation", "a dataset", "experts", "approximately 10,000 high-quality rgb images", "that", "key symptoms", "foliar diseases", "the next step", "an approach", "deep learning", "that", "convolutional neural networks", "cnns", "comparative analysis", "five different deep learning algorithms", "faster r-cnn", "the method", "apple diseases", "real time", "the proposed framework", "the-art", "a remarkable 92% accuracy", "apple plant diseases", "a new dataset", "that", "samples", "leaves", "kashmiri apple plants", "that", "three different illnesses", "the findings", "promise", "orchard management practices", "apple growers", "the thriving apple industry", "the kashmir valley", "the kashmir valley", "annual", "kashmir", "one", "apple scab", "the kashmir valley", "10,000", "five", "92%", "three", "the kashmir valley" ]
Deep Learning zur Kariesdiagnostik
[ "Norbert Krämer", "Roland Frankenberger" ]
Deep learning models are also playing an increasingly important role in dentistry and are being used in a variety of fields. Against this background, the present literature review presents a systematic review by an international group of authors that analyzed and evaluated deep learning models for caries diagnostics. It concluded that a growing number of studies support caries diagnostics with deep learning models. The reported accuracy appears promising, while the current quality of studies and reporting is insufficient to conduct further analyses. With an improved data basis, however, deep learning models could in future serve as an aid for decisions about the presence of carious lesions.
10.1007/s44190-023-0647-4
deep learning zur kariesdiagnostik
deep learning models are also playing an increasingly important role in dentistry and are being used in a variety of fields. against this background, the present literature review presents a systematic review by an international group of authors that analyzed and evaluated deep learning models for caries diagnostics. it concluded that a growing number of studies support caries diagnostics with deep learning models. the reported accuracy appears promising, while the current quality of studies and reporting is insufficient to conduct further analyses. with an improved data basis, however, deep learning models could in future serve as an aid for decisions about the presence of carious lesions.
[ "deep learning models", "dentistry", "a variety of fields", "this background", "the present literature review", "a systematic review", "an international group of authors", "deep learning models", "caries diagnostics", "a growing number", "studies", "caries diagnostics", "deep learning models", "the reported accuracy", "the current quality", "studies", "reporting", "further analyses", "an improved data basis", "deep learning models", "an aid", "decisions", "the presence", "carious lesions" ]
Deep doubly robust outcome weighted learning
[ "Xiaotong Jiang", "Xin Zhou", "Michael R. Kosorok" ]
Precision medicine is a framework that adapts treatment strategies to a patient’s individual characteristics and provides helpful clinical decision support. Existing research has been extended to various situations but high-dimensional data have not yet been fully incorporated into the paradigm. We propose a new precision medicine approach called deep doubly robust outcome weighted learning (DDROWL) that can handle big and complex data. This is a machine learning tool that directly estimates the optimal decision rule and achieves the best of three worlds: deep learning, double robustness, and residual weighted learning. Two architectures have been implemented in the proposed method, a fully-connected feedforward neural network and the Deep Kernel Learning model, a Gaussian process with deep learning-filtered inputs. We compare and discuss the performance and limitation of different methods through a range of simulations. Using longitudinal and brain imaging data from patients with Alzheimer’s disease, we demonstrate the application of the proposed method in real-world clinical practice. With the implementation of deep learning, the proposed method can expand the influence of precision medicine to high-dimensional abundant data with greater flexibility and computational power.
10.1007/s10994-023-06484-w
deep doubly robust outcome weighted learning
precision medicine is a framework that adapts treatment strategies to a patient’s individual characteristics and provides helpful clinical decision support. existing research has been extended to various situations but high-dimensional data have not yet been fully incorporated into the paradigm. we propose a new precision medicine approach called deep doubly robust outcome weighted learning (ddrowl) that can handle big and complex data. this is a machine learning tool that directly estimates the optimal decision rule and achieves the best of three worlds: deep learning, double robustness, and residual weighted learning. two architectures have been implemented in the proposed method, a fully-connected feedforward neural network and the deep kernel learning model, a gaussian process with deep learning-filtered inputs. we compare and discuss the performance and limitation of different methods through a range of simulations. using longitudinal and brain imaging data from patients with alzheimer’s disease, we demonstrate the application of the proposed method in real-world clinical practice. with the implementation of deep learning, the proposed method can expand the influence of precision medicine to high-dimensional abundant data with greater flexibility and computational power.
[ "precision medicine", "a framework", "that", "treatment strategies", "a patient’s individual characteristics", "helpful clinical decision support", "existing research", "various situations", "high-dimensional data", "the paradigm", "we", "a new precision medicine approach", "deep doubly robust outcome", "ddrowl", "that", "big and complex data", "this", "a machine learning tool", "that", "the optimal decision rule", "three worlds", "deep learning", "double robustness", "residual weighted learning", "two architectures", "the proposed method", "a fully-connected feedforward neural network", "the deep kernel learning model", "a gaussian process", "deep learning-filtered inputs", "we", "the performance", "limitation", "different methods", "a range", "simulations", "longitudinal and brain imaging data", "patients", "disease", "we", "the application", "the proposed method", "real-world clinical practice", "the implementation", "deep learning", "the proposed method", "the influence", "precision medicine", "high-dimensional abundant data", "greater flexibility", "computational power", "three", "two" ]
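The "doubly robust" ingredient named in the DDROWL abstract can be made concrete with the classic augmented inverse-probability-weighted (AIPW) estimator of a mean treatment effect: the estimate stays consistent if either the outcome models or the propensity model is specified correctly. This is a textbook illustration of double robustness, not the paper's estimator; the models and toy data below are fabricated.

```python
def aipw_effect(data, m1, m0, e):
    """Doubly robust (AIPW) estimate of the average effect E[Y(1) - Y(0)].

    data: list of (x, t, y) with treatment t in {0, 1};
    m1/m0: outcome models for treated/control; e: propensity model P(T=1 | x).
    """
    total = 0.0
    for x, t, y in data:
        p = e(x)
        # Outcome-model prediction, corrected by a weighted residual term.
        aug1 = m1(x) + t * (y - m1(x)) / p
        aug0 = m0(x) + (1 - t) * (y - m0(x)) / (1 - p)
        total += aug1 - aug0
    return total / len(data)

# Toy world: treatment adds exactly 1.0 to the outcome, y = x + t.
data = [(0.0, 0, 0.0), (0.0, 1, 1.0), (1.0, 0, 1.0), (1.0, 1, 2.0)]

# Correct outcome models paired with a deliberately wrong propensity model:
effect = aipw_effect(data, m1=lambda x: x + 1.0, m0=lambda x: x, e=lambda x: 0.3)
print(effect)  # 1.0 despite the misspecified propensity model
```

Because the outcome models fit the data exactly, every residual term vanishes and the wrong propensity (a constant 0.3) does no harm — the symmetric case, a correct propensity with wrong outcome models, also recovers the effect, which is the "best of three worlds" robustness the abstract alludes to.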
Topological deep learning: a review of an emerging paradigm
[ "Ali Zia", "Abdelwahed Khamis", "James Nichols", "Usman Bashir Tayab", "Zeeshan Hayder", "Vivien Rolland", "Eric Stone", "Lars Petersson" ]
Topological deep learning (TDL) is an emerging area that combines the principles of Topological data analysis (TDA) with deep learning techniques. TDA provides insight into data shape; it obtains global descriptions of multi-dimensional data whilst exhibiting robustness to deformation and noise. Such properties are desirable in deep learning pipelines, but they are typically obtained using non-TDA strategies. This is partly caused by the difficulty of combining TDA constructs (e.g. barcode and persistence diagrams) with current deep learning algorithms. Fortunately, we are now witnessing a growth of deep learning applications embracing topologically-guided components. In this survey, we review the nascent field of topological deep learning by first revisiting the core concepts of TDA. We then explore how the use of TDA techniques has evolved over time to support deep learning frameworks, and how they can be integrated into different aspects of deep learning. Furthermore, we touch on TDA usage for analyzing existing deep models; deep topological analytics. Finally, we discuss the challenges and future prospects of topological deep learning.
10.1007/s10462-024-10710-9
topological deep learning: a review of an emerging paradigm
topological deep learning (tdl) is an emerging area that combines the principles of topological data analysis (tda) with deep learning techniques. tda provides insight into data shape; it obtains global descriptions of multi-dimensional data whilst exhibiting robustness to deformation and noise. such properties are desirable in deep learning pipelines, but they are typically obtained using non-tda strategies. this is partly caused by the difficulty of combining tda constructs (e.g. barcode and persistence diagrams) with current deep learning algorithms. fortunately, we are now witnessing a growth of deep learning applications embracing topologically-guided components. in this survey, we review the nascent field of topological deep learning by first revisiting the core concepts of tda. we then explore how the use of tda techniques has evolved over time to support deep learning frameworks, and how they can be integrated into different aspects of deep learning. furthermore, we touch on tda usage for analyzing existing deep models; deep topological analytics. finally, we discuss the challenges and future prospects of topological deep learning.
[ "topological deep learning", "tdl", "an emerging area", "that", "the principles", "topological data analysis", "tda", "deep learning techniques", "tda", "insight", "data shape", "it", "global descriptions", "multi-dimensional data", "robustness", "deformation", "noise", "such properties", "deep learning pipelines", "they", "non-tda strategies", "this", "the difficulty", "tda constructs", "e.g. barcode and persistence diagrams", "current deep learning algorithms", "we", "a growth", "deep learning applications", "topologically-guided components", "this survey", "we", "the nascent field", "topological deep learning", "the core concepts", "tda", "we", "the use", "tda techniques", "time", "deep learning frameworks", "they", "different aspects", "deep learning", "we", "tda usage", "existing deep models", "deep topological analytics", "we", "the challenges", "future prospects", "topological deep learning", "first" ]
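The TDA constructs the survey names (barcodes, persistence diagrams) can be made concrete with a toy example. The sketch below computes a 0-dimensional persistence barcode for a small point cloud via union-find over distance-sorted edges (the single-linkage view of component merging). It is a hand-rolled illustration, not code from any surveyed framework; `persistence_0d` is a hypothetical helper name.

```python
import math
from itertools import combinations

def persistence_0d(points):
    """Toy 0-dimensional persistence: track when connected components
    merge as an edge-length threshold grows (single-linkage view)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # All pairwise edges, sorted by Euclidean length.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    bars = []  # one (birth, death) bar per component that dies in a merge
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, d))  # all components are born at scale 0
    bars.append((0.0, math.inf))   # one component never dies
    return bars

# Two well-separated pairs of points: expect deaths at 1, 1, and 5.
pts = [(0, 0), (0, 1), (5, 0), (5, 1)]
bars = persistence_0d(pts)
```

The long finite bar (death at 5) reflects the two-cluster shape of the data, which is exactly the kind of global descriptor TDA extracts robustly.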
Deep learning in computational mechanics: a review
[ "Leon Herrmann", "Stefan Kollmannsberger" ]
The rapid growth of deep learning research, including within the field of computational mechanics, has resulted in an extensive and diverse body of literature. To help researchers identify key concepts and promising methodologies within this field, we provide an overview of deep learning in deterministic computational mechanics. Five main categories are identified and explored: simulation substitution, simulation enhancement, discretizations as neural networks, generative approaches, and deep reinforcement learning. This review focuses on deep learning methods rather than applications for computational mechanics, thereby enabling researchers to explore this field more effectively. As such, the review is not necessarily aimed at researchers with extensive knowledge of deep learning—instead, the primary audience is researchers on the verge of entering this field or those attempting to gain an overview of deep learning in computational mechanics. The discussed concepts are, therefore, explained as simply as possible.
10.1007/s00466-023-02434-4
deep learning in computational mechanics: a review
the rapid growth of deep learning research, including within the field of computational mechanics, has resulted in an extensive and diverse body of literature. to help researchers identify key concepts and promising methodologies within this field, we provide an overview of deep learning in deterministic computational mechanics. five main categories are identified and explored: simulation substitution, simulation enhancement, discretizations as neural networks, generative approaches, and deep reinforcement learning. this review focuses on deep learning methods rather than applications for computational mechanics, thereby enabling researchers to explore this field more effectively. as such, the review is not necessarily aimed at researchers with extensive knowledge of deep learning—instead, the primary audience is researchers on the verge of entering this field or those attempting to gain an overview of deep learning in computational mechanics. the discussed concepts are, therefore, explained as simple as possible.
[ "the rapid growth", "deep learning research", "the field", "computational mechanics", "an extensive and diverse body", "literature", "researchers", "key concepts", "methodologies", "this field", "we", "an overview", "deep learning", "deterministic computational mechanics", "five main categories", "simulation substitution", "simulation enhancement", "discretizations", "neural networks", "generative approaches", "deep reinforcement learning", "this review", "deep learning methods", "applications", "computational mechanics", "researchers", "this field", "the review", "researchers", "extensive knowledge", "deep learning", "the primary audience", "researchers", "the verge", "this field", "those", "an overview", "deep learning", "computational mechanics", "the discussed concepts", "five" ]
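One of the review's five categories, "discretizations as neural networks", can be made concrete: a finite-difference stencil is exactly a convolution with fixed weights, so a discretized differential operator can be expressed as a non-trainable convolutional layer. The NumPy sketch below illustrates that correspondence for the 1-D second derivative; it is an assumption-laden toy, not code from the review.

```python
import numpy as np

def fd_laplacian(u, h):
    """Central second-difference on the interior points of a 1-D grid."""
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2

def conv_laplacian(u, h):
    """The same stencil expressed as a 1-D convolution, i.e. a
    convolutional layer with fixed kernel [1, -2, 1] / h^2."""
    kernel = np.array([1.0, -2.0, 1.0]) / h**2
    return np.convolve(u, kernel, mode="valid")

# Verify on u(x) = sin(x), whose exact second derivative is -sin(x).
x = np.linspace(0.0, np.pi, 201)
h = x[1] - x[0]
u = np.sin(x)
```

Because the kernel weights are known rather than learned, the convolution reproduces the classical discretization exactly; learned kernels generalize this idea.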
OpBench: an operator-level GPU benchmark for deep learning
[ "Qingwen Gu", "Bo Fan", "Zhengning Liu", "Kaicheng Cao", "Songhai Zhang", "Shimin Hu" ]
Operators (such as Conv and ReLU) play an important role in deep neural networks. Every neural network is composed of a series of differentiable operators. However, existing AI benchmarks mainly focus on assessing model training and inference performance of deep learning systems on specific models. To help GPU hardware find computing bottlenecks and intuitively evaluate GPU performance on specific deep learning tasks, this paper focuses on evaluating GPU performance at the operator level. We statistically analyze the information of operators on 12 representative deep learning models from six prominent AI tasks and provide an operator dataset to show the different importance of various types of operators in different networks. An operator-level benchmark, OpBench, is proposed on the basis of this dataset, allowing users to choose from a given range of models and set the input sizes according to their demands. This benchmark offers a detailed operator-level performance report for AI and hardware developers. We also evaluate four GPU models on OpBench and find that their performances differ on various types of operators and are not fully consistent with the performance metric FLOPS (floating point operations per second).
10.1007/s11432-023-3989-3
opbench: an operator-level gpu benchmark for deep learning
operators (such as conv and relu) play an important role in deep neural networks. every neural network is composed of a series of differentiable operators. however, existing ai benchmarks mainly focus on accessing model training and inference performance of deep learning systems on specific models. to help gpu hardware find computing bottlenecks and intuitively evaluate gpu performance on specific deep learning tasks, this paper focuses on evaluating gpu performance at the operator level. we statistically analyze the information of operators on 12 representative deep learning models from six prominent ai tasks and provide an operator dataset to show the different importance of various types of operators in different networks. an operator-level benchmark, opbench, is proposed on the basis of this dataset, allowing users to choose from a given range of models and set the input sizes according to their demands. this benchmark offers a detailed operator-level performance report for ai and hardware developers. we also evaluate four gpu models on opbench and find that their performances differ on various types of operators and are not fully consistent with the performance metric flops (floating point operations per second).
[ "operators", "relu", "an important role", "deep neural networks", "every neural network", "a series", "differentiable operators", "existing ai benchmarks", "model training and inference performance", "deep learning systems", "specific models", "gpu hardware", "computing bottlenecks", "gpu performance", "specific deep learning tasks", "this paper", "gpu performance", "the operator level", "we", "the information", "operators", "12 representative deep learning models", "six prominent ai tasks", "an operator dataset", "the different importance", "various types", "operators", "different networks", "an operator-level benchmark, opbench", "the basis", "this dataset", "users", "a given range", "models", "the input sizes", "their demands", "this benchmark", "a detailed operator-level performance report", "ai and hardware developers", "we", "four gpu models", "opbench", "their performances", "various types", "operators", "the performance metric flops", "floating point operations", "12", "six", "four", "second" ]
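Operator-level benchmarking of the kind described can be sketched minimally: time each operator in isolation on a fixed input and report a robust statistic per operator. The snippet below uses NumPy stand-ins for Conv/ReLU-style operators on the CPU; `benchmark_op` and the operator set are hypothetical illustrations, not OpBench's actual harness.

```python
import time
import numpy as np

def benchmark_op(fn, arg, repeats=20):
    """Time a single operator: median wall-clock seconds over `repeats`."""
    fn(arg)  # warm-up run, excluded from timing
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(arg)
        times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]

x = np.random.default_rng(0).standard_normal((256, 256)).astype(np.float32)
ops = {
    "relu":    lambda a: np.maximum(a, 0.0),
    "matmul":  lambda a: a @ a,
    "softmax": lambda a: np.exp(a - a.max(axis=1, keepdims=True))
               / np.exp(a - a.max(axis=1, keepdims=True)).sum(axis=1, keepdims=True),
}
report = {name: benchmark_op(fn, x) for name, fn in ops.items()}
```

Per-operator medians like these are what let a report rank which operator types dominate a model's runtime, independently of any end-to-end model benchmark.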
Diabetes detection based on machine learning and deep learning approaches
[ "Boon Feng Wee", "Saaveethya Sivakumar", "King Hann Lim", "W. K. Wong", "Filbert H. Juwono" ]
The increasing number of individuals with diabetes across the globe has alarmed the medical sector to seek alternatives to improve their medical technologies. Machine learning and deep learning approaches are active areas of research in developing intelligent and efficient diabetes detection systems. This study thoroughly investigates and discusses the impacts of the latest machine learning and deep learning approaches in diabetes identification/classification. It is observed that diabetes data are limited in availability. Available databases comprise lab-based and invasive test measurements. Investigation of anthropometric measurements and non-invasive tests must be performed to create a cost-effective yet high-performance solution. Several findings showed the possibility of reconstructing the detection models based on anthropometric measurements and non-invasive medical indicators. This study investigated the consequences of oversampling techniques and data dimensionality reduction through feature selection approaches. The future direction is highlighted in the research of feature selection approaches to improve the accuracy and reliability of diabetes identification.
10.1007/s11042-023-16407-5
diabetes detection based on machine learning and deep learning approaches
the increasing number of diabetes individuals in the globe has alarmed the medical sector to seek alternatives to improve their medical technologies. machine learning and deep learning approaches are active research in developing intelligent and efficient diabetes detection systems. this study profoundly investigates and discusses the impacts of the latest machine learning and deep learning approaches in diabetes identification/classifications. it is observed that diabetes data are limited in availability. available databases comprise lab-based and invasive test measurements. investigating anthropometric measurements and non-invasive tests must be performed to create a cost-effective yet high-performance solution. several findings showed the possibility of reconstructing the detection models based on anthropometric measurements and non-invasive medical indicators. this study investigated the consequences of oversampling techniques and data dimensionality reduction through feature selection approaches. the future direction is highlighted in the research of feature selection approaches to improve the accuracy and reliability of diabetes identifications.
[ "the increasing number", "diabetes individuals", "the globe", "the medical sector", "alternatives", "their medical technologies", "machine learning", "deep learning approaches", "active research", "intelligent and efficient diabetes detection systems", "this study profoundly investigates", "the impacts", "the latest machine learning", "deep learning approaches", "diabetes identification/classifications", "it", "diabetes data", "availability", "available databases", "lab-based and invasive test measurements", "anthropometric measurements", "non-invasive tests", "a cost-effective yet high-performance solution", "several findings", "the possibility", "the detection models", "anthropometric measurements", "non-invasive medical indicators", "this study", "the consequences", "techniques and data dimensionality reduction", "feature selection approaches", "the future direction", "the research", "feature selection approaches", "the accuracy", "reliability", "diabetes identifications" ]
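Two of the preprocessing steps the study examines, oversampling and dimensionality reduction via feature selection, can be sketched in plain NumPy. This is an illustrative toy (random duplication of minority-class rows and variance-ranked column selection), not the study's actual pipeline; all names and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_oversample(X, y):
    """Duplicate randomly chosen minority-class rows until classes balance."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    parts_X, parts_y = [X], [y]
    for c, n in zip(classes, counts):
        if n < target:
            idx = rng.choice(np.where(y == c)[0], size=target - n, replace=True)
            parts_X.append(X[idx])
            parts_y.append(y[idx])
    return np.concatenate(parts_X), np.concatenate(parts_y)

def top_variance_features(X, k):
    """Crude feature selection: keep the k highest-variance columns."""
    order = np.argsort(X.var(axis=0))[::-1]
    return np.sort(order[:k])

# Synthetic imbalanced data: ~20% positives, one high-variance feature.
X = rng.standard_normal((100, 6))
X[:, 3] *= 10.0
y = (rng.random(100) < 0.2).astype(int)
Xb, yb = random_oversample(X, y)
keep = top_variance_features(Xb, k=3)
```

In practice one would use stratified resampling and supervised feature scoring, but the balance-then-select shape of the pipeline is the same.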
Deep-kidney: an effective deep learning framework for chronic kidney disease prediction
[ "Dina Saif", "Amany M. Sarhan", "Nada M. Elshennawy" ]
Chronic kidney disease (CKD) is one of today’s most serious illnesses. Because this disease usually does not manifest itself until the kidney is severely damaged, early detection saves many people’s lives. Therefore, the contribution of the current paper is proposing three predictive models to predict possible CKD occurrence within 6 or 12 months before disease existence, namely: convolutional neural network (CNN), long short-term memory (LSTM) model, and deep ensemble model. The deep ensemble model fuses three base deep learning classifiers (CNN, LSTM, and LSTM-BLSTM) using a majority voting technique. To evaluate the performance of the proposed models, several experiments were conducted on two different public datasets. Among the predictive models and the results reached, the deep ensemble model is superior to all the other models, with an accuracy of 0.993 and 0.992 for the 6-month data and 12-month data predictions, respectively.
10.1007/s13755-023-00261-8
deep-kidney: an effective deep learning framework for chronic kidney disease prediction
chronic kidney disease (ckd) is one of today’s most serious illnesses. because this disease usually does not manifest itself until the kidney is severely damaged, early detection saves many people’s lives. therefore, the contribution of the current paper is proposing three predictive models to predict ckd possible occurrence within 6 or 12 months before disease existence namely; convolutional neural network (cnn), long short-term memory (lstm) model, and deep ensemble model. the deep ensemble model fuses three base deep learning classifiers (cnn, lstm, and lstm-blstm) using majority voting technique. to evaluate the performance of the proposed models, several experiments were conducted on two different public datasets. among the predictive models and the reached results, the deep ensemble model is superior to all the other models, with an accuracy of 0.993 and 0.992 for the 6-month data and 12-month data predictions, respectively.
[ "chronic kidney disease", "ckd", "today’s most serious illnesses", "this disease", "itself", "the kidney", "early detection", "many people’s lives", "the contribution", "the current paper", "three predictive models", "possible occurrence", "disease existence", "convolutional neural network", "cnn", "long short-term memory", "lstm) model", "deep ensemble model", "the deep ensemble model", "three base deep learning classifiers", "cnn", "lstm", "lstm-blstm", "majority voting technique", "the performance", "the proposed models", "several experiments", "two different public datasets", "the predictive models", "the reached results", "the deep ensemble model", "all the other models", "an accuracy", "the 6-month data", "12-month data predictions", "today", "three", "6 or 12 months", "cnn", "three", "cnn", "two", "0.993", "0.992", "6-month", "12-month" ]
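The fusion step of the deep ensemble, majority voting over the three base classifiers, can be sketched independently of the networks themselves. The per-model predictions below are made-up stand-ins for CNN/LSTM/LSTM-BLSTM outputs, not results from the paper.

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-model label predictions sample-by-sample.
    `predictions` is a list of equal-length label sequences, one per model."""
    fused = []
    for labels in zip(*predictions):
        fused.append(Counter(labels).most_common(1)[0][0])
    return fused

# Hypothetical base-classifier outputs on five samples (1 = CKD predicted).
cnn_pred   = [1, 0, 1, 1, 0]
lstm_pred  = [1, 1, 1, 0, 0]
blstm_pred = [0, 1, 1, 1, 0]
ensemble = majority_vote([cnn_pred, lstm_pred, blstm_pred])
```

With an odd number of binary voters there are no ties, which is one reason three base classifiers is a convenient ensemble size.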
Predicting Potato Crop Yield with Machine Learning and Deep Learning for Sustainable Agriculture
[ "El-Sayed M. El-Kenawy", "Amel Ali Alhussan", "Nima Khodadadi", "Seyedali Mirjalili", "Marwa M. Eid" ]
Potatoes are an important crop in the world; they are the main source of food for a large number of people globally and also provide an income for many people. The true forecasting of potato yields is a determining factor for the rational use and maximization of agricultural practices, responsible management of the resources, and wider regions’ food security. The latest discoveries in machine learning and deep learning provide new directions to yield prediction models more accurately and sparingly. In this study, we evaluated different types of predictive models, including K-nearest neighbors (KNN), gradient boosting, XGBoost, and multilayer perceptron that use machine learning, as well as graph neural networks (GNNs), gated recurrent units (GRUs), and long short-term memory networks (LSTM), which are popular deep learning models. These models are evaluated on the basis of performance measures such as mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE) to assess how accurately they predict potato yields. The final results show that although gradient boosting and XGBoost algorithms are good at potato yield prediction, GNNs and LSTMs not only have the advantage of high accuracy but also capture the complex spatial and temporal patterns in the data. Gradient boosting resulted in an MSE of 0.03438 and an R2 of 0.49168, while XGBoost had an MSE of 0.03583 and an R2 of 0.35106. Out of all deep learning models, GNNs displayed an MSE of 0.02363 and an R2 of 0.51719, excelling in the overall performance. LSTMs and GRUs were reported to be very promising as well, with LSTMs achieving an MSE of 0.03177 and GRUs reaching an MSE of 0.03150. These findings underscore the potential of advanced predictive models to support sustainable agricultural practices and informed decision-making in the context of potato farming.
10.1007/s11540-024-09753-w
predicting potato crop yield with machine learning and deep learning for sustainable agriculture
potatoes are an important crop in the world; they are the main source of food for a large number of people globally and also provide an income for many people. the true forecasting of potato yields is a determining factor for the rational use and maximization of agricultural practices, responsible management of the resources, and wider regions’ food security. the latest discoveries in machine learning and deep learning provide new directions to yield prediction models more accurately and sparingly. from the study, we evaluated different types of predictive models, including k-nearest neighbors (knn), gradient boosting, xgboost, and multilayer perceptron that use machine learning, as well as graph neural networks (gnns), gated recurrent units (grus), and long short-term memory networks (lstm), which are popular in deep learning models. these models are evaluated on the basis of some performance measures like mean squared error (mse), root mean squared error (rmse), and mean absolute error (mae) to know how much they accurately predict the potato yields. the terminal results show that although gradient boosting and xgboost algorithms are good at potato yield prediction, gnns and lstms not only have the advantage of high accuracy but also capture the complex spatial and temporal patterns in the data. gradient boosting resulted in an mse of 0.03438 and an r2 of 0.49168, while xgboost had an mse of 0.03583 and an r2 of 0.35106. out of all deep learning models, gnns displayed an mse of 0.02363 and an r2 of 0.51719, excelling in the overall performance. lstms and grus were reported to be very promising as well, with lstms comprehending an mse of 0.03177 and grus grabbing an mse of 0.03150. these findings underscore the potential of advanced predictive models to support sustainable agricultural practices and informed decision-making in the context of potato farming.
[ "potatoes", "an important crop", "the world", "they", "the main source", "food", "a large number", "people", "an income", "many people", "the true forecasting", "potato yields", "a determining factor", "the rational use", "maximization", "agricultural practices", "responsible management", "the resources", "wider regions’ food security", "the latest discoveries", "machine learning", "deep learning", "new directions", "prediction models", "the study", "we", "different types", "predictive models", "k-nearest neighbors", "gradient boosting", "xgboost", "multilayer perceptron", "that", "machine learning", "graph neural networks", "gnns", "grus", "lstm", "which", "deep learning models", "these models", "the basis", "some performance measures", "mean squared error", "mse", "root mean squared error", "rmse", "absolute error", "mae", "they", "the potato yields", "the terminal results", "gradient boosting", "xgboost algorithms", "potato yield prediction", "gnns", "the advantage", "high accuracy", "the complex spatial and temporal patterns", "the data", "an mse", "an r2", "xgboost", "an mse", "an r2", "all deep learning models", "gnns", "an mse", "an r2", "the overall performance", "grus", "an mse", "an mse", "these findings", "the potential", "advanced predictive models", "sustainable agricultural practices", "informed decision-making", "the context", "potato farming", "0.03438", "0.49168", "0.03583", "0.02363", "0.51719", "0.03177", "0.03150" ]
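The evaluation measures used to score the yield models (MSE, RMSE, MAE, R²) are standard and easy to state explicitly. The helper below is a plain-Python sketch of those definitions, with made-up numbers, not the study's code or data.

```python
import math

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, MAE and R^2 for a set of regression predictions."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - (mse * n) / ss_tot  # 1 - SSE / SS_tot
    return {"mse": mse, "rmse": math.sqrt(mse), "mae": mae, "r2": r2}

# Hypothetical yields (true vs predicted), just to exercise the formulas.
m = regression_metrics([3.0, 5.0, 7.0, 9.0], [2.5, 5.0, 7.5, 9.0])
```

Note that R² compares the model's squared error against a predict-the-mean baseline, which is why values near 0.5, as reported above, still indicate substantial explained variance.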
Prediction of glycopeptide fragment mass spectra by deep learning
[ "Yi Yang", "Qun Fang" ]
Deep learning has achieved a notable success in mass spectrometry-based proteomics and is now emerging in glycoproteomics. While various deep learning models can predict fragment mass spectra of peptides with good accuracy, they cannot cope with the non-linear glycan structure in an intact glycopeptide. Herein, we present DeepGlyco, a deep learning-based approach for the prediction of fragment spectra of intact glycopeptides. Our model adopts tree-structured long-short term memory networks to process the glycan moiety and a graph neural network architecture to incorporate potential fragmentation pathways of a specific glycan structure. This feature is beneficial to model explainability and differentiation ability of glycan structural isomers. We further demonstrate that predicted spectral libraries can be used for data-independent acquisition glycoproteomics as a supplement for library completeness. We expect that this work will provide a valuable deep learning resource for glycoproteomics.
10.1038/s41467-024-46771-1
prediction of glycopeptide fragment mass spectra by deep learning
deep learning has achieved a notable success in mass spectrometry-based proteomics and is now emerging in glycoproteomics. while various deep learning models can predict fragment mass spectra of peptides with good accuracy, they cannot cope with the non-linear glycan structure in an intact glycopeptide. herein, we present deepglyco, a deep learning-based approach for the prediction of fragment spectra of intact glycopeptides. our model adopts tree-structured long-short term memory networks to process the glycan moiety and a graph neural network architecture to incorporate potential fragmentation pathways of a specific glycan structure. this feature is beneficial to model explainability and differentiation ability of glycan structural isomers. we further demonstrate that predicted spectral libraries can be used for data-independent acquisition glycoproteomics as a supplement for library completeness. we expect that this work will provide a valuable deep learning resource for glycoproteomics.
[ "deep learning", "a notable success", "mass spectrometry-based proteomics", "glycoproteomics", "various deep learning models", "fragment mass spectra", "peptides", "good accuracy", "they", "the non-linear glycan structure", "an intact glycopeptide", "we", "deepglyco", "a deep learning-based approach", "the prediction", "fragment spectra", "intact glycopeptides", "our model", "tree-structured long-short term memory networks", "the glycan moiety", "a graph neural network architecture", "potential fragmentation pathways", "a specific glycan structure", "this feature", "model explainability", "differentiation ability", "glycan structural isomers", "we", "spectral libraries", "data-independent acquisition glycoproteomics", "a supplement", "library completeness", "we", "this work", "a valuable deep learning resource", "glycoproteomics" ]
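The tree-structured processing of the glycan moiety can be illustrated with a toy bottom-up recursion: each node combines its own feature with its children's encodings, which is the structural idea behind tree-LSTMs (the gating and learned weights of a real tree-LSTM are omitted). Everything here, including the per-monosaccharide feature and the example tree, is a hypothetical stand-in for DeepGlyco's actual model.

```python
def tree_embed(node, child_weight=0.5):
    """Toy bottom-up tree encoding: each node's score is its own feature
    plus a weighted sum of its children's encodings (tree-LSTM stand-in)."""
    label, children = node
    feature = float(len(label))  # hypothetical per-monosaccharide feature
    return feature + child_weight * sum(tree_embed(c, child_weight) for c in children)

# A tiny hypothetical glycan tree: HexNAc with two branches,
# one of which carries a fucose.
glycan = ("HexNAc", [("Hex", []), ("Hex", [("Fuc", [])])])
score = tree_embed(glycan)
```

The point of the recursion is that branching structure changes the encoding: a linear chain and a branched tree over the same monosaccharides yield different scores, which is what lets such models separate glycan structural isomers.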
Deep learning application in diagnosing breast cancer recurrence
[ "Zeinab Jam", "Amir Albadvi", "Alireza Atashi" ]
Patients' lives can always be saved when diseases, especially special diseases, are detected early. The chances of a patient surviving can be increased by early detection. Breast cancer is one of the deadliest and most common cancers. After recovering from breast cancer, patients are always worried about recurrence and return. The use of modern technology, however, can help predict disease recurrence at an early stage, allowing patients to receive treatment sooner. Significant strides have been achieved in deep learning, demonstrating strong performance in handling unstructured data challenges. However, when it comes to predicting tabular data, deep learning hasn't quite matched its success with unstructured data. Presently, ensemble models relying on gradient-boosted decision trees (GBDT) are frequently favored for tabular data prediction tasks. Typically, these GBDT-based models outshine deep learning approaches. Many novel deep learning techniques are emerging for handling tabular data. TabNet, for instance, mirrors decision tree feature selection within a neural network framework. AutoInt addresses high dimensionality by condensing data through embedding layers. Tab Transformer adapts the transformer model, generating text representations for categorical attributes. Despite their innovation, these methods remain less recognized compared to those for image and text data processing. In this study, 158 different characteristics of 5142 breast cancer patients from 1997 to 2019 were examined. We aim to evaluate deep learning techniques' effectiveness in detecting breast cancer recurrence. Through examination of evaluation metrics, it becomes evident that deep learning approaches applied to tabular data surpass traditional machine learning algorithms, even when dealing with imbalanced datasets. Ultimately, the results derived from each algorithm are analyzed, and the paper concludes with a review and comparison of the findings.
10.1007/s11042-024-19423-1
deep learning application in diagnosing breast cancer recurrence
patients' lives can always be saved when diseases, especially special diseases, are detected early. the chances of a patient surviving can be increased by early detection. breast cancer is one of the deadliest and most common cancers. after recovering from breast cancer, patients are always worried about recurrence and return. the use of modern technology, however, can help predict disease recurrence at an early stage, allowing patients to receive treatment sooner. significant strides have been achieved in deep learning, demonstrating strong performance in handling unstructured data challenges. however, when it comes to predicting tabular data, deep learning hasn't quite matched its success with unstructured data. presently, ensemble models relying on gradient-boosted decision trees (gbdt) are frequently favored for tabular data prediction tasks. typically, these gbdt-based models outshine deep learning approaches. many novel deep learning techniques are emerging for handling tabular data. tabnet, for instance, mirrors decision tree feature selection within a neural network framework. autoint addresses high dimensionality by condensing data through embedding layers. tab transformer adapts the transformer model, generating text representations for categorical attributes. despite their innovation, these methods remain less recognized compared to those for image and text data processing. in this study, 158 different characteristics of 5142 breast cancer patients from 1997 to 2019 were examined. we aim to evaluate deep learning techniques' effectiveness in detecting breast cancer recurrence. through examination of evaluation metrics, it becomes evident that deep learning approaches applied to tabular data surpass traditional machine learning algorithms, even when dealing with imbalanced datasets. ultimately, the results derived from each algorithm are analyzed, and the paper concludes with a review and comparison of the findings.
[ "patients' lives", "diseases", "especially special diseases", "the chances", "a patient surviving", "early detection", "breast cancer", "the deadliest and common cancers", "breast cancer", "patients", "recurrence", "return", "the use", "modern technology", "disease recurrence", "an early stage", "patients", "treatment", "significant strides", "deep learning", "strong performance", "unstructured data challenges", "it", "tabular data", "deep learning", "its success", "unstructured data", "ensemble models", "gradient-boosted decision trees", "gbdt", "tabular data prediction tasks", "these gbdt-based models", "many novel deep learning techniques", "tabular data", "tabnet", "instance", "mirrors decision tree", "a neural network framework", "autoint addresses high dimensionality", "data", "embedding layers", "tab transformer", "the transformer model", "text representations", "categorical attributes", "their innovation", "these methods", "those", "image", "text data", "158 different characteristics", "5142 breast cancer patients", "we", "deep learning techniques effectiveness", "breast cancer recurrence", "examination", "evaluation metrics", "it", "deep learning approaches", "data surpass traditional machine learning algorithms", "imbalanced datasets", "the results", "each algorithm", "a review", "comparison", "the findings", "one", "autoint", "158", "5142", "from 1997 to 2019" ]
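The embedding-layer idea mentioned for AutoInt and Tab Transformer, mapping each categorical value to a dense vector before any deep layers see it, can be sketched as a simple lookup table. The class, vocabulary, and dimensions below are hypothetical illustrations, not code from those models.

```python
import numpy as np

rng = np.random.default_rng(7)

class CategoricalEmbedding:
    """Lookup table mapping category values to dense vectors, as tabular
    deep models do before any attention/transformer layers. In a real
    model `table` would be a trainable parameter."""
    def __init__(self, vocab, dim):
        self.index = {cat: i for i, cat in enumerate(vocab)}
        self.table = rng.standard_normal((len(vocab), dim))

    def __call__(self, categories):
        rows = [self.index[c] for c in categories]
        return self.table[rows]

# Hypothetical categorical clinical attribute with three levels.
emb = CategoricalEmbedding(vocab=["ER+", "ER-", "unknown"], dim=4)
vectors = emb(["ER-", "ER+", "ER-"])
```

Identical categories map to identical vectors, so downstream layers receive a fixed-width numeric representation regardless of the vocabulary size.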
CNN-Transformer: A deep learning method for automatically identifying learning engagement
[ "Yan Xiong", "Guo Xinya", "Junjie Xu" ]
Learning engagement is an essential indication to define students' learning pacification in the class, and its automated identification technique is the foundation for exploring how to effectively explain the motive of learning impact modifications and making intelligent teaching choices. Current research has demonstrated that there is a direct link between learning engagement and emotional and behavioural investment, and it is appropriate and required to apply artificial intelligence to perform autonomous assessment. Unfortunately, the number of relevant studies is limited, and the features of learning engagement in certain contexts have not been thoroughly examined. In this research, we highlight the features of a particular application scenario of learning engagement: the application scenario of learning engagement has to incorporate both the coarse-grained information of human body position and the fine-grained information of facial expressions. On the basis of this analysis, a fine-grained learning participation recognition model that suppresses background clutter information is presented. This model can effectively extract coarse and fine-grained information to improve the recognition of learning participation in real-world teaching situations. Particularly, the CNN-Transformer model suggested in this study employs CNN to extract fine-grained information of facial expressions and Transformer to recover coarse-grained information of human body position. Simultaneously, we gathered and categorised real teaching data based on the features of learning engagement situations and enhanced the data quality via crowdsourcing and expert verification. The experimental findings indicate that the CNN-Transformer model can accurately predict the learning engagement of unknown participants with a 92.9% rate of accuracy. Comparative trials reveal that the model's recognition impact is much greater than that of other sophisticated deep learning approaches. Our research offers a framework for future work on deep learning approaches in learning engagement settings.
10.1007/s10639-023-12058-z
cnn-transformer: a deep learning method for automatically identifying learning engagement
learning engagement is an essential indication to define students' learning pacification in the class, and its automated identification technique is the foundation for exploring how to effectively explain the motive of learning impact modifications and making intelligent teaching choices. current research has demonstrated that there is a direct link between learning engagement and emotional and behavioural investment, and it is appropriate and required to apply artificial intelligence to perform autonomous assessment. unfortunately, the number of relevant studies is limited, and the features of learning engagement in certain contexts have not been thoroughly examined. in this research, we highlight the features of a particular application scenario of learning engagement: the application scenario of learning engagement has to incorporate both the coarse-grained information of human body position and the fine-grained information of facial expressions. on the basis of this analysis, a fine-grained learning participation recognition model that suppresses background clutter information is presented. this model can effectively extract coarse and fine-grained information to improve the recognition of learning participation in real-world teaching situations. particularly, the cnn-transformer model suggested in this study employs cnn to extract fine-grained information of facial expressions and transformer to recover coarse-grained information of human body position. simultaneously, we gathered and categorised real teaching data based on the features of learning engagement situations and enhanced the data quality via crowdsourcing and expert verification. the experimental findings indicate that the cnn-transformer model can accurately predict the learning engagement of unknown participants with a 92.9% rate of accuracy. comparative trials reveal that the model's recognition impact is much greater than that of other sophisticated deep learning approaches. our research offers a framework for future work on deep learning approaches in learning engagement settings.
[ "learning engagement", "an essential indication", "students' learning pacification", "the class", "its automated identification technique", "the foundation", "the motive", "impact modifications", "intelligent teaching choices", "current research", "a direct link", "engagement", "emotional investment", "behavioural investment", "it", "artificial intelligence", "autonomous assessment", "the number", "relevant research", "the features", "engagement", "certain contexts", "this research", "we", "the features", "a particular application scenario", "learning engagement", "the application scenario", "learning engagement", "both the coarse-grained information", "human body position", "the fine-grained information", "facial expressions", "the basis", "this analysis", "a fine-grained learning participation recognition model", "that", "background clutter information", "this model", "coarse and fine-grained information", "the recognition", "participation", "real-world teaching situations", "the cnn-transformer model", "this study", "cnn", "fine-grained information", "facial expressions", "transformer", "coarse-grained information", "human body position", "we", "real teaching data", "the features", "engagement situations", "the data quality", "crowdsourcing and expert verification", "the experimental findings", "the cnn-transformer model", "the learning engagement", "unknown participants", "a 92.9% rate", "accuracy", "comparative trials", "the model's recognition impact", "that", "other sophisticated deep learning approaches", "our research", "a framework", "future work", "deep learning approaches", "engagement settings", "cnn", "cnn", "cnn", "92.9%" ]
Deep learning based features extraction for facial gender classification using ensemble of machine learning technique
[ "Fazal Waris", "Feipeng Da", "Shanghuan Liu" ]
Accurate and efficient gender recognition is essential for many applications such as surveillance, security, and biometrics. Recently, deep learning techniques have made remarkable advancements in feature extraction and have become extensively implemented in various applications, including gender classification. However, despite the numerous studies conducted on the problem, correctly recognizing robust and essential features from face images and efficiently distinguishing them with high accuracy in the wild is still a challenging task for real-world applications. This article proposes an approach that combines deep learning and a soft voting-based ensemble model to perform automatic gender classification with high accuracy in an unconstrained environment. In the proposed technique, a novel deep convolutional neural network (DCNN) was designed to extract 128 high-quality and accurate features from face images. The StandardScaler method was then used to pre-process these extracted features, and finally, these preprocessed features were classified with a soft voting ensemble learning model combining the outputs from several machine learning classifiers such as random forest (RF), support vector machine (SVM), linear discriminant analysis (LDA), logistic regression (LR), gradient boosting classifier (GBC) and XGBoost to improve the prediction accuracy. The experimental study was performed on the UTK, labeled faces in the wild (LFW), Adience and FEI datasets. The results attained evidently show that the proposed approach outperforms all current approaches in terms of accuracy across all datasets.
10.1007/s00530-024-01399-5
deep learning based features extraction for facial gender classification using ensemble of machine learning technique
accurate and efficient gender recognition is essential for many applications such as surveillance, security, and biometrics. recently, deep learning techniques have made remarkable advancements in feature extraction and have become extensively implemented in various applications, including gender classification. however, despite the numerous studies conducted on the problem, correctly recognizing robust and essential features from face images and efficiently distinguishing them with high accuracy in the wild is still a challenging task for real-world applications. this article proposes an approach that combines deep learning and a soft voting-based ensemble model to perform automatic gender classification with high accuracy in an unconstrained environment. in the proposed technique, a novel deep convolutional neural network (dcnn) was designed to extract 128 high-quality and accurate features from face images. the standardscaler method was then used to pre-process these extracted features, and finally, these preprocessed features were classified with a soft voting ensemble learning model combining the outputs from several machine learning classifiers such as random forest (rf), support vector machine (svm), linear discriminant analysis (lda), logistic regression (lr), gradient boosting classifier (gbc) and xgboost to improve the prediction accuracy. the experimental study was performed on the utk, labeled faces in the wild (lfw), adience and fei datasets. the results attained evidently show that the proposed approach outperforms all current approaches in terms of accuracy across all datasets.
[ "accurate and efficient gender recognition", "many applications", "surveillance", "security", "biometrics", "deep learning techniques", "remarkable advancements", "feature extraction", "various applications", "gender classification", "the numerous studies", "the problem", "robust and essential features", "face images", "them", "high accuracy", "the wild", "a challenging task", "real-world applications", "this article", "an approach", "that", "deep learning", "soft voting-based ensemble model", "automatic gender classification", "high accuracy", "an unconstrained environment", "the proposed technique", "a novel deep convolutional neural network", "dcnn", "128 high-quality and accurate features", "face images", "the standardscaler method", "these extracted features", "these preprocessed features", "soft voting ensemble learning model", "the outputs", "several machine learning classifiers", "random forest", "rf", "vector machine", "svm", "linear", "discriminant analysis", "logistic regression", "lr", "gradient", "classifier", "gbc", "xgboost", "the prediction accuracy", "the experimental study", "the utk", "the results", "the proposed approach", "all current approaches", "terms", "accuracy", "all datasets", "128", "linear", "gbc" ]
Ensemble of Deep Learning Architectures with Machine Learning for Pneumonia Classification Using Chest X-rays
[ "Rupali Vyas", "Deepak Rao Khadatkar" ]
Pneumonia is a severe health concern, particularly for vulnerable groups, needing early and correct classification for optimal treatment. This study addresses the use of deep learning combined with machine learning classifiers (DLxMLCs) for pneumonia classification from chest X-ray (CXR) images. We deployed modified VGG19, ResNet50V2, and DenseNet121 models for feature extraction, followed by five machine learning classifiers (logistic regression, support vector machine, decision tree, random forest, artificial neural network). The approach we suggested displayed remarkable accuracy, with VGG19 and DenseNet121 models obtaining 99.98% accuracy when combined with random forest or decision tree classifiers. ResNet50V2 achieved 99.25% accuracy with random forest. These results illustrate the advantages of merging deep learning models with machine learning classifiers in boosting the speedy and accurate identification of pneumonia. The study underlines the potential of DLxMLC systems in enhancing diagnostic accuracy and efficiency. By integrating these models into clinical practice, healthcare practitioners could greatly boost patient care and results. Future research should focus on refining these models and exploring their application to other medical imaging tasks, as well as including explainability methodologies to better understand their decision-making processes and build trust in their clinical use. This technique promises promising breakthroughs in medical imaging and patient management.
10.1007/s10278-024-01201-y
ensemble of deep learning architectures with machine learning for pneumonia classification using chest x-rays
pneumonia is a severe health concern, particularly for vulnerable groups, needing early and correct classification for optimal treatment. this study addresses the use of deep learning combined with machine learning classifiers (dlxmlcs) for pneumonia classification from chest x-ray (cxr) images. we deployed modified vgg19, resnet50v2, and densenet121 models for feature extraction, followed by five machine learning classifiers (logistic regression, support vector machine, decision tree, random forest, artificial neural network). the approach we suggested displayed remarkable accuracy, with vgg19 and densenet121 models obtaining 99.98% accuracy when combined with random forest or decision tree classifiers. resnet50v2 achieved 99.25% accuracy with random forest. these results illustrate the advantages of merging deep learning models with machine learning classifiers in boosting the speedy and accurate identification of pneumonia. the study underlines the potential of dlxmlc systems in enhancing diagnostic accuracy and efficiency. by integrating these models into clinical practice, healthcare practitioners could greatly boost patient care and results. future research should focus on refining these models and exploring their application to other medical imaging tasks, as well as including explainability methodologies to better understand their decision-making processes and build trust in their clinical use. this technique promises promising breakthroughs in medical imaging and patient management.
[ "pneumonia", "a severe health concern", "vulnerable groups", "early and correct classification", "optimal treatment", "this study", "the use", "deep learning", "machine learning classifiers", "dlxmlcs", "pneumonia classification", "chest x", "cxr", "we", "modified vgg19", "densenet121 models", "feature extraction", "five machine learning classifiers", "logistic regression", "vector machine", "decision tree", "random forest", "artificial neural network", "the approach", "we", "remarkable accuracy", "vgg19 and densenet121 models", "99.98% accuracy", "random forest", "decision tree classifiers", "resnet50v2", "99.25% accuracy", "random forest", "these results", "the advantages", "deep learning models", "machine learning classifiers", "the speedy and accurate identification", "pneumonia", "the study", "the potential", "dlxmlc systems", "diagnostic accuracy", "efficiency", "these models", "clinical practice", "healthcare practitioners", "patient care", "results", "future research", "these models", "their application", "other medical imaging tasks", "explainability methodologies", "their decision-making processes", "trust", "their clinical use", "this technique", "breakthroughs", "medical imaging and patient management", "five", "99.98%", "99.25%" ]
Instance segmentation on distributed deep learning big data cluster
[ "Mohammed Elhmadany", "Islam Elmadah", "Hossam E. Abdelmunim" ]
Distributed deep learning is a promising approach for training and deploying large and complex deep learning models. This paper presents a comprehensive workflow for deploying and optimizing the YOLACT instance segmentation model on big data clusters. OpenVINO, a toolkit known for its high-speed data processing and ability to optimize deep learning models for deployment on a variety of devices, was used to optimize the YOLACT model. The model is then run on a big data cluster using BigDL, a distributed deep learning library for Apache Spark. BigDL provides a high-level programming interface for defining and training deep neural networks, making it suitable for large-scale deep learning applications. In distributed deep learning, input data is divided and distributed across multiple machines for parallel processing. This approach offers several advantages, including the ability to handle very large data that can be stored in a distributed manner, scalability to decrease processing time by increasing the number of workers, and fault tolerance. The proposed workflow was evaluated on virtual machines and Azure Databricks, a cloud-based platform for big data analytics. The results indicated that the workflow can scale to large datasets and deliver high performance on Azure Databricks. This study explores the benefits and challenges of using distributed deep learning on big data clusters for instance segmentation. Popular distributed deep learning frameworks are discussed, and BigDL is chosen. Overall, this study highlights the practicality of distributed deep learning for deploying and scaling sophisticated deep learning models on big data clusters.
10.1186/s40537-023-00871-9
instance segmentation on distributed deep learning big data cluster
distributed deep learning is a promising approach for training and deploying large and complex deep learning models. this paper presents a comprehensive workflow for deploying and optimizing the yolact instance segmentation model on big data clusters. openvino, a toolkit known for its high-speed data processing and ability to optimize deep learning models for deployment on a variety of devices, was used to optimize the yolact model. the model is then run on a big data cluster using bigdl, a distributed deep learning library for apache spark. bigdl provides a high-level programming interface for defining and training deep neural networks, making it suitable for large-scale deep learning applications. in distributed deep learning, input data is divided and distributed across multiple machines for parallel processing. this approach offers several advantages, including the ability to handle very large data that can be stored in a distributed manner, scalability to decrease processing time by increasing the number of workers, and fault tolerance. the proposed workflow was evaluated on virtual machines and azure databricks, a cloud-based platform for big data analytics. the results indicated that the workflow can scale to large datasets and deliver high performance on azure databricks. this study explores the benefits and challenges of using distributed deep learning on big data clusters for instance segmentation. popular distributed deep learning frameworks are discussed, and bigdl is chosen. overall, this study highlights the practicality of distributed deep learning for deploying and scaling sophisticated deep learning models on big data clusters.
[ "distributed deep learning", "a promising approach", "training", "large and complex deep learning models", "this paper", "a comprehensive workflow", "the yolact instance segmentation model", "big data clusters", "a toolkit", "its high-speed data processing", "ability", "deep learning models", "deployment", "a variety", "devices", "the yolact model", "the model", "a big data cluster", "bigdl", "a distributed deep learning library", "apache spark", "a high-level programming interface", "deep neural networks", "it", "large-scale deep learning applications", "distributed deep learning", "input data", "multiple machines", "parallel processing", "this approach", "several advantages", "the ability", "very large data", "that", "a distributed manner", "scalability", "processing time", "the number", "workers", "tolerance", "the proposed workflow", "virtual machines", "azure databricks", "a cloud-based platform", "big data analytics", "the results", "the workflow", "large datasets", "high performance", "azure databricks", "this study", "the benefits", "challenges", "distributed deep learning", "big data clusters", "instance segmentation", "popular distributed deep learning frameworks", "bigdl", "this study", "the practicality", "distributed deep learning", "sophisticated deep learning models", "big data clusters" ]
Deep learning-based personalized learning recommendation system design for "T++" Guzheng Pedagogy
[ "Xingyue Wang" ]
This study investigates the development and impact of a deep learning-based personalized learning recommendation system designed specifically for 'T++' Guzheng pedagogy. In the realm of music education, particularly in the context of the traditional Chinese Guzheng instrument, technology-driven personalization has the potential to revolutionize learning experiences. The research involves data collection, algorithm development, and integration to create a system that tailors Guzheng learning materials to individual students' skill levels and preferences. Results indicate high recommendation accuracy, increased user satisfaction, and positive engagement rates. This study contributes to the intersection of technology, music education, and cultural preservation, showcasing the promise of personalized learning in the realm of Guzheng pedagogy.
10.1007/s41870-024-01871-5
deep learning-based personalized learning recommendation system design for "t++" guzheng pedagogy
this study investigates the development and impact of a deep learning-based personalized learning recommendation system designed specifically for 't++' guzheng pedagogy. in the realm of music education, particularly in the context of the traditional chinese guzheng instrument, technology-driven personalization has the potential to revolutionize learning experiences. the research involves data collection, algorithm development, and integration to create a system that tailors guzheng learning materials to individual students' skill levels and preferences. results indicate high recommendation accuracy, increased user satisfaction, and positive engagement rates. this study contributes to the intersection of technology, music education, and cultural preservation, showcasing the promise of personalized learning in the realm of guzheng pedagogy.
[ "this study", "the development", "impact", "a deep learning-based personalized learning recommendation system", "t++' guzheng pedagogy", "the realm", "music education", "the context", "the traditional chinese guzheng instrument, technology-driven personalization", "the potential", "learning experiences", "the research", "data collection", "algorithm development", "integration", "a system", "that", "guzheng learning materials", "individual students' skill levels", "preferences", "results", "high recommendation accuracy", "increased user satisfaction", "positive engagement rates", "this study", "the intersection", "technology", "music education", "cultural preservation", "the promise", "personalized learning", "the realm", "guzheng pedagogy", "chinese", "guzheng", "guzheng" ]
Deep learning in two-dimensional materials: Characterization, prediction, and design
[ "Xinqin Meng", "Chengbing Qin", "Xilong Liang", "Guofeng Zhang", "Ruiyun Chen", "Jianyong Hu", "Zhichun Yang", "Jianzhong Huo", "Liantuan Xiao", "Suotang Jia" ]
Since the isolation of graphene, two-dimensional (2D) materials have attracted increasing interest because of their excellent chemical and physical properties, as well as promising applications. Nonetheless, particular challenges persist in their further development, particularly in the effective identification of diverse 2D materials, the domains of large-scale and high-precision characterization, as well as intelligent function prediction and design. These issues are mainly solved by computational techniques, such as density function theory and molecular dynamic simulation, which require powerful computational resources and high time consumption. The booming deep learning methods in recent years offer innovative insights and tools to address these challenges. This review comprehensively outlines the current progress of deep learning within the realm of 2D materials. Firstly, we will briefly introduce the basic concepts of deep learning and commonly used architectures, including convolutional neural and generative adversarial networks, as well as U-net models. Then, the characterization of 2D materials by deep learning methods will be discussed, including defects and materials identification, as well as automatic thickness characterization. Thirdly, the research progress for predicting the unique properties of 2D materials, involving electronic, mechanical, and thermodynamic features, will be evaluated succinctly. Lately, the current works on the inverse design of functional 2D materials will be presented. At last, we will look forward to the application prospects and opportunities of deep learning in other aspects of 2D materials. This review may offer some guidance to boost the understanding and employment of novel 2D materials.
10.1007/s11467-024-1394-7
deep learning in two-dimensional materials: characterization, prediction, and design
since the isolation of graphene, two-dimensional (2d) materials have attracted increasing interest because of their excellent chemical and physical properties, as well as promising applications. nonetheless, particular challenges persist in their further development, particularly in the effective identification of diverse 2d materials, the domains of large-scale and high-precision characterization, as well as intelligent function prediction and design. these issues are mainly solved by computational techniques, such as density function theory and molecular dynamic simulation, which require powerful computational resources and high time consumption. the booming deep learning methods in recent years offer innovative insights and tools to address these challenges. this review comprehensively outlines the current progress of deep learning within the realm of 2d materials. firstly, we will briefly introduce the basic concepts of deep learning and commonly used architectures, including convolutional neural and generative adversarial networks, as well as u-net models. then, the characterization of 2d materials by deep learning methods will be discussed, including defects and materials identification, as well as automatic thickness characterization. thirdly, the research progress for predicting the unique properties of 2d materials, involving electronic, mechanical, and thermodynamic features, will be evaluated succinctly. lately, the current works on the inverse design of functional 2d materials will be presented. at last, we will look forward to the application prospects and opportunities of deep learning in other aspects of 2d materials. this review may offer some guidance to boost the understanding and employment of novel 2d materials.
[ "the isolation", "graphene", "two-dimensional (2d) materials", "increasing interest", "their excellent chemical and physical properties", "promising applications", "particular challenges", "their further development", "the effective identification", "diverse 2d materials", "the domains", "large-scale and high-precision characterization", "also intelligent function prediction", "design", "these issues", "computational techniques", "density function theory", "molecular dynamic simulation", "which", "powerful computational resources", "high time consumption", "the booming deep learning methods", "recent years", "innovative insights", "tools", "these challenges", "this review", "the current progress", "deep learning", "the realm", "2d materials", "we", "the basic concepts", "deep learning", "commonly used architectures", "convolutional neural and generative adversarial networks", "u-net models", "the characterization", "2d materials", "deep learning methods", "defects and materials identification", "automatic thickness characterization", "the research progress", "the unique properties", "2d materials", "electronic, mechanical, and thermodynamic features", "the inverse design", "functional 2d materials", "we", "the application prospects", "opportunities", "deep learning", "other aspects", "2d materials", "this review", "some guidance", "the understanding and employing novel 2d materials", "two", "2d", "2d", "recent years", "2d", "firstly", "2d", "thirdly", "2d", "2d", "2d", "2d" ]
Revitalizing Arabic Character Classification: Unleashing the Power of Deep Learning with Transfer Learning and Data Augmentation Techniques
[ "Marwa Amara", "Nadia Smairi", "Sami Mnasri", "Abdelmalek Zidouri" ]
Deep learning techniques have demonstrated remarkable success in various domains, including character classification tasks. However, the performance of deep learning models heavily relies on the availability of large-annotated datasets. This research work is motivated by the need to overcome the difficulties associated with handwritten Arabic character recognition, and the constraints provided by limited training data. It is also motivated by the need to enhance the model’s generalizability to unknown characteristics and to improve the accuracy of deep learning character classification models. To overcome this limitation, we apply transfer learning and data augmentation strategies to improve the character classification using deep learning. The proposed model transfers knowledge from previously trained models via transfer learning, addresses data scarcity, and reflects generalizable properties. Indeed, we utilize a VGG16-ImageNet transfer learning model, which is systematically enhanced through data augmentation across three distinct models: pre-trained ImageNet weights with a frozen backbone, pre-trained ImageNet weights with a fine-tuned backbone, and a randomly initialized backbone. In each case, data augmentation plays a critical role. Our experimental results show that better precision and recall values were recorded for most classes in our dataset, which indicates the model’s ability to accurately identify instances of each character. Moreover, when applying our method to the IFHCDB and HACDB datasets, we observed an impressive recognition accuracy of 96.01% and 97.15%, respectively. This clearly indicates that involving transfer learning and data augmentation significantly improves the performance of deep learning models, especially for small size training datasets.
10.1007/s13369-024-08818-9
revitalizing arabic character classification: unleashing the power of deep learning with transfer learning and data augmentation techniques
deep learning techniques have demonstrated remarkable success in various domains, including character classification tasks. however, the performance of deep learning models heavily relies on the availability of large-annotated datasets. this research work is motivated by the need to overcome the difficulties associated with handwritten arabic character recognition, and the constraints provided by limited training data. it is also motivated by the need to enhance the model’s generalizability to unknown characteristics and to improve the accuracy of deep learning character classification models. to overcome this limitation, we apply transfer learning and data augmentation strategies to improve the character classification using deep learning. the proposed model transfers knowledge from previously trained models via transfer learning, addresses data scarcity, and reflects generalizable properties. indeed, we utilize a vgg16-imagenet transfer learning model, which is systematically enhanced through data augmentation across three distinct models: pre-trained imagenet weights with a frozen backbone, pre-trained imagenet weights with a fine-tuned backbone, and a randomly initialized backbone. in each case, data augmentation plays a critical role. our experimental results show that better precision and recall values were recorded for most classes in our dataset, which indicates the model’s ability to accurately identify instances of each character. moreover, when applying our method to the ifhcdb and hacdb datasets, we observed an impressive recognition accuracy of 96.01% and 97.15%, respectively. this clearly indicates that involving transfer learning and data augmentation significantly improves the performance of deep learning models, especially for small size training datasets.
[ "deep learning techniques", "remarkable success", "various domains", "character classification tasks", "the performance", "deep learning models", "the availability", "large-annotated datasets", "this research work", "the need", "the difficulties", "handwritten arabic character recognition", "the constraints", "limited training data", "it", "the need", "the model’s generalizability", "unknown characteristics", "the accuracy", "deep learning character classification models", "this limitation", "we", "transfer learning and data augmentation strategies", "the character classification", "deep learning", "the proposed model transfers", "previously trained models", "transfer learning", "addresses data scarcity", "generalizable properties", "we", "a vgg16-imagenet transfer learning model", "which", "data augmentation", "three distinct models", "pre-trained imagenet weights", "a frozen backbone", "pre-trained imagenet weights", "a fine-tuned backbone", "each case", "data augmentation", "a critical role", "our experimental results", "better precision", "recall values", "most classes", "our dataset", "which", "the model’s ability", "instances", "each character", "our method", "the ifhcdb", "hacdb datasets", "we", "an impressive recognition accuracy", "96.01%", "97.15%", "this", "transfer learning and data augmentation", "the performance", "deep learning models", "small size training datasets", "recognition", "three", "96.01%", "97.15%" ]
A systematic survey of fuzzy deep learning for uncertain medical data
[ "Yuanhang Zheng", "Zeshui Xu", "Tong Wu", "Zhang Yi" ]
Intelligent medical industry is in a rapid stage of development around the world, accompanied by the expanding market size and basic theories of intelligent medical diagnosis and decision-making. Deep learning models have achieved good practical results in the medical domain. However, traditional deep learning is almost always computed and developed with crisp values, while imprecise, uncertain, and vague medical data is common in the process of diagnosis and treatment. It is important and significant to review the contributions of fuzzy deep learning for uncertain medical data, because fuzzy deep learning, which originated from fuzzy sets, can effectively deal with uncertain and inaccurate information, providing new viewpoints for alleviating the presence of noise, artifact or high dimensional unstructured information in uncertain medical data. Therefore, taking focus on the intersection of both different fuzzy deep learning models and several types of uncertain medical data, the paper first constructs four types of frameworks of fuzzy deep learning models used for uncertain medical data, and investigates the status from three aspects: fuzzy deep learning models, uncertain medical data and application scenarios. Then the performance evaluation metrics of fuzzy deep learning models are analyzed in detail. This work has some original points: (1) four types of frameworks of applying fuzzy deep learning models for uncertain medical data are first proposed. (2) Seven fuzzy deep learning models, five types of uncertain medical data, and five application scenarios are reviewed in detail, respectively. (3) The advantages, challenges, and future research directions of fuzzy deep learning for uncertain medical data are critically analyzed, providing valuable suggestions for further deep research.
10.1007/s10462-024-10871-7
a systematic survey of fuzzy deep learning for uncertain medical data
intelligent medical industry is in a rapid stage of development around the world, accompanied by the expanding market size and basic theories of intelligent medical diagnosis and decision-making. deep learning models have achieved good practical results in the medical domain. however, traditional deep learning is almost always computed and developed with crisp values, while imprecise, uncertain, and vague medical data is common in the process of diagnosis and treatment. it is important and significant to review the contributions of fuzzy deep learning for uncertain medical data, because fuzzy deep learning, which originated from fuzzy sets, can effectively deal with uncertain and inaccurate information, providing new viewpoints for alleviating the presence of noise, artifact or high dimensional unstructured information in uncertain medical data. therefore, taking focus on the intersection of both different fuzzy deep learning models and several types of uncertain medical data, the paper first constructs four types of frameworks of fuzzy deep learning models used for uncertain medical data, and investigates the status from three aspects: fuzzy deep learning models, uncertain medical data and application scenarios. then the performance evaluation metrics of fuzzy deep learning models are analyzed in detail. this work has some original points: (1) four types of frameworks of applying fuzzy deep learning models for uncertain medical data are first proposed. (2) seven fuzzy deep learning models, five types of uncertain medical data, and five application scenarios are reviewed in detail, respectively. (3) the advantages, challenges, and future research directions of fuzzy deep learning for uncertain medical data are critically analyzed, providing valuable suggestions for further deep research.
[ "intelligent medical industry", "a rapid stage", "development", "the world", "the expanding market size", "basic theories", "intelligent medical diagnosis", "decision-making", "deep learning models", "good practical results", "medical domain", "traditional deep learning", "crisp values", "imprecise", "vague medical data", "the process", "diagnosis", "treatment", "it", "the contributions", "fuzzy deep learning", "uncertain medical data", "fuzzy deep learning", "that", "fuzzy sets", "uncertain and inaccurate information", "new viewpoints", "the presence", "noise", "artifact", "high dimensional unstructured information", "uncertain medical data", "focus", "the intersection", "both different fuzzy deep learning models", "several types", "uncertain medical data", "the paper", "four types", "frameworks", "fuzzy deep learning models", "uncertain medical data", "the status", "three aspects", "fuzzy deep learning models", "uncertain medical data", "application scenarios", "the performance evaluation metrics", "fuzzy deep learning models", "details", "this work", "some original points", "(1) four types", "frameworks", "fuzzy deep learning models", "uncertain medical data", "(2) seven fuzzy deep learning models", "five types", "uncertain medical data", "five application scenarios", "details", "(3) the advantages", "challenges", "future research directions", "fuzzy deep learning", "uncertain medical data", "valuable suggestions", "further deep research", "first", "four", "three", "1", "first", "2", "seven", "five", "five", "3" ]
Predicting discrete-time bifurcations with deep learning
[ "Thomas M. Bury", "Daniel Dylewsky", "Chris T. Bauch", "Madhur Anand", "Leon Glass", "Alvin Shrier", "Gil Bub" ]
Many natural and man-made systems are prone to critical transitions—abrupt and potentially devastating changes in dynamics. Deep learning classifiers can provide an early warning signal for critical transitions by learning generic features of bifurcations from large simulated training data sets. So far, classifiers have only been trained to predict continuous-time bifurcations, ignoring rich dynamics unique to discrete-time bifurcations. Here, we train a deep learning classifier to provide an early warning signal for the five local discrete-time bifurcations of codimension-one. We test the classifier on simulation data from discrete-time models used in physiology, economics and ecology, as well as experimental data of spontaneously beating chick-heart aggregates that undergo a period-doubling bifurcation. The classifier shows higher sensitivity and specificity than commonly used early warning signals under a wide range of noise intensities and rates of approach to the bifurcation. It also predicts the correct bifurcation in most cases, with particularly high accuracy for the period-doubling, Neimark-Sacker and fold bifurcations. Deep learning as a tool for bifurcation prediction is still in its nascence and has the potential to transform the way we monitor systems for critical transitions.
10.1038/s41467-023-42020-z
predicting discrete-time bifurcations with deep learning
many natural and man-made systems are prone to critical transitions—abrupt and potentially devastating changes in dynamics. deep learning classifiers can provide an early warning signal for critical transitions by learning generic features of bifurcations from large simulated training data sets. so far, classifiers have only been trained to predict continuous-time bifurcations, ignoring rich dynamics unique to discrete-time bifurcations. here, we train a deep learning classifier to provide an early warning signal for the five local discrete-time bifurcations of codimension-one. we test the classifier on simulation data from discrete-time models used in physiology, economics and ecology, as well as experimental data of spontaneously beating chick-heart aggregates that undergo a period-doubling bifurcation. the classifier shows higher sensitivity and specificity than commonly used early warning signals under a wide range of noise intensities and rates of approach to the bifurcation. it also predicts the correct bifurcation in most cases, with particularly high accuracy for the period-doubling, neimark-sacker and fold bifurcations. deep learning as a tool for bifurcation prediction is still in its nascence and has the potential to transform the way we monitor systems for critical transitions.
[ "many natural and man-made systems", "critical transitions", "abrupt and potentially devastating changes", "dynamics", "deep learning classifiers", "an early warning signal", "critical transitions", "generic features", "bifurcations", "large simulated training data sets", "classifiers", "continuous-time bifurcations", "rich dynamics", "discrete-time bifurcations", "we", "a deep learning classifier", "an early warning signal", "the five local discrete-time bifurcations", "we", "the classifier", "simulation data", "discrete-time models", "physiology", "economics", "ecology", "experimental data", "chick-heart aggregates", "that", "a period-doubling bifurcation", "the classifier", "higher sensitivity", "specificity", "early warning signals", "a wide range", "noise intensities", "rates", "approach", "the bifurcation", "it", "the correct bifurcation", "most cases", "particularly high accuracy", "the period-doubling, neimark-sacker and fold bifurcations", "deep learning", "a tool", "bifurcation prediction", "its nascence", "the potential", "the way", "we", "systems", "critical transitions", "five" ]
Machine learning and deep learning techniques for poultry tasks management: a review
[ "Thavamani. Subramani", "Vijayakumar. Jeganathan", "Sruthi. Kunkuma Balasubramanian" ]
In recent years, the poultry production industry has adopted automation with the help of different kinds of technological advancements, such as varieties of monitoring and sensing tools, IoT devices, sensors, monitoring devices, and more. These advanced techniques offer numerous advantages in poultry product production. Large-scale poultry production that relies only on human resources is not an easy task, because public health threats from ingesting foods with high antibiotic residues remain a problem. Sometimes, zoonotic diseases and foodborne diseases present a crucial challenge to poultry producers. These repetitive tasks involve massive and hazardous work performed in a demanding work environment, leading to high risks to human health and safety, high labor costs, and a sustained risk of cross-infection through manufacturing facilities. In recent years, Artificial Intelligence (AI) technology has played a vital role in all sectors, and poultry production has utilized advanced Machine Learning (ML) and Deep Learning (DL) technologies, subfields of AI with considerable potential to manage numerous challenges in establishing information-based farming and various task-handling systems in the poultry production sector. This article comprehensively reviews some recently established machine learning and deep learning algorithms for poultry management tasks such as chicken activity monitoring, farm weather monitoring and control, weight prediction, early identification of diseased chickens, and more. It is envisioned that this review will provide helpful information for all who are interested in the future scope of utilizing ML and DL techniques in the poultry production sector.
10.1007/s11042-024-18951-0
machine learning and deep learning techniques for poultry tasks management: a review
in recent years, the poultry production industry has adopted automation with the help of different kinds of technological advancements, such as varieties of monitoring and sensing tools, iot devices, sensors, monitoring devices, and more. these advanced techniques offer numerous advantages in poultry product production. large-scale poultry production that relies only on human resources is not an easy task, because public health threats from ingesting foods with high antibiotic residues remain a problem. sometimes, zoonotic diseases and foodborne diseases present a crucial challenge to poultry producers. these repetitive tasks involve massive and hazardous work performed in a demanding work environment, leading to high risks to human health and safety, high labor costs, and a sustained risk of cross-infection through manufacturing facilities. in recent years, artificial intelligence (ai) technology has played a vital role in all sectors, and poultry production has utilized advanced machine learning (ml) and deep learning (dl) technologies, subfields of ai with considerable potential to manage numerous challenges in establishing information-based farming and various task-handling systems in the poultry production sector. this article comprehensively reviews some recently established machine learning and deep learning algorithms for poultry management tasks such as chicken activity monitoring, farm weather monitoring and control, weight prediction, early identification of diseased chickens, and more. it is envisioned that this review will provide helpful information for all who are interested in the future scope of utilizing the ml and dl techniques in the poultry production sector.
[ "recent years", "the poultry production industry", "automation", "the help", "different kinds", "technological advancements", "verities", "monitoring", "sensing tools", "devices", "these advanced techniques", "numerous advantages", "poultry product production", "human resources", "an easy task", "the public health threads", "foods", "high antibiotic remains", "a problem", "zoonotic diseases", "foodborne diseases", "a crucial task", "poultry producers", "these repeated tasks concern massive and hazardous work", "a suffering work domain", "high human health safety", "labor cost", "sustainable risk", "cross", "-", "infection", "manufacturing facilities", "recent years", "artificial intelligence technology", "a vital role", "all sectors", "poultry production", "advanced machine learning", "deep learning technologies", "which", "a subfield", "ai", "this technology", "an acceptable potential", "numerous challenges", "the establishment", "information-based farming", "various task-handling systems", "the poultry production sector", "this article", "some recently established machine learning", "deep learning algorithms", "poultry management tasks", "chicken activity monitoring", "farm weather monitoring", "control", "weight prediction", "earlier identification", "diseased chickens", "it", "this proposed review work", "a piece", "helpful information", "all", "who", "recognition", "the future scope", "the ml and dl techniques", "the poultry production sectors", "recent years", "recent years" ]
Explaining deep learning-based leaf disease identification
[ "Ankit Rajpal", "Rashmi Mishra", "Sheetal Rajpal", "Kavita", "Varnika Bhatia", "Naveen Kumar" ]
Crop diseases adversely affect agricultural productivity and quality. The primary cause of these diseases is the presence of biotic stresses such as fungi, viruses, and bacteria. Detecting these causes at early stages requires constant monitoring by domain experts. Technological advancements in machine learning and deep learning methods have enabled the automated identification of leaf disease-specific symptoms through image analysis. This paper proposes image-based detection of leaf diseases using various deep learning-based models. The experiment was conducted on the PlantVillage dataset, which consists of 54,305 colour leaf images (healthy and diseased) belonging to 11 crop species categorized into 38 classes. The Inception-ResNet-V2-based model achieved a 10-fold cross-validation accuracy of \(0.9991 \pm 0.002\), outperforming the other deep neural architectures and surpassing the performance of existing models in recent state-of-the-art works. Each underlying model is validated on an independent cohort. The Inception-ResNet-V2-based model achieved the best 10-fold cross-validation accuracy of \(0.9535 \pm 0.041\) and was found statistically significant among other deep learning-based models. However, these deep learning models are considered a black box, as their leaf disease predictions are opaque to end users. To address this issue, a local interpretable framework is proposed to mark the superpixels that contribute to identifying leaf disease. These superpixels closely confirmed the annotations of the human expert.
10.1007/s00500-024-09939-x
explaining deep learning-based leaf disease identification
crop diseases adversely affect agricultural productivity and quality. the primary cause of these diseases is the presence of biotic stresses such as fungi, viruses, and bacteria. detecting these causes at early stages requires constant monitoring by domain experts. technological advancements in machine learning and deep learning methods have enabled the automated identification of leaf disease-specific symptoms through image analysis. this paper proposes image-based detection of leaf diseases using various deep learning-based models. the experiment was conducted on the plantvillage dataset, which consists of 54,305 colour leaf images (healthy and diseased) belonging to 11 crop species categorized into 38 classes. the inception-resnet-v2-based model achieved a 10-fold cross-validation accuracy of \(0.9991 \pm 0.002\), outperforming the other deep neural architectures and surpassing the performance of existing models in recent state-of-the-art works. each underlying model is validated on an independent cohort. the inception-resnet-v2-based model achieved the best 10-fold cross-validation accuracy of \(0.9535 \pm 0.041\) and was found statistically significant among other deep learning-based models. however, these deep learning models are considered a black box, as their leaf disease predictions are opaque to end users. to address this issue, a local interpretable framework is proposed to mark the superpixels that contribute to identifying leaf disease. these superpixels closely confirmed the annotations of the human expert.
[ "crop diseases", "agricultural productivity", "quality", "the primary cause", "these diseases", "the presence", "biotic stresses", "fungi", "viruses", "bacteria", "these causes", "early stages", "constant monitoring", "domain experts", "technological advancements", "machine learning", "deep learning methods", "the automated identification", "leaf disease-specific symptoms", "image analysis", "this paper", "image-based detection", "leaf diseases", "various deep learning-based models", "the experiment", "the plantvillage dataset", "which", "54,305 colour leaf images", "11 crop species", "38 classes", "the inception-resnet-v2-based model", "a 10-fold cross-validation accuracy", "\\(0.9991 \\pm 0.002\\", "the other deep neural architectures", "the performance", "existing models", "the-art", "each underlined model", "an independent cohort", "the inception-resnet-v2-based model", "the best 10-fold cross-validation accuracy", "\\(0.9535 \\pm 0.041\\", "other deep learning-based models", "these deep learning models", "a black box", "their leaf disease predictions", "users", "this issue", "a local interpretable framework", "the superpixels", "that", "leaf disease", "these superpixels", "the annotations", "the human expert", "54,305", "11", "38", "10-fold", "\\pm 0.002\\", "10-fold", "\\(0.9535 \\pm 0.041\\" ]
Model-based deep reinforcement learning for accelerated learning from flow simulations
[ "Andre Weiner", "Janis Geise" ]
In recent years, deep reinforcement learning has emerged as a technique to solve closed-loop flow control problems. Employing simulation-based environments in reinforcement learning enables a priori end-to-end optimization of the control system, provides a virtual testbed for safety-critical control applications, and allows one to gain a deep understanding of the control mechanisms. While reinforcement learning has been applied successfully in a number of rather simple flow control benchmarks, a major bottleneck toward real-world applications is the high computational cost and turnaround time of flow simulations. In this contribution, we demonstrate the benefits of model-based reinforcement learning for flow control applications. Specifically, we optimize the policy by alternating between trajectories sampled from flow simulations and trajectories sampled from an ensemble of environment models. The model-based learning reduces the overall training time by up to \(85\%\) for the fluidic pinball test case. Even larger savings are expected for more demanding flow simulations.
10.1007/s11012-024-01808-z
model-based deep reinforcement learning for accelerated learning from flow simulations
in recent years, deep reinforcement learning has emerged as a technique to solve closed-loop flow control problems. employing simulation-based environments in reinforcement learning enables a priori end-to-end optimization of the control system, provides a virtual testbed for safety-critical control applications, and allows one to gain a deep understanding of the control mechanisms. while reinforcement learning has been applied successfully in a number of rather simple flow control benchmarks, a major bottleneck toward real-world applications is the high computational cost and turnaround time of flow simulations. in this contribution, we demonstrate the benefits of model-based reinforcement learning for flow control applications. specifically, we optimize the policy by alternating between trajectories sampled from flow simulations and trajectories sampled from an ensemble of environment models. the model-based learning reduces the overall training time by up to \(85\%\) for the fluidic pinball test case. even larger savings are expected for more demanding flow simulations.
[ "recent years", "deep reinforcement learning", "a technique", "closed-loop flow control problems", "simulation-based environments", "reinforcement learning", "end", "the control system", "safety-critical control applications", "a deep understanding", "the control mechanisms", "reinforcement learning", "a number", "rather simple flow control benchmarks", "a major bottleneck", "real-world applications", "the high computational cost and turnaround time", "flow simulations", "this contribution", "we", "the benefits", "model-based reinforcement learning", "flow control applications", "we", "the policy", "trajectories", "flow simulations", "trajectories", "an ensemble", "environment models", "the model-based learning", "the overall training time", "\\(85\\%\\", "the fluidic pinball test case", "even larger savings", "more demanding flow simulations", "recent years" ]
Deep learning algorithms applied to computational chemistry
[ "Abimael Guzman-Pando", "Graciela Ramirez-Alonso", "Carlos Arzate-Quintana", "Javier Camarillo-Cisneros" ]
Recently, there has been a significant increase in the use of deep learning techniques in the molecular sciences, which have shown high performance on datasets and the ability to generalize across data. However, no model has achieved perfect performance in solving all problems, and the pros and cons of each approach remain unclear to those new to the field. Therefore, this paper aims to review deep learning algorithms that have been applied to solve molecular challenges in computational chemistry. We propose a comprehensive categorization that encompasses two primary approaches: conventional deep learning and geometric deep learning models. This classification takes into account the distinct techniques employed by the algorithms within each approach. We present an up-to-date analysis of these algorithms, emphasizing their key features and open issues. This includes details of input descriptors, datasets used, open-source code availability, task solutions, and actual research applications, focusing on general applications rather than specific ones such as drug discovery. Furthermore, our report discusses trends and future directions in molecular algorithm design, including the input descriptors used for each deep learning model, GPU usage, training and forward processing time, model parameters, the most commonly used datasets, libraries, and optimization schemes. This information aids in identifying the most suitable algorithms for a given task. It also serves as a reference for the datasets and input data frequently used for each algorithm technique. In addition, it provides insights into the benefits and open issues of each technique, and supports the development of novel computational chemistry systems.
10.1007/s11030-023-10771-y
deep learning algorithms applied to computational chemistry
recently, there has been a significant increase in the use of deep learning techniques in the molecular sciences, which have shown high performance on datasets and the ability to generalize across data. however, no model has achieved perfect performance in solving all problems, and the pros and cons of each approach remain unclear to those new to the field. therefore, this paper aims to review deep learning algorithms that have been applied to solve molecular challenges in computational chemistry. we propose a comprehensive categorization that encompasses two primary approaches: conventional deep learning and geometric deep learning models. this classification takes into account the distinct techniques employed by the algorithms within each approach. we present an up-to-date analysis of these algorithms, emphasizing their key features and open issues. this includes details of input descriptors, datasets used, open-source code availability, task solutions, and actual research applications, focusing on general applications rather than specific ones such as drug discovery. furthermore, our report discusses trends and future directions in molecular algorithm design, including the input descriptors used for each deep learning model, gpu usage, training and forward processing time, model parameters, the most commonly used datasets, libraries, and optimization schemes. this information aids in identifying the most suitable algorithms for a given task. it also serves as a reference for the datasets and input data frequently used for each algorithm technique. in addition, it provides insights into the benefits and open issues of each technique, and supports the development of novel computational chemistry systems.
[ "a significant increase", "the use", "deep learning techniques", "the molecular sciences", "which", "high performance", "datasets", "the ability", "data", "no model", "perfect performance", "all problems", "the pros", "cons", "each approach", "those", "the field", "this paper", "deep learning algorithms", "that", "molecular challenges", "computational chemistry", "we", "a comprehensive categorization", "that", "two primary approaches", "conventional deep learning", "geometric deep learning models", "this classification", "account", "the distinct techniques", "the algorithms", "each approach", "we", "date", "these algorithms", "their key features", "open issues", "this", "details", "input descriptors", "datasets", "open-source code availability", "task solutions", "actual research applications", "general applications", "specific ones", "drug discovery", "our report", "trends", "future directions", "molecular algorithm design", "the input descriptors", "each deep learning model", "gpu usage", "training", "forward processing time", "model parameters", "the most commonly used datasets", "libraries", "optimization schemes", "the most suitable algorithms", "a given task", "it", "a reference", "the datasets", "input data", "each algorithm technique", "addition", "it", "insights", "the benefits", "open issues", "each technique", "the development", "novel computational chemistry systems", "two" ]
Deep learning for named entity recognition: a survey
[ "Zhentao Hu", "Wei Hou", "Xianxing Liu" ]
Named entity recognition (NER) aims to identify the required entities and their types from unstructured text, which can be utilized for the construction of knowledge graphs. Traditional methods heavily rely on manual feature engineering and face challenges in adapting to large datasets within complex linguistic contexts. In recent years, with the development of deep learning, a plethora of NER methods based on deep learning have emerged. This paper begins by providing a succinct introduction to the definition of the problem and the limitations of traditional methods. It enumerates commonly used NER datasets suitable for deep learning methods and categorizes them into three classes based on the complexity of named entities. Then, some typical deep learning-based NER methods are summarized in detail according to the development history of deep learning models. Subsequently, an in-depth analysis and comparison of methods achieving outstanding performance on representative and widely used datasets is conducted. Furthermore, the paper reproduces and analyzes the recognition results of some typical models on three different types of typical datasets. Finally, the paper concludes by offering insights into the future trends of NER development.
10.1007/s00521-024-09646-6
deep learning for named entity recognition: a survey
named entity recognition (ner) aims to identify the required entities and their types from unstructured text, which can be utilized for the construction of knowledge graphs. traditional methods heavily rely on manual feature engineering and face challenges in adapting to large datasets within complex linguistic contexts. in recent years, with the development of deep learning, a plethora of ner methods based on deep learning have emerged. this paper begins by providing a succinct introduction to the definition of the problem and the limitations of traditional methods. it enumerates commonly used ner datasets suitable for deep learning methods and categorizes them into three classes based on the complexity of named entities. then, some typical deep learning-based ner methods are summarized in detail according to the development history of deep learning models. subsequently, an in-depth analysis and comparison of methods achieving outstanding performance on representative and widely used datasets is conducted. furthermore, the paper reproduces and analyzes the recognition results of some typical models on three different types of typical datasets. finally, the paper concludes by offering insights into the future trends of ner development.
[ "entity recognition", "ner", "the required entities", "their types", "unstructured text", "which", "the construction", "knowledge graphs", "traditional methods", "manual feature engineering", "challenges", "large datasets", "complex linguistic contexts", "recent years", "the development", "deep learning", "a plethora", "ner methods", "deep learning", "this paper", "a succinct introduction", "the definition", "the problem", "the limitations", "traditional methods", "it", "ner datasets", "deep learning methods", "them", "three classes", "the complexity", "named entities", "some typical deep learning-based ner methods", "detail", "the development history", "deep learning models", "an in-depth analysis", "comparison", "methods", "outstanding performance", "representative and widely used datasets", "furthermore, the paper reproduces", "the recognition results", "some typical models", "three different types", "typical datasets", "the paper", "insights", "the future trends", "ner development", "ner", "recent years", "three", "three" ]
Deep Kernel learning for reaction outcome prediction and optimization
[ "Sukriti Singh", "José Miguel Hernández-Lobato" ]
Recent years have seen a rapid growth in the application of various machine learning methods for reaction outcome prediction. Deep learning models have gained popularity due to their ability to learn representations directly from the molecular structure. Gaussian processes (GPs), on the other hand, provide reliable uncertainty estimates but are unable to learn representations from the data. We combine the feature learning ability of neural networks (NNs) with uncertainty quantification of GPs in a deep kernel learning (DKL) framework to predict the reaction outcome. The DKL model is observed to obtain very good predictive performance across different input representations. It significantly outperforms standard GPs and provides comparable performance to graph neural networks, but with uncertainty estimation. Additionally, the uncertainty estimates on predictions provided by the DKL model facilitated its incorporation as a surrogate model for Bayesian optimization (BO). The proposed method, therefore, has a great potential towards accelerating reaction discovery by integrating accurate predictive models that provide reliable uncertainty estimates with BO.
10.1038/s42004-024-01219-x
deep kernel learning for reaction outcome prediction and optimization
recent years have seen a rapid growth in the application of various machine learning methods for reaction outcome prediction. deep learning models have gained popularity due to their ability to learn representations directly from the molecular structure. gaussian processes (gps), on the other hand, provide reliable uncertainty estimates but are unable to learn representations from the data. we combine the feature learning ability of neural networks (nns) with uncertainty quantification of gps in a deep kernel learning (dkl) framework to predict the reaction outcome. the dkl model is observed to obtain very good predictive performance across different input representations. it significantly outperforms standard gps and provides comparable performance to graph neural networks, but with uncertainty estimation. additionally, the uncertainty estimates on predictions provided by the dkl model facilitated its incorporation as a surrogate model for bayesian optimization (bo). the proposed method, therefore, has a great potential towards accelerating reaction discovery by integrating accurate predictive models that provide reliable uncertainty estimates with bo.
[ "recent years", "a rapid growth", "the application", "various machine learning methods", "reaction outcome prediction", "deep learning models", "popularity", "their ability", "representations", "the molecular structure", "gaussian processes", "gps", "the other hand", "reliable uncertainty estimates", "representations", "the data", "we", "the feature learning ability", "neural networks", "nns", "uncertainty quantification", "gps", "a deep kernel learning (dkl) framework", "the reaction outcome", "the dkl model", "very good predictive performance", "different input representations", "it", "standard gps", "comparable performance", "neural networks", "uncertainty estimation", "the uncertainty", "predictions", "the dkl model", "its incorporation", "a surrogate model", "bayesian optimization", "bo", "the proposed method", "a great potential", "reaction discovery", "accurate predictive models", "that", "reliable uncertainty estimates", "bo", "recent years", "gaussian" ]
Colour fusion effect on deep learning classification of uveal melanoma
[ "Albert K. Dadzie", "Sabrina P. Iddir", "Mansour Abtahi", "Behrouz Ebrahimi", "David Le", "Sanjay Ganesh", "Taeyoon Son", "Michael J. Heiferman", "Xincheng Yao" ]
Background: Reliable differentiation of uveal melanoma and choroidal nevi is crucial to guide appropriate treatment, preventing unnecessary procedures for benign lesions and ensuring timely treatment for potentially malignant cases. The purpose of this study is to validate deep learning classification of uveal melanoma and choroidal nevi, and to evaluate the effect of colour fusion options on the classification performance. Methods: A total of 798 ultra-widefield retinal images of 438 patients were included in this retrospective study, comprising 157 patients diagnosed with UM and 281 patients diagnosed with choroidal naevus. Colour fusion options, including early fusion, intermediate fusion and late fusion, were tested for deep learning image classification with a convolutional neural network (CNN). F1-score, accuracy and the area under the curve (AUC) of a receiver operating characteristic (ROC) were used to evaluate the classification performance. Results: Colour fusion options were observed to affect the deep learning performance significantly. For single-colour learning, the red colour image was observed to have superior performance compared to green and blue channels. For multi-colour learning, the intermediate fusion is better than early and late fusion options. Conclusion: Deep learning is a promising approach for automated classification of uveal melanoma and choroidal nevi. Colour fusion options can significantly affect the classification performance.
10.1038/s41433-024-03148-4
colour fusion effect on deep learning classification of uveal melanoma
backgroundreliable differentiation of uveal melanoma and choroidal nevi is crucial to guide appropriate treatment, preventing unnecessary procedures for benign lesions and ensuring timely treatment for potentially malignant cases. the purpose of this study is to validate deep learning classification of uveal melanoma and choroidal nevi, and to evaluate the effect of colour fusion options on the classification performance.methodsa total of 798 ultra-widefield retinal images of 438 patients were included in this retrospective study, comprising 157 patients diagnosed with um and 281 patients diagnosed with choroidal naevus. colour fusion options, including early fusion, intermediate fusion and late fusion, were tested for deep learning image classification with a convolutional neural network (cnn). f1-score, accuracy and the area under the curve (auc) of a receiver operating characteristic (roc) were used to evaluate the classification performance.resultscolour fusion options were observed to affect the deep learning performance significantly. for single-colour learning, the red colour image was observed to have superior performance compared to green and blue channels. for multi-colour learning, the intermediate fusion is better than early and late fusion options.conclusiondeep learning is a promising approach for automated classification of uveal melanoma and choroidal nevi. colour fusion options can significantly affect the classification performance.
[ "backgroundreliable differentiation", "uveal melanoma", "choroidal nevi", "appropriate treatment", "unnecessary procedures", "benign lesions", "timely treatment", "potentially malignant cases", "the purpose", "this study", "deep learning classification", "uveal melanoma", "choroidal", "nevi", "the effect", "colour fusion options", "the classification performance.methodsa total", "798 ultra-widefield retinal images", "438 patients", "this retrospective study", "157 patients", "281 patients", "choroidal naevus", "colour fusion options", "early fusion", "intermediate fusion", "late fusion", "deep learning image classification", "a convolutional neural network", "cnn", "f1-score", "accuracy", "the area", "the curve", "auc", "(roc", "the classification performance.resultscolour fusion options", "the deep learning performance", "single-colour learning", "the red colour image", "superior performance", "green and blue channels", "multi-colour learning", "the intermediate fusion", "early and late fusion", "options.conclusiondeep learning", "a promising approach", "automated classification", "uveal melanoma", "choroidal", "nevi", "colour fusion options", "the classification performance", "798", "438", "157", "281", "cnn", "roc", "performance.resultscolour" ]
Predicting DNA structure using a deep learning method
[ "Jinsen Li", "Tsu-Pei Chiu", "Remo Rohs" ]
Understanding the mechanisms of protein-DNA binding is critical in comprehending gene regulation. Three-dimensional DNA structure, also described as DNA shape, plays a key role in these mechanisms. In this study, we present a deep learning-based method, Deep DNAshape, that fundamentally changes the current k-mer based high-throughput prediction of DNA shape features by accurately accounting for the influence of extended flanking regions, without the need for extensive molecular simulations or structural biology experiments. By using the Deep DNAshape method, DNA structural features can be predicted for any length and number of DNA sequences in a high-throughput manner, providing an understanding of the effects of flanking regions on DNA structure in a target region of a sequence. The Deep DNAshape method provides access to the influence of distant flanking regions on a region of interest. Our findings reveal that DNA shape readout mechanisms of a core target are quantitatively affected by flanking regions, including extended flanking regions, providing valuable insights into the detailed structural readout mechanisms of protein-DNA binding. Furthermore, when incorporated in machine learning models, the features generated by Deep DNAshape improve the model prediction accuracy. Collectively, Deep DNAshape can serve as a versatile and powerful tool for diverse DNA structure-related studies.
10.1038/s41467-024-45191-5
predicting dna structure using a deep learning method
understanding the mechanisms of protein-dna binding is critical in comprehending gene regulation. three-dimensional dna structure, also described as dna shape, plays a key role in these mechanisms. in this study, we present a deep learning-based method, deep dnashape, that fundamentally changes the current k-mer based high-throughput prediction of dna shape features by accurately accounting for the influence of extended flanking regions, without the need for extensive molecular simulations or structural biology experiments. by using the deep dnashape method, dna structural features can be predicted for any length and number of dna sequences in a high-throughput manner, providing an understanding of the effects of flanking regions on dna structure in a target region of a sequence. the deep dnashape method provides access to the influence of distant flanking regions on a region of interest. our findings reveal that dna shape readout mechanisms of a core target are quantitatively affected by flanking regions, including extended flanking regions, providing valuable insights into the detailed structural readout mechanisms of protein-dna binding. furthermore, when incorporated in machine learning models, the features generated by deep dnashape improve the model prediction accuracy. collectively, deep dnashape can serve as versatile and powerful tool for diverse dna structure-related studies.
[ "the mechanisms", "protein-dna binding", "comprehending gene regulation", "three-dimensional dna structure", "dna shape", "a key role", "these mechanisms", "this study", "we", "a deep learning-based method", "deep dnashape", "the current k-mer based high-throughput prediction", "dna shape features", "the influence", "extended flanking regions", "the need", "extensive molecular simulations", "structural biology experiments", "the deep dnashape method", "dna structural features", "any length", "number", "dna sequences", "a high-throughput manner", "an understanding", "the effects", "flanking regions", "dna structure", "a target region", "a sequence", "the deep dnashape method", "access", "the influence", "distant flanking regions", "a region", "interest", "our findings", "dna", "readout mechanisms", "a core target", "flanking regions", "extended flanking regions", "valuable insights", "the detailed structural readout mechanisms", "protein-dna binding", "machine learning models", "the features", "deep dnashape", "the model prediction accuracy", "deep dnashape", "versatile and powerful tool", "diverse dna", "structure-related studies", "three" ]
Machine learning and deep learning for classifying the justification of brain CT referrals
[ "Jaka Potočnik", "Edel Thomas", "Aonghus Lawlor", "Dearbhla Kearney", "Eric J. Heffernan", "Ronan P. Killeen", "Shane J. Foley" ]
Objectives: To train the machine and deep learning models to automate the justification analysis of radiology referrals in accordance with iGuide categorisation, and to determine if prediction models can generalise across multiple clinical sites and outperform human experts. Methods: Adult brain computed tomography (CT) referrals from scans performed in three CT centres in Ireland in 2020 and 2021 were retrospectively collected. Two radiographers analysed the justification of 3000 randomly selected referrals using iGuide, with two consultant radiologists analysing the referrals with disagreement. Insufficient or duplicate referrals were discarded. The inter-rater agreement among radiographers and consultants was computed. A random split (4:1) was performed to apply machine learning (ML) and deep learning (DL) techniques to unstructured clinical indications to automate retrospective justification auditing with multi-class classification. The accuracy and macro-averaged F1 score of the best-performing classifier of each type on the training set were computed on the test set. Results: 42 referrals were ignored. 1909 (64.5%) referrals were justified, 811 (27.4%) were potentially justified, and 238 (8.1%) were unjustified. The agreement between radiographers (κ = 0.268) was lower than between radiologists (κ = 0.460). The best-performing ML model was the bag-of-words-based gradient-boosting classifier, achieving a 94.4% accuracy and a macro F1 of 0.94. DL models were inferior, with bi-directional long short-term memory achieving 92.3% accuracy and a macro F1 of 0.92, and outperforming multilayer perceptrons. Conclusion: Interpreting unstructured clinical indications is challenging, necessitating clinical decision support. ML and DL can generalise across multiple clinical sites, outperform human experts, and be used as an artificial intelligence-based iGuide interpreter when retrospectively vetting radiology referrals. Clinical relevance statement: Healthcare vendors and clinical sites should consider developing and utilising artificial intelligence-enabled systems for justifying medical exposures. This would enable better implementation of imaging referral guidelines in clinical practices and reduce population dose burden, CT waiting lists, and wasteful use of resources. Key Points: Significant variations exist among human experts in interpreting unstructured clinical indications/patient presentations. Machine and deep learning can automate the justification analysis of radiology referrals according to iGuide categorisation. Machine and deep learning can improve retrospective and prospective justification auditing for better implementation of imaging referral guidelines.
10.1007/s00330-024-10851-z
machine learning and deep learning for classifying the justification of brain ct referrals
objectivesto train the machine and deep learning models to automate the justification analysis of radiology referrals in accordance with iguide categorisation, and to determine if prediction models can generalise across multiple clinical sites and outperform human experts.methodsadult brain computed tomography (ct) referrals from scans performed in three ct centres in ireland in 2020 and 2021 were retrospectively collected. two radiographers analysed the justification of 3000 randomly selected referrals using iguide, with two consultant radiologists analysing the referrals with disagreement. insufficient or duplicate referrals were discarded. the inter-rater agreement among radiographers and consultants was computed. a random split (4:1) was performed to apply machine learning (ml) and deep learning (dl) techniques to unstructured clinical indications to automate retrospective justification auditing with multi-class classification. the accuracy and macro-averaged f1 score of the best-performing classifier of each type on the training set were computed on the test set.results42 referrals were ignored. 1909 (64.5%) referrals were justified, 811 (27.4%) were potentially justified, and 238 (8.1%) were unjustified. the agreement between radiographers (κ = 0.268) was lower than radiologists (κ = 0.460). the best-performing ml model was the bag-of-words-based gradient-boosting classifier achieving a 94.4% accuracy and a macro f1 of 0.94. dl models were inferior, with bi-directional long short-term memory achieving 92.3% accuracy, a macro f1 of 0.92, and outperforming multilayer perceptrons.conclusioninterpreting unstructured clinical indications is challenging necessitating clinical decision support. ml and dl can generalise across multiple clinical sites, outperform human experts, and be used as an artificial intelligence-based iguide interpreter when retrospectively vetting radiology referrals.clinical relevance statementhealthcare vendors and clinical sites should consider developing and utilising artificial intelligence-enabled systems for justifying medical exposures. this would enable better implementation of imaging referral guidelines in clinical practices and reduce population dose burden, ct waiting lists, and wasteful use of resources.key points significant variations exist among human experts in interpreting unstructured clinical indications/patient presentations. machine and deep learning can automate the justification analysis of radiology referrals according to iguide categorisation. machine and deep learning can improve retrospective and prospective justification auditing for better implementation of imaging referral guidelines.
[ "objectivesto", "the machine", "deep learning models", "the justification analysis", "radiology referrals", "accordance", "iguide categorisation", "prediction models", "multiple clinical sites", "human experts.methodsadult brain computed tomography (ct) referrals", "scans", "three ct centres", "ireland", "two radiographers", "the justification", "3000 randomly selected referrals", "iguide", "two consultant radiologists", "the referrals", "disagreement", "insufficient or duplicate referrals", "the inter-rater agreement", "radiographers", "consultants", "a random split", "machine learning", "ml", "deep learning", "(dl) techniques", "unstructured clinical indications", "multi-class classification", "the accuracy and macro-averaged f1 score", "the best-performing classifier", "each type", "the training set", "the test", "set.results42 referrals", "(64.5%", "238 (8.1%", "the agreement", "radiographers", "radiologists", "κ", "the best-performing ml model", "the bag-of-words-based gradient-boosting classifier", "a 94.4% accuracy", "a macro f1", "dl models", "bi-directional long short-term memory", "92.3% accuracy", "a macro f1", "outperforming multilayer", "necessitating clinical decision support", "ml", "dl", "multiple clinical sites", "human experts", "an artificial intelligence-based iguide interpreter", "radiology referrals.clinical relevance statementhealthcare vendors", "clinical sites", "artificial intelligence-enabled systems", "medical exposures", "this", "better implementation", "referral guidelines", "clinical practices", "population dose burden", "ct waiting lists", "wasteful use", "resources.key points", "significant variations", "human experts", "unstructured clinical indications", "patient presentations", "machine", "deep learning", "the justification analysis", "radiology referrals", "iguide categorisation", "machine", "deep learning", "retrospective and prospective justification", "better implementation", "imaging referral guidelines", "objectivesto train", "three", "ireland", "2020", "2021", "two", "3000", "two", "4:1", "set.results42", "1909", "64.5%", "811", "27.4%", "238", "8.1%", "0.268", "0.460", "94.4%", "0.94", "92.3%", "0.92" ]
Applications of deep learning method of artificial intelligence in education
[ "Fan Zhang", "Xiangyu Wang", "Xinhong Zhang" ]
The intersection of education and deep learning methods of artificial intelligence (AI) is gradually becoming a hot research field. Education will be profoundly transformed by AI. The purpose of this review is to help education practitioners understand the research frontiers and directions of AI applications in education. This paper reviews the applications of deep learning in education and provides a visualized bibliometric analysis. The data of this paper come from the Education Resources Information Center (ERIC) and Web-of-Science (WOS). These two databases are searched to identify research on deep learning-based education applications from 2015 to 2023. CiteSpace is used to analyze the number of publications, authors, institutions, and keywords of articles that are related to deep learning and education. This paper reviews and systematically analyzes the educational applications of deep learning in the following six aspects: learning effect prediction, educational games, learning recommendation, automatic scoring, assisted teaching and medical education. Based on the visualized bibliometric analysis, the path and inflection point of research evolution can be inferred, the potential motivation of research can be analyzed, and the frontier of research can be explored. At present, AI can enable teachers to focus more on teaching and personalized interactions with students, enhancing rather than replacing human instruction.
10.1007/s10639-024-12883-w
applications of deep learning method of artificial intelligence in education
intersection of education and deep learning method of artificial intelligence (ai) is gradually becoming a hot research field. education will be profoundly transformed by ai. the purpose of this review is to help education practitioners understand the research frontiers and directions of ai applications in education. this paper reviews the applications of deep learning in education and provides a visualized bibliometric analysis. the data of this paper come from education resources information center (eric) and web-of-science (wos). these two data bases are searched to identify research of deep learning-based education applications from 2015 to 2023. citespace is used to analyze the number of publications, authors, institutions, and keywords of articles that are related to deep learning and education. this paper reviews and systematically analyzes the educational applications of deep learning in the following six aspects: learning effect prediction, educational game, learning recommendation, automatic scoring, assisted teaching and medical education. based on the visualized bibliometric analysis, the path and inflection point of research evolution can be inferred, the potential motivation of research can be analyzed, and the frontier of research can be explored. at present, ai can enable teachers to focus more on teaching and personalized interactions with students, enhancing rather than replacing human instructions.
[ "intersection", "education", "deep learning method", "artificial intelligence", "a hot research field", "education", "ai", "the purpose", "this review", "education practitioners", "the research frontiers", "directions", "ai applications", "education", "this paper", "the applications", "deep learning", "education", "a visualized bibliometric analysis", "the data", "this paper", "education resources information center", "eric", "web", "science", "wos", "these two data bases", "research", "deep learning-based education applications", "citespace", "the number", "publications", "authors", "institutions", "keywords", "articles", "that", "deep learning", "education", "this paper reviews", "the educational applications", "deep learning", "the following six aspects", "effect prediction", "educational game", "recommendation", "automatic scoring", "assisted teaching", "medical education", "the visualized bibliometric analysis", "the path", "inflection point", "research evolution", "the potential motivation", "research", "the frontier", "research", "present", "teachers", "teaching and personalized interactions", "students", "human instructions", "eric", "two", "2015", "citespace", "six" ]
A systematic literature review of visual feature learning: deep learning techniques, applications, challenges and future directions
[ "Mohammed Abdullahi", "Olaide Nathaniel Oyelade", "Armand Florentin Donfack Kana", "Mustapha Aminu Bagiwa", "Fatimah Binta Abdullahi", "Sahalu Balarabe Junaidu", "Ibrahim Iliyasu", "Ajayi Ore-ofe", "Haruna Chiroma" ]
Visual Feature Learning (VFL) is a critical area of research in computer vision that involves the automatic extraction of features and patterns from images and videos. The applications of VFL are vast, including object detection and recognition, facial recognition, scene understanding, medical image analysis, and autonomous vehicles. In this paper, we propose to conduct an extensive systematic literature review (SLR) on VFL based on deep learning algorithms. The paper conducted an SLR covering deep learning algorithms such as Convolutional Neural Networks (CNNs), Autoencoders, and Generative Adversarial Networks (GANs), including their variants. The review highlights the importance of VFL in computer vision and the limitations of traditional feature extraction techniques. Furthermore, it provides an in-depth analysis of the strengths and weaknesses of various deep learning algorithms for solving problems in VFL. The discussion of the applications of VFL provides an insight into the impact of VFL on various industries and domains. The review also analyzed the challenges faced by VFL, such as data scarcity and quality, overfitting, generalization, interpretability, and explainability. The discussion of future directions for VFL includes hybrid techniques, unsupervised feature learning, continual learning, attention-based models, and explainable AI. These techniques aim to address the challenges faced by VFL and improve the performance of the models. The systematic literature review concludes that VFL is a rapidly evolving field with the potential to transform many industries and domains. The review highlights the need for further research in VFL and emphasizes the importance of responsible use of VFL models in various applications. The review provides valuable insights for researchers and practitioners in the field of computer vision, who can use these insights to enhance their work and ensure the responsible use of VFL models.
10.1007/s11042-024-19823-3
a systematic literature review of visual feature learning: deep learning techniques, applications, challenges and future directions
visual feature learning (vfl) is a critical area of research in computer vision that involves the automatic extraction of features and patterns from images and videos. the applications of vfl are vast, including object detection and recognition, facial recognition, scene understanding, medical image analysis, and autonomous vehicles. in this paper, we propose to conduct extensive systematic literature review (slr) on vfl based on deep learning algorithms. the paper conducted an slr covering deep learning algorithms such as convolutional neural networks (cnns), autoencoders, and generative adversarial networks (gans) including their variants. the review highlights the importance of vfl in computer vision and the limitations of traditional feature extraction techniques. furthermore, it provides an in-depth analysis of the strengths and weaknesses of various deep learning algorithms for solving problems in vfl. the discussion of the applications of vfl provides an insight into the impact of vfl on various industries and domains. the review also analyzed the challenges faced by vfl, such as data scarcity and quality, overfitting, generalization, interpretability, and explainability. the discussion of future directions for vfl includes hybrid techniques, unsupervised feature learning, continual learning, attention-based models, and explainable ai. these techniques aim to address the challenges faced by vfl and improve the performance of the models. the systematic literature review concludes that vfl is a rapidly evolving field with the potential to transform many industries and domains. the review highlights the need for further research in vfl and emphasizes the importance of responsible use of vfl models in various applications. the review provides valuable insights for researchers and practitioners in the field of computer vision, who can use these insights to enhance their work and ensure the responsible use of vfl models.
[ "vfl", "a critical area", "research", "computer vision", "that", "the automatic extraction", "features", "patterns", "images", "videos", "the applications", "vfl", "object detection", "recognition", "facial recognition", "scene understanding", "medical image analysis", "autonomous vehicles", "this paper", "we", "extensive systematic literature review", "slr", "vfl", "deep learning algorithms", "the paper", "an slr", "deep learning algorithms", "convolutional neural networks", "cnns", "autoencoders", "generative adversarial networks", "gans", "their variants", "the review", "the importance", "vfl", "computer vision", "the limitations", "traditional feature extraction techniques", "it", "an in-depth analysis", "the strengths", "weaknesses", "various deep learning algorithms", "problems", "vfl", "the discussion", "the applications", "vfl", "an insight", "the impact", "vfl", "various industries", "domains", "the review", "the challenges", "vfl", "data scarcity", "quality", "overfitting", "generalization", "interpretability", "explainability", "the discussion", "future directions", "vfl", "hybrid techniques", "feature learning", "continual learning", "attention-based models", "explainable ai", "these techniques", "the challenges", "vfl", "the performance", "the models", "the systematic literature review", "vfl", "a rapidly evolving field", "the potential", "many industries", "domains", "the review", "the need", "further research", "vfl", "the importance", "responsible use", "vfl models", "various applications", "the review", "valuable insights", "researchers", "practitioners", "the field", "computer vision", "who", "these insights", "their work", "the responsible use", "vfl models", "vfl", "vfl", "vfl", "vfl", "vfl", "vfl", "vfl", "vfl", "vfl", "vfl", "vfl", "vfl", "vfl", "vfl" ]
Deep Learning Challenges and Prospects in Wireless Sensor Network Deployment
[ "Yaner Qiu", "Liyun Ma", "Rahul Priyadarshi" ]
This paper explores the transformative integration of deep learning applications in the deployment of Wireless Sensor Networks (WSNs). As WSNs continue to play a pivotal role in diverse domains, the infusion of deep learning techniques offers unprecedented opportunities for enhanced data processing, analysis, and decision-making. The research problem addressed in this paper revolves around navigating the challenges associated with incorporating deep learning into WSN deployment. The methodology involves an extensive literature review, highlighting the increasing role of deep learning in addressing WSN challenges. Key findings underscore the potential improvements in energy efficiency, data processing speed, and accuracy achieved through deep learning-empowered WSNs. The implications of this research extend to diverse applications, including environmental monitoring, healthcare, industrial systems, and smart agriculture. As we delve into the future research agenda, the paper identifies the need for further exploration in areas such as adaptability to dynamic environments, privacy-preserving optimizations, and scalable deep learning models tailored to the unique constraints of WSNs.
10.1007/s11831-024-10079-6
deep learning challenges and prospects in wireless sensor network deployment
this paper explores the transformative integration of deep learning applications in the deployment of wireless sensor networks (wsns). as wsns continue to play a pivotal role in diverse domains, the infusion of deep learning techniques offers unprecedented opportunities for enhanced data processing, analysis, and decision-making. the research problem addressed in this paper revolves around navigating the challenges associated with incorporating deep learning into wsn deployment. the methodology involves an extensive literature review, highlighting the increasing role of deep learning in addressing wsn challenges. key findings underscore the potential improvements in energy efficiency, data processing speed, and accuracy achieved through deep learning-empowered wsns. the implications of this research extend to diverse applications, including environmental monitoring, healthcare, industrial systems, and smart agriculture. as we delve into the future research agenda, the paper identifies the need for further exploration in areas such as adaptability to dynamic environments, privacy-preserving optimizations, and scalable deep learning models tailored to the unique constraints of wsns.
[ "this paper", "the transformative integration", "deep learning applications", "the deployment", "wireless sensor networks", "wsns", "wsns", "a pivotal role", "diverse domains", "the infusion", "deep learning techniques", "unprecedented opportunities", "enhanced data processing", "analysis", "decision-making", "the research problem", "this paper", "the challenges", "deep learning", "wsn deployment", "the methodology", "an extensive literature review", "the increasing role", "deep learning", "wsn challenges", "key findings", "the potential improvements", "energy efficiency", "data processing speed", "accuracy", "deep learning-empowered wsns", "the implications", "this research extend", "diverse applications", "environmental monitoring", "healthcare", "industrial systems", "smart agriculture", "we", "the future research agenda", "the paper", "the need", "further exploration", "areas", "adaptability", "dynamic environments", "privacy-preserving optimizations", "scalable deep learning models", "the unique constraints", "wsns", "wsns" ]
Wheat crop classification using deep learning
[ "Harmandeep Singh Gill", "Bikramjit Singh Bath", "Rajanbir Singh", "Amarinder Singh Riar" ]
Crop yield forecasting is becoming more essential in the present environment, when food security must be maintained despite climate, population, and climate change concerns. Machine learning is a useful decision-making tool for predicting agricultural yields, as well as for deciding what crops to plant and what to do throughout the crop’s growth season. To aid agricultural production prediction studies, a number of machine learning methods have been used. Wheat is a significant food source in India, particularly in the north. The wheat crop is categorised using deep learning techniques in the proposed research. The suggested system uses deep learning CNN, RNN, and LSTM applications to classify wheat crops. The results showed that the test accuracy ranged from 85% to 95.68% for varietal level classification. Hence, the proposed approach results are accurate and reliable, encouraging the deployment of such an approach in practice.
10.1007/s11042-024-18617-x
wheat crop classification using deep learning
crop yield forecasting is becoming more essential in the present environment, when food security must be maintained despite climate, population, and climate change concerns. machine learning is a useful decision-making tool for predicting agricultural yields, as well as for deciding what crops to plant and what to do throughout the crop’s growth season. to aid agricultural production prediction studies, a number of machine learning methods have been used. wheat is a significant food source in india, particularly in the north. the wheat crop is categorised using deep learning techniques in the proposed research. the suggested system uses deep learning cnn, rnn, and lstm applications to classify wheat crops. the results showed that the test accuracy ranged from 85% to 95.68% for varietal level classification. hence, the proposed approach results are accurate and reliable, encouraging the deployment of such an approach in practice.
[ "crop yield forecasting", "the present environment", "food security", "climate", "population", "climate change concerns", "machine learning", "a useful decision-making tool", "agricultural yields", "what", "crops", "what", "the crop’s growth season", "agricultural production prediction studies", "a number", "machine learning methods", "wheat", "a significant food source", "india", "the north", "the wheat crop", "deep learning techniques", "the proposed research", "the suggested system", "deep learning cnn", "lstm applications", "wheat crops", "the results", "the test accuracy", "85%", "varietal level classification", "the proposed approach results", "the deployment", "such an approach", "practice", "india", "cnn" ]
HARNet in deep learning approach—a systematic survey
[ "Neelam Sanjeev Kumar", "G. Deepika", "V. Goutham", "B. Buvaneswari", "R. Vijaya Kumar Reddy", "Sanjeevkumar Angadi", "C. Dhanamjayulu", "Ravikumar Chinthaginjala", "Faruq Mohammad", "Baseem Khan" ]
A comprehensive examination of human action recognition (HAR) methodologies situated at the convergence of deep learning and computer vision is the subject of this article. We examine the progression from handcrafted feature-based approaches to end-to-end learning, with a particular focus on the significance of large-scale datasets. By classifying research paradigms, such as temporal modelling and spatial features, our proposed taxonomy illuminates the merits and drawbacks of each. We specifically present HARNet, an architecture for Multi-Model Deep Learning that integrates recurrent and convolutional neural networks while utilizing attention mechanisms to improve accuracy and robustness. The VideoMAE v2 method (https://github.com/OpenGVLab/VideoMAEv2) has been utilized as a case study to illustrate practical implementations and obstacles. For researchers and practitioners interested in gaining a comprehensive understanding of the most recent advancements in HAR as they relate to computer vision and deep learning, this survey is an invaluable resource.
10.1038/s41598-024-58074-y
harnet in deep learning approach—a systematic survey
a comprehensive examination of human action recognition (har) methodologies situated at the convergence of deep learning and computer vision is the subject of this article. we examine the progression from handcrafted feature-based approaches to end-to-end learning, with a particular focus on the significance of large-scale datasets. by classifying research paradigms, such as temporal modelling and spatial features, our proposed taxonomy illuminates the merits and drawbacks of each. we specifically present harnet, an architecture for multi-model deep learning that integrates recurrent and convolutional neural networks while utilizing attention mechanisms to improve accuracy and robustness. the videomae v2 method (https://github.com/opengvlab/videomaev2) has been utilized as a case study to illustrate practical implementations and obstacles. for researchers and practitioners interested in gaining a comprehensive understanding of the most recent advancements in har as they relate to computer vision and deep learning, this survey is an invaluable resource.
[ "a comprehensive examination", "human action recognition (har) methodologies", "the convergence", "deep learning and computer vision", "the subject", "this article", "we", "the progression", "handcrafted feature-based approaches", "end", "learning", "a particular focus", "the significance", "large-scale datasets", "research paradigms", "temporal modelling", "spatial features", "our proposed taxonomy", "the merits", "drawbacks", "each", "we", "an architecture", "multi-model deep learning", "that", "recurrent and convolutional neural networks", "attention mechanisms", "accuracy", "robustness", "the videomae v2 method", "https://github.com/opengvlab/videomaev2", "a case study", "practical implementations", "obstacles", "researchers", "practitioners", "a comprehensive understanding", "the most recent advancements", "har", "they", "computer vision", "deep learning", "this survey", "an invaluable resource" ]
Deep learning bulk spacetime from boundary optical conductivity
[ "Byoungjoon Ahn", "Hyun-Sik Jeong", "Keun-Young Kim", "Kwan Yun" ]
We employ a deep learning method to deduce the bulk spacetime from boundary optical conductivity. We apply the neural ordinary differential equation technique, tailored for continuous functions such as the metric, to the typical class of holographic condensed matter models featuring broken translations: linear-axion models. We successfully extract the bulk metric from the boundary holographic optical conductivity. Furthermore, as an example for real material, we use experimental optical conductivity of UPd2Al3, a representative of heavy fermion metals in strongly correlated electron systems, and construct the corresponding bulk metric. To our knowledge, our work is the first illustration of deep learning bulk spacetime from boundary holographic or experimental conductivity data.
10.1007/JHEP03(2024)141
deep learning bulk spacetime from boundary optical conductivity
we employ a deep learning method to deduce the bulk spacetime from boundary optical conductivity. we apply the neural ordinary differential equation technique, tailored for continuous functions such as the metric, to the typical class of holographic condensed matter models featuring broken translations: linear-axion models. we successfully extract the bulk metric from the boundary holographic optical conductivity. furthermore, as an example for real material, we use experimental optical conductivity of upd2al3, a representative of heavy fermion metals in strongly correlated electron systems, and construct the corresponding bulk metric. to our knowledge, our work is the first illustration of deep learning bulk spacetime from boundary holographic or experimental conductivity data.
[ "we", "a deep learning method", "the bulk spacetime", "boundary optical conductivity", "we", "the neural ordinary differential equation technique", "continuous functions", "the metric", "the typical class", "holographic condensed matter models", "broken translations", "linear-axion models", "we", "the bulk metric", "the boundary holographic optical conductivity", "an example", "real material", "we", "experimental optical conductivity", "upd2al3", "a representative", "heavy fermion metals", "strongly correlated electron systems", "the corresponding bulk metric", "our knowledge", "our work", "the first illustration", "deep learning bulk spacetime", "boundary holographic or experimental conductivity data", "linear", "first" ]
Deep learning: systematic review, models, challenges, and research directions
[ "Tala Talaei Khoei", "Hadjar Ould Slimane", "Naima Kaabouch" ]
The current development in deep learning is witnessing an exponential transition into automation applications. This automation transition can provide a promising framework for higher performance and lower complexity. This ongoing transition undergoes several rapid changes, resulting in the processing of the data by several studies, while it may lead to time-consuming and costly models. Thus, to address these challenges, several studies have been conducted to investigate deep learning techniques; however, they mostly focused on specific learning approaches, such as supervised deep learning. In addition, these studies did not comprehensively investigate other deep learning techniques, such as deep unsupervised and deep reinforcement learning techniques. Moreover, the majority of these studies neglect to discuss some main methodologies in deep learning, such as transfer learning, federated learning, and online learning. Therefore, motivated by the limitations of the existing studies, this study summarizes the deep learning techniques into supervised, unsupervised, reinforcement, and hybrid learning-based models. In addition, to address each category, a brief description of these categories and their models is provided. Some of the critical topics in deep learning, namely, transfer, federated, and online learning models, are explored and discussed in detail. Finally, challenges and future directions are outlined to provide wider outlooks for future researchers.
10.1007/s00521-023-08957-4
deep learning: systematic review, models, challenges, and research directions
the current development in deep learning is witnessing an exponential transition into automation applications. this automation transition can provide a promising framework for higher performance and lower complexity. this ongoing transition undergoes several rapid changes, resulting in the processing of the data by several studies, while it may lead to time-consuming and costly models. thus, to address these challenges, several studies have been conducted to investigate deep learning techniques; however, they mostly focused on specific learning approaches, such as supervised deep learning. in addition, these studies did not comprehensively investigate other deep learning techniques, such as deep unsupervised and deep reinforcement learning techniques. moreover, the majority of these studies neglect to discuss some main methodologies in deep learning, such as transfer learning, federated learning, and online learning. therefore, motivated by the limitations of the existing studies, this study summarizes the deep learning techniques into supervised, unsupervised, reinforcement, and hybrid learning-based models. in addition, to address each category, a brief description of these categories and their models is provided. some of the critical topics in deep learning, namely, transfer, federated, and online learning models, are explored and discussed in detail. finally, challenges and future directions are outlined to provide wider outlooks for future researchers.
[ "the current development", "deep learning", "an exponential transition", "automation applications", "this automation transition", "a promising framework", "higher performance", "lower complexity", "this ongoing transition", "several rapid changes", "the processing", "the data", "several studies", "it", "time-consuming and costly models", "these challenges", "several studies", "deep learning techniques", "they", "specific learning approaches", "supervised deep learning", "addition", "these studies", "other deep learning techniques", "deep unsupervised and deep reinforcement learning techniques", "the majority", "these studies", "some main methodologies", "deep learning", "transfer learning", "federated learning", "online learning", "the limitations", "the existing studies", "this study", "the deep learning techniques", "supervised, unsupervised, reinforcement, and hybrid learning-based models", "addition", "each category", "a brief description", "these categories", "their models", "some", "the critical topics", "deep learning", "namely, transfer", "online learning models", "detail", "challenges", "future directions", "wider outlooks", "future researchers" ]
Colour fusion effect on deep learning classification of uveal melanoma
[ "Albert K. Dadzie", "Sabrina P. Iddir", "Mansour Abtahi", "Behrouz Ebrahimi", "David Le", "Sanjay Ganesh", "Taeyoon Son", "Michael J. Heiferman", "Xincheng Yao" ]
Background: Reliable differentiation of uveal melanoma and choroidal nevi is crucial to guide appropriate treatment, preventing unnecessary procedures for benign lesions and ensuring timely treatment for potentially malignant cases. The purpose of this study is to validate deep learning classification of uveal melanoma and choroidal nevi, and to evaluate the effect of colour fusion options on the classification performance. Methods: A total of 798 ultra-widefield retinal images of 438 patients were included in this retrospective study, comprising 157 patients diagnosed with UM and 281 patients diagnosed with choroidal naevus. Colour fusion options, including early fusion, intermediate fusion and late fusion, were tested for deep learning image classification with a convolutional neural network (CNN). F1-score, accuracy and the area under the curve (AUC) of a receiver operating characteristic (ROC) were used to evaluate the classification performance. Results: Colour fusion options were observed to affect the deep learning performance significantly. For single-colour learning, the red colour image was observed to have superior performance compared to green and blue channels. For multi-colour learning, the intermediate fusion is better than early and late fusion options. Conclusion: Deep learning is a promising approach for automated classification of uveal melanoma and choroidal nevi. Colour fusion options can significantly affect the classification performance.
10.1038/s41433-024-03148-4
colour fusion effect on deep learning classification of uveal melanoma
background: reliable differentiation of uveal melanoma and choroidal nevi is crucial to guide appropriate treatment, preventing unnecessary procedures for benign lesions and ensuring timely treatment for potentially malignant cases. the purpose of this study is to validate deep learning classification of uveal melanoma and choroidal nevi, and to evaluate the effect of colour fusion options on the classification performance. methods: a total of 798 ultra-widefield retinal images of 438 patients were included in this retrospective study, comprising 157 patients diagnosed with um and 281 patients diagnosed with choroidal naevus. colour fusion options, including early fusion, intermediate fusion and late fusion, were tested for deep learning image classification with a convolutional neural network (cnn). f1-score, accuracy and the area under the curve (auc) of a receiver operating characteristic (roc) were used to evaluate the classification performance. results: colour fusion options were observed to affect the deep learning performance significantly. for single-colour learning, the red colour image was observed to have superior performance compared to green and blue channels. for multi-colour learning, the intermediate fusion is better than early and late fusion options. conclusion: deep learning is a promising approach for automated classification of uveal melanoma and choroidal nevi. colour fusion options can significantly affect the classification performance.
[ "backgroundreliable differentiation", "uveal melanoma", "choroidal nevi", "appropriate treatment", "unnecessary procedures", "benign lesions", "timely treatment", "potentially malignant cases", "the purpose", "this study", "deep learning classification", "uveal melanoma", "choroidal", "nevi", "the effect", "colour fusion options", "the classification performance.methodsa total", "798 ultra-widefield retinal images", "438 patients", "this retrospective study", "157 patients", "281 patients", "choroidal naevus", "colour fusion options", "early fusion", "intermediate fusion", "late fusion", "deep learning image classification", "a convolutional neural network", "cnn", "f1-score", "accuracy", "the area", "the curve", "auc", "(roc", "the classification performance.resultscolour fusion options", "the deep learning performance", "single-colour learning", "the red colour image", "superior performance", "green and blue channels", "multi-colour learning", "the intermediate fusion", "early and late fusion", "options.conclusiondeep learning", "a promising approach", "automated classification", "uveal melanoma", "choroidal", "nevi", "colour fusion options", "the classification performance", "798", "438", "157", "281", "cnn", "roc", "performance.resultscolour" ]
Machine learning and deep learning for classifying the justification of brain CT referrals
[ "Jaka Potočnik", "Edel Thomas", "Aonghus Lawlor", "Dearbhla Kearney", "Eric J. Heffernan", "Ronan P. Killeen", "Shane J. Foley" ]
Objectives: To train the machine and deep learning models to automate the justification analysis of radiology referrals in accordance with iGuide categorisation, and to determine if prediction models can generalise across multiple clinical sites and outperform human experts. Methods: Adult brain computed tomography (CT) referrals from scans performed in three CT centres in Ireland in 2020 and 2021 were retrospectively collected. Two radiographers analysed the justification of 3000 randomly selected referrals using iGuide, with two consultant radiologists analysing the referrals with disagreement. Insufficient or duplicate referrals were discarded. The inter-rater agreement among radiographers and consultants was computed. A random split (4:1) was performed to apply machine learning (ML) and deep learning (DL) techniques to unstructured clinical indications to automate retrospective justification auditing with multi-class classification. The accuracy and macro-averaged F1 score of the best-performing classifier of each type on the training set were computed on the test set. Results: 42 referrals were ignored. 1909 (64.5%) referrals were justified, 811 (27.4%) were potentially justified, and 238 (8.1%) were unjustified. The agreement between radiographers (κ = 0.268) was lower than radiologists (κ = 0.460). The best-performing ML model was the bag-of-words-based gradient-boosting classifier achieving a 94.4% accuracy and a macro F1 of 0.94. DL models were inferior, with bi-directional long short-term memory achieving 92.3% accuracy, a macro F1 of 0.92, and outperforming multilayer perceptrons. Conclusion: Interpreting unstructured clinical indications is challenging, necessitating clinical decision support. ML and DL can generalise across multiple clinical sites, outperform human experts, and be used as an artificial intelligence-based iGuide interpreter when retrospectively vetting radiology referrals. Clinical relevance statement: Healthcare vendors and clinical sites should consider developing and utilising artificial intelligence-enabled systems for justifying medical exposures. This would enable better implementation of imaging referral guidelines in clinical practices and reduce population dose burden, CT waiting lists, and wasteful use of resources. Key Points: Significant variations exist among human experts in interpreting unstructured clinical indications/patient presentations. Machine and deep learning can automate the justification analysis of radiology referrals according to iGuide categorisation. Machine and deep learning can improve retrospective and prospective justification auditing for better implementation of imaging referral guidelines.
10.1007/s00330-024-10851-z
machine learning and deep learning for classifying the justification of brain ct referrals
objectives: to train the machine and deep learning models to automate the justification analysis of radiology referrals in accordance with iguide categorisation, and to determine if prediction models can generalise across multiple clinical sites and outperform human experts. methods: adult brain computed tomography (ct) referrals from scans performed in three ct centres in ireland in 2020 and 2021 were retrospectively collected. two radiographers analysed the justification of 3000 randomly selected referrals using iguide, with two consultant radiologists analysing the referrals with disagreement. insufficient or duplicate referrals were discarded. the inter-rater agreement among radiographers and consultants was computed. a random split (4:1) was performed to apply machine learning (ml) and deep learning (dl) techniques to unstructured clinical indications to automate retrospective justification auditing with multi-class classification. the accuracy and macro-averaged f1 score of the best-performing classifier of each type on the training set were computed on the test set. results: 42 referrals were ignored. 1909 (64.5%) referrals were justified, 811 (27.4%) were potentially justified, and 238 (8.1%) were unjustified. the agreement between radiographers (κ = 0.268) was lower than radiologists (κ = 0.460). the best-performing ml model was the bag-of-words-based gradient-boosting classifier achieving a 94.4% accuracy and a macro f1 of 0.94. dl models were inferior, with bi-directional long short-term memory achieving 92.3% accuracy, a macro f1 of 0.92, and outperforming multilayer perceptrons. conclusion: interpreting unstructured clinical indications is challenging, necessitating clinical decision support. ml and dl can generalise across multiple clinical sites, outperform human experts, and be used as an artificial intelligence-based iguide interpreter when retrospectively vetting radiology referrals. clinical relevance statement: healthcare vendors and clinical sites should consider developing and utilising artificial intelligence-enabled systems for justifying medical exposures. this would enable better implementation of imaging referral guidelines in clinical practices and reduce population dose burden, ct waiting lists, and wasteful use of resources. key points: significant variations exist among human experts in interpreting unstructured clinical indications/patient presentations. machine and deep learning can automate the justification analysis of radiology referrals according to iguide categorisation. machine and deep learning can improve retrospective and prospective justification auditing for better implementation of imaging referral guidelines.
[ "objectivesto", "the machine", "deep learning models", "the justification analysis", "radiology referrals", "accordance", "iguide categorisation", "prediction models", "multiple clinical sites", "human experts.methodsadult brain computed tomography (ct) referrals", "scans", "three ct centres", "ireland", "two radiographers", "the justification", "3000 randomly selected referrals", "iguide", "two consultant radiologists", "the referrals", "disagreement", "insufficient or duplicate referrals", "the inter-rater agreement", "radiographers", "consultants", "a random split", "machine learning", "ml", "deep learning", "(dl) techniques", "unstructured clinical indications", "multi-class classification", "the accuracy and macro-averaged f1 score", "the best-performing classifier", "each type", "the training set", "the test", "set.results42 referrals", "(64.5%", "238 (8.1%", "the agreement", "radiographers", "radiologists", "κ", "the best-performing ml model", "the bag-of-words-based gradient-boosting classifier", "a 94.4% accuracy", "a macro f1", "dl models", "bi-directional long short-term memory", "92.3% accuracy", "a macro f1", "outperforming multilayer", "necessitating clinical decision support", "ml", "dl", "multiple clinical sites", "human experts", "an artificial intelligence-based iguide interpreter", "radiology referrals.clinical relevance statementhealthcare vendors", "clinical sites", "artificial intelligence-enabled systems", "medical exposures", "this", "better implementation", "referral guidelines", "clinical practices", "population dose burden", "ct waiting lists", "wasteful use", "resources.key points", "significant variations", "human experts", "unstructured clinical indications", "patient presentations", "machine", "deep learning", "the justification analysis", "radiology referrals", "iguide categorisation", "machine", "deep learning", "retrospective and prospective justification", "better implementation", "imaging referral guidelines", "objectivesto 
train", "three", "ireland", "2020", "2021", "two", "3000", "two", "4:1", "set.results42", "1909", "64.5%", "811", "27.4%", "238", "8.1%", "0.268", "0.460", "94.4%", "0.94", "92.3%", "0.92" ]
General deep learning framework for emissivity engineering
[ "Shilv Yu", "Peng Zhou", "Wang Xi", "Zihe Chen", "Yuheng Deng", "Xiaobing Luo", "Wangnan Li", "Junichiro Shiomi", "Run Hu" ]
Wavelength-selective thermal emitters (WS-TEs) have been frequently designed to achieve desired target emissivity spectra, as a typical emissivity engineering, for broad applications such as thermal camouflage, radiative cooling, and gas sensing, etc. However, previous designs require prior knowledge of materials or structures for different applications and the designed WS-TEs usually vary from applications to applications in terms of materials and structures, thus lacking a general design framework for emissivity engineering across different applications. Moreover, previous designs fail to tackle the simultaneous design of both materials and structures, as they either fix materials to design structures or fix structures to select suitable materials. Herein, we employ the deep Q-learning network algorithm, a reinforcement learning method based on deep learning framework, to design multilayer WS-TEs. To demonstrate the general validity, three WS-TEs are designed for various applications, including thermal camouflage, radiative cooling and gas sensing, which are then fabricated and measured. The merits of the deep Q-learning algorithm include that it can (1) offer a general design framework for WS-TEs beyond one-dimensional multilayer structures; (2) autonomously select suitable materials from a self-built material library and (3) autonomously optimize structural parameters for the target emissivity spectra. The present framework is demonstrated to be feasible and efficient in designing WS-TEs across different applications, and the design parameters are highly scalable in materials, structures, dimensions, and the target functions, offering a general framework for emissivity engineering and paving the way for efficient design of nonlinear optimization problems beyond thermal metamaterials.
10.1038/s41377-023-01341-w
general deep learning framework for emissivity engineering
wavelength-selective thermal emitters (ws-tes) have been frequently designed to achieve desired target emissivity spectra, as a typical emissivity engineering, for broad applications such as thermal camouflage, radiative cooling, and gas sensing, etc. however, previous designs require prior knowledge of materials or structures for different applications and the designed ws-tes usually vary from applications to applications in terms of materials and structures, thus lacking a general design framework for emissivity engineering across different applications. moreover, previous designs fail to tackle the simultaneous design of both materials and structures, as they either fix materials to design structures or fix structures to select suitable materials. herein, we employ the deep q-learning network algorithm, a reinforcement learning method based on deep learning framework, to design multilayer ws-tes. to demonstrate the general validity, three ws-tes are designed for various applications, including thermal camouflage, radiative cooling and gas sensing, which are then fabricated and measured. the merits of the deep q-learning algorithm include that it can (1) offer a general design framework for ws-tes beyond one-dimensional multilayer structures; (2) autonomously select suitable materials from a self-built material library and (3) autonomously optimize structural parameters for the target emissivity spectra. the present framework is demonstrated to be feasible and efficient in designing ws-tes across different applications, and the design parameters are highly scalable in materials, structures, dimensions, and the target functions, offering a general framework for emissivity engineering and paving the way for efficient design of nonlinear optimization problems beyond thermal metamaterials.
[ "wavelength-selective thermal emitters", "ws-tes", "desired target emissivity spectra", "a typical emissivity engineering", "broad applications", "thermal camouflage", "cooling", "gas sensing", "previous designs", "prior knowledge", "materials", "structures", "different applications", "the designed ws-tes", "applications", "applications", "terms", "materials", "structures", "a general design framework", "emissivity engineering", "different applications", "previous designs", "the simultaneous design", "both materials", "structures", "they", "materials", "structures", "structures", "suitable materials", "we", "the deep q-learning network algorithm", "a reinforcement learning method", "deep learning framework", "multilayer ws-tes", "the general validity", "three ws-tes", "various applications", "thermal camouflage", "gas sensing", "which", "the merits", "the deep q-learning algorithm", "it", "a general design framework", "ws-tes", "one-dimensional multilayer structures", "suitable materials", "a self-built material library", "structural parameters", "the target emissivity spectra", "the present framework", "ws-tes", "different applications", "the design parameters", "materials", "structures", "dimensions", "the target functions", "a general framework", "emissivity engineering", "the way", "efficient design", "nonlinear optimization problems", "thermal metamaterials", "three", "1", "2", "3" ]
Forecasting VIX using Bayesian deep learning
[ "Héctor J. Hortúa", "Andrés Mora-Valencia" ]
Recently, deep learning techniques are gradually replacing traditional statistical and machine learning models as the first choice for price forecasting tasks. In this paper, we leverage probabilistic deep learning for inferring the volatility index VIX. We employ the probabilistic counterpart of WaveNet, Temporal Convolutional Network (TCN), and Transformers. We show that TCN outperforms all models with an RMSE around 0.189. In addition, it has been well known that modern neural networks provide inaccurate uncertainty estimates. For solving this problem, we use the standard deviation scaling to calibrate the networks. Furthermore, we found out that MNF with Gaussian prior outperforms Reparameterization Trick and Flipout models in terms of precision and uncertainty predictions. Finally, we claim that MNF with Cauchy and LogUniform prior distributions yield well-calibrated TCN, and Transformer and WaveNet networks being the former that best infer the VIX values for one and five-step-ahead forecasting, and the probabilistic Transformer model yields an adequate forecasting for the COVID-19 pandemic period.
10.1007/s41060-024-00562-5
forecasting vix using bayesian deep learning
recently, deep learning techniques are gradually replacing traditional statistical and machine learning models as the first choice for price forecasting tasks. in this paper, we leverage probabilistic deep learning for inferring the volatility index vix. we employ the probabilistic counterpart of wavenet, temporal convolutional network (tcn), and transformers. we show that tcn outperforms all models with an rmse around 0.189. in addition, it has been well known that modern neural networks provide inaccurate uncertainty estimates. for solving this problem, we use the standard deviation scaling to calibrate the networks. furthermore, we found out that mnf with gaussian prior outperforms reparameterization trick and flipout models in terms of precision and uncertainty predictions. finally, we claim that mnf with cauchy and loguniform prior distributions yield well-calibrated tcn, and transformer and wavenet networks being the former that best infer the vix values for one and five-step-ahead forecasting, and the probabilistic transformer model yields an adequate forecasting for the covid-19 pandemic period.
[ "deep learning techniques", "traditional statistical and machine learning models", "the first choice", "price forecasting tasks", "this paper", "we", "probabilistic deep learning", "the volatility index vix", "we", "the probabilistic counterpart", "wavenet", "temporal convolutional network", "tcn", "transformers", "we", "tcn", "all models", "an rmse", "addition", "it", "modern neural networks", "inaccurate uncertainty estimates", "this problem", "we", "the standard deviation", "the networks", "we", "that mnf", "prior outperforms reparameterization trick", "flipout", "terms", "precision and uncertainty predictions", "we", "cauchy", "prior distributions", "well-calibrated tcn", "the vix values", "one and five-step-ahead forecasting", "the probabilistic transformer model", "an adequate forecasting", "the covid-19 pandemic period", "first", "rmse", "0.189", "one", "five", "covid-19" ]
Review of Deep Learning Techniques for Neurological Disorders Detection
[ "Akhilesh Kumar Tripathi", "Rafeeq Ahmed", "Arvind Kumar Tiwari" ]
Neurological disease is one of the most common types of dementia that predominantly concerns the elderly. In clinical approaches, identifying its premature stages is complicated, and no biomarker is comprehended to be thorough in witnessing neurological disorders in their earlier stages. Deep learning approaches have attracted much attention in the scientific community using scanned images. They differ from simple machine learning (ML) algorithms in that they study the most favorable depiction of untreated images. Deep learning is helpful in the neuroimaging analysis of neurological diseases with subtle and dispersed changes because it can discover abstract and complicated patterns. The current study discusses a vital part of deep learning and looks at past work that has been used to switch between different ML algorithms that can predict neurological diseases. Convolution Neural Networks, Generative Adversarial Network, Recurrent Neural Network, Deep Belief Network, Auto Encoder, and other algorithms for Alzheimer’s illness prediction have been considered. Many publications on preprocessing methods, such as scaling, correction, stripping, and normalizing, have been evaluated.
10.1007/s11277-024-11464-x
review of deep learning techniques for neurological disorders detection
neurological disease is one of the most common types of dementia that predominantly concerns the elderly. in clinical approaches, identifying its premature stages is complicated, and no biomarker is comprehended to be thorough in witnessing neurological disorders in their earlier stages. deep learning approaches have attracted much attention in the scientific community using scanned images. they differ from simple machine learning (ml) algorithms in that they study the most favorable depiction of untreated images. deep learning is helpful in the neuroimaging analysis of neurological diseases with subtle and dispersed changes because it can discover abstract and complicated patterns. the current study discusses a vital part of deep learning and looks at past work that has been used to switch between different ml algorithms that can predict neurological diseases. convolution neural networks, generative adversarial network, recurrent neural network, deep belief network, auto encoder, and other algorithms for alzheimer’s illness prediction have been considered. many publications on preprocessing methods, such as scaling, correction, stripping, and normalizing, have been evaluated.
[ "neurological disease", "the most common types", "dementia", "that", "clinical approaches", "its premature stages", "no biomarker", "neurological disorders", "their earlier stages", "deep learning approaches", "much attention", "the scientific community", "scanned images", "they", "simple machine learning", "ml", "they", "the most favorable depiction", "untreated images", "deep learning", "the neuroimaging analysis", "neurological diseases", "subtle and dispersed changes", "it", "abstract and complicated patterns", "the current study", "a vital part", "deep learning", "past work", "that", "different ml algorithms", "that", "neurological diseases", "convolution neural networks", "generative adversarial network", "recurrent neural network", "deep belief network", "auto encoder", "other algorithms", "alzheimer’s illness prediction", "many publications", "preprocessing methods", "scaling", "correction" ]
Enhancing Cardiovascular Health Monitoring Through IoT and Deep Learning Technologies
[ "Huu-Hoa Nguyen", "Tri-Thuc Vo" ]
Monitoring cardiovascular conditions is crucial in healthcare due to their significant impact on overall wellness and their role in mitigating heart-related diseases. To address this pressing issue, the research community has introduced various methodologies, among which deep learning approaches have shown notable effectiveness. Despite this potential, creating effective deep learning models tailored to time-series health data remains challenging. These challenges include processing vast amounts of data from IoT devices, building and selecting optimal deep learning models with appropriate parameters, and designing and implementing reliable systems for cardiovascular health monitoring. In response, our research introduces an advanced cardiovascular health monitoring system that takes advantage of wearable IoT and deep learning technologies to enhance healthcare. It features a multi-layered architecture in which each layer serves a specific function and integrates closely with the others; this integration enhances the system’s overall functionality and reliability. The system efficiently integrates processes from health data collection through deep learning analysis to the delivery of timely health alerts. A critical feature of this system is the targeted deep learning model, selected from six candidate algorithms based on experiments with data from IoT-enabled smartwatches. The selection process involves an in-depth evaluation of the models’ performance, leading to the choice of the most effective model for system implementation. Our results highlight the system’s effectiveness in monitoring cardiovascular health, underscoring its potential to enhance personalized healthcare, particularly for individuals with cardiovascular conditions, through advanced monitoring technologies.
10.1007/s42979-024-02962-7
enhancing cardiovascular health monitoring through iot and deep learning technologies
monitoring cardiovascular conditions is crucial in healthcare due to their significant impact on overall wellness and their role in mitigating heart-related diseases. to address this pressing issue, the research community has introduced various methodologies, among which deep learning approaches have shown notable effectiveness. despite this potential, creating effective deep learning models tailored to time-series health data remains challenging. these challenges include processing vast amounts of data from iot devices, building and selecting optimal deep learning models with appropriate parameters, and designing and implementing reliable systems for cardiovascular health monitoring. in response, our research introduces an advanced cardiovascular health monitoring system that takes advantage of wearable iot and deep learning technologies to enhance healthcare. it features a multi-layered architecture, where each layer serves a specific function and integrates closely with the others. this integration enhances the system’s overall functionality and reliability. the system efficiently integrates processes from health data collection through deep learning analysis to the delivery of timely health alerts. a critical feature of this system is the targeted deep learning model, selected from six potential algorithms based on experiments with data from iot-enabled smartwatches. the selection process involves an in-depth evaluation of the models’ performance, leading to the choice of the most effective model for system implementation. our results highlight the system’s effectiveness in monitoring cardiovascular health, underscoring its potential to enhance personalized healthcare, particularly for individuals with cardiovascular conditions, through advanced monitoring technologies.
[ "cardiovascular conditions", "healthcare", "their significant impact", "overall wellness", "their role", "heart-related diseases", "this pressing issue", "the research community", "various methodologies", "which", "deep learning approaches", "notable effectiveness", "this potential", "effective deep learning models", "time-series health data", "these challenges", "vast amounts", "data", "iot devices", "optimal deep learning models", "appropriate parameters", "reliable systems", "cardiovascular health monitoring", "response", "our research", "an advanced cardiovascular health monitoring system", "that", "advantage", "wearable iot and deep learning technologies", "healthcare", "it", "a multi-layered architecture", "each layer", "a specific function", "integrates", "the others", "this integration", "the system’s overall functionality", "reliability", "the system", "processes", "health data collection", "deep learning analysis", "the delivery", "timely health alerts", "a critical feature", "this system", "the targeted deep learning model", "six potential algorithms", "experiments", "data", "iot-enabled smartwatches", "the selection process", "-depth", "the models’ performance", "the choice", "the most effective model", "system implementation", "our results", "the system’s effectiveness", "cardiovascular health", "its potential", "personalized healthcare", "individuals", "cardiovascular conditions", "advanced monitoring technologies", "six" ]
Image recognition algorithm based on hybrid deep learning
[ "Tang Xiangdong" ]
As we embrace the information age, our lives have experienced revolutionary transformations. With the continuous advancement of computer technology, data sharing and information exchange have become increasingly robust, and the widespread adoption of hybrid deep learning algorithms has been prioritized. The emergence of deep learning is intricately linked to the progress of artificial intelligence, with deep learning serving as a tangible manifestation of AI. Deep learning algorithms are an important part of robotics research and development, and in image recognition they play an irreplaceable role: built on the technological breakthrough of the convolutional neural network, deep learning is uniquely suited to image recognition. In addition, fields such as speech recognition, body function monitoring, and sports data analysis all employ deep learning, which continues to advance with unstoppable momentum. With the current development of computer technology, image data has also flourished: not only has it grown substantially in quantity, but its types and styles have become far more varied. Against this background, we face a problem: traditional recognition algorithms cannot meet our next development requirements. Therefore, a new line of research has emerged to address it: image recognition algorithms based on hybrid deep learning. In this article, we focus on comparing its various algorithms to identify their advantages and disadvantages, so as to promote the further development of image recognition.
10.1007/s13198-023-02134-5
image recognition algorithm based on hybrid deep learning
as we embrace the information age, our lives have experienced revolutionary transformations. with the continuous advancement of computer technology, data sharing and information exchange have become increasingly robust. consequently, the widespread adoption of hybrid deep learning algorithms has been prioritized. the emergence of deep learning is intricately linked to the progress of artificial intelligence, with deep learning serving as a tangible manifestation of ai. deep learning algorithms are an important part of the field of robotics research and development. in image recognition, deep learning algorithms are playing an irreplaceable role. based on the technological breakthrough of convolutional neural network, deep learning is unique in image recognition. in addition, aspects such as speech recognition, body function monitoring, and sports data analysis all have deep learning, and they are advancing all the way with unstoppable development momentum. of course, with the current development of computer technology, image data also shines on its basis. not only has it achieved a substantial surpass in quantity, but its types and styles are coming out in a variety of colors. on the basis of this series of developments, we are faced with a problem: traditional recognition algorithms cannot meet our next development requirements. therefore, a new type of algorithm to deal with this problem has emerged: the research of image recognition algorithms based on hybrid deep learning. in the research of this article, we will focus on comparing its various algorithms to find out their advantages and disadvantages. so as to promote the further development of image recognition.
[ "we", "the information age", "our lives", "revolutionary transformations", "the continuous advancement", "computer technology", "data sharing", "information exchange", "the widespread adoption", "hybrid deep learning algorithms", "the emergence", "deep learning", "the progress", "artificial intelligence", "deep learning", "a tangible manifestation", "ai", "deep learning algorithms", "an important part", "the field", "robotics research", "development", "image recognition", "deep learning algorithms", "an irreplaceable role", "the technological breakthrough", "convolutional neural network", "deep learning", "image recognition", "addition", "aspects", "speech recognition", "body function monitoring", "sports data analysis", "all", "deep learning", "they", "unstoppable development momentum", "course", "the current development", "computer technology", "image data", "its basis", "it", "a substantial surpass", "quantity", "its types", "styles", "a variety", "colors", "the basis", "this series", "developments", "we", "a problem", "traditional recognition algorithms", "our next development requirements", "a new type", "algorithm", "this problem", "the research", "image recognition algorithms", "hybrid deep learning", "the research", "this article", "we", "its various algorithms", "their advantages", "disadvantages", "the further development", "image recognition" ]
Deep Ensemble learning and quantum machine learning approach for Alzheimer’s disease detection
[ "Abebech Jenber Belay", "Yelkal Mulualem Walle", "Melaku Bitew Haile" ]
Alzheimer disease (AD) is among the most chronic neurodegenerative diseases that threaten global public health. The prevalence of Alzheimer disease, and consequently the increased risk of its spread all over the world, poses a vital threat to human safekeeping. Early diagnosis of AD enables timely intervention and medication, which may improve the prognosis and quality of life for affected individuals. Quantum computing provides a more efficient model for different disease classification tasks than classical machine learning approaches, yet its full potential has not been applied to Alzheimer’s disease classification tasks as expected. In this study, we proposed an ensemble deep learning model based on quantum machine learning classifiers to classify Alzheimer’s disease. The Alzheimer’s Disease Neuroimaging Initiative I and Alzheimer’s Disease Neuroimaging Initiative II datasets are merged for AD classification. We combined important features extracted from the merged images by customized versions of the VGG16 and ResNet50 models and then fed these features to the quantum machine learning classifier to classify them as non-demented, mild demented, moderate demented, and very mild demented. We evaluate the performance of our model using six metrics: accuracy, the area under the curve, F1-score, precision, and recall. The results validate that the proposed model outperforms several state-of-the-art methods for detecting Alzheimer’s disease, registering an accuracy of 99.89 and an F1-score of 98.37.
10.1038/s41598-024-61452-1
deep ensemble learning and quantum machine learning approach for alzheimer’s disease detection
alzheimer disease (ad) is among the most chronic neurodegenerative diseases that threaten global public health. the prevalence of alzheimer disease and consequently the increased risk of spread all over the world pose a vital threat to human safekeeping. early diagnosis of ad is a suitable action for timely intervention and medication, which may increase the prognosis and quality of life for affected individuals. quantum computing provides a more efficient model for different disease classification tasks than classical machine learning approaches. the full potential of quantum computing is not applied to alzheimer’s disease classification tasks as expected. in this study, we proposed an ensemble deep learning model based on quantum machine learning classifiers to classify alzheimer’s disease. the alzheimer’s disease neuroimaging initiative i and alzheimer’s disease neuroimaging initiative ii datasets are merged for the ad disease classification. we combined important features extracted based on the customized version of vgg16 and resnet50 models from the merged images then feed these features to the quantum machine learning classifier to classify them as non-demented, mild demented, moderate demented, and very mild demented. we evaluate the performance of our model by using six metrics; accuracy, the area under the curve, f1-score, precision, and recall. the result validates that the proposed model outperforms several state-of-the-art methods for detecting alzheimer’s disease by registering an accuracy of 99.89 and 98.37 f1-score.
[ "alzheimer disease", "ad", "the most chronic neurodegenerative diseases", "that", "global public health", "the prevalence", "alzheimer disease", "consequently the increased risk", "spread", "the world", "a vital threat", "human safekeeping", "early diagnosis", "ad", "a suitable action", "timely intervention", "medication", "which", "the prognosis", "quality", "life", "affected individuals", "quantum computing", "a more efficient model", "different disease classification tasks", "classical machine learning approaches", "the full potential", "quantum computing", "alzheimer’s disease classification tasks", "this study", "we", "an ensemble deep learning model", "quantum machine learning classifiers", "alzheimer’s disease", "i", "alzheimer", "disease", "neuroimaging initiative ii datasets", "the ad disease classification", "we", "important features", "the customized version", "vgg16 and resnet50 models", "the merged images", "these features", "the quantum machine", "classifier", "them", "we", "the performance", "our model", "six metrics", "accuracy", "the area", "the curve", "f1-score", "precision", "the result", "the proposed model", "the-art", "alzheimer’s disease", "an accuracy", "98.37 f1-score", "quantum", "quantum", "quantum", "resnet50", "quantum", "six", "99.89", "98.37" ]
A Review on Machine Learning and Deep Learning Based Systems for the Diagnosis of Brain Cancer
[ "Prottoy Saha", "Shanta Kumar Das", "Rudra Das" ]
Brain cancer is a disease of the brain caused by a brain tumor. A brain tumor is the development of cells in the brain that grow in an unregulated and unnatural manner. Patients may suffer irreversible brain damage or even death if these tumors are not detected and treated properly. As with all types of treatment, positional information and tumor size are critical for conventional systems. Thus, a meticulous and automated approach to providing this information to medical practitioners is required. With machine learning, deep learning, and several imaging modalities, physicians may now detect tumor types more reliably and in a shorter period. This paper aims to provide an overview of newly developed systems that use machine learning and deep learning approaches to analyze various medical imaging modalities for diagnosing brain tumors. The datasets used by the authors, dataset partitioning strategies, and different performance evaluation metrics are also described. To better explain the categorization policy, we propose a taxonomy in which deep learning and machine learning based systems are categorized by single classifier, multiple classifiers, single dataset, and multiple datasets. Finally, we focus on the challenges of deep learning algorithms for brain tumor classification and possible future trends in this field.
10.1007/s42979-023-02360-5
a review on machine learning and deep learning based systems for the diagnosis of brain cancer
brain cancer is a disease of the brain caused by a brain tumor. a brain tumor is the development of cells in the brain that grow in an unregulated and unnatural manner. patients may suffer irreversible brain damage or even death if these tumors are not detected and treated properly. as with all types of treatment, positional information and tumor size are critical for conventional systems. thus, establishing a meticulous and automated approach to providing information to medical practitioners is required. with machine learning, deep learning, and several imaging modalities, physicians may now more reliably detect tumor types in a shorter period. the paper aims to provide an overview of newly developed systems that use machine learning and deep learning approaches to analyze various medical imaging modalities in the case of diagnosing brain tumors. datasets used by the authors, dataset partitioning strategies, and different performance evaluation matrices are also described in this paper. to better understand the policy of categorization, we propose a taxonomy here where we have categorized deep learning and machine learning based systems with respect to single classifier, multiple classifiers, single dataset and multiple dataset. finally, we focus on the challenges of deep learning algorithms for brain tumor classification and possible future trends in this field.
[ "brain cancer", "a disease", "the brain", "a brain tumor", "a brain tumor", "the development", "cells", "the brain", "that", "an unregulated and unnatural manner", "patients", "irreversible brain damage", "even death", "these tumors", "all types", "treatment", "positional information", "tumor size", "conventional systems", "a meticulous and automated approach", "information", "medical practitioners", "machine learning", "deep learning", "several imaging modalities", "physicians", "tumor types", "a shorter period", "the paper", "an overview", "newly developed systems", "that", "machine learning", "approaches", "various medical imaging modalities", "the case", "brain tumors", "datasets", "the authors", "partitioning strategies", "different performance evaluation matrices", "this paper", "the policy", "categorization", "we", "a taxonomy", "we", "deep learning and machine learning based systems", "respect", "single classifier", "multiple classifiers", "single dataset", "multiple dataset", "we", "the challenges", "deep learning algorithms", "brain tumor classification", "possible future trends", "this field" ]
Ensemble deep learning for Alzheimer’s disease characterization and estimation
[ "M. Tanveer", "T. Goel", "R. Sharma", "A. K. Malik", "I. Beheshti", "J. Del Ser", "P. N. Suganthan", "C. T. Lin" ]
Alzheimer’s disease, which is characterized by a continual deterioration of cognitive abilities in older people, is the most common form of dementia. Neuroimaging data, for example, from magnetic resonance imaging and positron emission tomography, enable identification of the structural and functional changes caused by Alzheimer’s disease in the brain. Diagnosing Alzheimer’s disease is critical in medical settings, as it supports early intervention and treatment planning and contributes to expanding our knowledge of the dynamics of Alzheimer’s disease in the brain. Lately, ensemble deep learning has become popular for enhancing the performance and reliability of Alzheimer’s disease diagnosis. These models combine several deep neural networks to increase a prediction’s robustness. Here we revisit key developments of ensemble deep learning, connecting its design—the type of ensemble, its heterogeneity and data modalities—with its application to AD diagnosis using neuroimaging and genetic data. Trends and challenges are discussed thoroughly to assess where our knowledge in this area stands.
10.1038/s44220-024-00237-x
ensemble deep learning for alzheimer’s disease characterization and estimation
alzheimer’s disease, which is characterized by a continual deterioration of cognitive abilities in older people, is the most common form of dementia. neuroimaging data, for example, from magnetic resonance imaging and positron emission tomography, enable identification of the structural and functional changes caused by alzheimer’s disease in the brain. diagnosing alzheimer’s disease is critical in medical settings, as it supports early intervention and treatment planning and contributes to expanding our knowledge of the dynamics of alzheimer’s disease in the brain. lately, ensemble deep learning has become popular for enhancing the performance and reliability of alzheimer’s disease diagnosis. these models combine several deep neural networks to increase a prediction’s robustness. here we revisit key developments of ensemble deep learning, connecting its design—the type of ensemble, its heterogeneity and data modalities—with its application to ad diagnosis using neuroimaging and genetic data. trends and challenges are discussed thoroughly to assess where our knowledge in this area stands.
[ "alzheimer’s disease", "which", "a continual deterioration", "cognitive abilities", "older people", "the most common form", "dementia", "neuroimaging data", "example", "magnetic resonance imaging", "positron emission tomography", "identification", "the structural and functional changes", "alzheimer’s disease", "the brain", "alzheimer’s disease", "medical settings", "it", "early intervention", "treatment planning", "our knowledge", "the dynamics", "alzheimer’s disease", "the brain", "ensemble deep learning", "the performance", "reliability", "alzheimer’s disease diagnosis", "these models", "several deep neural networks", "a prediction’s robustness", "we", "key developments", "ensemble deep learning", "its design", "the type", "ensemble", "its heterogeneity and data modalities", "its application", "ad diagnosis", "neuroimaging and genetic data", "trends", "challenges", "our knowledge", "this area" ]
Prediction of crop yield in India using machine learning and hybrid deep learning models
[ "Krithikha Sanju Saravanan", "Velammal Bhagavathiappan" ]
Crop yield prediction is one of the burgeoning research areas in the agriculture domain. Crop yield forecasting models are developed to enhance productivity through improved decision-making strategies: a highly efficient forecasting model assists farmers in determining when, what, and how much to plant on their cultivable land. The main objective of the proposed research work is to build a highly efficacious crop yield prediction model based on the data available for the 21-year period from 1997 to 2017, using machine learning and hybrid deep learning approaches. Two prediction models have been proposed in this research work to predict the crop yield accurately. The first is a machine learning-based model that uses the CatBoost regression model, whose hyperparameters are tuned with the Optuna framework to improve prediction performance. The second is a hybrid deep learning model that uses a spatio-temporal attention-based convolutional neural network (STACNN) for extracting features and a bidirectional long short-term memory (BiLSTM) model for predicting the crop yield effectively. The proposed models are evaluated using error metrics and compared with the latest contemporary models. The evaluation results show that the proposed models significantly outperform all other existing models and that the CatBoost regression model slightly outperforms the STACNN-BiLSTM model, with an R-squared value of 0.99.
10.1007/s11600-024-01312-8
prediction of crop yield in india using machine learning and hybrid deep learning models
crop yield prediction is one of the burgeoning research areas in the agriculture domain. the crop yield forecasting models are developed to enhance productivity with improved decision-making strategies. the highly efficient crop yield forecasting model assists farmers in determining when, what and how much to plant on their cultivable land. the main objective of the proposed research work is to build a high efficacious crop yield prediction model based on the data available for the period of 21 years from 1997 to 2017 using machine learning and hybrid deep learning approaches. two prediction models have been proposed in this research work to predict the crop yield accurately. the first model is a machine learning-based model which uses the catboost regression model and its hyperparameters are tuned which improves the performance of the yield prediction using the optuna framework. the second model is the hybrid deep learning model which uses spatio-temporal attention-based convolutional neural network (stacnn) for extracting the features and the bidirectional long short-term memory (bilstm) model for predicting the crop yield effectively. the proposed models are evaluated using the error metrics and compared with the latest contemporary models. from the evaluation results, it is shown that the proposed models significantly outperform all other existing models and catboost regression model slightly performs better than the stacnn-bilstm model, with the r-squared value of 0.99.
[ "crop yield prediction", "the burgeoning research areas", "the agriculture domain", "the crop yield forecasting models", "productivity", "improved decision-making strategies", "the highly efficient crop yield forecasting model", "farmers", "their cultivable land", "the main objective", "the proposed research work", "a high efficacious crop yield prediction model", "the data", "the period", "21 years", "machine learning", "hybrid deep learning approaches", "two prediction models", "this research work", "the first model", "a machine learning-based model", "which", "the catboost regression model", "which", "the performance", "the yield prediction", "the optuna framework", "the second model", "the hybrid deep learning model", "which", "spatio-temporal attention-based convolutional neural network", "stacnn", "the features", "the bidirectional long short-term memory (bilstm) model", "the crop yield", "the proposed models", "the error metrics", "the latest contemporary models", "the evaluation results", "it", "the proposed models", "all other existing models", "catboost regression model", "the stacnn-bilstm model", "the r-squared value", "the period of 21 years from 1997 to 2017", "two", "first", "second", "stacnn", "stacnn", "0.99" ]
Deep learning CT reconstruction improves liver metastases detection
[ "Achraf Kanan", "Bruno Pereira", "Constance Hordonneau", "Lucie Cassagnes", "Eléonore Pouget", "Léon Appolinaire Tianhoun", "Benoît Chauveau", "Benoît Magnin" ]
Objectives: Detection of liver metastases is crucial for guiding oncological management. Computed tomography through iterative reconstructions is widely used in this indication but has certain limitations. Deep learning image reconstructions (DLIR) use deep neural networks to achieve a significant noise reduction compared to iterative reconstructions. While reports have demonstrated improvements in image quality, their impact on liver metastases detection remains unclear. Our main objective was to determine whether DLIR affects the number of detected liver metastases. Our secondary objective was to compare metastases conspicuity between the two reconstruction methods. Methods: CT images of 121 patients with liver metastases were reconstructed using a 50% adaptive statistical iterative reconstruction (50%-ASiR-V) and three levels of DLIR (DLIR-low, DLIR-medium, and DLIR-high). For each reconstruction, two double-blinded radiologists counted up to a maximum of ten metastases. Visibility and contour definitions were also assessed. Comparisons between methods for continuous parameters were performed using mixed models. Results: A higher number of metastases was detected by one reader with DLIR-high: 7 (2–10) (median (Q₁–Q₃); total 733) versus 5 (2–10), respectively, for DLIR-medium, DLIR-low, and ASiR-V (p < 0.001). Ten patients were detected with more metastases with DLIR-high simultaneously by both readers and a third reader for confirmation. Metastases visibility and contour definition were better with DLIR than ASiR-V. Conclusion: DLIR-high enhanced the detection and visibility of liver metastases compared to ASiR-V and increased the number of liver metastases detected. Critical relevance statement: Deep learning-based reconstruction at high strength allowed an increase in liver metastases detection compared to hybrid iterative reconstruction and can be used in clinical oncology imaging to help overcome the limitations of CT. Key Points: Detection of liver metastases is crucial but limited with standard CT reconstructions. More liver metastases were detected with deep-learning CT reconstruction compared to iterative reconstruction. Deep learning reconstructions are suitable for hepatic metastases staging and follow-up.
10.1186/s13244-024-01753-1
deep learning ct reconstruction improves liver metastases detection
objectives: detection of liver metastases is crucial for guiding oncological management. computed tomography through iterative reconstructions is widely used in this indication but has certain limitations. deep learning image reconstructions (dlir) use deep neural networks to achieve a significant noise reduction compared to iterative reconstructions. while reports have demonstrated improvements in image quality, their impact on liver metastases detection remains unclear. our main objective was to determine whether dlir affects the number of detected liver metastases. our secondary objective was to compare metastases conspicuity between the two reconstruction methods. methods: ct images of 121 patients with liver metastases were reconstructed using a 50% adaptive statistical iterative reconstruction (50%-asir-v) and three levels of dlir (dlir-low, dlir-medium, and dlir-high). for each reconstruction, two double-blinded radiologists counted up to a maximum of ten metastases. visibility and contour definitions were also assessed. comparisons between methods for continuous parameters were performed using mixed models. results: a higher number of metastases was detected by one reader with dlir-high: 7 (2–10) (median (q₁–q₃); total 733) versus 5 (2–10), respectively, for dlir-medium, dlir-low, and asir-v (p < 0.001). ten patients were detected with more metastases with dlir-high simultaneously by both readers and a third reader for confirmation. metastases visibility and contour definition were better with dlir than asir-v. conclusion: dlir-high enhanced the detection and visibility of liver metastases compared to asir-v and increased the number of liver metastases detected. critical relevance statement: deep learning-based reconstruction at high strength allowed an increase in liver metastases detection compared to hybrid iterative reconstruction and can be used in clinical oncology imaging to help overcome the limitations of ct. key points: detection of liver metastases is crucial but limited with standard ct reconstructions. more liver metastases were detected with deep-learning ct reconstruction compared to iterative reconstruction. deep learning reconstructions are suitable for hepatic metastases staging and follow-up.
[ "objectivesdetection", "liver metastases", "oncological management", "computed tomography", "iterative reconstructions", "this indication", "certain limitations", "image reconstructions", "dlir", "deep neural networks", "a significant noise reduction", "iterative reconstructions", "reports", "improvements", "image quality", "their impact", "liver metastases detection", "our main objective", "dlir", "the number", "detected liver metastasis", "our secondary objective", "metastases", "conspicuity", "the two reconstruction methods.methodsct images", "121 patients", "liver metastases", "a 50% adaptive statistical iterative reconstruction", "three levels", "dlir", "each reconstruction", "two double-blinded radiologists", "a maximum", "ten metastases", "visibility", "contour definitions", "comparisons", "methods", "continuous parameters", "mixed models.resultsa higher number", "metastases", "one reader", "median", "(q₁–q₃", "p", "ten patents", "more metastases", "both readers", "a third reader", "confirmation", "metastases visibility", "contour definition", "dlir", "the detection", "visibility", "liver metastases", "asir-v", "the number", "liver metastases", "detected.critical relevance", "high strength", "an increase", "liver metastases detection", "hybrid iterative reconstruction", "clinical oncology imaging", "the limitations", "ct.key", "points detection", "liver metastases", "standard ct reconstructions", "more liver metastases", "deep-learning ct reconstruction", "iterative reconstruction", "deep learning reconstructions", "hepatic metastases", "follow-up", "graphical abstract", "two", "121", "50%", "50%-asir", "three", "two", "ten", "one", "7", "2–10", "733", "5", "2–10", "ten", "third", "ct.key" ]
Predicting Renal Toxicity of Compounds with Deep Learning and Machine Learning Methods
[ "Bitopan Mazumdar", "Pankaj Kumar Deva Sarma", "Hridoy Jyoti Mahanta" ]
Renal toxicity prediction plays a vital role in drug discovery and clinical practice, as it helps to identify potentially harmful compounds and mitigate adverse effects on the renal system. A compound with inherent renal-toxic potential is one of the major concerns for drug development as it leads to failure in drug discovery. Predicting nephrotoxic probabilities of a compound at an early stage can be effective for reducing the drug failure rate. Hence, it is crucial to develop a mechanism to analyze the renal toxicity of a drug candidate optimally and quickly. To mitigate the risks associated with renal toxicity, predictive models leveraging machine learning and deep learning techniques have gained significant attention. In this study, 287 human renal-toxic drugs and 278 non-renal-toxic drugs were collected to develop a deep learning model and 27 machine learning models using 8 kinds of fingerprints and RDKit descriptors. The deep neural network model shows better generalization scores on five-fold cross-validation, and the Extra-tree model shows a better performance score on test data. Structural alerts, specific chemical substructures associated with renal toxicity, offer a valuable tool for early toxicity assessment. Therefore, the substructures of renal-toxic compounds were studied by applying an association rule mining technique based on frequent itemset patterns. A method has been proposed for generating structural alerts, and 10 structural alerts have been generated using the method.
10.1007/s42979-023-02258-2
predicting renal toxicity of compounds with deep learning and machine learning methods
renal toxicity prediction plays a vital role in drug discovery and clinical practice, as it helps to identify potentially harmful compounds and mitigate adverse effects on the renal system. a compound with inherent renal-toxic potential is one of the major concerns for drug development as it leads to failure in drug discovery. predicting nephrotoxic probabilities of a compound at an early stage can be effective for reducing the drug failure rate. hence, it is crucial to develop a mechanism to analyze the renal toxicity of a drug candidate optimally and quickly. to mitigate the risks associated with renal toxicity, predictive models leveraging machine learning and deep learning techniques have gained significant attention. in this study, 287 human renal-toxic drugs and 278 non-renal-toxic drugs were collected to develop a deep learning model and 27 machine learning models using 8 kinds of fingerprints and rdkit descriptors. the deep neural network model shows better generalization scores on five-fold cross-validation, and the extra-tree model shows a better performance score on test data. structural alerts, specific chemical substructures associated with renal toxicity, offer a valuable tool for early toxicity assessment. therefore, the substructures of renal-toxic compounds were studied by applying an association rule mining technique based on frequent itemset patterns. a method has been proposed for generating structural alerts, and 10 structural alerts have been generated using the method.
[ "renal toxicity prediction", "a vital role", "drug discovery", "clinical practice", "it", "potentially harmful compounds", "adverse effects", "the renal system", "inherent renal-toxic potential", "the major concerns", "drug development", "it", "failure", "drug discovery", "nephrotoxic probabilities", "a compound", "an early stage", "the drug failure rate", "it", "a mechanism", "the renal toxicity", "a drug-candidate", "the risks", "renal toxicity", "predictive models", "machine learning", "deep learning techniques", "significant attention", "this study", "287 human renal-toxic drugs", "278 non-renal-toxic drugs", "a deep learning model", "27 machine learning models", "8 kinds", "fingerprints", "rdkit descriptors", "the deep neural network model", "better generalization scores", "five-fold cross-validation and extra-tree model", "better performance score", "test data", "structural alerts", "specific chemical substructures", "renal toxicity", "a valuable tool", "early toxicity assessment", "the substructures", "renal toxic compounds", "association rule mining technique", "frequent itemset patterns", "a method", "structural alerts", "10 structural alerts", "the method", "287", "278", "27", "8", "five-fold", "10" ]
A deep ensemble learning method for cherry classification
[ "Kiyas Kayaalp" ]
In many agricultural products, information technologies are utilized in classification processes at the desired quality. It is undesirable to mix different types of cherries, especially in export-type cherries. In this study on cherries, one of the important export products of Turkey, the classification of cherry species was carried out with ensemble learning methods. In this study, a new dataset consisting of 3570 images of seven different cherry species grown in Isparta region was created. The generated new dataset was trained with six different deep learning models with pre-learning on the original and incremental dataset. As a result of the training with incremental data, the best result was obtained from the DenseNet169 model with an accuracy of 99.57%. The two deep learning models with the best results were transferred to ensemble learning and a 100% accuracy rate was obtained with the Maximum Voting model.
10.1007/s00217-024-04490-3
a deep ensemble learning method for cherry classification
in many agricultural products, information technologies are utilized in classification processes at the desired quality. it is undesirable to mix different types of cherries, especially in export-type cherries. in this study on cherries, one of the important export products of turkey, the classification of cherry species was carried out with ensemble learning methods. in this study, a new dataset consisting of 3570 images of seven different cherry species grown in isparta region was created. the generated new dataset was trained with six different deep learning models with pre-learning on the original and incremental dataset. as a result of the training with incremental data, the best result was obtained from the densenet169 model with an accuracy of 99.57%. the two deep learning models with the best results were transferred to ensemble learning and a 100% accuracy rate was obtained with the maximum voting model.
[ "many agricultural products", "information technologies", "classification processes", "the desired quality", "it", "different types", "cherries", "export-type cherries", "this study", "cherries", "the important export products", "turkey", "the classification", "cherry species", "ensemble learning methods", "this study", "a new dataset", "3570 images", "seven different cherry species", "isparta region", "the generated new dataset", "six different deep learning models", "pre", "the original and incremental dataset", "a result", "the training", "incremental data", "the best result", "the densenet169 model", "an accuracy", "99.57%", "the two deep learning models", "the best results", "ensemble learning", "a 100% accuracy rate", "the maximum voting model", "one", "3570", "seven", "isparta", "six", "99.57%", "two", "100%" ]
The application of deep learning technology in integrated circuit design
[ "Lihua Dai", "Ben Wang", "Xuemin Cheng", "Qin Wang", "Xinsen Ni" ]
This study addresses the intricate challenge of circuit layout optimization central to integrated circuit (IC) design, where the primary goals involve attaining an optimal balance among power consumption, performance metrics, and chip area (collectively known as PPA optimization). The complexity of this task, evolving into a multidimensional problem under multiple constraints, necessitates the exploration of advanced methodologies. In response to these challenges, our research introduces deep learning technology as an innovative strategy to revolutionize circuit layout optimization. Specifically, we employ Convolutional Neural Networks (CNNs) in developing an optimized layout strategy, a performance prediction model, and a system for fault detection and real-time monitoring. These methodologies leverage the capacity of deep learning models to learn from high-dimensional data representations and handle multiple constraints effectively. Extensive case studies and rigorous experimental validations demonstrate the efficacy of our proposed deep learning-driven approaches. The results highlight significant enhancements in optimization efficiency, with an average power consumption reduction of 120% and latency decrease by 1.5%. Furthermore, the predictive capabilities are markedly improved, evidenced by a reduction in the average absolute error for power predictions to 3%. Comparative analyses conclusively illustrate the superiority of deep learning methodologies over conventional techniques across several dimensions. Our findings underscore the potential of deep learning in achieving higher accuracy in predictions, demonstrating stronger generalization abilities, facilitating superior design quality, and ultimately enhancing user satisfaction. These advancements not only validate the applicability of deep learning in IC design optimization but also pave the way for future advancements in addressing the multidimensional challenges inherent to circuit layout optimization.
10.1186/s42162-024-00380-w
the application of deep learning technology in integrated circuit design
this study addresses the intricate challenge of circuit layout optimization central to integrated circuit (ic) design, where the primary goals involve attaining an optimal balance among power consumption, performance metrics, and chip area (collectively known as ppa optimization). the complexity of this task, evolving into a multidimensional problem under multiple constraints, necessitates the exploration of advanced methodologies. in response to these challenges, our research introduces deep learning technology as an innovative strategy to revolutionize circuit layout optimization. specifically, we employ convolutional neural networks (cnns) in developing an optimized layout strategy, a performance prediction model, and a system for fault detection and real-time monitoring. these methodologies leverage the capacity of deep learning models to learn from high-dimensional data representations and handle multiple constraints effectively. extensive case studies and rigorous experimental validations demonstrate the efficacy of our proposed deep learning-driven approaches. the results highlight significant enhancements in optimization efficiency, with an average power consumption reduction of 120% and latency decrease by 1.5%. furthermore, the predictive capabilities are markedly improved, evidenced by a reduction in the average absolute error for power predictions to 3%. comparative analyses conclusively illustrate the superiority of deep learning methodologies over conventional techniques across several dimensions. our findings underscore the potential of deep learning in achieving higher accuracy in predictions, demonstrating stronger generalization abilities, facilitating superior design quality, and ultimately enhancing user satisfaction. these advancements not only validate the applicability of deep learning in ic design optimization but also pave the way for future advancements in addressing the multidimensional challenges inherent to circuit layout optimization.
[ "this study", "the intricate challenge", "circuit layout optimization", "integrated circuit (ic) design", "the primary goals", "an optimal balance", "power consumption", "performance metrics", "chip area", "ppa optimization", "the complexity", "this task", "a multidimensional problem", "multiple constraints", "the exploration", "advanced methodologies", "response", "these challenges", "our research", "deep learning technology", "an innovative strategy", "circuit layout optimization", "we", "convolutional neural networks", "cnns", "an optimized layout strategy", "a performance prediction model", "a system", "fault detection", "real-time monitoring", "these methodologies", "the capacity", "deep learning models", "high-dimensional data representations", "multiple constraints", "extensive case studies", "rigorous experimental validations", "the efficacy", "our proposed deep learning-driven approaches", "the results", "significant enhancements", "optimization efficiency", "an average power consumption reduction", "120% and latency decrease", "1.5%", "the predictive capabilities", "a reduction", "the average absolute error", "power predictions", "3%", "comparative analyses", "the superiority", "deep learning methodologies", "conventional techniques", "several dimensions", "our findings", "the potential", "deep learning", "higher accuracy", "predictions", "stronger generalization abilities", "superior design quality", "user satisfaction", "these advancements", "the applicability", "deep learning", "design optimization", "the way", "future advancements", "the multidimensional challenges", "layout optimization", "120%", "1.5%", "3%" ]
Alzheimer Disease Detection Using MRI: Deep Learning Review
[ "Pallavi Saikia", "Sanjib Kumar Kalita" ]
Deep learning for Alzheimer disease detection using MRI is an emerging area of research in medical image processing. With the advent of new technologies based on methods of Deep Learning, medical diagnosis of certain diseases has become possible. Alzheimer’s is a disease which till date has no cure but the progression of the disease can be slowed down or a person who might develop Alzheimer in future can be treated if early prediction of it is possible. Early prediction of the disease benefits medical professionals a lot for the correct diagnosis. Medical professionals label Alzheimer patients based on the progression of the disease as AD (Alzheimer’s), CN (cognitive impairment) and MCI (mild cognitive impairment). In literature many Deep Learning models are used for the early detection of Alzheimer’s disease. Though there are many image modalities, MRI images being non-invasive are considered best for these types of medical experiments. In the present study, we have studied the evolution of Alzheimer’s disease over time, research gaps, challenges towards building advanced models, possible recommendations to overcome those challenges and determining the best performance model. We have focussed on an exhaustive and comprehensive survey of very deep learning-based research papers on Alzheimer’s disease detection. The present work will benefit researchers by providing a clear direction for future scope in Alzheimer disease detection and analysis.
10.1007/s42979-024-02868-4
alzheimer disease detection using mri: deep learning review
deep learning for alzheimer disease detection using mri is an emerging area of research in medical image processing. with the advent of new technologies based on methods of deep learning, medical diagnosis of certain diseases has become possible. alzheimer’s is a disease which till date has no cure but the progression of the disease can be slowed down or a person who might develop alzheimer in future can be treated if early prediction of it is possible. early prediction of the disease benefits medical professionals a lot for the correct diagnosis. medical professionals label alzheimer patients based on the progression of the disease as ad (alzheimer’s), cn (cognitive impairment) and mci (mild cognitive impairment). in literature many deep learning models are used for the early detection of alzheimer’s disease. though there are many image modalities, mri images being non-invasive are considered best for these types of medical experiments. in the present study, we have studied the evolution of alzheimer’s disease over time, research gaps, challenges towards building advanced models, possible recommendations to overcome those challenges and determining the best performance model. we have focussed on an exhaustive and comprehensive survey of very deep learning-based research papers on alzheimer’s disease detection. the present work will benefit researchers by providing a clear direction for future scope in alzheimer disease detection and analysis.
[ "deep learning", "alzheimer disease detection", "mri", "an emerging area", "research", "medical image processing", "the advent", "new technologies", "methods", "deep learning", "medical diagnosis", "certain diseases", "alzheimer’s", "a disease", "which", "date", "no cure", "the progression", "the disease", "a person", "who", "alzheimer", "future", "early prediction", "it", "early prediction", "medical professionals", "the correct diagnosis", "medical professionals", "alzheimer patients", "the progression", "the disease", "ad", "alzheimer", "(mild cognitive impairment", "literature", "many deep learning models", "the early detection", "alzheimer’s disease", "many image modalities", "mri images", "these types", "medical experiments", "the present study", "we", "the evolution", "alzheimer’s disease", "time", "advanced models", "possible recommendations", "those challenges", "the best performance model", "we", "an exhaustive and comprehensive survey", "very deep learning-based research papers", "alzheimer’s disease detection", "the present work", "researchers", "a clear direction", "future scope", "alzheimer disease detection", "analysis", "mci" ]
Enhancing Cardiovascular Health Monitoring Through IoT and Deep Learning Technologies
[ "Huu-Hoa Nguyen", "Tri-Thuc Vo" ]
Monitoring cardiovascular conditions is crucial in healthcare due to their significant impact on overall wellness and their role in mitigating heart-related diseases. To address this pressing issue, the research community has introduced various methodologies, among which deep learning approaches have shown notable effectiveness. Despite this potential, creating effective deep learning models tailored to time-series health data remains challenging. These challenges include processing vast amounts of data from IoT devices, building and selecting optimal deep learning models with appropriate parameters, and designing and implementing reliable systems for cardiovascular health monitoring. In response, our research introduces an advanced cardiovascular health monitoring system that takes advantage of wearable IoT and deep learning technologies to enhance healthcare. It features a multi-layered architecture, where each layer serves a specific function and integrates closely with the others. This integration enhances the system’s overall functionality and reliability. The system efficiently integrates processes from health data collection through deep learning analysis to the delivery of timely health alerts. A critical feature of this system is the targeted deep learning model, selected from six potential algorithms based on experiments with data from IoT-enabled smartwatches. The selection process involves an in-depth evaluation of the models’ performance, leading to the choice of the most effective model for system implementation. Our results highlight the system’s effectiveness in monitoring cardiovascular health, underscoring its potential to enhance personalized healthcare, particularly for individuals with cardiovascular conditions, through advanced monitoring technologies.
10.1007/s42979-024-02962-7
enhancing cardiovascular health monitoring through iot and deep learning technologies
monitoring cardiovascular conditions is crucial in healthcare due to their significant impact on overall wellness and their role in mitigating heart-related diseases. to address this pressing issue, the research community has introduced various methodologies, among which deep learning approaches have shown notable effectiveness. despite this potential, creating effective deep learning models tailored to time-series health data remains challenging. these challenges include processing vast amounts of data from iot devices, building and selecting optimal deep learning models with appropriate parameters, and designing and implementing reliable systems for cardiovascular health monitoring. in response, our research introduces an advanced cardiovascular health monitoring system that takes advantage of wearable iot and deep learning technologies to enhance healthcare. it features a multi-layered architecture, where each layer serves a specific function and integrates closely with the others. this integration enhances the system’s overall functionality and reliability. the system efficiently integrates processes from health data collection through deep learning analysis to the delivery of timely health alerts. a critical feature of this system is the targeted deep learning model, selected from six potential algorithms based on experiments with data from iot-enabled smartwatches. the selection process involves an in-depth evaluation of the models’ performance, leading to the choice of the most effective model for system implementation. our results highlight the system’s effectiveness in monitoring cardiovascular health, underscoring its potential to enhance personalized healthcare, particularly for individuals with cardiovascular conditions, through advanced monitoring technologies.
[ "cardiovascular conditions", "healthcare", "their significant impact", "overall wellness", "their role", "heart-related diseases", "this pressing issue", "the research community", "various methodologies", "which", "deep learning approaches", "notable effectiveness", "this potential", "effective deep learning models", "time-series health data", "these challenges", "vast amounts", "data", "iot devices", "optimal deep learning models", "appropriate parameters", "reliable systems", "cardiovascular health monitoring", "response", "our research", "an advanced cardiovascular health monitoring system", "that", "advantage", "wearable iot and deep learning technologies", "healthcare", "it", "a multi-layered architecture", "each layer", "a specific function", "integrates", "the others", "this integration", "the system’s overall functionality", "reliability", "the system", "processes", "health data collection", "deep learning analysis", "the delivery", "timely health alerts", "a critical feature", "this system", "the targeted deep learning model", "six potential algorithms", "experiments", "data", "iot-enabled smartwatches", "the selection process", "-depth", "the models’ performance", "the choice", "the most effective model", "system implementation", "our results", "the system’s effectiveness", "cardiovascular health", "its potential", "personalized healthcare", "individuals", "cardiovascular conditions", "advanced monitoring technologies", "six" ]
Medical Image Analysis Through Deep Learning Techniques: A Comprehensive Survey
[ "K. Balasamy", "V. Seethalakshmi", "S. Suganyadevi" ]
Deep learning has been the subject of a significant amount of research interest in the development of novel algorithms, and deep learning algorithms for medical image processing have proven very effective in a number of medical imaging tasks, helping illness identification and diagnosis. The shortage of large-sized datasets that are also adequately annotated is a key barrier preventing the continued advancement of deep learning models used in medical image analysis, despite the effectiveness of these models. Over the course of the previous 5 years, a great number of studies have concentrated on finding solutions to this problem. In this work, we present a complete overview of the use of deep learning techniques in a variety of medical image analysis tasks by reviewing and summarizing the current research that has been conducted in this area. In particular, we place an emphasis on the most recent developments and contributions of state-of-the-art semi-supervised and unsupervised deep learning in medical image analysis. These advancements and contributions are summarized based on various application scenarios, which include image registration, segmentation, classification and detection. In addition to this, we explore the significant technological obstacles that lie ahead and provide some potential answers for the ongoing study.
10.1007/s11277-024-11428-1
medical image analysis through deep learning techniques: a comprehensive survey
deep learning has been the subject of a significant amount of research interest in the development of novel algorithms, and deep learning algorithms for medical image processing have proven very effective in a number of medical imaging tasks, helping illness identification and diagnosis. the shortage of large-sized datasets that are also adequately annotated is a key barrier preventing the continued advancement of deep learning models used in medical image analysis, despite the effectiveness of these models. over the course of the previous 5 years, a great number of studies have concentrated on finding solutions to this problem. in this work, we present a complete overview of the use of deep learning techniques in a variety of medical image analysis tasks by reviewing and summarizing the current research that has been conducted in this area. in particular, we place an emphasis on the most recent developments and contributions of state-of-the-art semi-supervised and unsupervised deep learning in medical image analysis. these advancements and contributions are summarized based on various application scenarios, which include image registration, segmentation, classification and detection. in addition to this, we explore the significant technological obstacles that lie ahead and provide some potential answers for the ongoing study.
[ "deep learning", "the subject", "a significant amount", "research interest", "the development", "novel algorithms", "deep learning algorithms", "medical image processing", "a number", "medical imaging tasks", "identification", "diagnosis", "the shortage", "large-sized datasets", "that", "a key barrier", "that", "the continued advancement", "deep learning models", "medical image analysis", "the effectiveness", "these models", "the course", "the previous 5 years", "a great number", "research", "solutions", "this problem", "this work", "we", "a complete overview", "the use", "deep learning techniques", "a variety", "medical image analysis tasks", "the current research", "that", "this area", "we", "an emphasis", "the most recent developments", "contributions", "state", "the-art", "deep learning", "medical image analysis", "these advancements", "contributions", "various application scenarios", "which", "image registration", "segmentation", "classification", "detection", "addition", "this", "we", "the significant technological obstacles", "that", "some potential answers", "the ongoing study", "the previous 5 years" ]
Deep-SEA: a deep learning based patient specific multi-modality post-cancer survival estimation architecture
[ "Ibtihaj Ahmad", "Saleem Riaz" ]
Cancer survival estimation is essential for post-cancer patient care, cancer management policy building, and the development of tailored treatment plans. Existing survival estimation methods use censored data; therefore, standard machine learning methods can not be used directly. Some censoring-based semi-machine learning methods have recently been proposed; however, these methods pose challenges. They are less patient-specific and non-linear. Furthermore, they rely on single-modality features. These drawbacks result in lower survival estimation performance. To address these issues, this work proposes a framework named Deep-SEA. Compared to the state-of-the-art, Deep-SEA uses multi-modality features, i.e., clinical, radiology, and histology features. These features are analyzed with statistical methods, and only significant features are selected. Then, the baseline hazard of the Cox model is estimated using Breslow’s estimator, which is optimized using stochastic gradient descent. Finally, the risk function, i.e., the parameters of our model, are estimated via an ANN with time as additional input. ANN makes it non-linear while training on the patient-specific features makes it more patient-specific than the state-of-the-art. We train and evaluate Deep-SEA on five datasets, including head, neck, and colorectal-liver cancer. We have achieved a Concordance-index (C-index) score of 0.7181, the highest compared to the state-of-the-art. Results and ablation studies on Deep-SEA suggest that the proposed method improves cancer survival estimation and can be applied to other estimations, such as cancer recurrence estimation.
10.1007/s10489-024-05794-3
deep-sea: a deep learning based patient specific multi-modality post-cancer survival estimation architecture
cancer survival estimation is essential for post-cancer patient care, cancer management policy building, and the development of tailored treatment plans. existing survival estimation methods use censored data; therefore, standard machine learning methods can not be used directly. some censoring-based semi-machine learning methods have recently been proposed; however, these methods pose challenges. they are less patient-specific and non-linear. furthermore, they rely on single-modality features. these drawbacks result in lower survival estimation performance. to address these issues, this work proposes a framework named deep-sea. compared to the state-of-the-art, deep-sea uses multi-modality features, i.e., clinical, radiology, and histology features. these features are analyzed with statistical methods, and only significant features are selected. then, the baseline hazard of the cox model is estimated using breslow’s estimator, which is optimized using stochastic gradient descent. finally, the risk function, i.e., the parameters of our model, are estimated via an ann with time as additional input. ann makes it non-linear while training on the patient-specific features makes it more patient-specific than the state-of-the-art. we train and evaluate deep-sea on five datasets, including head, neck, and colorectal-liver cancer. we have achieved a concordance-index (c-index) score of 0.7181, the highest compared to the state-of-the-art. results and ablation studies on deep-sea suggest that the proposed method improves cancer survival estimation and can be applied to other estimations, such as cancer recurrence estimation.
[ "cancer survival estimation", "post-cancer patient care", "cancer management policy building", "the development", "tailored treatment plans", "existing survival estimation methods", "censored data", "standard machine learning methods", "some censoring-based semi-machine learning methods", "these methods", "challenges", "they", "they", "single-modality features", "these drawbacks", "lower survival estimation performance", "these issues", "this work", "a framework", "deep-sea", "the-art", "deep-sea", "these features", "statistical methods", "only significant features", "the baseline hazard", "the cox model", "breslow’s estimator", "which", "stochastic gradient descent", "the risk function", "the parameters", "our model", "an ann", "time", "additional input", "ann", "it", "while training", "the patient-specific features", "it", "the state", "the-art", "we", "deep-sea", "five datasets", "head", "neck", "colorectal-liver cancer", "we", "c-index", "the state", "the-art", "results", "ablation studies", "deep-sea", "the proposed method", "cancer survival estimation", "other estimations", "cancer recurrence estimation", "ann", "five", "0.7181" ]
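The Concordance-index (C-index) reported for Deep-SEA can be illustrated with a minimal pairwise implementation. This is a generic sketch for right-censored survival data, not the authors' code; the tie-handling convention (tied risk scores count half) is an assumption of the sketch.

```python
def c_index(times, events, risks):
    """Concordance index for right-censored survival data.

    times:  observed follow-up times
    events: 1 if the event (death) was observed, 0 if censored
    risks:  predicted risk scores (higher risk = earlier expected event)
    A pair is comparable when the earlier time belongs to an observed event.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5   # tied risks count half
    return concordant / comparable

# Perfectly ranked example: higher risk -> earlier observed event
print(c_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.5, 0.2]))  # 1.0
```

A score of 0.5 corresponds to random ranking, 1.0 to perfect ranking, which is why 0.7181 indicates a clear improvement over chance.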
A survey on deep learning and machine learning techniques over histopathology image based Osteosarcoma Detection
[ "K. V. Deepak", "R. Bharanidharan" ]
Osteosarcoma is a common type of cancer that occurs in the cells and spreads to the bones. Osteosarcoma can develop due to genetic mutations, but most cases are not inherited. It often starts at the ends of long bones in the arms and legs during periods of rapid growth. Osteosarcoma can be diagnosed by Histopathological examination using microscopic images. In recent trends, pathological images are increasingly being examined by computer-intelligent approaches, also known as computational pathology. This field utilizes machine learning and deep learning algorithms to analyze digital pathology slides. Histopathology image based osteosarcoma detection is not the subject of any review papers. So, this study examines historical and current literature to present a concise review of Histopathology image based Osteosarcoma detection. This review discussed the types, clinical diagnosis, and modern and future treatment methods of Osteosarcoma. Also, a review of machine learning (ML) and deep learning (DL) approaches for histopathology image osteosarcoma detection is analyzed. Sophisticated methods and the future scope of osteosarcoma detection have also been discussed. Comparative analysis and results of reviewed papers were also analyzed in this review study. This article aims to generate fresh concepts for creating more potent treatment alternatives.
10.1007/s11042-024-19554-5
a survey on deep learning and machine learning techniques over histopathology image based osteosarcoma detection
osteosarcoma is a common type of cancer that occurs in the cells and spreads to the bones. osteosarcoma can develop due to genetic mutations, but most cases are not inherited. it often starts at the ends of long bones in the arms and legs during periods of rapid growth. osteosarcoma can be diagnosed by histopathological examination using microscopic images. in recent trends, pathological images are increasingly being examined by computer-intelligent approaches, also known as computational pathology. this field utilizes machine learning and deep learning algorithms to analyze digital pathology slides. histopathology image based osteosarcoma detection is not the subject of any review papers. so, this study examines historical and current literature to present a concise review of histopathology image based osteosarcoma detection. this review discussed the types, clinical diagnosis, and modern and future treatment methods of osteosarcoma. also, a review of machine learning (ml) and deep learning (dl) approaches for histopathology image osteosarcoma detection is analyzed. sophisticated methods and the future scope of osteosarcoma detection have also been discussed. comparative analysis and results of reviewed papers were also analyzed in this review study. this article aims to generate fresh concepts for creating more potent treatment alternatives.
[ "osteosarcoma", "a common type", "cancer", "that", "the cells", "the bones", "osteosarcoma", "genetic mutations", "most cases", "it", "the ends", "long bones", "the arms", "legs", "periods", "rapid growth", "osteosarcoma", "histopathological examination", "microscopic images", "recent trends", "pathological images", "computer-intelligent approaches", "computational pathology", "this field", "machine learning", "deep learning algorithms", "digital pathology slides", "osteosarcoma detection", "the subject", "any review papers", "this study", "historical and current literature", "a concise review", "osteosarcoma detection", "this review", "the types", "clinical diagnosis", "modern and future treatment methods", "osteosarcoma", "a review", "machine learning", "ml", "dl", "histopathology image osteosarcoma detection", "sophisticated methods", "the future scope", "osteosarcoma detection", "comparative analysis", "results", "reviewed papers", "this review study", "this article", "fresh concepts", "more potent treatment alternatives", "osteosarcoma", "osteosarcoma", "osteosarcoma", "osteosarcoma", "osteosarcoma", "osteosarcoma", "osteosarcoma", "osteosarcoma" ]
Prediction of crop yield in India using machine learning and hybrid deep learning models
[ "Krithikha Sanju Saravanan", "Velammal Bhagavathiappan" ]
Crop yield prediction is one of the burgeoning research areas in the agriculture domain. Crop yield forecasting models are developed to enhance productivity with improved decision-making strategies. A highly efficient crop yield forecasting model assists farmers in determining when, what, and how much to plant on their cultivable land. The main objective of the proposed research work is to build a highly efficacious crop yield prediction model based on the data available for the period of 21 years from 1997 to 2017, using machine learning and hybrid deep learning approaches. Two prediction models have been proposed in this research work to predict the crop yield accurately. The first is a machine learning-based model that uses CatBoost regression, with its hyperparameters tuned via the Optuna framework to improve yield prediction performance. The second is a hybrid deep learning model that uses a spatio-temporal attention-based convolutional neural network (STACNN) for extracting features and a bidirectional long short-term memory (BiLSTM) model for predicting the crop yield effectively. The proposed models are evaluated using error metrics and compared with the latest contemporary models. The evaluation results show that the proposed models significantly outperform all other existing models, and the CatBoost regression model performs slightly better than the STACNN-BiLSTM model, with an R-squared value of 0.99.
10.1007/s11600-024-01312-8
prediction of crop yield in india using machine learning and hybrid deep learning models
crop yield prediction is one of the burgeoning research areas in the agriculture domain. crop yield forecasting models are developed to enhance productivity with improved decision-making strategies. a highly efficient crop yield forecasting model assists farmers in determining when, what, and how much to plant on their cultivable land. the main objective of the proposed research work is to build a highly efficacious crop yield prediction model based on the data available for the period of 21 years from 1997 to 2017, using machine learning and hybrid deep learning approaches. two prediction models have been proposed in this research work to predict the crop yield accurately. the first is a machine learning-based model that uses catboost regression, with its hyperparameters tuned via the optuna framework to improve yield prediction performance. the second is a hybrid deep learning model that uses a spatio-temporal attention-based convolutional neural network (stacnn) for extracting features and a bidirectional long short-term memory (bilstm) model for predicting the crop yield effectively. the proposed models are evaluated using error metrics and compared with the latest contemporary models. the evaluation results show that the proposed models significantly outperform all other existing models, and the catboost regression model performs slightly better than the stacnn-bilstm model, with an r-squared value of 0.99.
[ "crop yield prediction", "the burgeoning research areas", "the agriculture domain", "the crop yield forecasting models", "productivity", "improved decision-making strategies", "the highly efficient crop yield forecasting model", "farmers", "their cultivable land", "the main objective", "the proposed research work", "a high efficacious crop yield prediction model", "the data", "the period", "21 years", "machine learning", "hybrid deep learning approaches", "two prediction models", "this research work", "the first model", "a machine learning-based model", "which", "the catboost regression model", "which", "the performance", "the yield prediction", "the optuna framework", "the second model", "the hybrid deep learning model", "which", "spatio-temporal attention-based convolutional neural network", "stacnn", "the features", "the bidirectional long short-term memory (bilstm) model", "the crop yield", "the proposed models", "the error metrics", "the latest contemporary models", "the evaluation results", "it", "the proposed models", "all other existing models", "catboost regression model", "the stacnn-bilstm model", "the r-squared value", "the period of 21 years from 1997 to 2017", "two", "first", "second", "stacnn", "stacnn", "0.99" ]
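The R-squared value used to compare the two models is the standard coefficient of determination; a minimal computation (not tied to the authors' pipeline):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)
    return 1.0 - ss_res / ss_tot

print(r_squared([3.0, 5.0, 7.0], [2.9, 5.1, 7.0]))  # about 0.9975
```

A value of 0.99, as reported for both models, means almost all of the variance in the observed yields is explained by the predictions.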
Learning sparse and smooth functions by deep Sigmoid nets
[ "Xia Liu" ]
To pursue the outperformance of deep nets in learning, we construct a deep net with three hidden layers and prove that, by implementing empirical risk minimization (ERM) on this deep net, the estimator can theoretically realize the optimal learning rates without the classical saturation problem. In other words, deepening the networks with only three hidden layers can overcome saturation without degrading the optimal learning rates. The obtained results underlie the success of deep nets and provide theoretical guidance for deep learning.
10.1007/s11766-023-4309-4
learning sparse and smooth functions by deep sigmoid nets
to pursue the outperformance of deep nets in learning, we construct a deep net with three hidden layers and prove that, by implementing empirical risk minimization (erm) on this deep net, the estimator can theoretically realize the optimal learning rates without the classical saturation problem. in other words, deepening the networks with only three hidden layers can overcome saturation without degrading the optimal learning rates. the obtained results underlie the success of deep nets and provide theoretical guidance for deep learning.
[ "the outperformance", "deep nets", "we", "a deep net", "three hidden layers", "the empirical risk minimization", "erm", "this deep net", "the estimator", "the optimal learning rates", "the classical saturation problem", "other words", "the networks", "only three hidden layers", "the saturation", "the optimal learning rates", "the obtained results", "the success", "deep nets", "a theoretical guidance", "deep learning", "three", "only three" ]
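The three-hidden-layer construction can be made concrete with a toy forward pass; the width, weight ranges, and linear output layer below are illustrative choices, not the paper's construction:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer followed by the sigmoid activation."""
    return [sigmoid(sum(w * v for w, v in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def deep_sigmoid_net(x, params):
    """Forward pass: three hidden sigmoid layers, then a linear output."""
    h = [x]
    for weights, biases in params["hidden"]:
        h = layer(h, weights, biases)
    w_out, b_out = params["out"]
    return sum(w * v for w, v in zip(w_out, h)) + b_out

# Randomly initialised toy parameters (width 4 is an arbitrary choice).
random.seed(0)
width, fan_in = 4, 1
hidden = []
for _ in range(3):  # three hidden layers, as in the construction
    weights = [[random.uniform(-1, 1) for _ in range(fan_in)] for _ in range(width)]
    hidden.append((weights, [0.0] * width))
    fan_in = width
params = {"hidden": hidden, "out": ([random.uniform(-1, 1) for _ in range(width)], 0.0)}

print(deep_sigmoid_net(0.5, params))
```

The theoretical point of the paper is about this architecture class: with sigmoid activations, three hidden layers already suffice to reach the optimal rates under ERM, without the saturation that limits shallower sigmoid nets.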
Parametric RSigELU: a new trainable activation function for deep learning
[ "Serhat Kiliçarslan", "Mete Celik" ]
Activation functions are used to extract meaningful relationships from real-world problems with the help of deep learning models. Thus, the development of activation functions, which affect deep learning models’ performances, is of great interest to researchers. In the literature, nonlinear activation functions are mostly preferred, since linear activation functions limit the learning performances of deep learning models. Non-linear activation functions can be classified as fixed-parameter and trainable activation functions based on whether the activation function parameter is fixed (i.e., user-given) or modified during the training process of deep learning models. The parameters of fixed-parameter activation functions must be specified before the deep learning model training process. However, determining appropriate function parameter values takes too much time and can cause slow convergence of the deep learning model. In contrast, trainable activation functions, whose parameters are updated in each iteration of the deep learning model training process, achieve faster and better convergence by obtaining the most suitable parameter values for the datasets and deep learning architectures. This study proposes parametric RSigELU (P+RSigELU) trainable activation functions, namely P+RSigELU Single (P+RSigELUS) and P+RSigELU Double (P+RSigELUD), to improve on the fixed-parameter RSigELU activation function. The performances of the proposed trainable activation functions were evaluated on the benchmark MNIST, CIFAR-10, and CIFAR-100 datasets. Results show that the proposed activation functions outperform the PReLU, PELU, ALISA, P+FELU, PSigmoid, and GELU activation functions found in the literature. The code for the activation functions is available at https://github.com/serhatklc/P-RsigELU-Activation-Function.
10.1007/s00521-024-09538-9
parametric rsigelu: a new trainable activation function for deep learning
activation functions are used to extract meaningful relationships from real-world problems with the help of deep learning models. thus, the development of activation functions, which affect deep learning models’ performances, is of great interest to researchers. in the literature, nonlinear activation functions are mostly preferred, since linear activation functions limit the learning performances of deep learning models. non-linear activation functions can be classified as fixed-parameter and trainable activation functions based on whether the activation function parameter is fixed (i.e., user-given) or modified during the training process of deep learning models. the parameters of fixed-parameter activation functions must be specified before the deep learning model training process. however, determining appropriate function parameter values takes too much time and can cause slow convergence of the deep learning model. in contrast, trainable activation functions, whose parameters are updated in each iteration of the deep learning model training process, achieve faster and better convergence by obtaining the most suitable parameter values for the datasets and deep learning architectures. this study proposes parametric rsigelu (p+rsigelu) trainable activation functions, namely p+rsigelu single (p+rsigelus) and p+rsigelu double (p+rsigelud), to improve on the fixed-parameter rsigelu activation function. the performances of the proposed trainable activation functions were evaluated on the benchmark mnist, cifar-10, and cifar-100 datasets. results show that the proposed activation functions outperform the prelu, pelu, alisa, p+felu, psigmoid, and gelu activation functions found in the literature. the code for the activation functions is available at https://github.com/serhatklc/p-rsigelu-activation-function.
[ "activation functions", "meaningful relationships", "real-world problems", "the help", "deep learning models", "the development", "activation functions", "which", "deep learning models’ performances", "great interest", "researchers", "the literature", "nonlinear activation functions", "linear activation functions", "the learning performances", "the deep learning models", "non-linear activation functions", "fixed-parameter and trainable activation functions", "the activation function parameter", "the training process", "deep learning models", "the parameters", "the fixed-parameter activation functions", "the deep learning model training process", "it", "too much time", "appropriate function parameter values", "the slow convergence", "the deep learning model", "contrast", "trainable activation functions", "whose parameters", "each iteration", "deep learning models training process", "faster and better convergence", "the most suitable parameter values", "the datasets", "deep learning architectures", "this study", "parametric rsigelu (p+rsigelu) trainable activation functions", "p+rsigelu single (p+rsigelus", "(p+rsigelud", "the performance", "fixed-parameter activation function", "rsigelu", "the performances", "the proposed trainable activation functions", "the benchmark datasets", "mnist", "cifar-10", "cifar-100 datasets", "results", "the proposed activation functions", "prelu", "pelu", "alisa", "p+felu", "psigmoid", "gelu activation functions", "the literature", "the codes", "the activation function", "linear", "p+rsigelu", "cifar-10", "prelu", "pelu", "alisa, p+felu" ]
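The exact piecewise definition of P+RSigELU is given in the paper itself; as a generic illustration of the fixed-versus-trainable distinction, the sketch below updates the slope parameter of a PReLU (a stand-in activation, not RSigELU) with one gradient step on a squared error:

```python
# PReLU stands in for a trainable activation here; the paper's own
# P+RSigELU has a different (piecewise) definition.
def prelu(x, alpha):
    """Parametric ReLU: identity for x >= 0, alpha * x otherwise."""
    return x if x >= 0 else alpha * x

def prelu_grad_alpha(x, alpha):
    """Derivative of prelu with respect to its trainable parameter alpha."""
    return 0.0 if x >= 0 else x

# One illustrative update step on a squared error: alpha is learned from
# the data, unlike a fixed-parameter activation chosen by hand.
alpha, lr = 0.25, 0.1            # hypothetical initial value and step size
x, target = -2.0, -0.1           # hypothetical input and desired output
y = prelu(x, alpha)              # -0.5
grad_loss_y = 2.0 * (y - target)
alpha -= lr * grad_loss_y * prelu_grad_alpha(x, alpha)
print(alpha)                     # about 0.09; prelu(x, alpha) moves toward target
```

This per-iteration update is what distinguishes trainable activations from fixed-parameter ones, whose parameter would have to be hand-tuned before training.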
Deep-learning-enabled antibiotic discovery through molecular de-extinction
[ "Fangping Wan", "Marcelo D. T. Torres", "Jacqueline Peng", "Cesar de la Fuente-Nunez" ]
Molecular de-extinction aims at resurrecting molecules to solve antibiotic resistance and other present-day biological and biomedical problems. Here we show that deep learning can be used to mine the proteomes of all available extinct organisms for the discovery of antibiotic peptides. We trained ensembles of deep-learning models consisting of a peptide-sequence encoder coupled with neural networks for the prediction of antimicrobial activity and used it to mine 10,311,899 peptides. The models predicted 37,176 sequences with broad-spectrum antimicrobial activity, 11,035 of which were not found in extant organisms. We synthesized 69 peptides and experimentally confirmed their activity against bacterial pathogens. Most peptides killed bacteria by depolarizing their cytoplasmic membrane, contrary to known antimicrobial peptides, which tend to target the outer membrane. Notably, lead compounds (including mammuthusin-2 from the woolly mammoth, elephasin-2 from the straight-tusked elephant, hydrodamin-1 from the ancient sea cow, mylodonin-2 from the giant sloth and megalocerin-1 from the extinct giant elk) showed anti-infective activity in mice with skin abscess or thigh infections. Molecular de-extinction aided by deep learning may accelerate the discovery of therapeutic molecules.
10.1038/s41551-024-01201-x
deep-learning-enabled antibiotic discovery through molecular de-extinction
molecular de-extinction aims at resurrecting molecules to solve antibiotic resistance and other present-day biological and biomedical problems. here we show that deep learning can be used to mine the proteomes of all available extinct organisms for the discovery of antibiotic peptides. we trained ensembles of deep-learning models consisting of a peptide-sequence encoder coupled with neural networks for the prediction of antimicrobial activity and used it to mine 10,311,899 peptides. the models predicted 37,176 sequences with broad-spectrum antimicrobial activity, 11,035 of which were not found in extant organisms. we synthesized 69 peptides and experimentally confirmed their activity against bacterial pathogens. most peptides killed bacteria by depolarizing their cytoplasmic membrane, contrary to known antimicrobial peptides, which tend to target the outer membrane. notably, lead compounds (including mammuthusin-2 from the woolly mammoth, elephasin-2 from the straight-tusked elephant, hydrodamin-1 from the ancient sea cow, mylodonin-2 from the giant sloth and megalocerin-1 from the extinct giant elk) showed anti-infective activity in mice with skin abscess or thigh infections. molecular de-extinction aided by deep learning may accelerate the discovery of therapeutic molecules.
[ "-", "extinction", "molecules", "antibiotic resistance", "other present-day biological and biomedical problems", "we", "deep learning", "the proteomes", "all available extinct organisms", "the discovery", "antibiotic peptides", "we", "ensembles", "deep-learning models", "a peptide-sequence encoder", "neural networks", "the prediction", "antimicrobial activity", "it", "mine", "10,311,899 peptides", "the models", "37,176 sequences", "broad-spectrum antimicrobial activity", "which", "extant organisms", "we", "69 peptides", "their activity", "bacterial pathogens", "most peptides", "bacteria", "their cytoplasmic membrane", "known antimicrobial peptides", "which", "the outer membrane", "lead compounds", "the woolly mammoth", "the straight-tusked elephant", "hydrodamin-1", "the ancient sea cow", "the giant sloth", "megalocerin-1", "the extinct giant", "elk", "anti-infective activity", "mice", "skin abscess", "thigh infections", "molecular de-extinction", "deep learning", "the discovery", "therapeutic molecules", "10,311,899", "37,176", "11,035", "69" ]
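A peptide-sequence encoder of the kind described typically begins with a numeric encoding of amino acid sequences; a minimal one-hot sketch (illustrative only, not the authors' encoder; the fixed length and the example peptide are arbitrary choices):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_encode(peptide, max_len=10):
    """Encode a peptide as a (max_len x 20) one-hot matrix, zero-padded."""
    matrix = [[0] * len(AMINO_ACIDS) for _ in range(max_len)]
    for pos, aa in enumerate(peptide[:max_len]):
        matrix[pos][AA_INDEX[aa]] = 1
    return matrix

encoded = one_hot_encode("KWKLFKK")    # an arbitrary short peptide
print(len(encoded), sum(map(sum, encoded)))  # 10 7
```

A matrix of this shape is a typical input for the downstream neural networks that score antimicrobial activity over millions of candidate sequences.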
Investigations on machine learning, deep learning, and longitudinal regression methods for global greenhouse gases predictions
[ "S. D. Yazd", "N. Gharib", "J. F. Derakhshandeh" ]
Combating climate change is one of the key topics and concerns that our community is currently facing. Over the past few decades, greenhouse gas emissions have gradually increased, and researchers have attempted to find a permanent solution to this challenge. In this paper, different machine learning and deep learning models are applied to evaluate their effectiveness and accuracy in predicting greenhouse gas emissions. To increase the accuracy of the assessment, data from 101 countries over a period of 31 years (1991–2021), taken from official World Bank sources, are considered. In this study, therefore, a range of metrics is analyzed for each model, including Mean Squared Error, Root Mean Squared Error, Mean Absolute Error, p value, and the correlation coefficient. The results demonstrate that machine learning models typically outperform the deep learning models, led by the polynomial support vector regression model. Besides, the statistical findings of the longitudinal regression analysis reveal that increasing cereal yield and permanent cropland areas significantly increases greenhouse gas emissions (p value = 0.000 and p value = 0.06, respectively); however, increasing renewable energy consumption and forest areas leads to a decrease in greenhouse gas emissions (p value = 0.000 and p value = 0.07, respectively).
10.1007/s13762-024-06014-8
investigations on machine learning, deep learning, and longitudinal regression methods for global greenhouse gases predictions
combating climate change is one of the key topics and concerns that our community is currently facing. over the past few decades, greenhouse gas emissions have gradually increased, and researchers have attempted to find a permanent solution to this challenge. in this paper, different machine learning and deep learning models are applied to evaluate their effectiveness and accuracy in predicting greenhouse gas emissions. to increase the accuracy of the assessment, data from 101 countries over a period of 31 years (1991–2021), taken from official world bank sources, are considered. in this study, therefore, a range of metrics is analyzed for each model, including mean squared error, root mean squared error, mean absolute error, p value, and the correlation coefficient. the results demonstrate that machine learning models typically outperform the deep learning models, led by the polynomial support vector regression model. besides, the statistical findings of the longitudinal regression analysis reveal that increasing cereal yield and permanent cropland areas significantly increases greenhouse gas emissions (p value = 0.000 and p value = 0.06, respectively); however, increasing renewable energy consumption and forest areas leads to a decrease in greenhouse gas emissions (p value = 0.000 and p value = 0.07, respectively).
[ "climate change", "the key topics", "concerns", "our community", "these days", "greenhouse gases emissions", "the researchers", "a permanent solution", "this challenge", "this paper", "different methods", "machine learning", "deep learning models", "their effectiveness", "accuracy", "greenhouse gases emissions", "the accuracy", "the assessment", "the data", "101 countries", "a period", "31 years", "the official world bank sources", "this study", "a range", "matrices", "mean squared error", "root mean squared error", "absolute error", "p value", "correlation", "each model", "the results", "machine learning models", "the deep learning models", "the support vector regression polynomial model", "the statistical findings", "longitudinal regression analysis", "cereal yield", "the greenhouse gas emissions", "(p value", "renewable energy consumption", "forest areas", "greenhouse gas emissions", "p value", "(p value", "one", "these days", "a few decades ago", "101", "a period of 31 years", "0.000", "0.06", "0.000", "0.07" ]
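The error metrics named in this abstract (MSE, RMSE, MAE) have direct definitions; a minimal reference implementation, unrelated to the authors' specific pipeline:

```python
import math

def mse(y_true, y_pred):
    """Mean Squared Error."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root Mean Squared Error: square root of MSE, in the data's units."""
    return math.sqrt(mse(y_true, y_pred))

def mae(y_true, y_pred):
    """Mean Absolute Error: robust to outliers relative to MSE."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

y_true, y_pred = [1.0, 2.0, 3.0], [1.5, 2.0, 2.0]
print(mse(y_true, y_pred), rmse(y_true, y_pred), mae(y_true, y_pred))
```

Because MSE squares the residuals, it penalises large prediction errors more heavily than MAE, which is why the two can rank models differently.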
Deep ensemble transfer learning framework for COVID-19 Arabic text identification via deep active learning and text data augmentation
[ "Abdullah Y. Muaad", "Hanumanthappa Jayappa Davanagere", "Jamil Hussain", "Mugahed A. Al-antari" ]
Since the declaration of COVID-19 as an epidemic by the World Health Organization in September 2019, the task of monitoring and managing the spread of misinformation related to COVID-19 on social media has become increasingly challenging. Particularly, when it comes to Arabic text recognition, tracking and identifying misleading information regarding COVID-19 on social media platforms presents significant difficulties. The detection of such text is crucial in order to safeguard our communities from the dissemination of false rumors and to establish a reliable framework for text detection. This research paper introduces a novel deep ensemble learning framework that aims to recognize ten distinct categories of Arabic text related to COVID-19, including rumors, restrictions, celebrity news, informational news, plans, requests, advice, personal anecdotes, and others. To build our framework, we leverage a dataset called ArCOVID-19Vac (Dataset1), which consists of 10,000 text samples. In addition, the deep active learning (DAL) technique is employed to automatically annotate new text samples acquired for Dataset2. To further expand our datasets, we employ back translation and random insertion augmentation strategies, resulting in Datasets3 and Datasets4, each containing 24,000 text samples. By merging the original and augmented datasets, we create Dataset5, which comprises a total of 39,000 text samples. The final text prediction is carried out using three transformer-based BERT models through ensemble transfer learning. Our proposed ensemble framework is evaluated using each dataset independently, and it demonstrates promising results, particularly when utilizing the largest dataset (Dataset5), achieving an accuracy of 93%, precision of 92%, recall of 93%, and an F1-score of 91%. Furthermore, our proposed model exhibits performance improvements of 27%, 18%, 2%, and 1% when utilizing Datasets2, 3, 4, and 5, respectively.
The comprehensive experimental results demonstrate that our ensemble framework outperforms other state-of-the-art AI-based models. The encouraging performance of our framework in accurately identifying Arabic text has the potential to enhance decision-making processes regarding the identification of misleading information and to facilitate the development of strategies to combat such issues in the future.
10.1007/s11042-024-18487-3
deep ensemble transfer learning framework for covid-19 arabic text identification via deep active learning and text data augmentation
since the declaration of covid-19 as an epidemic by the world health organization in september 2019, the task of monitoring and managing the spread of misinformation related to covid-19 on social media has become increasingly challenging. particularly, when it comes to arabic text recognition, tracking and identifying misleading information regarding covid-19 on social media platforms presents significant difficulties. the detection of such text is crucial in order to safeguard our communities from the dissemination of false rumors and to establish a reliable framework for text detection. this research paper introduces a novel deep ensemble learning framework that aims to recognize ten distinct categories of arabic text related to covid-19, including rumors, restrictions, celebrity news, informational news, plans, requests, advice, personal anecdotes, and others. to build our framework, we leverage a dataset called arcovid-19vac (dataset1), which consists of 10,000 text samples. in addition, the dal technique is employed to automatically annotate new text samples acquired for dataset2. to further expand our datasets, we employ back translation and random insertion augmentation strategies, resulting in datasets3 and datasets4, each containing 24,000 text samples. by merging the original and augmented datasets, we create dataset5, which comprises a total of 39,000 text samples. the final text prediction is carried out using three transformer-based bert models through ensemble transfer learning. our proposed ensemble framework is evaluated using each dataset independently, and it demonstrates promising results, particularly when utilizing the largest dataset (dataset5), achieving an accuracy of 93%, precision of 92%, recall of 93%, and an f1-score of 91%. furthermore, our proposed model exhibits performance improvements of 27%, 18%, 2%, and 1% when utilizing datasets2, 3, 4, and 5, respectively. 
the comprehensive experimental results demonstrate that our ensemble framework outperforms other state-of-the-art ai-based models. the encouraging performance of our framework in accurately identifying arabic text has the potential to enhance decision-making processes regarding the identification of misleading information and to facilitate the development of strategies to combat such issues in the future.
[ "the declaration", "covid-19", "an epidemic", "the world health organization", "september", "the task", "monitoring", "the spread", "misinformation", "covid-19", "social media", "it", "arabic text recognition", "tracking", "misleading information", "covid-19", "social media platforms", "significant difficulties", "the detection", "such text", "order", "our communities", "the dissemination", "false rumors", "a reliable framework", "text detection", "this research paper", "a novel deep ensemble learning framework", "that", "ten distinct categories", "arabic text", "covid-19", "rumors", "restrictions", "celebrity news", "informational news", "plans", "requests", "advice", "personal anecdotes", "others", "our framework", "we", "a dataset", "dataset1", "which", "10,000 text samples", "addition", "the dal technique", "new text samples", "dataset2", "our datasets", "we", "back translation and random insertion augmentation strategies", "datasets3", "datasets4", "each", "24,000 text samples", "the original and augmented datasets", "we", "dataset5", "which", "a total", "39,000 text samples", "the final text prediction", "three transformer-based bert models", "ensemble transfer learning", "our proposed ensemble framework", "each dataset", "it", "promising results", "the largest dataset", "dataset5", "an accuracy", "93%", "precision", "92%", "recall", "93%", "an f1-score", "91%", "our proposed model", "performance improvements", "27%", "18%", "2%", "1%", "datasets2", "the comprehensive experimental results", "our ensemble framework", "the-art", "the encouraging performance", "our framework", "arabic text", "the potential", "decision-making processes", "the identification", "information", "the development", "strategies", "such issues", "the future", "covid-19", "the world health organization", "september 2019", "covid-19", "arabic", "recognition", "covid-19", "ten", "arabic", "covid-19", "dataset1", "10,000", "dataset2", "datasets3", "24,000", "39,000", "three", "93%", "92%", 
"93%", "91%", "27%", "18%", "2%", "1%", "datasets2", "3", "4", "5", "arabic" ]
A comprehensive review of image denoising in deep learning
[ "Rusul Sabah Jebur", "Mohd Hazli Bin Mohamed Zabil", "Dalal Adulmohsin Hammood", "Lim Kok Cheng" ]
Deep learning has gained significant interest in image denoising, but there are notable distinctions in the types of deep learning methods used. Discriminative learning is suitable for handling Gaussian noise, while optimization models are effective in estimating real noise. However, there is limited research that summarizes the different deep learning techniques for image denoising. This paper conducts a comprehensive review of techniques and methods used for image denoising and identifies challenges associated with existing approaches. In this paper, a comparative study of deep learning techniques for image denoising is offered. The study conducted a comprehensive review of 68 papers on image denoising published between 2018 and 2023, providing a detailed analysis of the field’s progress and methodologies over a period of 5 years. Through its literature review, the paper provides a comprehensive summary of image denoising in deep learning, including machine learning methods for image denoising, CNNs for image denoising, additive white noisy-image denoising, real noisy image denoising, blind denoising, hybrid noisy images, state-of-the-art methods for image denoising with deep learning, salt and pepper noise, and non-linear filters for digital color images. The main objective of this paper is to provide a comprehensive overview of various approaches used for image denoising, each of which has been explored and developed based on individual research studies. The paper aims to discuss these approaches in a systematic and organized manner, comparing their strengths and weaknesses to provide insights for future research in the field.
10.1007/s11042-023-17468-2
a comprehensive review of image denoising in deep learning
deep learning has gained significant interest in image denoising, but there are notable distinctions in the types of deep learning methods used. discriminative learning is suitable for handling gaussian noise, while optimization models are effective in estimating real noise. however, there is limited research that summarizes the different deep learning techniques for image denoising. this paper conducts a comprehensive review of techniques and methods used for image denoising and identifies challenges associated with existing approaches. in this paper, a comparative study of deep learning techniques for image denoising is offered. the study conducted a comprehensive review of 68 papers on image denoising published between 2018 and 2023, providing a detailed analysis of the field’s progress and methodologies over a period of 5 years. through its literature review, the paper provides a comprehensive summary of image denoising in deep learning, including machine learning methods for image denoising, cnns for image denoising, additive white noisy-image denoising, real noisy image denoising, blind denoising, hybrid noisy images, state-of-the-art methods for image denoising with deep learning, salt and pepper noise, and non-linear filters for digital color images. the main objective of this paper is to provide a comprehensive overview of various approaches used for image denoising, each of which has been explored and developed based on individual research studies. the paper aims to discuss these approaches in a systematic and organized manner, comparing their strengths and weaknesses to provide insights for future research in the field.
[ "deep learning", "significant interest", "image denoising", "notable distinctions", "the types", "deep learning methods", "discriminative learning", "gaussian noise", "optimization models", "real noise", "limited research", "that", "the different deep learning techniques", "image denoising", "this paper", "a comprehensive review", "techniques", "methods", "image denoising", "challenges", "existing approaches", "this paper", "a comparative study", "deep techniques", "image denoising", "the study", "a comprehensive review", "68 papers", "image denoising", "a detailed analysis", "the field’s progress", "methodologies", "a period", "5 years", "its literature review", "the paper", "a comprehensive summary", "image", "deep learning", "machine learning methods", "image denoising", "cnns", "image denoising", "additive white noisy-image denoising", "real noisy image denoising", "blind denoising", "hybrid noisy images", "state-", "the-art", "methods", "image", "deep learning", "salt and pepper noise", "non-linear filters", "digital color images", "the main objective", "this paper", "a comprehensive overview", "various approaches", "image denoising", "each", "which", "individual research studies", "the paper", "these approaches", "a systematic and organized manner", "their strengths", "weaknesses", "insights", "future research", "the field", "68", "between 2018 and 2023", "5 years" ]
Fisheye freshness detection using common deep learning algorithms and machine learning methods with a developed mobile application
[ "Muslume Beyza Yildiz", "Elham Tahsin Yasin", "Murat Koklu" ]
Fish is commonly ingested as a source of protein and essential nutrients for humans. To fully benefit from the proteins and substances in fish, it is crucial to ensure its freshness. If fish is stored for an extended period, its freshness deteriorates. Determining the freshness of fish can be done by examining its eyes, smell, skin, and gills. In this study, artificial intelligence techniques are employed to assess fish freshness. The author’s objective is to evaluate the freshness of fish by analyzing its eye characteristics. To achieve this, we have developed a combination of deep and machine learning models that accurately classify the freshness of fish. Furthermore, an application that utilizes both deep learning and machine learning to instantly detect the freshness of any given fish sample was created. Two deep learning algorithms (SqueezeNet and VGG19) were implemented to extract features from image data. Additionally, five machine learning models were applied to classify the freshness levels of fish samples. These models include k-NN, RF, SVM, LR, and ANN. Based on the results, it can be inferred that employing the VGG19 model for feature selection in conjunction with an Artificial Neural Network (ANN) for classification yields the most favorable success rate of 77.3% for the FFE dataset.
10.1007/s00217-024-04493-0
fisheye freshness detection using common deep learning algorithms and machine learning methods with a developed mobile application
fish is commonly ingested as a source of protein and essential nutrients for humans. to fully benefit from the proteins and substances in fish, it is crucial to ensure its freshness. if fish is stored for an extended period, its freshness deteriorates. determining the freshness of fish can be done by examining its eyes, smell, skin, and gills. in this study, artificial intelligence techniques are employed to assess fish freshness. the author’s objective is to evaluate the freshness of fish by analyzing its eye characteristics. to achieve this, we have developed a combination of deep and machine learning models that accurately classify the freshness of fish. furthermore, an application that utilizes both deep learning and machine learning to instantly detect the freshness of any given fish sample was created. two deep learning algorithms (squeezenet and vgg19) were implemented to extract features from image data. additionally, five machine learning models were applied to classify the freshness levels of fish samples. these models include k-nn, rf, svm, lr, and ann. based on the results, it can be inferred that employing the vgg19 model for feature selection in conjunction with an artificial neural network (ann) for classification yields the most favorable success rate of 77.3% for the ffe dataset.
[ "fish", "a source", "protein", "essential nutrients", "humans", "the proteins", "substances", "fish", "it", "its freshness", "fish", "an extended period", "its freshness", "the freshness", "fish", "its eyes", "smell", "skin", "gills", "this study", "artificial intelligence techniques", "fish freshness", "the author’s objective", "the freshness", "fish", "its eye characteristics", "this", "we", "a combination", "deep and machine learning models", "that", "the freshness", "fish", "an application", "that", "both deep learning", "machine learning", "the freshness", "any given fish sample", "two deep learning algorithms", "squeezenet", "vgg19", "features", "image data", "five machine learning models", "the freshness levels", "fish samples", "machine learning models", "k", ", rf, svm", "lr", "ann", "the results", "it", "the vgg19 model", "feature selection", "conjunction", "an artificial neural network", "ann", "classification yields", "the most favorable success rate", "77.3%", "the ffe dataset", "two", "five", "77.3%" ]
Deep learning for transesophageal echocardiography view classification
[ "Kirsten R. Steffner", "Matthew Christensen", "George Gill", "Michael Bowdish", "Justin Rhee", "Abirami Kumaresan", "Bryan He", "James Zou", "David Ouyang" ]
Transesophageal echocardiography (TEE) imaging is a vital tool used in the evaluation of complex cardiac pathology and the management of cardiac surgery patients. A key limitation to the application of deep learning strategies to intraoperative and intraprocedural TEE data is the complexity and unstructured nature of these images. In the present study, we developed a deep learning-based, multi-category TEE view classification model that can be used to add structure to intraoperative and intraprocedural TEE imaging data. More specifically, we trained a convolutional neural network (CNN) to predict standardized TEE views using labeled intraoperative and intraprocedural TEE videos from Cedars-Sinai Medical Center (CSMC). We externally validated our model on intraoperative TEE videos from Stanford University Medical Center (SUMC). Accuracy of our model was high across all labeled views. The highest performance was achieved for the Trans-Gastric Left Ventricular Short Axis View (area under the receiver operating curve [AUC] = 0.971 at CSMC, 0.957 at SUMC), the Mid-Esophageal Long Axis View (AUC = 0.954 at CSMC, 0.905 at SUMC), the Mid-Esophageal Aortic Valve Short Axis View (AUC = 0.946 at CSMC, 0.898 at SUMC), and the Mid-Esophageal 4-Chamber View (AUC = 0.939 at CSMC, 0.902 at SUMC). Ultimately, we demonstrate that our deep learning model can accurately classify standardized TEE views, which will facilitate further downstream deep learning analyses for intraoperative and intraprocedural TEE imaging.
10.1038/s41598-023-50735-8
deep learning for transesophageal echocardiography view classification
transesophageal echocardiography (tee) imaging is a vital tool used in the evaluation of complex cardiac pathology and the management of cardiac surgery patients. a key limitation to the application of deep learning strategies to intraoperative and intraprocedural tee data is the complexity and unstructured nature of these images. in the present study, we developed a deep learning-based, multi-category tee view classification model that can be used to add structure to intraoperative and intraprocedural tee imaging data. more specifically, we trained a convolutional neural network (cnn) to predict standardized tee views using labeled intraoperative and intraprocedural tee videos from cedars-sinai medical center (csmc). we externally validated our model on intraoperative tee videos from stanford university medical center (sumc). accuracy of our model was high across all labeled views. the highest performance was achieved for the trans-gastric left ventricular short axis view (area under the receiver operating curve [auc] = 0.971 at csmc, 0.957 at sumc), the mid-esophageal long axis view (auc = 0.954 at csmc, 0.905 at sumc), the mid-esophageal aortic valve short axis view (auc = 0.946 at csmc, 0.898 at sumc), and the mid-esophageal 4-chamber view (auc = 0.939 at csmc, 0.902 at sumc). ultimately, we demonstrate that our deep learning model can accurately classify standardized tee views, which will facilitate further downstream deep learning analyses for intraoperative and intraprocedural tee imaging.
[ "transesophageal echocardiography", "(tee) imaging", "a vital tool", "the evaluation", "complex cardiac pathology", "the management", "cardiac surgery patients", "a key limitation", "the application", "deep learning strategies", "to intraoperative and intraprocedural tee data", "the complexity", "unstructured nature", "these images", "the present study", "we", "a deep learning-based, multi-category tee view classification model", "that", "structure", "intraoperative and intraprocedural tee imaging data", "we", "a convolutional neural network", "cnn", "standardized tee views", "labeled intraoperative and intraprocedural tee videos", "cedars-sinai medical center", "csmc", "we", "our model", "intraoperative tee videos", "stanford university medical center", "sumc", "accuracy", "our model", "all labeled views", "the highest performance", "ventricular short axis view", "area", "the receiver operating curve", "csmc", "sumc", ", the mid-esophageal long axis view", "auc =", "csmc", "sumc", "the mid-esophageal aortic valve short axis view", "csmc", "sumc", "the mid-esophageal 4-chamber view", "csmc", "sumc", "we", "our deep learning model", "standardized tee views", "which", "further downstream deep learning analyses", "intraoperative and intraprocedural tee imaging", "cnn", "stanford university medical center", "0.971", "0.957", "0.954", "0.905", "0.946", "0.898", "4", "0.939", "0.902" ]
Automated optical inspection based on synthetic mechanisms combining deep learning and machine learning
[ "Chung-Ming Lo", "Ting-Yi Lin" ]
The quality inspection of products before delivery plays a critical role in ensuring manufacturing quality. Quick and accurate inspection of samples is realized by highly automated inspection based on pattern recognition in smart manufacturing. Conventional ensemble methods have been demonstrated to be effective for defect detection. This study further proposed synthetic mechanisms based on using various features and learning classifiers. A database of 6000 sample images of printed circuit board (PCB) connectors collected from factories was compiled. A novel confidence synthesis mechanism was proposed to prescreen images using deep learning features. Spatially connected texture features were then used to reclassify images with low reliabilities. The synthetic mechanism was found to outperform a single classifier. In particular, the highest improvement in accuracy (from 96.00 to 97.83%) was obtained using the confidence-based synthesis. The synthetic mechanism can be used to achieve high accuracy in defect detection and make automation in smart manufacturing more practicable.
10.1007/s10845-024-02474-4
automated optical inspection based on synthetic mechanisms combining deep learning and machine learning
the quality inspection of products before delivery plays a critical role in ensuring manufacturing quality. quick and accurate inspection of samples is realized by highly automated inspection based on pattern recognition in smart manufacturing. conventional ensemble methods have been demonstrated to be effective for defect detection. this study further proposed synthetic mechanisms based on using various features and learning classifiers. a database of 6000 sample images of printed circuit board (pcb) connectors collected from factories was compiled. a novel confidence synthesis mechanism was proposed to prescreen images using deep learning features. spatially connected texture features were then used to reclassify images with low reliabilities. the synthetic mechanism was found to outperform a single classifier. in particular, the highest improvement in accuracy (from 96.00 to 97.83%) was obtained using the confidence-based synthesis. the synthetic mechanism can be used to achieve high accuracy in defect detection and make automation in smart manufacturing more practicable.
[ "the quality inspection", "products", "delivery", "a critical role", "manufacturing quality", "quick and accurate inspection", "samples", "highly automated inspection", "pattern recognition", "smart manufacturing", "conventional ensemble methods", "defect detection", "this study", "further proposed synthetic mechanisms", "various features", "learning classifiers", "a database", "6000 sample images", "printed circuit board (pcb) connectors", "factories", "a novel confidence synthesis mechanism", "images", "deep learning features", "spatially connected texture features", "images", "low reliabilities", "the synthetic mechanism", "a single classifier", "the highest improvement", "accuracy", "97.83%", "the confidence-based synthesis", "the synthetic mechanism", "high accuracy", "defect detection", "automation", "smart manufacturing", "6000", "96.00", "97.83%" ]