Dataset Viewer

title (stringlengths 31–206) | authors (sequencelengths 1–85) | abstract (stringlengths 428–3.21k) | doi (stringlengths 21–31) | cleaned_title (stringlengths 31–206) | cleaned_abstract (stringlengths 428–3.21k) | key_phrases (sequencelengths 19–150)
---|---|---|---|---|---|---
A comparative analysis of deep learning and deep transfer learning approaches for identification of rice varieties | [
"Komal Sharma",
"Ganesh Kumar Sethi",
"Rajesh Kumar Bawa"
] | Rice is an essential staple food for human nutrition. Rice varieties worldwide have been planted, imported, and exported. During production and trading, different types of rice can be mixed. Due to rice impurities, rice importers and exporters may lose trust in each other, requiring the development of a rice variety identification system. India is a significant player in the global rice market, and this extensive study delves into the importance of rice there. The study uses state-of-the-art deep learning and TL classifiers to tackle the problems of rice variety detection. An enormous dataset consisting of more than 600,000 rice photographs divided into 22 different classes is presented in the study to improve classification accuracy. With a training accuracy of 96% and a testing accuracy of 80.5%, ResNet50 stands well among other deep learning models compared by the authors. These models include CNN, Deep CNN, AlexNet2, Xception, Inception V3, DenseNet121, and ResNet50. Finding the best classifiers to identify varieties accurately is crucial, and this work highlights their possible uses in rice seed production. This paper lays the groundwork for future research on image-based rice categorization by suggesting areas for development and investigating ensemble strategies to improve performance. | 10.1007/s11042-024-19126-7 | a comparative analysis of deep learning and deep transfer learning approaches for identification of rice varieties | rice is an essential staple food for human nutrition. rice varieties worldwide have been planted, imported, and exported. during production and trading, different types of rice can be mixed. due to rice impurities, rice importers and exporters may lose trust in each other, requiring the development of a rice variety identification system. india is a significant player in the global rice market, and this extensive study delves into the importance of rice there. 
the study uses state-of-the-art deep learning and tl classifiers to tackle the problems of rice variety detection. an enormous dataset consisting of more than 600,000 rice photographs divided into 22 different classes is presented in the study to improve classification accuracy. with a training accuracy of 96% and a testing accuracy of 80.5%, resnet50 stands well among other deep learning models compared by the authors. these models include cnn, deep cnn, alexnet2, xception, inception v3, densenet121, and resnet50. finding the best classifiers to identify varieties accurately is crucial, and this work highlights their possible uses in rice seed production. this paper lays the groundwork for future research on image-based rice categorization by suggesting areas for development and investigating ensemble strategies to improve performance. | [
"rice",
"an essential staple food",
"human nutrition",
"rice varieties",
"production",
"trading",
"different types",
"rice",
"rice impurities",
"rice importers",
"exporters",
"trust",
"the development",
"a rice variety identification system",
"india",
"a significant player",
"the global rice market",
"this extensive study",
"the importance",
"rice",
"the study",
"the-art",
"tl classifiers",
"the problems",
"rice variety detection",
"an enormous dataset",
"more than 600,000 rice photographs",
"22 different classes",
"the study",
"classification accuracy",
"a training accuracy",
"96%",
"a testing accuracy",
"80.5%",
"resnet50",
"other deep learning models",
"the authors",
"these models",
"cnn",
"deep cnn",
"alexnet2",
"xception",
"inception v3",
"densenet121",
"resnet50",
"the best classifiers",
"varieties",
"this work",
"their possible uses",
"rice seed production",
"this paper",
"the groundwork",
"future research",
"image-based rice categorization",
"areas",
"development",
"ensemble strategies",
"performance",
"rice",
"india",
"more than 600,000",
"22",
"96%",
"80.5%",
"resnet50",
"cnn",
"cnn",
"v3",
"resnet50"
] |
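The cleaned_title and cleaned_abstract columns in the rows above appear to be lowercased copies of title and abstract. A minimal sketch of such a cleaning step, assuming simple normalization (the function name clean_text is illustrative, not taken from the dataset's actual pipeline):

```python
def clean_text(text: str) -> str:
    """Lowercase a field and trim surrounding whitespace, mirroring how
    cleaned_title / cleaned_abstract appear to relate to title / abstract."""
    return text.strip().lower()

row = {
    "title": "A comparative analysis of deep learning and deep transfer "
             "learning approaches for identification of rice varieties",
}
row["cleaned_title"] = clean_text(row["title"])
```

Punctuation and wording are left untouched, which matches the rows above: the cleaned columns differ from the originals only in case.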
Interpretable deep learning methods for multiview learning | [
"Hengkang Wang",
"Han Lu",
"Ju Sun",
"Sandra E. Safo"
] | Background: Technological advances have enabled the generation of unique and complementary types of data or views (e.g. genomics, proteomics, metabolomics) and opened up a new era in multiview learning research with the potential to lead to new biomedical discoveries. Results: We propose iDeepViewLearn (Interpretable Deep Learning Method for Multiview Learning) to learn nonlinear relationships in data from multiple views while achieving feature selection. iDeepViewLearn combines deep learning flexibility with the statistical benefits of data and knowledge-driven feature selection, giving interpretable results. Deep neural networks are used to learn view-independent low-dimensional embedding through an optimization problem that minimizes the difference between observed and reconstructed data, while imposing a regularization penalty on the reconstructed data. The normalized Laplacian of a graph is used to model bilateral relationships between variables in each view, therefore, encouraging selection of related variables. iDeepViewLearn is tested on simulated and three real-world data for classification, clustering, and reconstruction tasks. For the classification tasks, iDeepViewLearn had competitive classification results with state-of-the-art methods in various settings. For the clustering task, we detected molecular clusters that differed in their 10-year survival rates for breast cancer. For the reconstruction task, we were able to reconstruct handwritten images using a few pixels while achieving competitive classification accuracy. The results of our real data application and simulations with small to moderate sample sizes suggest that iDeepViewLearn may be a useful method for small-sample-size problems compared to other deep learning methods for multiview learning. Conclusion: iDeepViewLearn is an innovative deep learning model capable of capturing nonlinear relationships between data from multiple views while achieving feature selection. 
It is fully open source and is freely available at https://github.com/lasandrall/iDeepViewLearn. | 10.1186/s12859-024-05679-9 | interpretable deep learning methods for multiview learning | backgroundtechnological advances have enabled the generation of unique and complementary types of data or views (e.g. genomics, proteomics, metabolomics) and opened up a new era in multiview learning research with the potential to lead to new biomedical discoveries.resultswe propose ideepviewlearn (interpretable deep learning method for multiview learning) to learn nonlinear relationships in data from multiple views while achieving feature selection. ideepviewlearn combines deep learning flexibility with the statistical benefits of data and knowledge-driven feature selection, giving interpretable results. deep neural networks are used to learn view-independent low-dimensional embedding through an optimization problem that minimizes the difference between observed and reconstructed data, while imposing a regularization penalty on the reconstructed data. the normalized laplacian of a graph is used to model bilateral relationships between variables in each view, therefore, encouraging selection of related variables. ideepviewlearn is tested on simulated and three real-world data for classification, clustering, and reconstruction tasks. for the classification tasks, ideepviewlearn had competitive classification results with state-of-the-art methods in various settings. for the clustering task, we detected molecular clusters that differed in their 10-year survival rates for breast cancer. for the reconstruction task, we were able to reconstruct handwritten images using a few pixels while achieving competitive classification accuracy. 
the results of our real data application and simulations with small to moderate sample sizes suggest that ideepviewlearn may be a useful method for small-sample-size problems compared to other deep learning methods for multiview learning.conclusionideepviewlearn is an innovative deep learning model capable of capturing nonlinear relationships between data from multiple views while achieving feature selection. it is fully open source and is freely available at https://github.com/lasandrall/ideepviewlearn. | [
"backgroundtechnological advances",
"the generation",
"unique and complementary types",
"data",
"views",
"e.g. genomics",
"proteomics",
"metabolomics",
"a new era",
"multiview learning research",
"the potential",
"new biomedical discoveries.resultswe propose ideepviewlearn",
"(interpretable deep learning method",
"multiview learning",
"nonlinear relationships",
"data",
"multiple views",
"feature selection",
"ideepviewlearn",
"deep learning flexibility",
"the statistical benefits",
"data",
"knowledge-driven feature selection",
"interpretable results",
"deep neural networks",
"an optimization problem",
"that",
"the difference",
"observed and reconstructed data",
"a regularization penalty",
"the reconstructed data",
"the normalized laplacian",
"a graph",
"bilateral relationships",
"variables",
"each view",
"selection",
"related variables",
"ideepviewlearn",
"simulated and three real-world data",
"classification",
"clustering",
"reconstruction tasks",
"the classification tasks",
"ideepviewlearn",
"competitive classification results",
"the-art",
"various settings",
"the clustering task",
"we",
"molecular clusters",
"that",
"their 10-year survival rates",
"breast cancer",
"the reconstruction task",
"we",
"handwritten images",
"a few pixels",
"competitive classification accuracy",
"the results",
"our real data application",
"simulations",
"small to moderate sample sizes",
"that ideepviewlearn",
"a useful method",
"small-sample-size problems",
"other deep learning methods",
"multiview learning.conclusionideepviewlearn",
"an innovative deep learning model",
"nonlinear relationships",
"data",
"multiple views",
"feature selection",
"it",
"fully open source",
"https://github.com/lasandrall/ideepviewlearn",
"ideepviewlearn",
"three",
"10-year",
"https://github.com/lasandrall/ideepviewlearn"
] |
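The viewer header summarizes each column with a min/max over row values (e.g. title stringlengths 31 to 206, key_phrases sequencelengths 19 to 150). A small sketch of how such summaries can be computed, assuming rows are plain dicts (the helper name column_stats is an assumption for illustration):

```python
def column_stats(rows, column):
    """Return (min, max) length over a column: string length for text
    fields like title, list length for sequence fields like key_phrases."""
    lengths = [len(row[column]) for row in rows]
    return min(lengths), max(lengths)

rows = [
    {"title": "Deep learning in rheumatological image interpretation",
     "key_phrases": ["deep learning", "rheumatology"]},
    {"title": "Interpretable deep learning methods for multiview learning",
     "key_phrases": ["multiview learning", "feature selection", "ideepviewlearn"]},
]
title_stats = column_stats(rows, "title")         # (min, max) title length
phrase_stats = column_stats(rows, "key_phrases")  # (min, max) phrase count
```

The same function covers both the stringlengths and sequencelengths summaries because `len` applies to strings and lists alike.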
Deep-learning-based image reconstruction with limited data: generating synthetic raw data using deep learning | [
"Frank Zijlstra",
"Peter Thomas While"
] | Object: Deep learning has shown great promise for fast reconstruction of accelerated MRI acquisitions by learning from large amounts of raw data. However, raw data is not always available in sufficient quantities. This study investigates synthetic data generation to complement small datasets and improve reconstruction quality. Materials and methods: An adversarial auto-encoder was trained to generate phase and coil sensitivity maps from magnitude images, which were combined into synthetic raw data. On a fourfold accelerated MR reconstruction task, deep-learning-based reconstruction networks were trained with varying amounts of training data (20 to 160 scans). Test set performance was compared between baseline experiments and experiments that incorporated synthetic training data. Results: Training with synthetic raw data showed decreasing reconstruction errors with increasing amounts of training data, but importantly this was magnitude-only data, rather than real raw data. For small training sets, training with synthetic data decreased the mean absolute error (MAE) by up to 7.5%, whereas for larger training sets the MAE increased by up to 2.6%. Discussion: Synthetic raw data generation improved reconstruction quality in scenarios with limited training data. A major advantage of synthetic data generation is that it allows for the reuse of magnitude-only datasets, which are more readily available than raw datasets. | 10.1007/s10334-024-01193-4 | deep-learning-based image reconstruction with limited data: generating synthetic raw data using deep learning | objectdeep learning has shown great promise for fast reconstruction of accelerated mri acquisitions by learning from large amounts of raw data. however, raw data is not always available in sufficient quantities. 
this study investigates synthetic data generation to complement small datasets and improve reconstruction quality.materials and methodsan adversarial auto-encoder was trained to generate phase and coil sensitivity maps from magnitude images, which were combined into synthetic raw data.on a fourfold accelerated mr reconstruction task, deep-learning-based reconstruction networks were trained with varying amounts of training data (20 to 160 scans). test set performance was compared between baseline experiments and experiments that incorporated synthetic training data.resultstraining with synthetic raw data showed decreasing reconstruction errors with increasing amounts of training data, but importantly this was magnitude-only data, rather than real raw data. for small training sets, training with synthetic data decreased the mean absolute error (mae) by up to 7.5%, whereas for larger training sets the mae increased by up to 2.6%.discussionsynthetic raw data generation improved reconstruction quality in scenarios with limited training data. a major advantage of synthetic data generation is that it allows for the reuse of magnitude-only datasets, which are more readily available than raw datasets. | [
"objectdeep learning",
"great promise",
"fast reconstruction",
"accelerated mri acquisitions",
"large amounts",
"raw data",
"raw data",
"sufficient quantities",
"this study",
"synthetic data generation",
"small datasets",
"reconstruction quality.materials",
"methodsan adversarial auto-encoder",
"phase and coil sensitivity maps",
"magnitude images",
"which",
"mr reconstruction task",
"deep-learning-based reconstruction networks",
"varying amounts",
"training data",
"20 to 160 scans",
"test set performance",
"baseline experiments",
"experiments",
"that",
"synthetic raw data",
"reconstruction errors",
"increasing amounts",
"training data",
"this",
"magnitude-only data",
"real raw data",
"small training sets",
"synthetic data",
"the mean absolute error",
"mae",
"up to 7.5%",
"larger training sets",
"the mae",
"2.6%.discussionsynthetic raw data generation",
"improved reconstruction quality",
"scenarios",
"limited training data",
"a major advantage",
"synthetic data generation",
"it",
"the reuse",
"magnitude-only datasets",
"which",
"raw datasets",
"20",
"160",
"up to 7.5%"
] |
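Each row above is pipe-delimited, with JSON-style lists in the authors and key_phrases cells. A hedged sketch of splitting one flattened row back into named fields, assuming the column order from the header and no unescaped pipes inside cell text (COLUMNS and parse_row are illustrative names, not part of the dataset):

```python
import json

COLUMNS = ["title", "authors", "abstract", "doi",
           "cleaned_title", "cleaned_abstract", "key_phrases"]

def parse_row(line: str) -> dict:
    """Split a single-line row on ' | ' and decode list-valued cells.
    Assumes cell text itself contains no ' | ' separator."""
    cells = [cell.strip() for cell in line.split(" | ")]
    row = dict(zip(COLUMNS, cells))
    for key in ("authors", "key_phrases"):
        if key in row:
            row[key] = json.loads(row[key])
    return row
```

In practice the rows shown here wrap across many lines, so a real parser would first re-join each record before splitting on the delimiter.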
Topic- and learning-related predictors of deep-level learning strategies | [
"Eve Kikas",
"Gintautas Silinskas",
"Eliis Härma"
] | The aim of this study was to examine which topic- and learning-related knowledge and motivational beliefs predict the use of specific deep-level learning strategies during an independent learning task. Participants included 335 Estonian fourth- and sixth-grade students who were asked to read about light processes and seasonal changes. The study was completed electronically. Topic-related knowledge was assessed via an open question about seasonal changes, and learning-related knowledge was assessed via scenario-based tasks. Expectancies, interest, and utility values related to learning astronomy and using deep-level learning strategies were assessed via questions based on the Situated Expectancy-Value Theory. Deep-level learning strategies (using drawings in addition to reading and self-testing) were assessed while completing the reading task. Among topic-related variables, prior knowledge and utility value—but not interest or expectancy in learning astronomy—were related to using deep-level learning strategies. Among learning-related variables, interest and utility value of effective learning—but not metacognitive knowledge of learning strategies or expectancy in using deep-level learning strategies—were related to using deep-level learning strategies. This study confirms that it is not enough to examine students’ knowledge and skills in using learning strategies with general or hypothetical questions, instead, it is of crucial importance to study students in real learning situations. | 10.1007/s10212-023-00766-6 | topic- and learning-related predictors of deep-level learning strategies | the aim of this study was to examine which topic- and learning-related knowledge and motivational beliefs predict the use of specific deep-level learning strategies during an independent learning task. participants included 335 estonian fourth- and sixth-grade students who were asked to read about light processes and seasonal changes. the study was completed electronically. 
topic-related knowledge was assessed via an open question about seasonal changes, and learning-related knowledge was assessed via scenario-based tasks. expectancies, interest, and utility values related to learning astronomy and using deep-level learning strategies were assessed via questions based on the situated expectancy-value theory. deep-level learning strategies (using drawings in addition to reading and self-testing) were assessed while completing the reading task. among topic-related variables, prior knowledge and utility value—but not interest or expectancy in learning astronomy—were related to using deep-level learning strategies. among learning-related variables, interest and utility value of effective learning—but not metacognitive knowledge of learning strategies or expectancy in using deep-level learning strategies—were related to using deep-level learning strategies. this study confirms that it is not enough to examine students’ knowledge and skills in using learning strategies with general or hypothetical questions, instead, it is of crucial importance to study students in real learning situations. | [
"the aim",
"this study",
"which",
"the use",
"specific deep-level learning strategies",
"an independent learning task",
"participants",
"335 estonian fourth- and sixth-grade students",
"who",
"light processes",
"seasonal changes",
"the study",
"topic-related knowledge",
"an open question",
"seasonal changes",
"learning-related knowledge",
"scenario-based tasks",
"expectancies",
"interest",
"utility values",
"astronomy",
"deep-level learning strategies",
"questions",
"the situated expectancy-value theory",
"deep-level learning strategies",
"drawings",
"addition",
"reading",
"self-testing",
"the reading task",
"topic-related variables",
"prior knowledge",
"utility value",
"not interest",
"expectancy",
"astronomy",
"deep-level learning strategies",
"learning-related variables",
"interest",
"utility value",
"effective learning",
"not metacognitive knowledge",
"strategies",
"expectancy",
"deep-level learning strategies",
"deep-level learning strategies",
"this study",
"it",
"students’ knowledge",
"skills",
"learning strategies",
"general or hypothetical questions",
"it",
"crucial importance",
"students",
"real learning situations",
"335",
"sixth"
] |
Urban traffic signal control optimization through Deep Q Learning and double Deep Q Learning: a novel approach for efficient traffic management | [
"Qazi Umer Jamil",
"Karam Dad Kallu",
"Muhammad Jawad Khan",
"Muhammad Safdar",
"Amad Zafar",
"Muhammad Umair Ali"
] | Traffic congestion remains a persistent challenge in urban areas, necessitating efficient traffic control strategies. This research explores the application of advanced reinforcement learning techniques, specifically Deep Q-Learning (DQN) and Double Deep Q-Learning (DDQN), to address this issue at a four-way traffic intersection. The RL agents are trained using a reward function based on minimizing waiting times, enabling them to learn effective traffic signal control policies. The study focuses on comparing the performance of a simple non-reinforcement learning (Non RL) agent, a Deep Q-Network (DQN) agent, and an improved Double Deep Q-Learning (DDQN) agent in different traffic scenarios. The Non RL agent, which follows a fixed order of traffic phases, demonstrates limitations in both low and high traffic situations, leading to inefficiencies and imbalanced queue lengths. On the other hand, the DQN agent exhibits promising results in low traffic conditions but struggles in high traffic due to its greedy behavior. The DDQN agent, with an extended green light base time, outperforms both the Non RL agent and the original DQN agent in high traffic scenarios, making it more suitable for real-world traffic conditions. However, it shows some inefficiencies in low traffic scenarios. Future research is recommended to address multi-agent deep reinforcement learning challenges, incorporate attention mechanisms and hierarchical reinforcement learning, explore graph theory applications, and develop efficient communication protocols among agents to further enhance traffic control solutions. | 10.1007/s11042-024-20060-x | urban traffic signal control optimization through deep q learning and double deep q learning: a novel approach for efficient traffic management | traffic congestion remains a persistent challenge in urban areas, necessitating efficient traffic control strategies. 
this research explores the application of advanced reinforcement learning techniques, specifically deep q-learning (dqn) and double deep q-learning (ddqn), to address this issue at a four-way traffic intersection. the rl agents are trained using a reward function based on minimizing waiting times, enabling them to learn effective traffic signal control policies. the study focuses on comparing the performance of a simple non-reinforcement learning (non rl) agent, a deep q-network (dqn) agent, and an improved double deep q-learning (ddqn) agent in different traffic scenarios. the non rl agent, which follows a fixed order of traffic phases, demonstrates limitations in both low and high traffic situations, leading to inefficiencies and imbalanced queue lengths. on the other hand, the dqn agent exhibits promising results in low traffic conditions but struggles in high traffic due to its greedy behavior. the ddqn agent, with an extended green light base time, outperforms both the non rl agent and the original dqn agent in high traffic scenarios, making it more suitable for real-world traffic conditions. however, it shows some inefficiencies in low traffic scenarios. future research is recommended to address multi-agent deep reinforcement learning challenges, incorporate attention mechanisms and hierarchical reinforcement learning, explore graph theory applications, and develop efficient communication protocols among agents to further enhance traffic control solutions. | [
"traffic congestion",
"a persistent challenge",
"urban areas",
"efficient traffic control strategies",
"this research",
"the application",
"advanced reinforcement learning techniques",
"specifically deep q-learning (dqn",
"double deep q-learning",
"(ddqn",
"this issue",
"a four-way traffic intersection",
"the rl agents",
"a reward function",
"waiting times",
"them",
"effective traffic signal control policies",
"the study",
"the performance",
"a simple non-reinforcement learning",
"(non rl) agent",
"a deep q-network (dqn) agent",
"an improved double deep q-learning (ddqn) agent",
"different traffic scenarios",
"the non rl agent",
"which",
"a fixed order",
"traffic phases",
"limitations",
"both low and high traffic situations",
"inefficiencies",
"queue lengths",
"the other hand",
"the dqn agent",
"promising results",
"low traffic conditions",
"struggles",
"high traffic",
"its greedy behavior",
"the ddqn agent",
"an extended green light base time",
"both the non rl agent",
"the original dqn agent",
"high traffic scenarios",
"it",
"real-world traffic conditions",
"it",
"some inefficiencies",
"low traffic scenarios",
"future research",
"multi-agent deep reinforcement learning challenges",
"attention mechanisms",
"hierarchical reinforcement learning",
"graph theory applications",
"efficient communication protocols",
"agents",
"traffic control solutions",
"four"
] |
Research trends in deep learning and machine learning for cloud computing security | [
"Yehia Ibrahim Alzoubi",
"Alok Mishra",
"Ahmet Ercan Topcu"
] | Deep learning and machine learning show effectiveness in identifying and addressing cloud security threats. Despite the large number of articles published in this field, there remains a dearth of comprehensive reviews that synthesize the techniques, trends, and challenges of using deep learning and machine learning for cloud computing security. Accordingly, this paper aims to provide the most updated statistics on the development and research in cloud computing security utilizing deep learning and machine learning. Up to the middle of December 2023, 4051 publications were identified after we searched the Scopus database. This paper highlights key trend solutions for cloud computing security utilizing machine learning and deep learning, such as anomaly detection, security automation, and emerging technology's role. However, challenges such as data privacy, scalability, and explainability, among others, are also identified as challenges of using machine learning and deep learning for cloud security. The findings of this paper reveal that deep learning and machine learning for cloud computing security are emerging research areas. Future research directions may include addressing these challenges when utilizing machine learning and deep learning for cloud security. Additionally, exploring the development of algorithms and techniques that comply with relevant laws and regulations is essential for effective implementation in this domain. | 10.1007/s10462-024-10776-5 | research trends in deep learning and machine learning for cloud computing security | deep learning and machine learning show effectiveness in identifying and addressing cloud security threats. despite the large number of articles published in this field, there remains a dearth of comprehensive reviews that synthesize the techniques, trends, and challenges of using deep learning and machine learning for cloud computing security. 
accordingly, this paper aims to provide the most updated statistics on the development and research in cloud computing security utilizing deep learning and machine learning. up to the middle of december 2023, 4051 publications were identified after we searched the scopus database. this paper highlights key trend solutions for cloud computing security utilizing machine learning and deep learning, such as anomaly detection, security automation, and emerging technology's role. however, challenges such as data privacy, scalability, and explainability, among others, are also identified as challenges of using machine learning and deep learning for cloud security. the findings of this paper reveal that deep learning and machine learning for cloud computing security are emerging research areas. future research directions may include addressing these challenges when utilizing machine learning and deep learning for cloud security. additionally, exploring the development of algorithms and techniques that comply with relevant laws and regulations is essential for effective implementation in this domain. | [
"deep learning and machine learning show effectiveness",
"cloud security threats",
"the large number",
"articles",
"this field",
"a dearth",
"comprehensive reviews",
"that",
"the techniques",
"trends",
"challenges",
"deep learning",
"machine learning",
"cloud computing security",
"this paper",
"the most updated statistics",
"the development",
"research",
"cloud computing security",
"deep learning",
"machine learning",
"the middle",
"december",
"4051 publications",
"we",
"the scopus database",
"this paper",
"key trend solutions",
"cloud computing security",
"machine learning",
"deep learning",
"anomaly detection",
"security automation",
"emerging technology's role",
"challenges",
"data privacy",
"scalability",
"explainability",
"others",
"challenges",
"machine learning",
"deep learning",
"cloud security",
"the findings",
"this paper",
"deep learning and machine learning",
"cloud computing security",
"research areas",
"future research directions",
"these challenges",
"machine learning",
"deep learning",
"cloud security",
"the development",
"algorithms",
"techniques",
"that",
"relevant laws",
"regulations",
"effective implementation",
"this domain",
"the middle of december 2023",
"4051",
"anomaly detection"
] |
Distributed Deep Reinforcement Learning: A Survey and a Multi-player Multi-agent Learning Toolbox | [
"Qiyue Yin",
"Tongtong Yu",
"Shengqi Shen",
"Jun Yang",
"Meijing Zhao",
"Wancheng Ni",
"Kaiqi Huang",
"Bin Liang",
"Liang Wang"
] | With the breakthrough of AlphaGo, deep reinforcement learning has become a recognized technique for solving sequential decision-making problems. Despite its reputation, data inefficiency caused by its trial and error learning mechanism makes deep reinforcement learning difficult to apply in a wide range of areas. Many methods have been developed for sample efficient deep reinforcement learning, such as environment modelling, experience transfer, and distributed modifications, among which distributed deep reinforcement learning has shown its potential in various applications, such as human-computer gaming and intelligent transportation. In this paper, we conclude the state of this exciting field, by comparing the classical distributed deep reinforcement learning methods and studying important components to achieve efficient distributed learning, covering single player single agent distributed deep reinforcement learning to the most complex multiple players multiple agents distributed deep reinforcement learning. Furthermore, we review recently released toolboxes that help to realize distributed deep reinforcement learning without many modifications of their non-distributed versions. By analysing their strengths and weaknesses, a multi-player multi-agent distributed deep reinforcement learning toolbox is developed and released, which is further validated on Wargame, a complex environment, showing the usability of the proposed toolbox for multiple players and multiple agents distributed deep reinforcement learning under complex games. Finally, we try to point out challenges and future trends, hoping that this brief review can provide a guide or a spark for researchers who are interested in distributed deep reinforcement learning. 
| 10.1007/s11633-023-1454-4 | distributed deep reinforcement learning: a survey and a multi-player multi-agent learning toolbox | with the breakthrough of alphago, deep reinforcement learning has become a recognized technique for solving sequential decision-making problems. despite its reputation, data inefficiency caused by its trial and error learning mechanism makes deep reinforcement learning difficult to apply in a wide range of areas. many methods have been developed for sample efficient deep reinforcement learning, such as environment modelling, experience transfer, and distributed modifications, among which distributed deep reinforcement learning has shown its potential in various applications, such as human-computer gaming and intelligent transportation. in this paper, we conclude the state of this exciting field, by comparing the classical distributed deep reinforcement learning methods and studying important components to achieve efficient distributed learning, covering single player single agent distributed deep reinforcement learning to the most complex multiple players multiple agents distributed deep reinforcement learning. furthermore, we review recently released toolboxes that help to realize distributed deep reinforcement learning without many modifications of their non-distributed versions. by analysing their strengths and weaknesses, a multi-player multi-agent distributed deep reinforcement learning toolbox is developed and released, which is further validated on wargame, a complex environment, showing the usability of the proposed toolbox for multiple players and multiple agents distributed deep reinforcement learning under complex games. finally, we try to point out challenges and future trends, hoping that this brief review can provide a guide or a spark for researchers who are interested in distributed deep reinforcement learning. | [
"the breakthrough",
"alphago",
"deep reinforcement learning",
"a recognized technique",
"sequential decision-making problems",
"its reputation",
"data inefficiency",
"its trial and error learning mechanism",
"deep reinforcement learning",
"a wide range",
"areas",
"many methods",
"sample efficient deep reinforcement learning",
"environment modelling",
"experience transfer",
"modifications",
"which",
"deep reinforcement learning",
"its potential",
"various applications",
"human-computer gaming",
"intelligent transportation",
"this paper",
"we",
"the state",
"this exciting field",
"the classical distributed deep reinforcement learning methods",
"important components",
"efficient distributed learning",
"single player single agent",
"deep reinforcement learning",
"the most complex multiple players",
"multiple agents",
"deep reinforcement learning",
"we",
"toolboxes",
"that",
"distributed deep reinforcement learning",
"many modifications",
"their non-distributed versions",
"their strengths",
"weaknesses",
"-",
"agent",
"which",
"wargame",
"a complex environment",
"the usability",
"the proposed toolbox",
"multiple players",
"multiple agents",
"deep reinforcement learning",
"complex games",
"we",
"challenges",
"future trends",
"this brief review",
"a guide",
"a spark",
"researchers",
"who",
"distributed deep reinforcement learning"
] |
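The survey row above centres on distributing learning across workers. As a minimal illustration (not any specific method from the survey), the basic synchronous data-parallel pattern behind many distributed deep RL setups can be sketched with a toy linear model: each "worker" computes a gradient on its own data shard and a central learner applies the average. All names and data here are synthetic.

```python
import numpy as np

# Hypothetical sketch of synchronous data-parallel learning: each worker
# computes a gradient on its own shard; the learner averages and applies it.
rng = np.random.default_rng(0)

def worker_gradient(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    pred = X @ w
    return 2.0 * X.T @ (pred - y) / len(y)

# One shared parameter vector, four workers with their own data shards.
w = np.zeros(3)
w_true = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(4):
    X = rng.normal(size=(64, 3))
    shards.append((X, X @ w_true))

lr = 0.1
for step in range(200):
    # In a real system the workers run in parallel processes or machines.
    grads = [worker_gradient(w, X, y) for X, y in shards]
    w -= lr * np.mean(grads, axis=0)  # learner applies the averaged gradient

print(np.round(w, 2))  # converges toward w_true
```

Real distributed RL stacks add replay buffers, actor/learner separation, and asynchronous updates on top of this averaging skeleton.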
Deep learning in rheumatological image interpretation | [
"Berend C. Stoel",
"Marius Staring",
"Monique Reijnierse",
"Annette H. M. van der Helm-van Mil"
] | Artificial intelligence techniques, specifically deep learning, have already affected daily life in a wide range of areas. Likewise, initial applications have been explored in rheumatology. Deep learning might not easily surpass the accuracy of classic techniques when performing classification or regression on low-dimensional numerical data. With images as input, however, deep learning has become so successful that it has already outperformed the majority of conventional image-processing techniques developed during the past 50 years. As with any new imaging technology, rheumatologists and radiologists need to consider adapting their arsenal of diagnostic, prognostic and monitoring tools, and even their clinical role and collaborations. This adaptation requires a basic understanding of the technical background of deep learning, to efficiently utilize its benefits but also to recognize its drawbacks and pitfalls, as blindly relying on deep learning might be at odds with its capabilities. To facilitate such an understanding, it is necessary to provide an overview of deep-learning techniques for automatic image analysis in detecting, quantifying, predicting and monitoring rheumatic diseases, and of currently published deep-learning applications in radiological imaging for rheumatology, with critical assessment of possible limitations, errors and confounders, and conceivable consequences for rheumatologists and radiologists in clinical practice. | 10.1038/s41584-023-01074-5 | deep learning in rheumatological image interpretation | artificial intelligence techniques, specifically deep learning, have already affected daily life in a wide range of areas. likewise, initial applications have been explored in rheumatology. deep learning might not easily surpass the accuracy of classic techniques when performing classification or regression on low-dimensional numerical data. 
with images as input, however, deep learning has become so successful that it has already outperformed the majority of conventional image-processing techniques developed during the past 50 years. as with any new imaging technology, rheumatologists and radiologists need to consider adapting their arsenal of diagnostic, prognostic and monitoring tools, and even their clinical role and collaborations. this adaptation requires a basic understanding of the technical background of deep learning, to efficiently utilize its benefits but also to recognize its drawbacks and pitfalls, as blindly relying on deep learning might be at odds with its capabilities. to facilitate such an understanding, it is necessary to provide an overview of deep-learning techniques for automatic image analysis in detecting, quantifying, predicting and monitoring rheumatic diseases, and of currently published deep-learning applications in radiological imaging for rheumatology, with critical assessment of possible limitations, errors and confounders, and conceivable consequences for rheumatologists and radiologists in clinical practice. | [
"artificial intelligence techniques",
"specifically deep learning",
"daily life",
"a wide range",
"areas",
"initial applications",
"rheumatology",
"deep learning",
"the accuracy",
"classic techniques",
"classification",
"regression",
"low-dimensional numerical data",
"images",
"input",
"deep learning",
"it",
"the majority",
"conventional image-processing techniques",
"the past 50 years",
"any new imaging technology",
"rheumatologists",
"radiologists",
"their arsenal",
"diagnostic, prognostic and monitoring tools",
"even their clinical role",
"collaborations",
"this adaptation",
"a basic understanding",
"the technical background",
"deep learning",
"its benefits",
"its drawbacks",
"pitfalls",
"deep learning",
"odds",
"its capabilities",
"such an understanding",
"it",
"an overview",
"deep-learning techniques",
"automatic image analysis",
"rheumatic diseases",
"currently published deep-learning applications",
"radiological imaging",
"rheumatology",
"critical assessment",
"possible limitations",
"errors",
"confounders",
"conceivable consequences",
"rheumatologists",
"radiologists",
"clinical practice",
"the past 50 years"
] |
Loss of plasticity in deep continual learning | [
"Shibhansh Dohare",
"J. Fernando Hernandez-Garcia",
"Qingfeng Lan",
"Parash Rahman",
"A. Rupam Mahmood",
"Richard S. Sutton"
] | Artificial neural networks, deep-learning methods and the backpropagation algorithm1 form the foundation of modern machine learning and artificial intelligence. These methods are almost always used in two phases, one in which the weights of the network are updated and one in which the weights are held constant while the network is used or evaluated. This contrasts with natural learning and many applications, which require continual learning. It has been unclear whether or not deep learning methods work in continual learning settings. Here we show that they do not—that standard deep-learning methods gradually lose plasticity in continual-learning settings until they learn no better than a shallow network. We show such loss of plasticity using the classic ImageNet dataset and reinforcement-learning problems across a wide range of variations in the network and the learning algorithm. Plasticity is maintained indefinitely only by algorithms that continually inject diversity into the network, such as our continual backpropagation algorithm, a variation of backpropagation in which a small fraction of less-used units are continually and randomly reinitialized. Our results indicate that methods based on gradient descent are not enough—that sustained deep learning requires a random, non-gradient component to maintain variability and plasticity. | 10.1038/s41586-024-07711-7 | loss of plasticity in deep continual learning | artificial neural networks, deep-learning methods and the backpropagation algorithm1 form the foundation of modern machine learning and artificial intelligence. these methods are almost always used in two phases, one in which the weights of the network are updated and one in which the weights are held constant while the network is used or evaluated. this contrasts with natural learning and many applications, which require continual learning. it has been unclear whether or not deep learning methods work in continual learning settings. 
here we show that they do not—that standard deep-learning methods gradually lose plasticity in continual-learning settings until they learn no better than a shallow network. we show such loss of plasticity using the classic imagenet dataset and reinforcement-learning problems across a wide range of variations in the network and the learning algorithm. plasticity is maintained indefinitely only by algorithms that continually inject diversity into the network, such as our continual backpropagation algorithm, a variation of backpropagation in which a small fraction of less-used units are continually and randomly reinitialized. our results indicate that methods based on gradient descent are not enough—that sustained deep learning requires a random, non-gradient component to maintain variability and plasticity. | [
"artificial neural networks",
"deep-learning methods",
"the backpropagation algorithm1 form",
"the foundation",
"modern machine learning",
"artificial intelligence",
"these methods",
"two phases",
"which",
"the weights",
"the network",
"which",
"the weights",
"the network",
"this",
"natural learning",
"many applications",
"which",
"continual learning",
"it",
"not deep learning methods",
"continual learning settings",
"we",
"they",
"that standard deep-learning methods",
"plasticity",
"continual-learning settings",
"they",
"a shallow network",
"we",
"such loss",
"plasticity",
"the classic imagenet dataset and reinforcement-learning problems",
"a wide range",
"variations",
"the network",
"the learning algorithm",
"plasticity",
"algorithms",
"that",
"diversity",
"the network",
"our continual backpropagation algorithm",
"a variation",
"backpropagation",
"which",
"a small fraction",
"less-used units",
"our results",
"methods",
"gradient descent",
"that",
"deep learning",
"a random, non-gradient component",
"variability",
"plasticity",
"algorithm1",
"two",
"one"
] |
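The plasticity row above describes reinitializing "a small fraction of less-used units". A minimal sketch of that idea (not the paper's exact continual backpropagation algorithm; the utility measure and constants here are illustrative assumptions) looks like this:

```python
import numpy as np

# Sketch: track a running "utility" per hidden unit and periodically
# reinitialize the least-used small fraction to restore plasticity.
rng = np.random.default_rng(1)

n_in, n_hidden = 8, 32
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))   # input -> hidden weights
w2 = rng.normal(scale=0.5, size=n_hidden)           # hidden -> output weights
utility = np.zeros(n_hidden)                        # running usefulness per unit

replace_fraction = 0.05   # reinitialize the bottom 5% of units
decay = 0.99              # smoothing for the running utility average

for t in range(1000):
    x = rng.normal(size=n_in)
    h = np.maximum(0.0, x @ W1)                     # ReLU activations
    # Utility proxy: running average of |activation * outgoing weight|.
    utility = decay * utility + (1 - decay) * np.abs(h * w2)
    if t % 100 == 99:
        k = max(1, int(replace_fraction * n_hidden))
        worst = np.argsort(utility)[:k]             # least-used units
        W1[:, worst] = rng.normal(scale=0.5, size=(n_in, k))
        w2[worst] = 0.0                             # new units start inert
        utility[worst] = np.median(utility)         # grace period before re-selection

print(W1.shape, w2.shape)
```

The key design point is that the injected randomness is targeted: only units contributing least to the output are recycled, so learned structure elsewhere survives.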
Comparative approach on crop detection using machine learning and deep learning techniques | [
"V. Nithya",
"M. S. Josephine",
"V. Jeyabalaraja"
] | Agriculture is an expanding area of study. Crop prediction in agriculture is highly dependent on soil and environmental factors, such as rainfall, humidity, and temperature. Previously, farmers had the authority to select the crop to be farmed, oversee its development, and ascertain the optimal harvest time. The farming community is facing challenges in sustaining its practices due to the swift alterations in climatic conditions. Therefore, machine learning algorithms have replaced traditional methods in predicting agricultural productivity in recent years. To guarantee optimal precision through a specific machine learning approach. Authors extend their approach not limited to Machine Learning but also with Deep Learning Techniques. We use machine and deep learning algorithms to predict crop outcomes accurately. In this proposed model, we utilise machine learning algorithms such as Naive Bayes, decision tree, and KNN. It is worth noting that the decision tree algorithm demonstrates superior performance compared to the other algorithms, achieving an accuracy rate of 83%. In order to enhance the precision, we have suggested implementing a deep learning technique, specifically a convolutional neural network, to identify the crops. Achieving an accuracy of 93.54% was made possible by implementing this advanced deep-learning model. | 10.1007/s13198-024-02483-9 | comparative approach on crop detection using machine learning and deep learning techniques | agriculture is an expanding area of study. crop prediction in agriculture is highly dependent on soil and environmental factors, such as rainfall, humidity, and temperature. previously, farmers had the authority to select the crop to be farmed, oversee its development, and ascertain the optimal harvest time. the farming community is facing challenges in sustaining its practices due to the swift alterations in climatic conditions. 
therefore, machine learning algorithms have replaced traditional methods in predicting agricultural productivity in recent years. to guarantee optimal precision through a specific machine learning approach. authors extend their approach not limited to machine learning but also with deep learning techniques. we use machine and deep learning algorithms to predict crop outcomes accurately. in this proposed model, we utilise machine learning algorithms such as naive bayes, decision tree, and knn. it is worth noting that the decision tree algorithm demonstrates superior performance compared to the other algorithms, achieving an accuracy rate of 83%. in order to enhance the precision, we have suggested implementing a deep learning technique, specifically a convolutional neural network, to identify the crops. achieving an accuracy of 93.54% was made possible by implementing this advanced deep-learning model. | [
"agriculture",
"an expanding area",
"study",
"crop prediction",
"agriculture",
"soil and environmental factors",
"rainfall",
"humidity",
"temperature",
"farmers",
"the authority",
"the crop",
"its development",
"the optimal harvest time",
"the farming community",
"challenges",
"its practices",
"the swift alterations",
"climatic conditions",
"machine learning algorithms",
"traditional methods",
"agricultural productivity",
"recent years",
"optimal precision",
"a specific machine learning approach",
"authors",
"their approach",
"machine learning",
"deep learning techniques",
"we",
"machine",
"deep learning",
"algorithms",
"crop outcomes",
"this proposed model",
"we",
"machine learning algorithms",
"naive bayes",
"decision tree",
"knn",
"it",
"the decision tree",
"algorithm",
"superior performance",
"the other algorithms",
"an accuracy rate",
"83%",
"order",
"the precision",
"we",
"a deep learning technique",
"specifically a convolutional neural network",
"the crops",
"an accuracy",
"93.54%",
"this advanced deep-learning model",
"recent years",
"83%",
"93.54%"
] |
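The crop-detection row above compares Naive Bayes, decision tree, and KNN classifiers on soil/weather features. A self-contained KNN sketch of that setup (the feature values and crop labels below are synthetic, for illustration only) fits in a few lines:

```python
import numpy as np

# Minimal k-nearest-neighbours classifier: majority vote among the
# k training samples closest to the query point.
def knn_predict(X_train, y_train, x, k=3):
    dists = np.linalg.norm(X_train - x, axis=1)          # Euclidean distances
    nearest = np.argsort(dists)[:k]                      # indices of k closest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                     # majority vote

# Toy data: [rainfall (mm), humidity (%), temperature (C)] -> crop label.
X = np.array([[200.0, 80.0, 27.0],
              [210.0, 85.0, 26.0],
              [60.0, 40.0, 22.0],
              [55.0, 35.0, 21.0]])
y = np.array(["rice", "rice", "wheat", "wheat"])

print(knn_predict(X, y, np.array([205.0, 82.0, 26.5])))  # -> rice
```

With k=3 the two nearby "rice" samples outvote the single "wheat" neighbour, which is the whole mechanism the study benchmarks against tree-based and deep models.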
A systematic review on machine learning and deep learning techniques in the effective diagnosis of Alzheimer’s disease | [
"Akhilesh Deep Arya",
"Sourabh Singh Verma",
"Prasun Chakarabarti",
"Tulika Chakrabarti",
"Ahmed A. Elngar",
"Ali-Mohammad Kamali",
"Mohammad Nami"
] | Alzheimer’s disease (AD) is a brain-related disease in which the condition of the patient gets worse with time. AD is not a curable disease by any medication. It is impossible to halt the death of brain cells, but with the help of medication, the effects of AD can be delayed. As not all MCI patients will suffer from AD, it is required to accurately diagnose whether a mild cognitive impaired (MCI) patient will convert to AD (namely MCI converter MCI-C) or not (namely MCI non-converter MCI-NC), during early diagnosis. There are two modalities, positron emission tomography (PET) and magnetic resonance image (MRI), used by a physician for the diagnosis of Alzheimer’s disease. Machine learning and deep learning perform exceptionally well in the field of computer vision where there is a requirement to extract information from high-dimensional data. Researchers use deep learning models in the field of medicine for diagnosis, prognosis, and even to predict the future health of the patient under medication. This study is a systematic review of publications using machine learning and deep learning methods for early classification of normal cognitive (NC) and Alzheimer’s disease (AD).This study is an effort to provide the details of the two most commonly used modalities PET and MRI for the identification of AD, and to evaluate the performance of both modalities while working with different classifiers. | 10.1186/s40708-023-00195-7 | a systematic review on machine learning and deep learning techniques in the effective diagnosis of alzheimer’s disease | alzheimer’s disease (ad) is a brain-related disease in which the condition of the patient gets worse with time. ad is not a curable disease by any medication. it is impossible to halt the death of brain cells, but with the help of medication, the effects of ad can be delayed. 
as not all mci patients will suffer from ad, it is required to accurately diagnose whether a mild cognitive impaired (mci) patient will convert to ad (namely mci converter mci-c) or not (namely mci non-converter mci-nc), during early diagnosis. there are two modalities, positron emission tomography (pet) and magnetic resonance image (mri), used by a physician for the diagnosis of alzheimer’s disease. machine learning and deep learning perform exceptionally well in the field of computer vision where there is a requirement to extract information from high-dimensional data. researchers use deep learning models in the field of medicine for diagnosis, prognosis, and even to predict the future health of the patient under medication. this study is a systematic review of publications using machine learning and deep learning methods for early classification of normal cognitive (nc) and alzheimer’s disease (ad).this study is an effort to provide the details of the two most commonly used modalities pet and mri for the identification of ad, and to evaluate the performance of both modalities while working with different classifiers. | [
"alzheimer’s disease",
"ad",
"a brain-related disease",
"which",
"the condition",
"the patient",
"time",
"ad",
"a curable disease",
"any medication",
"it",
"the death",
"brain cells",
"the help",
"medication",
"the effects",
"ad",
"not all mci patients",
"ad",
"it",
"a mild cognitive impaired (mci) patient",
"ad",
"namely mci converter mci-c",
"namely mci non-converter mci-nc",
"early diagnosis",
"two modalities",
"positron emission tomography",
"pet",
"magnetic resonance image",
"mri",
"a physician",
"the diagnosis",
"alzheimer’s disease",
"machine learning",
"deep learning",
"the field",
"computer vision",
"a requirement",
"information",
"high-dimensional data",
"researchers",
"deep learning models",
"the field",
"medicine",
"diagnosis",
"prognosis",
"the future health",
"the patient",
"medication",
"this study",
"a systematic review",
"publications",
"machine learning",
"deep learning methods",
"early classification",
"(nc",
"ad).this study",
"an effort",
"the details",
"the two most commonly used modalities",
"the identification",
"ad",
"the performance",
"both modalities",
"different classifiers",
"mci",
"mci",
"mci",
"mci",
"mci",
"mci-nc",
"two",
"two"
] |
Exploring the connection between deep learning and learning assessments: a cross-disciplinary engineering education perspective | [
"Sabrina Fawzia",
"Azharul Karim"
] | It is widely accepted that student learning is significantly affected by assessment methods, but a concrete relationship has not been established in the context of multidisciplinary engineering education. Students make a physiological investment and internalize learning (deep learning) if they see high value in their learning. They persist despite challenges and take delight in accomplishing their work. As student deep learning is affected by the assessment system, it is important to explore the relationship between assessment systems and factors affecting deep learning. This study identifies the factors associated with deep learning and examines the relationships between different assessment systems those factors. A conceptual model is proposed, and a structured questionnaire was designed and directed to 600 Queensland University of Technology (QUT) multidisciplinary engineering students, with 243 responses received. The gathered data were analyzed using both SPSS and SEM. Exploratory factor analysis revealed that deep learning is strongly associated with learning environment and course design and content. Strong influence of both summative and formative assessment on learning was established in this study. Engineering educators can facilitate deep learning by adopting both assessment types simultaneously to make the learning process more effective. The proposed theoretical model related to the deep learning concept can support the key practices and modern learning methodologies currently adopted to enhance the learning and teaching process. | 10.1057/s41599-023-02542-9 | exploring the connection between deep learning and learning assessments: a cross-disciplinary engineering education perspective | it is widely accepted that student learning is significantly affected by assessment methods, but a concrete relationship has not been established in the context of multidisciplinary engineering education. 
students make a physiological investment and internalize learning (deep learning) if they see high value in their learning. they persist despite challenges and take delight in accomplishing their work. as student deep learning is affected by the assessment system, it is important to explore the relationship between assessment systems and factors affecting deep learning. this study identifies the factors associated with deep learning and examines the relationships between different assessment systems those factors. a conceptual model is proposed, and a structured questionnaire was designed and directed to 600 queensland university of technology (qut) multidisciplinary engineering students, with 243 responses received. the gathered data were analyzed using both spss and sem. exploratory factor analysis revealed that deep learning is strongly associated with learning environment and course design and content. strong influence of both summative and formative assessment on learning was established in this study. engineering educators can facilitate deep learning by adopting both assessment types simultaneously to make the learning process more effective. the proposed theoretical model related to the deep learning concept can support the key practices and modern learning methodologies currently adopted to enhance the learning and teaching process. | [
"it",
"that student learning",
"assessment methods",
"a concrete relationship",
"the context",
"multidisciplinary engineering education",
"students",
"a physiological investment",
"they",
"high value",
"their learning",
"they",
"challenges",
"delight",
"their work",
"student deep learning",
"the assessment system",
"it",
"the relationship",
"assessment systems",
"factors",
"deep learning",
"this study",
"the factors",
"deep learning",
"the relationships",
"those factors",
"a conceptual model",
"a structured questionnaire",
"technology",
"243 responses",
"the gathered data",
"sem",
"exploratory factor analysis",
"deep learning",
"environment",
"course design",
"content",
"strong influence",
"both summative and formative assessment",
"learning",
"this study",
"engineering educators",
"deep learning",
"both assessment types",
"the learning process",
"the proposed theoretical model",
"the deep learning concept",
"the key practices",
"modern learning methodologies",
"the learning and teaching process",
"600 queensland university of technology",
"243",
"spss"
] |
Integrating QSAR modelling and deep learning in drug discovery: the emergence of deep QSAR | [
"Alexander Tropsha",
"Olexandr Isayev",
"Alexandre Varnek",
"Gisbert Schneider",
"Artem Cherkasov"
] | Quantitative structure–activity relationship (QSAR) modelling, an approach that was introduced 60 years ago, is widely used in computer-aided drug design. In recent years, progress in artificial intelligence techniques, such as deep learning, the rapid growth of databases of molecules for virtual screening and dramatic improvements in computational power have supported the emergence of a new field of QSAR applications that we term ‘deep QSAR’. Marking a decade from the pioneering applications of deep QSAR to tasks involved in small-molecule drug discovery, we herein describe key advances in the field, including deep generative and reinforcement learning approaches in molecular design, deep learning models for synthetic planning and the application of deep QSAR models in structure-based virtual screening. We also reflect on the emergence of quantum computing, which promises to further accelerate deep QSAR applications and the need for open-source and democratized resources to support computer-aided drug design. | 10.1038/s41573-023-00832-0 | integrating qsar modelling and deep learning in drug discovery: the emergence of deep qsar | quantitative structure–activity relationship (qsar) modelling, an approach that was introduced 60 years ago, is widely used in computer-aided drug design. in recent years, progress in artificial intelligence techniques, such as deep learning, the rapid growth of databases of molecules for virtual screening and dramatic improvements in computational power have supported the emergence of a new field of qsar applications that we term ‘deep qsar’. marking a decade from the pioneering applications of deep qsar to tasks involved in small-molecule drug discovery, we herein describe key advances in the field, including deep generative and reinforcement learning approaches in molecular design, deep learning models for synthetic planning and the application of deep qsar models in structure-based virtual screening. 
we also reflect on the emergence of quantum computing, which promises to further accelerate deep qsar applications and the need for open-source and democratized resources to support computer-aided drug design. | [
"quantitative structure",
"activity relationship",
"qsar",
"modelling",
"an approach",
"that",
"computer-aided drug design",
"recent years",
"progress",
"artificial intelligence techniques",
"deep learning",
"the rapid growth",
"databases",
"molecules",
"virtual screening",
"dramatic improvements",
"computational power",
"the emergence",
"a new field",
"qsar applications",
"we",
"a decade",
"the pioneering applications",
"deep qsar",
"tasks",
"small-molecule drug discovery",
"we",
"key advances",
"the field",
"deep generative and reinforcement learning approaches",
"molecular design",
"deep learning models",
"synthetic planning",
"the application",
"deep qsar models",
"structure-based virtual screening",
"we",
"the emergence",
"quantum computing",
"which",
"deep qsar applications",
"the need",
"open-source",
"democratized resources",
"computer-aided drug design",
"60 years ago",
"recent years",
"quantum"
] |
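The QSAR row above concerns mapping molecular descriptors to activity. A classical-QSAR-style sketch is just regularized regression from numeric descriptors to an activity value; the descriptor matrix below is synthetic (real pipelines would compute descriptors with a cheminformatics toolkit):

```python
import numpy as np

# Ridge regression from molecular descriptors to activity, the simplest
# classical QSAR model. Closed form: (X^T X + alpha I)^-1 X^T y.
def ridge_fit(X, y, alpha=1.0):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 5))                 # 100 molecules, 5 descriptors
coef_true = np.array([0.8, -1.2, 0.0, 0.5, 2.0])
y = X @ coef_true + 0.05 * rng.normal(size=100)  # noisy activities

coef = ridge_fit(X, y, alpha=0.1)
print(np.round(coef, 1))  # close to coef_true
```

"Deep QSAR" replaces this linear map with learned neural representations, but the input/output contract (descriptors in, activity out) is the same.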
Deep learning for water quality | [
"Wei Zhi",
"Alison P. Appling",
"Heather E. Golden",
"Joel Podgorski",
"Li Li"
] | Understanding and predicting the quality of inland waters are challenging, particularly in the context of intensifying climate extremes expected in the future. These challenges arise partly due to complex processes that regulate water quality, and arduous and expensive data collection that exacerbate the issue of data scarcity. Traditional process-based and statistical models often fall short in predicting water quality. In this Review, we posit that deep learning represents an underutilized yet promising approach that can unravel intricate structures and relationships in high-dimensional data. We demonstrate that deep learning methods can help address data scarcity by filling temporal and spatial gaps and aid in formulating and testing hypotheses via identifying influential drivers of water quality. This Review highlights the strengths and limitations of deep learning methods relative to traditional approaches, and underscores its potential as an emerging and indispensable approach in overcoming challenges and discovering new knowledge in water-quality sciences. | 10.1038/s44221-024-00202-z | deep learning for water quality | understanding and predicting the quality of inland waters are challenging, particularly in the context of intensifying climate extremes expected in the future. these challenges arise partly due to complex processes that regulate water quality, and arduous and expensive data collection that exacerbate the issue of data scarcity. traditional process-based and statistical models often fall short in predicting water quality. in this review, we posit that deep learning represents an underutilized yet promising approach that can unravel intricate structures and relationships in high-dimensional data. we demonstrate that deep learning methods can help address data scarcity by filling temporal and spatial gaps and aid in formulating and testing hypotheses via identifying influential drivers of water quality. 
this review highlights the strengths and limitations of deep learning methods relative to traditional approaches, and underscores its potential as an emerging and indispensable approach in overcoming challenges and discovering new knowledge in water-quality sciences. | [
"understanding",
"the quality",
"inland waters",
"the context",
"climate extremes",
"the future",
"these challenges",
"complex processes",
"that",
"water quality",
"arduous and expensive data collection",
"that",
"the issue",
"data scarcity",
"traditional process-based and statistical models",
"water quality",
"this review",
"we",
"deep learning",
"an underutilized yet promising approach",
"that",
"intricate structures",
"relationships",
"high-dimensional data",
"we",
"deep learning methods",
"data scarcity",
"temporal and spatial gaps",
"aid",
"formulating and testing hypotheses",
"influential drivers",
"water quality",
"this review",
"the strengths",
"limitations",
"deep learning methods",
"traditional approaches",
"its potential",
"an emerging and indispensable approach",
"challenges",
"new knowledge",
"water-quality sciences"
] |
Comparative Analysis of Machine Learning, Ensemble Learning and Deep Learning Classifiers for Parkinson’s Disease Detection | [
"Palak Goyal",
"Rinkle Rani"
] | A progressive neurodegenerative ailment called Parkinson's disease (PD) is marked by the death of dopamine-producing cells in the substantia nigra area of the brain. The exact etiology of PD remains elusive, but it is believed to involve the presence of Lewy bodies, abnormal protein aggregates, in affected brain regions, leading to the mobile symptoms of PD. Hence, as the management of PD continues to evolve, there is a growing demand for the establishment of a descriptive system that enables the early detection of PD. In this study, we conducted an extensive analysis using machine learning, ensemble learning, and deep learning models with different hyperparameters to develop accurate classification models for PD prediction. To enhance classifier performance and address overfitting, we employed principal component analysis (PCA) for feature selection along with various preprocessing techniques. The dataset used consisted of voice samples, comprising 188 PD patients and 64 normal individuals. Our results demonstrated that the Random Forest (RF) model with accuracy of 82.37% outperformed the other base classifiers Among the ensemble classifiers, the LGBM model exhibited the highest accuracy of 85.90% when compared to both base and ensemble classifiers. Notably, the deep learning model has 91.33% training accuracy and 85.02% testing accuracy, suggesting that deep learning models perform comparably equivalent on small datasets compared to machine learning classifiers. Overall, our findings underscore the effectiveness of machine learning, ensemble techniques and deep learning models in accurately predicting PD. | 10.1007/s42979-023-02368-x | comparative analysis of machine learning, ensemble learning and deep learning classifiers for parkinson’s disease detection | a progressive neurodegenerative ailment called parkinson's disease (pd) is marked by the death of dopamine-producing cells in the substantia nigra area of the brain. 
the exact etiology of pd remains elusive, but it is believed to involve the presence of lewy bodies, abnormal protein aggregates, in affected brain regions, leading to the mobile symptoms of pd. hence, as the management of pd continues to evolve, there is a growing demand for the establishment of a descriptive system that enables the early detection of pd. in this study, we conducted an extensive analysis using machine learning, ensemble learning, and deep learning models with different hyperparameters to develop accurate classification models for pd prediction. to enhance classifier performance and address overfitting, we employed principal component analysis (pca) for feature selection along with various preprocessing techniques. the dataset used consisted of voice samples, comprising 188 pd patients and 64 normal individuals. our results demonstrated that the random forest (rf) model with accuracy of 82.37% outperformed the other base classifiers among the ensemble classifiers, the lgbm model exhibited the highest accuracy of 85.90% when compared to both base and ensemble classifiers. notably, the deep learning model has 91.33% training accuracy and 85.02% testing accuracy, suggesting that deep learning models perform comparably equivalent on small datasets compared to machine learning classifiers. overall, our findings underscore the effectiveness of machine learning, ensemble techniques and deep learning models in accurately predicting pd. | [
"a progressive neurodegenerative ailment",
"parkinson's disease",
"pd",
"the death",
"dopamine-producing cells",
"the substantia nigra area",
"the brain",
"the exact etiology",
"pd",
"it",
"the presence",
"lewy bodies",
"abnormal protein aggregates",
"affected brain regions",
"the mobile symptoms",
"pd",
"the management",
"pd",
"a growing demand",
"the establishment",
"a descriptive system",
"that",
"the early detection",
"pd",
"this study",
"we",
"an extensive analysis",
"machine learning",
"ensemble learning",
"deep learning models",
"different hyperparameters",
"accurate classification models",
"pd prediction",
"classifier performance",
"address overfitting",
"we",
"principal component analysis",
"pca",
"feature selection",
"various preprocessing techniques",
"the dataset",
"voice samples",
"188 pd patients",
"64 normal individuals",
"our results",
"the random forest",
"(rf) model",
"accuracy",
"82.37%",
"the other base classifiers",
"the ensemble classifiers",
"the lgbm model",
"the highest accuracy",
"85.90%",
"both base and ensemble classifiers",
"the deep learning model",
"91.33% training accuracy",
"85.02% testing accuracy",
"deep learning models",
"small datasets",
"machine learning classifiers",
"our findings",
"the effectiveness",
"machine learning",
"ensemble techniques",
"deep learning models",
"pd",
"188",
"64",
"82.37%",
"85.90%",
"91.33%",
"85.02%"
] |
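The abstract in the row above applies PCA for feature selection on a voice dataset of 188 PD patients and 64 controls (252 samples) before classification. A minimal sketch of that dimensionality-reduction step follows; this is not the authors' pipeline, and the data and dimensions below are illustrative stand-ins.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature matrix X onto its top principal components.

    Mirrors the role PCA plays in the study above: shrinking a
    high-dimensional feature matrix before classification to curb
    overfitting. Centre the data, take the SVD, and keep the leading
    right-singular vectors as the projection basis.
    """
    Xc = X - X.mean(axis=0)                  # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # scores in component space

# Toy stand-in for the 252-sample voice dataset (50 synthetic features)
rng = np.random.default_rng(0)
X = rng.normal(size=(252, 50))
Z = pca_reduce(X, n_components=10)
print(Z.shape)  # (252, 10)
```

The reduced matrix `Z` would then feed any of the compared classifiers (RF, LGBM, or a deep model); components come out ordered by explained variance, so the first column always carries at least as much variance as the last.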
Employing deep learning and transfer learning for accurate brain tumor detection | [
"Sandeep Kumar Mathivanan",
"Sridevi Sonaimuthu",
"Sankar Murugesan",
"Hariharan Rajadurai",
"Basu Dev Shivahare",
"Mohd Asif Shah"
] | Artificial intelligence-powered deep learning methods are being used to diagnose brain tumors with high accuracy, owing to their ability to process large amounts of data. Magnetic resonance imaging stands as the gold standard for brain tumor diagnosis using machine vision, surpassing computed tomography, ultrasound, and X-ray imaging in its effectiveness. Despite this, brain tumor diagnosis remains a challenging endeavour due to the intricate structure of the brain. This study delves into the potential of deep transfer learning architectures to elevate the accuracy of brain tumor diagnosis. Transfer learning is a machine learning technique that allows us to repurpose pre-trained models on new tasks. This can be particularly useful for medical imaging tasks, where labelled data is often scarce. Four distinct transfer learning architectures were assessed in this study: ResNet152, VGG19, DenseNet169, and MobileNetv3. The models were trained and validated on a dataset from benchmark database: Kaggle. Five-fold cross validation was adopted for training and testing. To enhance the balance of the dataset and improve the performance of the models, image enhancement techniques were applied to the data for the four categories: pituitary, normal, meningioma, and glioma. MobileNetv3 achieved the highest accuracy of 99.75%, significantly outperforming other existing methods. This demonstrates the potential of deep transfer learning architectures to revolutionize the field of brain tumor diagnosis. | 10.1038/s41598-024-57970-7 | employing deep learning and transfer learning for accurate brain tumor detection | artificial intelligence-powered deep learning methods are being used to diagnose brain tumors with high accuracy, owing to their ability to process large amounts of data. magnetic resonance imaging stands as the gold standard for brain tumor diagnosis using machine vision, surpassing computed tomography, ultrasound, and x-ray imaging in its effectiveness. 
despite this, brain tumor diagnosis remains a challenging endeavour due to the intricate structure of the brain. this study delves into the potential of deep transfer learning architectures to elevate the accuracy of brain tumor diagnosis. transfer learning is a machine learning technique that allows us to repurpose pre-trained models on new tasks. this can be particularly useful for medical imaging tasks, where labelled data is often scarce. four distinct transfer learning architectures were assessed in this study: resnet152, vgg19, densenet169, and mobilenetv3. the models were trained and validated on a dataset from benchmark database: kaggle. five-fold cross validation was adopted for training and testing. to enhance the balance of the dataset and improve the performance of the models, image enhancement techniques were applied to the data for the four categories: pituitary, normal, meningioma, and glioma. mobilenetv3 achieved the highest accuracy of 99.75%, significantly outperforming other existing methods. this demonstrates the potential of deep transfer learning architectures to revolutionize the field of brain tumor diagnosis. | [
"artificial intelligence-powered deep learning methods",
"brain tumors",
"high accuracy",
"their ability",
"large amounts",
"data",
"magnetic resonance imaging",
"the gold standard",
"brain tumor diagnosis",
"machine vision",
"computed tomography",
"ultrasound",
"x",
"-ray imaging",
"its effectiveness",
"this",
"brain tumor diagnosis",
"a challenging endeavour",
"the intricate structure",
"the brain",
"this study",
"the potential",
"architectures",
"the accuracy",
"brain tumor diagnosis",
"transfer learning",
"a machine learning technique",
"that",
"us",
"pre-trained models",
"new tasks",
"this",
"medical imaging tasks",
"labelled data",
"four distinct transfer learning architectures",
"this study",
"resnet152",
"vgg19",
"densenet169",
"mobilenetv3",
"the models",
"a dataset",
"benchmark database",
"kaggle",
"five-fold cross validation",
"training",
"testing",
"the balance",
"the dataset",
"the performance",
"the models",
"image enhancement techniques",
"the data",
"the four categories",
"glioma",
"mobilenetv3",
"the highest accuracy",
"99.75%",
"other existing methods",
"this",
"the potential",
"architectures",
"the field",
"brain tumor diagnosis",
"four",
"mobilenetv3",
"five-fold",
"four",
"mobilenetv3",
"99.75%"
] |
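The brain-tumor study above adopts five-fold cross validation for training and testing its transfer-learning models. A stdlib-only sketch of the fold-splitting logic is below; it is a generic illustration, not the authors' code, and the sample count is arbitrary.

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross validation.

    Shuffle the sample indices once, then rotate each contiguous
    block into the test role exactly once, so every sample is tested
    once and trained on k-1 times.
    """
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold = n_samples // k
    for i in range(k):
        # last fold absorbs any remainder samples
        test = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[i * fold:]
        test_set = set(test)
        train = [j for j in idx if j not in test_set]
        yield train, test

folds = list(kfold_indices(20, k=5))
print(len(folds))  # 5
```

Each model's reported metric is then the average over the five held-out folds, which is what makes the scheme more robust than a single train/test split on a modest medical dataset.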
Detecting Suicidality in Arabic Tweets Using Machine Learning and Deep Learning Techniques | [
"Asma Abdulsalam",
"Areej Alhothali",
"Saleh Al-Ghamdi"
] | Social media platforms have revolutionized traditional communication techniques by allowing people to connect instantaneously, openly, and frequently. As people use social media to share personal stories and express their opinions, negative emotions such as thoughts of death, self-harm, and hardship are commonly expressed, particularly among younger generations. Accordingly, the use of social media to detect suicidality may help provide proper intervention that will ultimately deter the spread of self-harm and suicidal ideation on social media. To investigate the automated detection of suicidal thoughts in Arabic tweets, we developed a novel Arabic suicidal tweet dataset, examined several machine learning models trained on word frequency and embedding features, and investigated the performance of pre-trained deep learning models in identifying suicidal sentiment. The results indicate that the support vector machine trained on character n-gram features yields the best performance among conventional machine learning models, with an accuracy of 86% and F1 score of 79%. In the subsequent deep learning experiment, AraBert outperformed all other machine and deep learning models with an accuracy of 91% and F1-score of 88%, significantly improving the detection of suicidal ideation in the dataset. To the best of our knowledge, this study represents the first attempt to compile an Arabic suicidality detection dataset from Twitter and to use deep learning to detect suicidal sentiment in Arabic posts. | 10.1007/s13369-024-08767-3 | detecting suicidality in arabic tweets using machine learning and deep learning techniques | social media platforms have revolutionized traditional communication techniques by allowing people to connect instantaneously, openly, and frequently. 
as people use social media to share personal stories and express their opinions, negative emotions such as thoughts of death, self-harm, and hardship are commonly expressed, particularly among younger generations. accordingly, the use of social media to detect suicidality may help provide proper intervention that will ultimately deter the spread of self-harm and suicidal ideation on social media. to investigate the automated detection of suicidal thoughts in arabic tweets, we developed a novel arabic suicidal tweet dataset, examined several machine learning models trained on word frequency and embedding features, and investigated the performance of pre-trained deep learning models in identifying suicidal sentiment. the results indicate that the support vector machine trained on character n-gram features yields the best performance among conventional machine learning models, with an accuracy of 86% and f1 score of 79%. in the subsequent deep learning experiment, arabert outperformed all other machine and deep learning models with an accuracy of 91% and f1-score of 88%, significantly improving the detection of suicidal ideation in the dataset. to the best of our knowledge, this study represents the first attempt to compile an arabic suicidality detection dataset from twitter and to use deep learning to detect suicidal sentiment in arabic posts. | [
"social media platforms",
"traditional communication techniques",
"people",
"people",
"social media",
"personal stories",
"their opinions",
"negative emotions",
"thoughts",
"death",
"self-harm",
"hardship",
"younger generations",
"the use",
"social media",
"suicidality",
"proper intervention",
"that",
"the spread",
"self-harm",
"suicidal ideation",
"social media",
"the automated detection",
"suicidal thoughts",
"arabic tweets",
"we",
"a novel arabic suicidal tweet dataset",
"several machine learning models",
"word frequency",
"features",
"the performance",
"pre-trained deep learning models",
"suicidal sentiment",
"the results",
"the support vector machine",
"character n-gram features",
"the best performance",
"conventional machine learning models",
"an accuracy",
"86%",
"f1 score",
"79%",
"the subsequent deep learning experiment",
"arabert",
"all other machine",
"deep learning models",
"an accuracy",
"91%",
"f1-score",
"88%",
"the detection",
"suicidal ideation",
"the dataset",
"our knowledge",
"this study",
"the first attempt",
"an arabic suicidality detection",
"twitter",
"deep learning",
"suicidal sentiment",
"arabic posts",
"arabic",
"arabic",
"86%",
"79%",
"91%",
"88%",
"first",
"arabic",
"arabic"
] |
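The suicidality-detection study above reports that an SVM trained on character n-gram features gave the best conventional-ML performance. A small sketch of extracting such features follows; the tokeniser and example string are illustrative, not taken from the paper's dataset.

```python
from collections import Counter

def char_ngrams(text, n_min=2, n_max=3):
    """Count character n-grams of length n_min..n_max in a string.

    Character-level windows are a common choice for morphologically
    rich, noisily spelled social-media text, which is the motivation
    for using them on Arabic tweets in the study above. The resulting
    counts would be vectorised and fed to an SVM.
    """
    counts = Counter()
    for n in range(n_min, n_max + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return counts

feats = char_ngrams("sad sad")
print(feats["sa"])  # 2
```

In practice one would build a vocabulary of n-grams over the corpus, map each tweet to a (possibly TF-IDF-weighted) count vector, and train the SVM on those vectors.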
Deep learning for code generation: a survey | [
"Huangzhao Zhang",
"Kechi Zhang",
"Zhuo Li",
"Jia Li",
"Jia Li",
"Yongmin Li",
"Yunfei Zhao",
"Yuqi Zhu",
"Fang Liu",
"Ge Li",
"Zhi Jin"
] | In the past decade, thanks to the powerfulness of deep-learning techniques, we have witnessed a whole new era of automated code generation. To sort out developments, we have conducted a comprehensive review of solutions to deep learning-based code generation. In this survey, we generally formalize the pipeline and procedure of code generation and categorize existing solutions according to taxonomy from perspectives of architecture, model-agnostic enhancing strategy, metrics, and tasks. In addition, we outline the challenges faced by current dominant large models and list several plausible directions for future research. We hope that this survey may provide handy guidance to understanding, utilizing, and developing deep learning-based code-generation techniques for researchers and practitioners. | 10.1007/s11432-023-3956-3 | deep learning for code generation: a survey | in the past decade, thanks to the powerfulness of deep-learning techniques, we have witnessed a whole new era of automated code generation. to sort out developments, we have conducted a comprehensive review of solutions to deep learning-based code generation. in this survey, we generally formalize the pipeline and procedure of code generation and categorize existing solutions according to taxonomy from perspectives of architecture, model-agnostic enhancing strategy, metrics, and tasks. in addition, we outline the challenges faced by current dominant large models and list several plausible directions for future research. we hope that this survey may provide handy guidance to understanding, utilizing, and developing deep learning-based code-generation techniques for researchers and practitioners. | [
"the past decade",
"the powerfulness",
"deep-learning techniques",
"we",
"a whole new era",
"automated code generation",
"developments",
"we",
"a comprehensive review",
"solutions",
"deep learning-based code generation",
"this survey",
"we",
"the pipeline",
"procedure",
"code generation",
"existing solutions",
"taxonomy",
"perspectives",
"architecture",
"model-agnostic enhancing strategy",
"metrics",
"tasks",
"addition",
"we",
"the challenges",
"current dominant large models",
"several plausible directions",
"future research",
"we",
"this survey",
"handy guidance",
"understanding",
"deep learning-based code-generation techniques",
"researchers",
"practitioners",
"the past decade"
] |
Privacy enhanced course recommendations through deep learning in Federated Learning environments | [
"Chandra Sekhar Kolli",
"Sreenivasu Seelamanthula",
"Venkata Krishna Reddy V",
"Padamata Ramesh Babu",
"Mule Rama Krishna Reddy",
"Babu Rao Gumpina"
] | The increasing concerns around data security and privacy among users have significantly pushed the interest of the research community towards developing privacy-preserving recommendation systems. Amidst this backdrop, our study introduces a novel course recommendation methodology leveraging Federated Learning (FL) coupled with advanced Deep Learning techniques. This method executes the recommendation process across local nodes through several stages, including agglomerative matrix formulation, course clustering, bi-level matching, identification of learner-preferred courses, and ultimately, course recommendation. Notably, course clustering is achieved through Deep Fuzzy Clustering (DFC), while Deep Convolutional Neural Networks (DCNN) are employed for the recommendation phase. The efficacy of our DFC-DCNN-FL approach is rigorously evaluated based on several metrics: accuracy, False Positive Rate (FPR), loss function, Mean Square Error (MSE), Root MSE (RMSE), and Mean Average Precision (MAP). The results demonstrate remarkable performance with scores of 0.909, 0.116, 0.126, 0.291, 0.539, and 0.925, respectively. | 10.1007/s41870-024-02087-3 | privacy enhanced course recommendations through deep learning in federated learning environments | the increasing concerns around data security and privacy among users have significantly pushed the interest of the research community towards developing privacy-preserving recommendation systems. amidst this backdrop, our study introduces a novel course recommendation methodology leveraging federated learning (fl) coupled with advanced deep learning techniques. this method executes the recommendation process across local nodes through several stages, including agglomerative matrix formulation, course clustering, bi-level matching, identification of learner-preferred courses, and ultimately, course recommendation. 
notably, course clustering is achieved through deep fuzzy clustering (dfc), while deep convolutional neural networks (dcnn) are employed for the recommendation phase. the efficacy of our dfc-dcnn-fl approach is rigorously evaluated based on several metrics: accuracy, false positive rate (fpr), loss function, mean square error (mse), root mse (rmse), and mean average precision (map). the results demonstrate remarkable performance with scores of 0.909, 0.116, 0.126, 0.291, 0.539, and 0.925, respectively. | [
"the increasing concerns",
"data security",
"privacy",
"users",
"the interest",
"the research community",
"privacy-preserving recommendation systems",
"this backdrop",
"our study",
"a novel course recommendation methodology",
"federated learning",
"advanced deep learning techniques",
"this method",
"the recommendation process",
"local nodes",
"several stages",
"agglomerative matrix formulation",
"bi-level matching",
"identification",
"learner-preferred courses",
"ultimately, course recommendation",
"course clustering",
"deep fuzzy clustering",
"dfc",
"deep convolutional neural networks",
"dcnn",
"the recommendation phase",
"the efficacy",
"our dfc-dcnn-fl approach",
"several metrics",
"accuracy",
"false positive rate",
"fpr",
"loss function",
"square error",
"mse",
"root mse",
"rmse",
"average precision",
"(map",
"the results",
"remarkable performance",
"scores",
"rmse",
"0.909",
"0.116",
"0.126",
"0.291",
"0.539",
"0.925"
] |
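The course-recommendation study above executes its pipeline across local nodes under Federated Learning, so only model parameters (never raw learner data) leave each node. A minimal sketch of the standard federated-averaging aggregation step is below; it is a generic FedAvg illustration with flat parameter lists, not the authors' DFC-DCNN pipeline.

```python
def fed_avg(client_weights, client_sizes):
    """Average model parameters across clients, weighted by local
    dataset size: the core server-side aggregation step of Federated
    Learning. Each client trains locally; the server only ever sees
    parameter vectors, which is the privacy argument FL rests on.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two hypothetical clients with 2-parameter models; the second client
# has three times as much local data, so it dominates the average.
global_w = fed_avg([[1.0, 0.0], [3.0, 2.0]], [1, 3])
print(global_w)  # [2.5, 1.5]
```

A full round would broadcast `global_w` back to the nodes, run more local training there, and repeat until convergence.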
DEEP-squared: deep learning powered De-scattering with Excitation Patterning | [
"Navodini Wijethilake",
"Mithunjha Anandakumar",
"Cheng Zheng",
"Peter T. C. So",
"Murat Yildirim",
"Dushan N. Wadduwage"
] | Limited throughput is a key challenge in in vivo deep tissue imaging using nonlinear optical microscopy. Point scanning multiphoton microscopy, the current gold standard, is slow especially compared to the widefield imaging modalities used for optically cleared or thin specimens. We recently introduced “De-scattering with Excitation Patterning” or “DEEP” as a widefield alternative to point-scanning geometries. Using patterned multiphoton excitation, DEEP encodes spatial information inside tissue before scattering. However, to de-scatter at typical depths, hundreds of such patterned excitations were needed. In this work, we present DEEP2, a deep learning-based model that can de-scatter images from just tens of patterned excitations instead of hundreds. Consequently, we improve DEEP’s throughput by almost an order of magnitude. We demonstrate our method in multiple numerical and experimental imaging studies, including in vivo cortical vasculature imaging up to 4 scattering lengths deep in live mice. | 10.1038/s41377-023-01248-6 | deep-squared: deep learning powered de-scattering with excitation patterning | limited throughput is a key challenge in in vivo deep tissue imaging using nonlinear optical microscopy. point scanning multiphoton microscopy, the current gold standard, is slow especially compared to the widefield imaging modalities used for optically cleared or thin specimens. we recently introduced “de-scattering with excitation patterning” or “deep” as a widefield alternative to point-scanning geometries. using patterned multiphoton excitation, deep encodes spatial information inside tissue before scattering. however, to de-scatter at typical depths, hundreds of such patterned excitations were needed. in this work, we present deep2, a deep learning-based model that can de-scatter images from just tens of patterned excitations instead of hundreds. consequently, we improve deep’s throughput by almost an order of magnitude. 
we demonstrate our method in multiple numerical and experimental imaging studies, including in vivo cortical vasculature imaging up to 4 scattering lengths deep in live mice. | [
"limited throughput",
"a key challenge",
"vivo deep tissue",
"nonlinear optical microscopy",
"point",
"multiphoton microscopy",
"the current gold standard",
"the widefield imaging modalities",
"optically cleared or thin specimens",
"we",
"de",
"a widefield alternative",
"point-scanning geometries",
"patterned multiphoton excitation",
"tissue",
"de",
"-",
"scatter",
"typical depths",
"hundreds",
"such patterned excitations",
"this work",
"we",
"deep2",
"a deep learning-based model",
"that",
"de-scatter images",
"just tens",
"patterned excitations",
"hundreds",
"we",
"deep’s throughput",
"almost an order",
"magnitude",
"we",
"our method",
"multiple numerical and experimental imaging studies",
"vivo cortical vasculature",
"scattering lengths",
"live mice",
"multiphoton",
"hundreds",
"deep2",
"just tens",
"hundreds",
"4"
] |
Harnessing deep learning for population genetic inference | [
"Xin Huang",
"Aigerim Rymbekova",
"Olga Dolgova",
"Oscar Lao",
"Martin Kuhlwilm"
] | In population genetics, the emergence of large-scale genomic data for various species and populations has provided new opportunities to understand the evolutionary forces that drive genetic diversity using statistical inference. However, the era of population genomics presents new challenges in analysing the massive amounts of genomes and variants. Deep learning has demonstrated state-of-the-art performance for numerous applications involving large-scale data. Recently, deep learning approaches have gained popularity in population genetics; facilitated by the advent of massive genomic data sets, powerful computational hardware and complex deep learning architectures, they have been used to identify population structure, infer demographic history and investigate natural selection. Here, we introduce common deep learning architectures and provide comprehensive guidelines for implementing deep learning models for population genetic inference. We also discuss current challenges and future directions for applying deep learning in population genetics, focusing on efficiency, robustness and interpretability. | 10.1038/s41576-023-00636-3 | harnessing deep learning for population genetic inference | in population genetics, the emergence of large-scale genomic data for various species and populations has provided new opportunities to understand the evolutionary forces that drive genetic diversity using statistical inference. however, the era of population genomics presents new challenges in analysing the massive amounts of genomes and variants. deep learning has demonstrated state-of-the-art performance for numerous applications involving large-scale data. 
recently, deep learning approaches have gained popularity in population genetics; facilitated by the advent of massive genomic data sets, powerful computational hardware and complex deep learning architectures, they have been used to identify population structure, infer demographic history and investigate natural selection. here, we introduce common deep learning architectures and provide comprehensive guidelines for implementing deep learning models for population genetic inference. we also discuss current challenges and future directions for applying deep learning in population genetics, focusing on efficiency, robustness and interpretability. | [
"population genetics",
"the emergence",
"large-scale genomic data",
"various species",
"populations",
"new opportunities",
"the evolutionary forces",
"that",
"genetic diversity",
"statistical inference",
"the era",
"population genomics",
"new challenges",
"the massive amounts",
"genomes",
"variants",
"deep learning",
"the-art",
"numerous applications",
"large-scale data",
"deep learning approaches",
"popularity",
"population genetics",
"the advent",
"massive genomic data sets",
"powerful computational hardware",
"complex deep learning architectures",
"they",
"population structure",
"demographic history",
"natural selection",
"we",
"common deep learning architectures",
"comprehensive guidelines",
"deep learning models",
"population genetic inference",
"we",
"current challenges",
"future directions",
"deep learning",
"population genetics",
"efficiency",
"robustness",
"interpretability"
] |
An enhanced deep learning method for multi-class brain tumor classification using deep transfer learning | [
"Sohaib Asif",
"Ming Zhao",
"Fengxiao Tang",
"Yusen Zhu"
] | Multi-class brain tumor classification is an important area of research in the field of medical imaging because of the different tumor characteristics. One such challenging problem is the multiclass classification of brain tumors using MR images. Since accuracy is critical in classification, computer vision researchers are introducing a number of techniques; however, achieving high accuracy remains challenging when classifying brain images. Early diagnosis of brain tumor types can activate timely treatment, thereby improving the patient’s chances of survival. In recent years, deep learning models have achieved promising results, especially in classifying brain tumors to help neurologists. This work proposes a deep transfer learning model that accelerates brain tumor detection using MR imaging. In this paper, five popular deep learning architectures are utilized to develop a system for diagnosing brain tumors. The architectures used in this paper are Xception, DenseNet201, DenseNet121, ResNet152V2, and InceptionResNetV2. The final layer of these architectures has been modified with our deep dense block and softmax layer as the output layer to improve the classification accuracy. This article presents two main experiments to assess the effectiveness of the proposed model. First, three-class results using images from patients with glioma, meningioma, and pituitary are discussed. Second, the results of four classes are discussed using images of glioma, meningioma, pituitary and healthy patients. The results show that the proposed model based on Xception architecture is the most suitable deep learning model for detecting brain tumors. It achieves a classification accuracy of 99.67% on the 3-class dataset and 95.87% on the 4-class dataset, which is better than the state-of-the-art methods. 
In conclusion, the proposed model can provide radiologists with an automated medical diagnostic system to make fast and accurate decisions. | 10.1007/s11042-023-14828-w | an enhanced deep learning method for multi-class brain tumor classification using deep transfer learning | multi-class brain tumor classification is an important area of research in the field of medical imaging because of the different tumor characteristics. one such challenging problem is the multiclass classification of brain tumors using mr images. since accuracy is critical in classification, computer vision researchers are introducing a number of techniques; however, achieving high accuracy remains challenging when classifying brain images. early diagnosis of brain tumor types can activate timely treatment, thereby improving the patient’s chances of survival. in recent years, deep learning models have achieved promising results, especially in classifying brain tumors to help neurologists. this work proposes a deep transfer learning model that accelerates brain tumor detection using mr imaging. in this paper, five popular deep learning architectures are utilized to develop a system for diagnosing brain tumors. the architectures used in this paper are xception, densenet201, densenet121, resnet152v2, and inceptionresnetv2. the final layer of these architectures has been modified with our deep dense block and softmax layer as the output layer to improve the classification accuracy. this article presents two main experiments to assess the effectiveness of the proposed model. first, three-class results using images from patients with glioma, meningioma, and pituitary are discussed. second, the results of four classes are discussed using images of glioma, meningioma, pituitary and healthy patients. the results show that the proposed model based on xception architecture is the most suitable deep learning model for detecting brain tumors. it achieves a classification accuracy of 99.67% on the 3-class dataset and 95.87% on the 4-class dataset, which is better than the state-of-the-art methods. 
in conclusion, the proposed model can provide radiologists with an automated medical diagnostic system to make fast and accurate decisions. | [
"multi-class brain tumor classification",
"an important area",
"research",
"the field",
"medical imaging",
"the different tumor characteristics",
"one such challenging problem",
"the multiclass classification",
"brain tumors",
"mr images",
"accuracy",
"classification",
"computer vision researchers",
"a number",
"techniques",
"high accuracy",
"brain images",
"early diagnosis",
"brain tumor types",
"timely treatment",
"the patient’s chances",
"survival",
"recent years",
"deep learning models",
"promising results",
"brain tumors",
"neurologists",
"this work",
"a deep transfer learning model",
"that",
"brain tumor detection",
"mr imaging",
"this paper",
"five popular deep learning architectures",
"a system",
"brain tumors",
"the architectures",
"this paper",
"xception",
"densenet201",
"densenet121",
"resnet152v2",
"the final layer",
"these architectures",
"our deep dense block",
"softmax layer",
"the output layer",
"the classification accuracy",
"this article",
"two main experiments",
"the effectiveness",
"the proposed model",
"first, three-class results",
"images",
"patients",
"glioma",
"pituitary",
"the results",
"four classes",
"images",
"glioma",
"patients",
"the results",
"the proposed model",
"xception architecture",
"the most suitable deep learning model",
"brain tumors",
"it",
"a classification accuracy",
"99.67%",
"the 3-class dataset",
"95.87%",
"the 4-class dataset",
"which",
"the-art",
"conclusion",
"the proposed model",
"radiologists",
"an automated medical diagnostic system",
"fast and accurate decisions",
"one",
"recent years",
"five",
"inceptionresnetv2",
"two",
"first",
"three",
"glioma",
"second",
"four",
"glioma",
"99.67%",
"3",
"95.87%",
"4"
] |
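The transfer-learning study above keeps pretrained backbones (Xception, DenseNet, etc.) and swaps their final layer for a new dense block plus a softmax output sized to the task's 3 or 4 tumor classes. The numpy sketch below illustrates that head-replacement pattern only; the "backbone" here is a frozen random projection standing in for real pretrained features, and all dimensions are illustrative.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class HeadOnBackbone:
    """Frozen feature extractor + trainable classifier head.

    Conceptually mirrors the recipe in the abstract above: the
    backbone's weights stay fixed, and only the new dense/softmax
    head (sized to n_classes) would be trained on the tumor dataset.
    """
    def __init__(self, feat_dim, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.backbone = rng.normal(size=(2048, feat_dim))   # frozen
        self.head = rng.normal(size=(feat_dim, n_classes))  # trainable

    def predict_proba(self, x):
        feats = np.maximum(x @ self.backbone, 0.0)  # ReLU features
        return softmax(feats @ self.head)

model = HeadOnBackbone(feat_dim=64, n_classes=4)
probs = model.predict_proba(np.ones((1, 2048)))
print(probs.shape)  # (1, 4)
```

In a real framework the same idea is expressed by loading the pretrained model without its top layer, freezing it, and stacking a fresh dense block and softmax layer on the extracted features.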
A Procedural Constructive Learning Mechanism with Deep Reinforcement Learning for Cognitive Agents | [
"Leonardo de Lellis Rossi",
"Eric Rohmer",
"Paula Dornhofer Paro Costa",
"Esther Luna Colombini",
"Alexandre da Silva Simões",
"Ricardo Ribeiro Gudwin"
] | Recent advancements in AI and deep learning have created a growing demand for artificial agents capable of performing tasks within increasingly complex environments. To address the challenges associated with continuous learning constraints and knowledge capacity in this context, cognitive architectures inspired by human cognition have gained significance. This study contributes to existing research by introducing a cognitive-attentional system employing a constructive neural network-based learning approach for continuous acquisition of procedural knowledge. We replace an incremental tabular Reinforcement Learning algorithm with a constructive neural network deep reinforcement learning mechanism for continuous sensorimotor knowledge acquisition, thereby enhancing the overall learning capacity. The primary emphasis of this modification centers on optimizing memory utilization and reducing training time. Our study presents a learning strategy that amalgamates deep reinforcement learning with procedural learning, mirroring the incremental learning process observed in human sensorimotor development. This approach is embedded within the CONAIM cognitive-attentional architecture, leveraging the cognitive tools of CST. The proposed learning mechanism allows the model to dynamically create and modify elements in its procedural memory, facilitating the reuse of previously acquired functions and procedures. Additionally, it equips the model with the capability to combine learned elements to effectively adapt to complex scenarios. A constructive neural network was employed, initiating with an initial hidden layer comprising one neuron. However, it possesses the capacity to adapt its internal architecture in response to its performance in procedural and sensorimotor learning tasks, inserting new hidden layers or neurons. 
Experimentation conducted through simulations involving a humanoid robot demonstrates the successful resolution of tasks that were previously unsolved through incremental knowledge acquisition. Throughout the training phase, the constructive agent achieved a minimum of 40% greater rewards and executed 8% more actions when compared to other agents. In the subsequent testing phase, the constructive agent exhibited a 15% increase in the number of actions performed in contrast to its counterparts. | 10.1007/s10846-024-02064-9 | a procedural constructive learning mechanism with deep reinforcement learning for cognitive agents | recent advancements in ai and deep learning have created a growing demand for artificial agents capable of performing tasks within increasingly complex environments. to address the challenges associated with continuous learning constraints and knowledge capacity in this context, cognitive architectures inspired by human cognition have gained significance. this study contributes to existing research by introducing a cognitive-attentional system employing a constructive neural network-based learning approach for continuous acquisition of procedural knowledge. we replace an incremental tabular reinforcement learning algorithm with a constructive neural network deep reinforcement learning mechanism for continuous sensorimotor knowledge acquisition, thereby enhancing the overall learning capacity. the primary emphasis of this modification centers on optimizing memory utilization and reducing training time. our study presents a learning strategy that amalgamates deep reinforcement learning with procedural learning, mirroring the incremental learning process observed in human sensorimotor development. this approach is embedded within the conaim cognitive-attentional architecture, leveraging the cognitive tools of cst. 
the proposed learning mechanism allows the model to dynamically create and modify elements in its procedural memory, facilitating the reuse of previously acquired functions and procedures. additionally, it equips the model with the capability to combine learned elements to effectively adapt to complex scenarios. a constructive neural network was employed, initiating with an initial hidden layer comprising one neuron. however, it possesses the capacity to adapt its internal architecture in response to its performance in procedural and sensorimotor learning tasks, inserting new hidden layers or neurons. experimentation conducted through simulations involving a humanoid robot demonstrates the successful resolution of tasks that were previously unsolved through incremental knowledge acquisition. throughout the training phase, the constructive agent achieved a minimum of 40% greater rewards and executed 8% more actions when compared to other agents. in the subsequent testing phase, the constructive agent exhibited a 15% increase in the number of actions performed in contrast to its counterparts. | [
"recent advancements",
"ai",
"deep learning",
"a growing demand",
"artificial agents",
"tasks",
"increasingly complex environments",
"the challenges",
"continuous learning constraints",
"knowledge capacity",
"this context",
"cognitive architectures",
"human cognition",
"significance",
"this study",
"existing research",
"a cognitive-attentional system",
"a constructive neural network-based learning approach",
"continuous acquisition",
"procedural knowledge",
"we",
"an incremental tabular reinforcement learning algorithm",
"a constructive neural network deep reinforcement learning mechanism",
"continuous sensorimotor knowledge acquisition",
"the overall learning capacity",
"the primary emphasis",
"this modification centers",
"memory utilization",
"training time",
"our study",
"a learning strategy",
"that",
"deep reinforcement learning",
"procedural learning",
"the incremental learning process",
"human sensorimotor development",
"this approach",
"the conaim cognitive-attentional architecture",
"the cognitive tools",
"cst",
"the proposed learning mechanism",
"the model",
"elements",
"its procedural memory",
"the reuse",
"previously acquired functions",
"procedures",
"it",
"the model",
"the capability",
"elements",
"complex scenarios",
"a constructive neural network",
"an initial hidden layer",
"one neuron",
"it",
"the capacity",
"its internal architecture",
"response",
"its performance",
"procedural and sensorimotor learning tasks",
"new hidden layers",
"neurons",
"experimentation",
"simulations",
"a humanoid robot",
"the successful resolution",
"tasks",
"that",
"incremental knowledge acquisition",
"the training phase",
"the constructive agent",
"a minimum",
"40% greater rewards",
"8% more actions",
"other agents",
"the subsequent testing phase",
"the constructive agent",
"a 15% increase",
"the number",
"actions",
"contrast",
"its counterparts",
"one",
"40%",
"8%",
"15%"
] |
Classification of Different Plant Species Using Deep Learning and Machine Learning Algorithms | [
"Siddharth Singh Chouhan",
"Uday Pratap Singh",
"Utkarsh Sharma",
"Sanjeev Jain"
] | In the present situation, a lot of research has been directed towards the potency of plants. These natural resources contain characteristics valuable in combat against a number of diseases. But due to lack of familiarity with these plants among human beings, an appropriate advantage of their significance cannot be drawn. Plants also share certain similar characteristics of leaves, like color, texture, shape, or size, making them hard to classify. So, to eradicate this problem, a deep learning model has been used for the classification of different plant species captured in real-time using internet of things practice. Six different plants, namely Ashwagandha, Black Pepper, Garlic, Ginger, Basil, and Turmeric, have been selected for this purpose. Our proposed convolutional neural network (CNN) model achieved higher performance with an accuracy of 99% when compared with other benchmark deep learning models. Also, to analyze the performance of deep learning versus machine learning models like logistic regression, decision tree, random forest, Gaussian naïve Bayes, and support vector machine, results were evaluated, and when compared, CNN outperforms all machine learning models. The future study will be directed towards automated plant growth estimation. | 10.1007/s11277-024-11374-y | classification of different plant species using deep learning and machine learning algorithms | in the present situation, a lot of research has been directed towards the potency of plants. these natural resources contain characteristics valuable in combat against a number of diseases. but due to lack of familiarity with these plants among human beings, an appropriate advantage of their significance cannot be drawn. plants also share certain similar characteristics of leaves, like color, texture, shape, or size, making them hard to classify. 
so, to eradicate this problem, a deep learning model has been used for the classification of different plant species captured in real-time using internet of things practice. six different plants, namely ashwagandha, black pepper, garlic, ginger, basil, and turmeric, have been selected for this purpose. our proposed convolutional neural network (cnn) model achieved higher performance with an accuracy of 99% when compared with other benchmark deep learning models. also, to analyze the performance of deep learning versus machine learning models like logistic regression, decision tree, random forest, gaussian naïve bayes, and support vector machine, results were evaluated, and when compared, cnn outperforms all machine learning models. the future study will be directed towards automated plant growth estimation. | [
"the present situation",
"a lot",
"research",
"the potency",
"plants",
"these natural resources",
"characteristics",
"combat",
"a number",
"diseases",
"lack",
"familiarity",
"these plants",
"human beings",
"an appropriate advantage",
"their significance",
"plants",
"the certain similar characteristics",
"leaves",
"color",
"texture",
"shape",
"size",
"them",
"them",
"others",
"this problem",
"a deep learning model",
"the purpose",
"classification",
"different plants species",
"real-time",
"internet",
"six different plants",
"namely ashwagandha",
"black pepper",
"garlic",
"ginger",
"basil",
"turmeric",
"this purpose",
"our proposed convolutional neural network (cnn) model",
"higher performance",
"an accuracy",
"99%",
"other benchmark deep learning models",
"the performance",
"deep learning",
"machine learning models",
"logistic regression",
"decision tree",
"random forest",
"gaussian naïve bayes",
"support vector machine results",
"cnn",
"outperforms",
"all machine learning models",
"the future study",
"the automated plant growth estimation",
"six",
"basil",
"cnn",
"99%",
"gaussian naïve bayes",
"cnn"
] |
A deep learning model for anti-inflammatory peptides identification based on deep variational autoencoder and contrastive learning | [
"Yujie Xu",
"Shengli Zhang",
"Feng Zhu",
"Yunyun Liang"
] | As a class of biologically active molecules with significant immunomodulatory and anti-inflammatory effects, anti-inflammatory peptides have important application value in the medical and biotechnology fields due to their unique biological functions. Research on the identification of anti-inflammatory peptides provides important theoretical foundations and practical value for a deeper understanding of the biological mechanisms of inflammation and immune regulation, as well as for the development of new drugs and biotechnological applications. Therefore, it is necessary to develop more advanced computational models for identifying anti-inflammatory peptides. In this study, we propose a deep learning model named DAC-AIPs based on variational autoencoder and contrastive learning for accurate identification of anti-inflammatory peptides. In the sequence encoding part, the incorporation of multi-hot encoding helps capture richer sequence information. The autoencoder, composed of convolutional layers and linear layers, can learn latent features and reconstruct features, with variational inference enhancing the representation capability of latent features. Additionally, the introduction of contrastive learning aims to improve the model's classification ability. Through cross-validation and independent dataset testing experiments, DAC-AIPs achieves superior performance compared to existing state-of-the-art models. In cross-validation, the classification accuracy of DAC-AIPs reached around 88%, which is 7% higher than previous models. Furthermore, various ablation experiments and interpretability experiments validate the effectiveness of DAC-AIPs. Finally, a user-friendly online predictor is designed to enhance the practicality of the model, and the server is freely accessible at http://dac-aips.online. 
| 10.1038/s41598-024-69419-y | a deep learning model for anti-inflammatory peptides identification based on deep variational autoencoder and contrastive learning | as a class of biologically active molecules with significant immunomodulatory and anti-inflammatory effects, anti-inflammatory peptides have important application value in the medical and biotechnology fields due to their unique biological functions. research on the identification of anti-inflammatory peptides provides important theoretical foundations and practical value for a deeper understanding of the biological mechanisms of inflammation and immune regulation, as well as for the development of new drugs and biotechnological applications. therefore, it is necessary to develop more advanced computational models for identifying anti-inflammatory peptides. in this study, we propose a deep learning model named dac-aips based on variational autoencoder and contrastive learning for accurate identification of anti-inflammatory peptides. in the sequence encoding part, the incorporation of multi-hot encoding helps capture richer sequence information. the autoencoder, composed of convolutional layers and linear layers, can learn latent features and reconstruct features, with variational inference enhancing the representation capability of latent features. additionally, the introduction of contrastive learning aims to improve the model's classification ability. through cross-validation and independent dataset testing experiments, dac-aips achieves superior performance compared to existing state-of-the-art models. in cross-validation, the classification accuracy of dac-aips reached around 88%, which is 7% higher than previous models. furthermore, various ablation experiments and interpretability experiments validate the effectiveness of dac-aips. finally, a user-friendly online predictor is designed to enhance the practicality of the model, and the server is freely accessible at http://dac-aips.online. | [
"a class",
"biologically active molecules",
"anti-inflammatory effects",
"anti-inflammatory peptides",
"important application value",
"the medical and biotechnology fields",
"their unique biological functions",
"research",
"the identification",
"anti-inflammatory peptides",
"important theoretical foundations",
"practical value",
"a deeper understanding",
"the biological mechanisms",
"inflammation",
"immune regulation",
"the development",
"new drugs",
"biotechnological applications",
"it",
"more advanced computational models",
"anti-inflammatory peptides",
"this study",
"we",
"a deep learning model",
"dac-aips",
"variational autoencoder",
"contrastive learning",
"accurate identification",
"anti-inflammatory peptides",
"the sequence",
"part",
"the incorporation",
"multi-hot encoding",
"richer sequence information",
"the autoencoder",
"convolutional layers",
"linear layers",
"latent features",
"features",
"variational inference",
"the representation capability",
"latent features",
"the introduction",
"contrastive learning",
"the model's classification ability",
"cross-validation and independent dataset testing experiments",
"dac-aips",
"superior performance",
"the-art",
"-",
"validation",
"the classification accuracy",
"dac-aips",
"around 88%",
"which",
"previous models",
"various ablation experiments",
"interpretability experiments",
"the effectiveness",
"dac-aips",
"a user-friendly online predictor",
"the practicality",
"the model",
"the server",
"http://dac-aips.online",
"linear",
"around 88%",
"7%"
] |
Fraud Detection Using Machine Learning and Deep Learning | [
"Akash Gandhar",
"Kapil Gupta",
"Aman Kumar Pandey",
"Dharm Raj"
] | Detecting fraudulent activities is a major worry for businesses and financial organizations because they can result in significant financial losses and reputational harm. Traditional fraud detection methods frequently depend on present rules and patterns that skilled scammers can easily circumvent. Machine learning and deep learning algorithms have surfaced as promising methods for detecting fraud in order to handle this problem. The authors present a thorough overview of the most recent ML and DL techniques for fraud identification in this article. These approaches are classified based on their fundamental tactics, which include supervised learning, unsupervised learning, and reinforcement learning. We review recent developments in each area, as well as their strengths and weaknesses. Additionally, we draw attention to some of the major problems with imbalanced datasets, adversarial assaults, and the interpretability of models, as well as other important research tasks and difficulties in fraud detection. We also stress the value of feature science and data pre-processing techniques in enhancing the effectiveness of scam detection systems. Finally, we show a case study on the use of DL and ML techniques in the financial sector for fraud detection. The authors show how these algorithms can successfully identify fraudulent transactions, minimize false positives, and keep high precision and scalability. The overall aim of this article is to provide a comprehensive evaluation of the most cutting-edge ML and DL techniques for fraud identification and to shed light on potential future paths for this field of study. | 10.1007/s42979-024-02772-x | fraud detection using machine learning and deep learning | detecting fraudulent activities is a major worry for businesses and financial organizations because they can result in significant financial losses and reputational harm. 
traditional fraud detection methods frequently depend on present rules and patterns that skilled scammers can easily circumvent. machine learning and deep learning algorithms have surfaced as promising methods for detecting fraud in order to handle this problem. the authors present a thorough overview of the most recent ml and dl techniques for fraud identification in this article. these approaches are classified based on their fundamental tactics, which include supervised learning, unsupervised learning, and reinforcement learning. we review recent developments in each area, as well as their strengths and weaknesses. additionally, we draw attention to some of the major problems with imbalanced datasets, adversarial assaults, and the interpretability of models, as well as other important research tasks and difficulties in fraud detection. we also stress the value of feature science and data pre-processing techniques in enhancing the effectiveness of scam detection systems. finally, we show a case study on the use of dl and ml techniques in the financial sector for fraud detection. the authors show how these algorithms can successfully identify fraudulent transactions, minimize false positives, and keep high precision and scalability. the overall aim of this article is to provide a comprehensive evaluation of the most cutting-edge ml and dl techniques for fraud identification and to shed light on potential future paths for this field of study. | [
"fraudulent activities",
"a major worry",
"businesses",
"financial organizations",
"they",
"significant financial losses",
"reputational harm",
"traditional fraud detection",
"a method",
"present rules",
"patterns",
"skilled scammer",
"machine learning",
"deep learning algorithms",
"promising methods",
"fraud",
"order",
"this problem",
"authors",
"a thorough overview",
"the most recent ml and dl techniques",
"fraud identification",
"this article",
"these approaches",
"their fundamental tactics",
"which",
"supervised learning",
"unsupervised learning",
"reinforcement learning",
"we",
"recent developments",
"each area",
"their strengths",
"weaknesses",
"we",
"attention",
"some",
"the major problems",
"imbalanced datasets",
"adversarial assaults",
"the interpretability",
"models",
"other important research tasks",
"difficulties",
"fraud detection",
"we",
"the value",
"feature science and data pre-processing techniques",
"the effectiveness",
"scam detection systems",
"we",
"a case study",
"the use",
"dl and ml techniques",
"the financial sector",
"fraud detection",
"authors",
"these algorithms",
"fraudulent transactions",
"false positives",
"high precision",
"scalability",
"the overall aim",
"this article",
"a comprehensive evaluation",
"the most cutting-edge ml and dl techniques",
"fraud identification",
"light",
"potential future paths",
"this field",
"study"
] |
Robot autonomous grasping and assembly skill learning based on deep reinforcement learning | [
"Chengjun Chen",
"Hao Zhang",
"Yong Pan",
"Dongnian Li"
] | This paper proposes a deep reinforcement learning-based framework for robot autonomous grasping and assembly skill learning. Meanwhile, a deep Q-learning-based robot grasping skill learning algorithm and a PPO-based robot assembly skill learning algorithm are presented, where a priori knowledge information is introduced to optimize the grasping action and reduce the training time and interaction data needed by the assembly strategy learning algorithm. Besides, a grasping constraint reward function and an assembly constraint reward function are designed to evaluate the robot grasping and assembly quality effectively. Finally, the effectiveness of the proposed framework and algorithms was verified in both simulated and real environments, and the average success rate of grasping in both environments was up to 90%. Under a peg-in-hole assembly tolerance of 3 mm, the assembly success rate was 86.7% and 73.3% in the simulated environment and the physical environment, respectively. | 10.1007/s00170-024-13004-0 | robot autonomous grasping and assembly skill learning based on deep reinforcement learning | this paper proposes a deep reinforcement learning-based framework for robot autonomous grasping and assembly skill learning. meanwhile, a deep q-learning-based robot grasping skill learning algorithm and a ppo-based robot assembly skill learning algorithm are presented, where a priori knowledge information is introduced to optimize the grasping action and reduce the training time and interaction data needed by the assembly strategy learning algorithm. besides, a grasping constraint reward function and an assembly constraint reward function are designed to evaluate the robot grasping and assembly quality effectively. finally, the effectiveness of the proposed framework and algorithms was verified in both simulated and real environments, and the average success rate of grasping in both environments was up to 90%. 
under a peg-in-hole assembly tolerance of 3 mm, the assembly success rate was 86.7% and 73.3% in the simulated environment and the physical environment, respectively. | [
"this paper",
"a deep reinforcement learning-based framework",
"robot autonomous grasping and assembly skill learning",
"a deep q-learning-based robot grasping skill",
"algorithm",
"a ppo-based robot assembly skill learning algorithm",
"a priori knowledge information",
"the grasping action",
"the training time",
"interaction data",
"the assembly strategy",
"a grasping constraint reward function",
"an assembly constraint reward function",
"the robot grasping and assembly quality",
"the effectiveness",
"the proposed framework",
"algorithms",
"both simulated and real environments",
"the average success rate",
"both environments",
"up to 90%",
"hole",
"the assembly success rate",
"86.7%",
"73.3%",
"the simulated environment",
"the physical environment",
"up to 90%",
"3 mm",
"86.7%",
"73.3%"
] |
Enabling business sustainability for stock market data using machine learning and deep learning approaches | [
"S. Divyashree",
"Christy Jackson Joshua",
"Abdul Quadir Md",
"Senthilkumar Mohan",
"A. Sheik Abdullah",
"Ummul Hanan Mohamad",
"Nisreen Innab",
"Ali Ahmadian"
] | This paper introduces AlphaVision, an innovative decision support model designed for stock price prediction by seamlessly integrating real-time news updates and Return on Investment (ROI) values, utilizing various machine learning and deep learning approaches. The research investigates the application of these techniques to enhance the effectiveness of stock trading and investment decisions by accurately anticipating stock prices and providing valuable insights to investors and businesses. The study begins by analyzing the complexities and challenges of stock market analysis, considering factors like political, macroeconomic, and legal issues that contribute to market volatility. To address these challenges, we proposed the methodology called AlphaVision, which incorporates various machine learning algorithms, including Decision Trees, Random Forest, Naïve Bayes, Boosting, K-Nearest Neighbors, and Support Vector Machine, alongside deep learning models such as Multi-layer Perceptron (MLP), Artificial Neural Networks, and Recurrent Neural Networks. The effectiveness of each model is evaluated based on their accuracy in predicting stock prices. Experimental results revealed that the MLP model achieved the highest accuracy of approximately 92%, outperforming other deep learning models. The Random Forest algorithm also demonstrated promising results with an accuracy of around 84.6%. These findings indicate the potential of machine learning and deep learning techniques in improving stock market analysis and prediction. The AlphaVision methodology presented in this research empowers investors and businesses with valuable tools to make informed investment decisions and navigate the complexities of the stock market. By accurately forecasting stock prices based on news updates and ROI values, the model contributes to better financial management and business sustainability. 
The integration of machine learning and deep learning approaches offers a promising solution for enhancing stock market analysis and prediction. Future research will focus on extracting more relevant financial features to further improve the model’s accuracy. By advancing decision support models for stock price prediction, researchers and practitioners can foster better investment strategies and economic growth. The proposed model holds potential to revolutionize stock trading and investment practices, enabling more informed and profitable decision-making in the financial sector. | 10.1007/s10479-024-06118-x | enabling business sustainability for stock market data using machine learning and deep learning approaches | this paper introduces alphavision, an innovative decision support model designed for stock price prediction by seamlessly integrating real-time news updates and return on investment (roi) values, utilizing various machine learning and deep learning approaches. the research investigates the application of these techniques to enhance the effectiveness of stock trading and investment decisions by accurately anticipating stock prices and providing valuable insights to investors and businesses. the study begins by analyzing the complexities and challenges of stock market analysis, considering factors like political, macroeconomic, and legal issues that contribute to market volatility. to address these challenges, we proposed the methodology called alphavision, which incorporates various machine learning algorithms, including decision trees, random forest, naïve bayes, boosting, k-nearest neighbors, and support vector machine, alongside deep learning models such as multi-layer perceptron (mlp), artificial neural networks, and recurrent neural networks. the effectiveness of each model is evaluated based on their accuracy in predicting stock prices. 
experimental results revealed that the mlp model achieved the highest accuracy of approximately 92%, outperforming other deep learning models. the random forest algorithm also demonstrated promising results with an accuracy of around 84.6%. these findings indicate the potential of machine learning and deep learning techniques in improving stock market analysis and prediction. the alphavision methodology presented in this research empowers investors and businesses with valuable tools to make informed investment decisions and navigate the complexities of the stock market. by accurately forecasting stock prices based on news updates and roi values, the model contributes to better financial management and business sustainability. the integration of machine learning and deep learning approaches offers a promising solution for enhancing stock market analysis and prediction. future research will focus on extracting more relevant financial features to further improve the model’s accuracy. by advancing decision support models for stock price prediction, researchers and practitioners can foster better investment strategies and economic growth. the proposed model holds potential to revolutionize stock trading and investment practices, enabling more informed and profitable decision-making in the financial sector. | [
"this paper",
"alphavision",
"an innovative decision support model",
"stock price prediction",
"real-time news updates",
"return",
"roi",
"various machine learning",
"deep learning approaches",
"the research",
"the application",
"these techniques",
"the effectiveness",
"stock trading and investment decisions",
"stock prices",
"valuable insights",
"investors",
"businesses",
"the study",
"the complexities",
"challenges",
"stock market analysis",
"factors",
"political, macroeconomic, and legal issues",
"that",
"market volatility",
"these challenges",
"we",
"the methodology",
"alphavision",
"which",
"various machine learning algorithms",
"decision trees",
"random forest",
"naïve bayes",
"k-nearest neighbors",
"vector machine",
"deep learning models",
"multi-layer perceptron",
"mlp",
"artificial neural networks",
"neural networks",
"the effectiveness",
"each model",
"their accuracy",
"stock prices",
"experimental results",
"the mlp model",
"the highest accuracy",
"approximately 92%",
"other deep learning models",
"the random forest algorithm",
"promising results",
"an accuracy",
"around 84.6%",
"these findings",
"the potential",
"machine learning",
"deep learning techniques",
"stock market analysis",
"prediction",
"the alphavision methodology",
"this research",
"investors",
"businesses",
"valuable tools",
"informed investment decisions",
"the complexities",
"the stock market",
"stock prices",
"news updates",
"roi values",
"the model",
"better financial management",
"business sustainability",
"the integration",
"machine learning",
"deep learning approaches",
"a promising solution",
"stock market analysis",
"prediction",
"future research",
"more relevant financial features",
"the model’s accuracy",
"decision support models",
"stock price prediction",
"researchers",
"practitioners",
"better investment strategies",
"foster economic growth",
"the proposed model",
"potential",
"stock trading",
"investment practices",
"more informed and profitable decision-making",
"the financial sector",
"approximately 92%",
"around 84.6%"
] |
Deep learning for lungs cancer detection: a review | [
"Rabia Javed",
"Tahir Abbas",
"Ali Haider Khan",
"Ali Daud",
"Amal Bukhari",
"Riad Alharbey"
] | Although lung cancer has been recognized to be the deadliest type of cancer, a good prognosis and efficient treatment depend on early detection. Medical practitioners’ burden is reduced by deep learning techniques, especially Deep Convolutional Neural Networks (DCNN), which are essential in automating the diagnosis and classification of diseases. In this study, we use a variety of medical imaging modalities, including X-rays, WSI, CT scans, and MRI, to thoroughly investigate the use of deep learning techniques in the field of lung cancer diagnosis and classification. This study conducts a comprehensive Systematic Literature Review (SLR) using deep learning techniques for lung cancer research, providing a comprehensive overview of the methodology, cutting-edge developments, quality assessments, and customized deep learning approaches. It presents data from reputable journals and concentrates on the years 2015–2024. Deep learning techniques solve the difficulty of manually identifying and selecting abstract features from lung cancer images. This study includes a wide range of deep learning methods for classifying lung cancer but focuses especially on the most popular method, the Convolutional Neural Network (CNN). CNN can achieve maximum accuracy because of its multi-layer structure, automatic learning of weights, and capacity to communicate local weights. Various algorithms are shown with performance measures like precision, accuracy, specificity, sensitivity, and AUC; CNN consistently shows the greatest accuracy. The findings highlight the important contributions of DCNN in improving lung cancer detection and classification, making them an invaluable resource for researchers looking to gain a greater knowledge of deep learning’s function in medical applications. 
| 10.1007/s10462-024-10807-1 | deep learning for lungs cancer detection: a review | although lung cancer has been recognized to be the deadliest type of cancer, a good prognosis and efficient treatment depend on early detection. medical practitioners’ burden is reduced by deep learning techniques, especially deep convolutional neural networks (dcnn), which are essential in automating the diagnosis and classification of diseases. in this study, we use a variety of medical imaging modalities, including x-rays, wsi, ct scans, and mri, to thoroughly investigate the use of deep learning techniques in the field of lung cancer diagnosis and classification. this study conducts a comprehensive systematic literature review (slr) using deep learning techniques for lung cancer research, providing a comprehensive overview of the methodology, cutting-edge developments, quality assessments, and customized deep learning approaches. it presents data from reputable journals and concentrates on the years 2015–2024. deep learning techniques solve the difficulty of manually identifying and selecting abstract features from lung cancer images. this study includes a wide range of deep learning methods for classifying lung cancer but focuses especially on the most popular method, the convolutional neural network (cnn). cnn can achieve maximum accuracy because of its multi-layer structure, automatic learning of weights, and capacity to communicate local weights. various algorithms are shown with performance measures like precision, accuracy, specificity, sensitivity, and auc; cnn consistently shows the greatest accuracy. the findings highlight the important contributions of dcnn in improving lung cancer detection and classification, making them an invaluable resource for researchers looking to gain a greater knowledge of deep learning’s function in medical applications. | [
"lung cancer",
"the deadliest type",
"cancer",
"a good prognosis and efficient treatment",
"early detection",
"medical practitioners’ burden",
"deep learning techniques",
"especially deep convolutional neural networks",
"dcnn",
"which",
"the diagnosis",
"classification",
"diseases",
"this study",
"we",
"a variety",
"medical imaging modalities",
"x",
"-",
"rays",
"wsi",
"ct scans",
"mri",
"the use",
"deep learning techniques",
"the field",
"lung cancer diagnosis",
"classification",
"this study",
"a comprehensive systematic literature review",
"slr",
"deep learning techniques",
"lung cancer research",
"a comprehensive overview",
"the methodology",
"cutting-edge developments",
"quality assessments",
"customized deep learning approaches",
"it",
"data",
"reputable journals",
"concentrates",
"the years",
"deep learning techniques",
"the difficulty",
"abstract features",
"lung cancer images",
"this study",
"a wide range",
"deep learning methods",
"lung cancer",
"the most popular method",
"the convolutional neural network",
"cnn",
"cnn",
"maximum accuracy",
"its multi-layer structure",
"automatic learning",
"weights",
"capacity",
"local weights",
"various algorithms",
"performance measures",
"precision",
"accuracy",
"specificity",
"sensitivity",
"auc",
"cnn",
"the greatest accuracy",
"the findings",
"the important contributions",
"dcnn",
"lung cancer detection",
"classification",
"them",
"researchers",
"a greater knowledge",
"deep learning’s function",
"medical applications",
"the years 2015–2024",
"cnn",
"cnn",
"cnn"
] |
A brief review of hypernetworks in deep learning | [
"Vinod Kumar Chauhan",
"Jiandong Zhou",
"Ping Lu",
"Soheila Molaei",
"David A. Clifton"
] | Hypernetworks, or hypernets for short, are neural networks that generate weights for another neural network, known as the target network. They have emerged as a powerful deep learning technique that allows for greater flexibility, adaptability, dynamism, faster training, information sharing, and model compression. Hypernets have shown promising results in a variety of deep learning problems, including continual learning, causal inference, transfer learning, weight pruning, uncertainty quantification, zero-shot learning, natural language processing, and reinforcement learning. Despite their success across different problem settings, there is currently no comprehensive review available to inform researchers about the latest developments and to assist in utilizing hypernets. To fill this gap, we review the progress in hypernets. We present an illustrative example of training deep neural networks using hypernets and propose categorizing hypernets based on five design criteria: inputs, outputs, variability of inputs and outputs, and the architecture of hypernets. We also review applications of hypernets across different deep learning problem settings, followed by a discussion of general scenarios where hypernets can be effectively employed. Finally, we discuss the challenges and future directions that remain underexplored in the field of hypernets. We believe that hypernetworks have the potential to revolutionize the field of deep learning. They offer a new way to design and train neural networks, and they have the potential to improve the performance of deep learning models on a variety of tasks. Through this review, we aim to inspire further advancements in deep learning through hypernetworks. | 10.1007/s10462-024-10862-8 | a brief review of hypernetworks in deep learning | hypernetworks, or hypernets for short, are neural networks that generate weights for another neural network, known as the target network. 
they have emerged as a powerful deep learning technique that allows for greater flexibility, adaptability, dynamism, faster training, information sharing, and model compression. hypernets have shown promising results in a variety of deep learning problems, including continual learning, causal inference, transfer learning, weight pruning, uncertainty quantification, zero-shot learning, natural language processing, and reinforcement learning. despite their success across different problem settings, there is currently no comprehensive review available to inform researchers about the latest developments and to assist in utilizing hypernets. to fill this gap, we review the progress in hypernets. we present an illustrative example of training deep neural networks using hypernets and propose categorizing hypernets based on five design criteria: inputs, outputs, variability of inputs and outputs, and the architecture of hypernets. we also review applications of hypernets across different deep learning problem settings, followed by a discussion of general scenarios where hypernets can be effectively employed. finally, we discuss the challenges and future directions that remain underexplored in the field of hypernets. we believe that hypernetworks have the potential to revolutionize the field of deep learning. they offer a new way to design and train neural networks, and they have the potential to improve the performance of deep learning models on a variety of tasks. through this review, we aim to inspire further advancements in deep learning through hypernetworks. | [
"hypernetworks",
"hypernets",
"neural networks",
"that",
"weights",
"another neural network",
"the target network",
"they",
"a powerful deep learning technique",
"that",
"greater flexibility",
"adaptability",
"dynamism",
"faster training",
"information sharing",
"model compression",
"hypernets",
"promising results",
"a variety",
"deep learning problems",
"continual learning",
"causal inference",
"transfer learning",
"weight pruning",
"uncertainty quantification",
"zero-shot learning",
"natural language processing",
"reinforcement learning",
"their success",
"different problem settings",
"no comprehensive review",
"researchers",
"the latest developments",
"hypernets",
"this gap",
"we",
"the progress",
"hypernets",
"we",
"an illustrative example",
"deep neural networks",
"hypernets",
"categorizing hypernets",
"five design criteria",
"inputs",
"outputs",
"variability",
"inputs",
"outputs",
"the architecture",
"hypernets",
"we",
"applications",
"hypernets",
"different deep learning problem settings",
"a discussion",
"general scenarios",
"hypernets",
"we",
"the challenges",
"future directions",
"that",
"the field",
"hypernets",
"we",
"hypernetworks",
"the potential",
"the field",
"deep learning",
"they",
"a new way",
"neural networks",
"they",
"the potential",
"the performance",
"deep learning models",
"a variety",
"tasks",
"this review",
"we",
"further advancements",
"deep learning",
"hypernetworks",
"zero",
"five"
] |
Learning Dynamic Batch-Graph Representation for Deep Representation Learning | [
"Xixi Wang",
"Bo Jiang",
"Xiao Wang",
"Bin Luo"
] | Recently, batch-based image data representation has been demonstrated to be effective for context-enhanced image representation. The core issue for this task is capturing the dependences of image samples within each mini-batch and conducting message communication among different samples. Existing approaches mainly adopt self-attention or local self-attention models (on patch dimension) for this task, which fail to fully exploit the intrinsic relationships of samples within a mini-batch and are also sensitive to noise and outliers. To address this issue, in this paper, we propose a flexible Dynamic Batch-Graph Representation (DyBGR) model, to automatically explore the intrinsic relationship of samples for contextual sample representation. Specifically, DyBGR first represents the mini-batch with a graph (termed batch-graph) in which nodes represent image samples and edges encode the dependences of images. This graph is dynamically learned with the constraint of similarity, sparseness and semantic correlation. Upon this, DyBGR exchanges the sample (node) information on the batch-graph to update each node representation. Note that both batch-graph learning and information propagation are jointly optimized to boost their respective performance. Furthermore, in practice, the DyBGR model can be implemented via a simple plug-and-play block (named DyBGR block) which thus can be potentially integrated into any mini-batch based deep representation learning schemes. Extensive experiments on deep metric learning tasks demonstrate the effectiveness of DyBGR. We will release the code at https://github.com/SissiW/DyBGR. | 10.1007/s11263-024-02175-8 | learning dynamic batch-graph representation for deep representation learning | recently, batch-based image data representation has been demonstrated to be effective for context-enhanced image representation. 
the core issue for this task is capturing the dependences of image samples within each mini-batch and conducting message communication among different samples. existing approaches mainly adopt self-attention or local self-attention models (on patch dimension) for this task, which fail to fully exploit the intrinsic relationships of samples within a mini-batch and are also sensitive to noise and outliers. to address this issue, in this paper, we propose a flexible dynamic batch-graph representation (dybgr) model, to automatically explore the intrinsic relationship of samples for contextual sample representation. specifically, dybgr first represents the mini-batch with a graph (termed batch-graph) in which nodes represent image samples and edges encode the dependences of images. this graph is dynamically learned with the constraint of similarity, sparseness and semantic correlation. upon this, dybgr exchanges the sample (node) information on the batch-graph to update each node representation. note that both batch-graph learning and information propagation are jointly optimized to boost their respective performance. furthermore, in practice, the dybgr model can be implemented via a simple plug-and-play block (named dybgr block) which thus can be potentially integrated into any mini-batch based deep representation learning schemes. extensive experiments on deep metric learning tasks demonstrate the effectiveness of dybgr. we will release the code at https://github.com/sissiw/dybgr. | [
"batch-based image data representation",
"context-enhanced image representation",
"the core issue",
"this task",
"the dependences",
"image samples",
"each mini",
"-",
"batch",
"message communication",
"different samples",
"existing approaches",
"self-attention",
"local self-attention models",
"patch dimension",
"this task",
"which",
"the intrinsic relationships",
"samples",
"mini",
"-",
"batch",
"noises",
"outliers",
"this issue",
"this paper",
"we",
"a flexible dynamic batch-graph representation (dybgr) model",
"the intrinsic relationship",
"samples",
"contextual sample representation",
"the mini",
"-",
"batch",
"a graph",
"termed batch-graph",
"which",
"nodes",
"image samples",
"edges",
"the dependences",
"images",
"this graph",
"the constraint",
"similarity",
"sparseness",
"semantic correlation",
"this",
"the batch-graph",
"each node representation",
"both batch-graph learning and information propagation",
"their respective performance",
"practical, dybgr model",
"a simple plug-and-play block",
"dybgr block",
"which",
"any mini-batch based deep representation learning schemes",
"extensive experiments",
"deep metric learning tasks",
"the effectiveness",
"dybgr",
"we",
"the code",
"https://github.com/sissiw/dybgr",
"first"
] |
Relay learning: a physically secure framework for clinical multi-site deep learning | [
"Zi-Hao Bo",
"Yuchen Guo",
"Jinhao Lyu",
"Hengrui Liang",
"Jianxing He",
"Shijie Deng",
"Feng Xu",
"Xin Lou",
"Qionghai Dai"
] | Big data serves as the cornerstone for constructing real-world deep learning systems across various domains. In medicine and healthcare, a single clinical site lacks sufficient data, thus necessitating the involvement of multiple sites. Unfortunately, concerns regarding data security and privacy hinder the sharing and reuse of data across sites. Existing approaches to multi-site clinical learning heavily depend on the security of the network firewall and system implementation. To address this issue, we propose Relay Learning, a secure deep-learning framework that physically isolates clinical data from external intruders while still leveraging the benefits of multi-site big data. We demonstrate the efficacy of Relay Learning in three medical tasks of different diseases and anatomical structures, including structure segmentation of retina fundus, mediastinum tumors diagnosis, and brain midline localization. We evaluate Relay Learning by comparing its performance to alternative solutions through multi-site validation and external validation. Incorporating a total of 41,038 medical images from 21 medical hosts, including 7 external hosts, with non-uniform distributions, we observe significant performance improvements with Relay Learning across all three tasks. Specifically, it achieves an average performance increase of 44.4%, 24.2%, and 36.7% for retinal fundus segmentation, mediastinum tumor diagnosis, and brain midline localization, respectively. Remarkably, Relay Learning even outperforms central learning on external test sets. In the meanwhile, Relay Learning keeps data sovereignty locally without cross-site network connections. We anticipate that Relay Learning will revolutionize clinical multi-site collaboration and reshape the landscape of healthcare in the future. 
| 10.1038/s41746-023-00934-4 | relay learning: a physically secure framework for clinical multi-site deep learning | big data serves as the cornerstone for constructing real-world deep learning systems across various domains. in medicine and healthcare, a single clinical site lacks sufficient data, thus necessitating the involvement of multiple sites. unfortunately, concerns regarding data security and privacy hinder the sharing and reuse of data across sites. existing approaches to multi-site clinical learning heavily depend on the security of the network firewall and system implementation. to address this issue, we propose relay learning, a secure deep-learning framework that physically isolates clinical data from external intruders while still leveraging the benefits of multi-site big data. we demonstrate the efficacy of relay learning in three medical tasks of different diseases and anatomical structures, including structure segmentation of retina fundus, mediastinum tumors diagnosis, and brain midline localization. we evaluate relay learning by comparing its performance to alternative solutions through multi-site validation and external validation. incorporating a total of 41,038 medical images from 21 medical hosts, including 7 external hosts, with non-uniform distributions, we observe significant performance improvements with relay learning across all three tasks. specifically, it achieves an average performance increase of 44.4%, 24.2%, and 36.7% for retinal fundus segmentation, mediastinum tumor diagnosis, and brain midline localization, respectively. remarkably, relay learning even outperforms central learning on external test sets. in the meanwhile, relay learning keeps data sovereignty locally without cross-site network connections. we anticipate that relay learning will revolutionize clinical multi-site collaboration and reshape the landscape of healthcare in the future. | [
"big data",
"the cornerstone",
"real-world deep learning systems",
"various domains",
"medicine",
"healthcare",
"a single clinical site",
"sufficient data",
"the involvement",
"multiple sites",
"concerns",
"data security",
"privacy",
"the sharing",
"reuse",
"data",
"sites",
"existing approaches",
"multi-site clinical learning",
"the security",
"the network firewall and system implementation",
"this issue",
"we",
"relay learning",
"a secure deep-learning framework",
"that",
"clinical data",
"external intruders",
"the benefits",
"multi-site big data",
"we",
"the efficacy",
"three medical tasks",
"different diseases",
"anatomical structures",
"structure segmentation",
"retina fundus",
"mediastinum tumors diagnosis",
"brain midline localization",
"we",
"relay",
"its performance",
"solutions",
"multi-site validation",
"external validation",
"a total",
"41,038 medical images",
"21 medical hosts",
"7 external hosts",
"non-uniform distributions",
"we",
"significant performance improvements",
"relay",
"all three tasks",
"it",
"an average performance increase",
"44.4%",
"24.2%",
"36.7%",
"retinal fundus segmentation",
"mediastinum tumor diagnosis",
"brain midline localization",
"remarkably, relay",
"central learning",
"external test sets",
"the meanwhile",
"relay learning",
"data sovereignty",
"cross-site network connections",
"we",
"relay learning",
"clinical multi-site collaboration",
"the landscape",
"healthcare",
"the future",
"three",
"41,038",
"21",
"7",
"three",
"44.4%",
"24.2%",
"36.7%"
] |
Predicting Apple Plant Diseases in Orchards Using Machine Learning and Deep Learning Algorithms | [
"Imtiaz Ahmed",
"Pramod Kumar Yadav"
] | Apple cultivation in the Kashmir Valley is a cornerstone of the region’s agriculture, contributing significantly to the economy through substantial annual apple exports. This study explores the application of machine learning and deep learning algorithms for predicting apple plant diseases in orchards. By leveraging advanced computational techniques, the research aims to enhance early detection and diagnosis of diseases, thereby enabling proactive disease management. The study utilizes a dataset comprising diverse environmental and plant health factors to train and validate the models. Key highlights include the comparative analysis of machine learning and deep learning approaches, the identification of optimal feature sets, and the assessment of model performance. The findings contribute to the development of efficient and accurate tools for precision agriculture, facilitating timely intervention and sustainable orchard management. The apple industry in Kashmir faces a significant challenge due to the prevalence of various diseases affecting apple trees. One prominent disease that adversely impacts apple yields in the region is Apple Scab, caused by the fungus Venturia inaequalis. Apple Scab is characterized by dark, scaly lesions on leaves, fruit, and twigs, leading to defoliation and reduced fruit quality. The disease thrives in cool and humid conditions, which are prevalent in the Kashmir Valley. This study addresses the limitations of traditional, labor-intensive, and time-consuming laboratory methods for diagnosing apple plant diseases. The goal is to provide an accurate and efficient deep learning-based system for the prompt identification and prediction of foliar diseases in Kashmiri apple plants. Our study begins with the creation of a dataset annotated by experts containing approximately 10,000 high-quality RGB images that illustrate key symptoms associated with foliar diseases. 
In the next step, an approach to deep learning that utilizes convolutional neural networks (CNNs) was developed. Comparative analysis of five different deep learning algorithms, including Faster R-CNN, showed that the method was effective in detecting apple diseases in real time. The proposed framework, when tested, achieves state-of-the-art results with a remarkable 92% accuracy in identifying apple plant diseases. A new dataset is presented that includes samples of leaves from Kashmiri apple plants that have three different illnesses. The findings hold promise for revolutionizing orchard management practices, ultimately benefiting apple growers and sustaining the thriving apple industry in the Kashmir Valley. | 10.1007/s42979-024-02959-2 | predicting apple plant diseases in orchards using machine learning and deep learning algorithms | apple cultivation in the kashmir valley is a cornerstone of the region’s agriculture, contributing significantly to the economy through substantial annual apple exports. this study explores the application of machine learning and deep learning algorithms for predicting apple plant diseases in orchards. by leveraging advanced computational techniques, the research aims to enhance early detection and diagnosis of diseases, thereby enabling proactive disease management. the study utilizes a dataset comprising diverse environmental and plant health factors to train and validate the models. key highlights include the comparative analysis of machine learning and deep learning approaches, the identification of optimal feature sets, and the assessment of model performance. the findings contribute to the development of efficient and accurate tools for precision agriculture, facilitating timely intervention and sustainable orchard management. the apple industry in kashmir faces a significant challenge due to the prevalence of various diseases affecting apple trees. 
one prominent disease that adversely impacts apple yields in the region is apple scab, caused by the fungus venturia inaequalis. apple scab is characterized by dark, scaly lesions on leaves, fruit, and twigs, leading to defoliation and reduced fruit quality. the disease thrives in cool and humid conditions, which are prevalent in the kashmir valley. this study addresses the limitations of traditional, labor-intensive, and time-consuming laboratory methods for diagnosing apple plant diseases. the goal is to provide an accurate and efficient deep learning-based system for the prompt identification and prediction of foliar diseases in kashmiri apple plants. our study begins with the creation of a dataset annotated by experts containing approximately 10,000 high-quality rgb images that illustrate key symptoms associated with foliar diseases. in the next step, an approach to deep learning that utilizes convolutional neural networks (cnns) was developed. comparative analysis of five different deep learning algorithms, including faster r-cnn, showed that the method was effective in detecting apple diseases in real time. the proposed framework, when tested, achieves state-of-the-art results with a remarkable 92% accuracy in identifying apple plant diseases. a new dataset is presented that includes samples of leaves from kashmiri apple plants that have three different illnesses. the findings hold promise for revolutionizing orchard management practices, ultimately benefiting apple growers and sustaining the thriving apple industry in the kashmir valley. | [
"apple cultivation",
"the kashmir valley",
"a cornerstone",
"the region’s agriculture",
"the economy",
"substantial annual apple exports",
"this study",
"the application",
"machine learning",
"deep learning algorithms",
"apple plant diseases",
"orchards",
"advanced computational techniques",
"the research",
"early detection",
"diagnosis",
"diseases",
"proactive disease management",
"the study",
"a dataset",
"diverse environmental and plant health factors",
"the models",
"key highlights",
"the comparative analysis",
"machine learning",
"deep learning approaches",
"the identification",
"optimal feature sets",
"the assessment",
"model performance",
"the findings",
"the development",
"efficient and accurate tools",
"precision agriculture",
"timely intervention",
"sustainable orchard management",
"the apple industry",
"kashmir",
"a significant challenge",
"the prevalence",
"various diseases",
"apple trees",
"one prominent disease",
"that",
"apple yields",
"the region",
"the apple scab",
"the fungus venturia inaequalis",
"apple scab",
"dark",
"scaly lesions",
"leaves",
"fruit",
"twigs",
"defoliation",
"reduced fruit quality",
"the disease",
"cool and humid conditions",
"which",
"the kashmir valley",
"this study",
"the limitations",
"traditional, labor-intensive, and time-consuming laboratory methods",
"apple plant diseases",
"the goal",
"an accurate and efficient deep learning-based system",
"the prompt identification",
"prediction",
"foliar diseases",
"kashmiri apple plants",
"our study",
"the creation",
"a dataset",
"experts",
"approximately 10,000 high-quality rgb images",
"that",
"key symptoms",
"foliar diseases",
"the next step",
"an approach",
"deep learning",
"that",
"convolutional neural networks",
"cnns",
"comparative analysis",
"five different deep learning algorithms",
"faster r-cnn",
"the method",
"apple diseases",
"real time",
"the proposed framework",
"the-art",
"a remarkable 92% accuracy",
"apple plant diseases",
"a new dataset",
"that",
"samples",
"leaves",
"kashmiri apple plants",
"that",
"three different illnesses",
"the findings",
"promise",
"orchard management practices",
"apple growers",
"the thriving apple industry",
"the kashmir valley",
"the kashmir valley",
"annual",
"kashmir",
"one",
"apple scab",
"the kashmir valley",
"10,000",
"five",
"92%",
"three",
"the kashmir valley"
] |
Deep Learning zur Kariesdiagnostik | [
"Norbert Krämer",
"Roland Frankenberger"
] | Deep-Learning-Modelle spielen auch in der Zahnheilkunde eine zunehmend größere Rolle und werden in unterschiedlichen Feldern eingesetzt. Vor diesem Hintergrund wurde in der vorliegenden Literaturübersicht eine systematische Übersichtsarbeit einer internationalen Autorengruppe vorgestellt, die Deep-Learning-Modelle zur Kariesdiagnostik analysierte und bewertete. Sie kam zu dem Schluss, dass in einer zunehmenden Anzahl von Studien die Kariesdiagnostik mithilfe von Deep-Learning-Modellen unterstützt wird. Die dokumentierte Genauigkeit erscheint vielversprechend, während Studien- und Berichtsqualität derzeit unzureichend sind, um weiterführende Analysen durchzuführen. Bei verbesserter Datenlage könnten jedoch künftig Deep-Learning-Modelle als Hilfsmittel für Entscheidungen über das Vorhandensein von kariösen Läsionen herangezogen werden. | 10.1007/s44190-023-0647-4 | deep learning zur kariesdiagnostik | deep-learning-modelle spielen auch in der zahnheilkunde eine zunehmend größere rolle und werden in unterschiedlichen feldern eingesetzt. vor diesem hintergrund wurde in der vorliegenden literaturübersicht eine systematische übersichtsarbeit einer internationalen autorengruppe vorgestellt, die deep-learning-modelle zur kariesdiagnostik analysierte und bewertete. sie kam zu dem schluss, dass in einer zunehmenden anzahl von studien die kariesdiagnostik mithilfe von deep-learning-modellen unterstützt wird. die dokumentierte genauigkeit erscheint vielversprechend, während studien- und berichtsqualität derzeit unzureichend sind, um weiterführende analysen durchzuführen. bei verbesserter datenlage könnten jedoch künftig deep-learning-modelle als hilfsmittel für entscheidungen über das vorhandensein von kariösen läsionen herangezogen werden. | [
"deep learning models play",
"applied in different fields",
"against this background",
"the present literature overview a systematic review by an international group of authors",
"analyzed deep learning models for caries diagnosis",
"evaluated",
"it came to the conclusion",
"that",
"an increasing",
"number of studies in which caries diagnosis is supported by deep learning models",
"the reported accuracy appears promising",
"study",
"and reporting quality are currently insufficient",
"to carry out analyses",
"with an improved data basis",
"however, deep learning models could in future be used as an aid for decisions about the presence of carious lesions",
"applied in different fields",
"this",
"background",
"literature overview",
"came",
"the",
"number of studies",
"could however",
"the presence of carious"
] |
Deep doubly robust outcome weighted learning | [
"Xiaotong Jiang",
"Xin Zhou",
"Michael R. Kosorok"
] | Precision medicine is a framework that adapts treatment strategies to a patient’s individual characteristics and provides helpful clinical decision support. Existing research has been extended to various situations but high-dimensional data have not yet been fully incorporated into the paradigm. We propose a new precision medicine approach called deep doubly robust outcome weighted learning (DDROWL) that can handle big and complex data. This is a machine learning tool that directly estimates the optimal decision rule and achieves the best of three worlds: deep learning, double robustness, and residual weighted learning. Two architectures have been implemented in the proposed method, a fully-connected feedforward neural network and the Deep Kernel Learning model, a Gaussian process with deep learning-filtered inputs. We compare and discuss the performance and limitation of different methods through a range of simulations. Using longitudinal and brain imaging data from patients with Alzheimer’s disease, we demonstrate the application of the proposed method in real-world clinical practice. With the implementation of deep learning, the proposed method can expand the influence of precision medicine to high-dimensional abundant data with greater flexibility and computational power. | 10.1007/s10994-023-06484-w | deep doubly robust outcome weighted learning | precision medicine is a framework that adapts treatment strategies to a patient’s individual characteristics and provides helpful clinical decision support. existing research has been extended to various situations but high-dimensional data have not yet been fully incorporated into the paradigm. we propose a new precision medicine approach called deep doubly robust outcome weighted learning (ddrowl) that can handle big and complex data. 
this is a machine learning tool that directly estimates the optimal decision rule and achieves the best of three worlds: deep learning, double robustness, and residual weighted learning. two architectures have been implemented in the proposed method, a fully-connected feedforward neural network and the deep kernel learning model, a gaussian process with deep learning-filtered inputs. we compare and discuss the performance and limitation of different methods through a range of simulations. using longitudinal and brain imaging data from patients with alzheimer’s disease, we demonstrate the application of the proposed method in real-world clinical practice. with the implementation of deep learning, the proposed method can expand the influence of precision medicine to high-dimensional abundant data with greater flexibility and computational power. | [
"precision medicine",
"a framework",
"that",
"treatment strategies",
"a patient’s individual characteristics",
"helpful clinical decision support",
"existing research",
"various situations",
"high-dimensional data",
"the paradigm",
"we",
"a new precision medicine approach",
"deep doubly robust outcome",
"ddrowl",
"that",
"big and complex data",
"this",
"a machine learning tool",
"that",
"the optimal decision rule",
"three worlds",
"deep learning",
"double robustness",
"residual weighted learning",
"two architectures",
"the proposed method",
"a fully-connected feedforward neural network",
"the deep kernel learning model",
"a gaussian process",
"deep learning-filtered inputs",
"we",
"the performance",
"limitation",
"different methods",
"a range",
"simulations",
"longitudinal and brain imaging data",
"patients",
"disease",
"we",
"the application",
"the proposed method",
"real-world clinical practice",
"the implementation",
"deep learning",
"the proposed method",
"the influence",
"precision medicine",
"high-dimensional abundant data",
"greater flexibility",
"computational power",
"three",
"two"
] |
Topological deep learning: a review of an emerging paradigm | [
"Ali Zia",
"Abdelwahed Khamis",
"James Nichols",
"Usman Bashir Tayab",
"Zeeshan Hayder",
"Vivien Rolland",
"Eric Stone",
"Lars Petersson"
] | Topological deep learning (TDL) is an emerging area that combines the principles of Topological data analysis (TDA) with deep learning techniques. TDA provides insight into data shape; it obtains global descriptions of multi-dimensional data whilst exhibiting robustness to deformation and noise. Such properties are desirable in deep learning pipelines, but they are typically obtained using non-TDA strategies. This is partly caused by the difficulty of combining TDA constructs (e.g. barcode and persistence diagrams) with current deep learning algorithms. Fortunately, we are now witnessing a growth of deep learning applications embracing topologically-guided components. In this survey, we review the nascent field of topological deep learning by first revisiting the core concepts of TDA. We then explore how the use of TDA techniques has evolved over time to support deep learning frameworks, and how they can be integrated into different aspects of deep learning. Furthermore, we touch on TDA usage for analyzing existing deep models; deep topological analytics. Finally, we discuss the challenges and future prospects of topological deep learning. | 10.1007/s10462-024-10710-9 | topological deep learning: a review of an emerging paradigm | topological deep learning (tdl) is an emerging area that combines the principles of topological data analysis (tda) with deep learning techniques. tda provides insight into data shape; it obtains global descriptions of multi-dimensional data whilst exhibiting robustness to deformation and noise. such properties are desirable in deep learning pipelines, but they are typically obtained using non-tda strategies. this is partly caused by the difficulty of combining tda constructs (e.g. barcode and persistence diagrams) with current deep learning algorithms. fortunately, we are now witnessing a growth of deep learning applications embracing topologically-guided components. 
in this survey, we review the nascent field of topological deep learning by first revisiting the core concepts of tda. we then explore how the use of tda techniques has evolved over time to support deep learning frameworks, and how they can be integrated into different aspects of deep learning. furthermore, we touch on tda usage for analyzing existing deep models; deep topological analytics. finally, we discuss the challenges and future prospects of topological deep learning. | [
"topological deep learning",
"tdl",
"an emerging area",
"that",
"the principles",
"topological data analysis",
"tda",
"deep learning techniques",
"tda",
"insight",
"data shape",
"it",
"global descriptions",
"multi-dimensional data",
"robustness",
"deformation",
"noise",
"such properties",
"deep learning pipelines",
"they",
"non-tda strategies",
"this",
"the difficulty",
"tda constructs",
"e.g. barcode and persistence diagrams",
"current deep learning algorithms",
"we",
"a growth",
"deep learning applications",
"topologically-guided components",
"this survey",
"we",
"the nascent field",
"topological deep learning",
"the core concepts",
"tda",
"we",
"the use",
"tda techniques",
"time",
"deep learning frameworks",
"they",
"different aspects",
"deep learning",
"we",
"tda usage",
"existing deep models",
"deep topological analytics",
"we",
"the challenges",
"future prospects",
"topological deep learning",
"first"
] |
Deep learning in computational mechanics: a review | [
"Leon Herrmann",
"Stefan Kollmannsberger"
] | The rapid growth of deep learning research, including within the field of computational mechanics, has resulted in an extensive and diverse body of literature. To help researchers identify key concepts and promising methodologies within this field, we provide an overview of deep learning in deterministic computational mechanics. Five main categories are identified and explored: simulation substitution, simulation enhancement, discretizations as neural networks, generative approaches, and deep reinforcement learning. This review focuses on deep learning methods rather than applications for computational mechanics, thereby enabling researchers to explore this field more effectively. As such, the review is not necessarily aimed at researchers with extensive knowledge of deep learning—instead, the primary audience is researchers on the verge of entering this field or those attempting to gain an overview of deep learning in computational mechanics. The discussed concepts are, therefore, explained as simple as possible. | 10.1007/s00466-023-02434-4 | deep learning in computational mechanics: a review | the rapid growth of deep learning research, including within the field of computational mechanics, has resulted in an extensive and diverse body of literature. to help researchers identify key concepts and promising methodologies within this field, we provide an overview of deep learning in deterministic computational mechanics. five main categories are identified and explored: simulation substitution, simulation enhancement, discretizations as neural networks, generative approaches, and deep reinforcement learning. this review focuses on deep learning methods rather than applications for computational mechanics, thereby enabling researchers to explore this field more effectively. 
as such, the review is not necessarily aimed at researchers with extensive knowledge of deep learning—instead, the primary audience is researchers on the verge of entering this field or those attempting to gain an overview of deep learning in computational mechanics. the discussed concepts are, therefore, explained as simple as possible. | [
"the rapid growth",
"deep learning research",
"the field",
"computational mechanics",
"an extensive and diverse body",
"literature",
"researchers",
"key concepts",
"methodologies",
"this field",
"we",
"an overview",
"deep learning",
"deterministic computational mechanics",
"five main categories",
"simulation substitution",
"simulation enhancement",
"discretizations",
"neural networks",
"generative approaches",
"deep reinforcement learning",
"this review",
"deep learning methods",
"applications",
"computational mechanics",
"researchers",
"this field",
"the review",
"researchers",
"extensive knowledge",
"deep learning",
"the primary audience",
"researchers",
"the verge",
"this field",
"those",
"an overview",
"deep learning",
"computational mechanics",
"the discussed concepts",
"five"
] |
OpBench: an operator-level GPU benchmark for deep learning | [
"Qingwen Gu",
"Bo Fan",
"Zhengning Liu",
"Kaicheng Cao",
"Songhai Zhang",
"Shimin Hu"
] | Operators (such as Conv and ReLU) play an important role in deep neural networks. Every neural network is composed of a series of differentiable operators. However, existing AI benchmarks mainly focus on accessing model training and inference performance of deep learning systems on specific models. To help GPU hardware find computing bottlenecks and intuitively evaluate GPU performance on specific deep learning tasks, this paper focuses on evaluating GPU performance at the operator level. We statistically analyze the information of operators on 12 representative deep learning models from six prominent AI tasks and provide an operator dataset to show the different importance of various types of operators in different networks. An operator-level benchmark, OpBench, is proposed on the basis of this dataset, allowing users to choose from a given range of models and set the input sizes according to their demands. This benchmark offers a detailed operator-level performance report for AI and hardware developers. We also evaluate four GPU models on OpBench and find that their performances differ on various types of operators and are not fully consistent with the performance metric FLOPS (floating point operations per second). | 10.1007/s11432-023-3989-3 | opbench: an operator-level gpu benchmark for deep learning | operators (such as conv and relu) play an important role in deep neural networks. every neural network is composed of a series of differentiable operators. however, existing ai benchmarks mainly focus on accessing model training and inference performance of deep learning systems on specific models. to help gpu hardware find computing bottlenecks and intuitively evaluate gpu performance on specific deep learning tasks, this paper focuses on evaluating gpu performance at the operator level. 
we statistically analyze the information of operators on 12 representative deep learning models from six prominent ai tasks and provide an operator dataset to show the different importance of various types of operators in different networks. an operator-level benchmark, opbench, is proposed on the basis of this dataset, allowing users to choose from a given range of models and set the input sizes according to their demands. this benchmark offers a detailed operator-level performance report for ai and hardware developers. we also evaluate four gpu models on opbench and find that their performances differ on various types of operators and are not fully consistent with the performance metric flops (floating point operations per second). | [
"operators",
"relu",
"an important role",
"deep neural networks",
"every neural network",
"a series",
"differentiable operators",
"existing ai benchmarks",
"model training and inference performance",
"deep learning systems",
"specific models",
"gpu hardware",
"computing bottlenecks",
"gpu performance",
"specific deep learning tasks",
"this paper",
"gpu performance",
"the operator level",
"we",
"the information",
"operators",
"12 representative deep learning models",
"six prominent ai tasks",
"an operator dataset",
"the different importance",
"various types",
"operators",
"different networks",
"an operator-level benchmark, opbench",
"the basis",
"this dataset",
"users",
"a given range",
"models",
"the input sizes",
"their demands",
"this benchmark",
"a detailed operator-level performance report",
"ai and hardware developers",
"we",
"four gpu models",
"opbench",
"their performances",
"various types",
"operators",
"the performance metric flops",
"floating point operations",
"12",
"six",
"four",
"second"
] |
Diabetes detection based on machine learning and deep learning approaches | [
"Boon Feng Wee",
"Saaveethya Sivakumar",
"King Hann Lim",
"W. K. Wong",
"Filbert H. Juwono"
] | The increasing number of diabetes individuals in the globe has alarmed the medical sector to seek alternatives to improve their medical technologies. Machine learning and deep learning approaches are active research in developing intelligent and efficient diabetes detection systems. This study profoundly investigates and discusses the impacts of the latest machine learning and deep learning approaches in diabetes identification/classifications. It is observed that diabetes data are limited in availability. Available databases comprise lab-based and invasive test measurements. Investigating anthropometric measurements and non-invasive tests must be performed to create a cost-effective yet high-performance solution. Several findings showed the possibility of reconstructing the detection models based on anthropometric measurements and non-invasive medical indicators. This study investigated the consequences of oversampling techniques and data dimensionality reduction through feature selection approaches. The future direction is highlighted in the research of feature selection approaches to improve the accuracy and reliability of diabetes identifications. | 10.1007/s11042-023-16407-5 | diabetes detection based on machine learning and deep learning approaches | the increasing number of diabetes individuals in the globe has alarmed the medical sector to seek alternatives to improve their medical technologies. machine learning and deep learning approaches are active research in developing intelligent and efficient diabetes detection systems. this study profoundly investigates and discusses the impacts of the latest machine learning and deep learning approaches in diabetes identification/classifications. it is observed that diabetes data are limited in availability. available databases comprise lab-based and invasive test measurements. 
investigating anthropometric measurements and non-invasive tests must be performed to create a cost-effective yet high-performance solution. several findings showed the possibility of reconstructing the detection models based on anthropometric measurements and non-invasive medical indicators. this study investigated the consequences of oversampling techniques and data dimensionality reduction through feature selection approaches. the future direction is highlighted in the research of feature selection approaches to improve the accuracy and reliability of diabetes identifications. | [
"the increasing number",
"diabetes individuals",
"the globe",
"the medical sector",
"alternatives",
"their medical technologies",
"machine learning",
"deep learning approaches",
"active research",
"intelligent and efficient diabetes detection systems",
"this study profoundly investigates",
"the impacts",
"the latest machine learning",
"deep learning approaches",
"diabetes identification/classifications",
"it",
"diabetes data",
"availability",
"available databases",
"lab-based and invasive test measurements",
"anthropometric measurements",
"non-invasive tests",
"a cost-effective yet high-performance solution",
"several findings",
"the possibility",
"the detection models",
"anthropometric measurements",
"non-invasive medical indicators",
"this study",
"the consequences",
"techniques and data dimensionality reduction",
"feature selection approaches",
"the future direction",
"the research",
"feature selection approaches",
"the accuracy",
"reliability",
"diabetes identifications"
] |
Deep-kidney: an effective deep learning framework for chronic kidney disease prediction | [
"Dina Saif",
"Amany M. Sarhan",
"Nada M. Elshennawy"
] | Chronic kidney disease (CKD) is one of today’s most serious illnesses. Because this disease usually does not manifest itself until the kidney is severely damaged, early detection saves many people’s lives. Therefore, the contribution of the current paper is proposing three predictive models to predict CKD possible occurrence within 6 or 12 months before disease existence namely; convolutional neural network (CNN), long short-term memory (LSTM) model, and deep ensemble model. The deep ensemble model fuses three base deep learning classifiers (CNN, LSTM, and LSTM-BLSTM) using majority voting technique. To evaluate the performance of the proposed models, several experiments were conducted on two different public datasets. Among the predictive models and the reached results, the deep ensemble model is superior to all the other models, with an accuracy of 0.993 and 0.992 for the 6-month data and 12-month data predictions, respectively. | 10.1007/s13755-023-00261-8 | deep-kidney: an effective deep learning framework for chronic kidney disease prediction | chronic kidney disease (ckd) is one of today’s most serious illnesses. because this disease usually does not manifest itself until the kidney is severely damaged, early detection saves many people’s lives. therefore, the contribution of the current paper is proposing three predictive models to predict ckd possible occurrence within 6 or 12 months before disease existence namely; convolutional neural network (cnn), long short-term memory (lstm) model, and deep ensemble model. the deep ensemble model fuses three base deep learning classifiers (cnn, lstm, and lstm-blstm) using majority voting technique. to evaluate the performance of the proposed models, several experiments were conducted on two different public datasets. 
among the predictive models and the reached results, the deep ensemble model is superior to all the other models, with an accuracy of 0.993 and 0.992 for the 6-month data and 12-month data predictions, respectively. | [
"chronic kidney disease",
"ckd",
"today’s most serious illnesses",
"this disease",
"itself",
"the kidney",
"early detection",
"many people’s lives",
"the contribution",
"the current paper",
"three predictive models",
"possible occurrence",
"disease existence",
"convolutional neural network",
"cnn",
"long short-term memory",
"lstm) model",
"deep ensemble model",
"the deep ensemble model",
"three base deep learning classifiers",
"cnn",
"lstm",
"lstm-blstm",
"majority voting technique",
"the performance",
"the proposed models",
"several experiments",
"two different public datasets",
"the predictive models",
"the reached results",
"the deep ensemble model",
"all the other models",
"an accuracy",
"the 6-month data",
"12-month data predictions",
"today",
"three",
"6 or 12 months",
"cnn",
"three",
"cnn",
"two",
"0.993",
"0.992",
"6-month",
"12-month"
] |
Predicting Potato Crop Yield with Machine Learning and Deep Learning for Sustainable Agriculture | [
"El-Sayed M. El-Kenawy",
"Amel Ali Alhussan",
"Nima Khodadadi",
"Seyedali Mirjalili",
"Marwa M. Eid"
] | Potatoes are an important crop in the world; they are the main source of food for a large number of people globally and also provide an income for many people. The true forecasting of potato yields is a determining factor for the rational use and maximization of agricultural practices, responsible management of the resources, and wider regions’ food security. The latest discoveries in machine learning and deep learning provide new directions to yield prediction models more accurately and sparingly. From the study, we evaluated different types of predictive models, including K-nearest neighbors (KNN), gradient boosting, XGBoost, and multilayer perceptron that use machine learning, as well as graph neural networks (GNNs), gated recurrent units (GRUs), and long short-term memory networks (LSTM), which are popular in deep learning models. These models are evaluated on the basis of some performance measures like mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE) to know how much they accurately predict the potato yields. The terminal results show that although gradient boosting and XGBoost algorithms are good at potato yield prediction, GNNs and LSTMs not only have the advantage of high accuracy but also capture the complex spatial and temporal patterns in the data. Gradient boosting resulted in an MSE of 0.03438 and an R2 of 0.49168, while XGBoost had an MSE of 0.03583 and an R2 of 0.35106. Out of all deep learning models, GNNs displayed an MSE of 0.02363 and an R2 of 0.51719, excelling in the overall performance. LSTMs and GRUs were reported to be very promising as well, with LSTMs comprehending an MSE of 0.03177 and GRUs grabbing an MSE of 0.03150. These findings underscore the potential of advanced predictive models to support sustainable agricultural practices and informed decision-making in the context of potato farming. 
| 10.1007/s11540-024-09753-w | predicting potato crop yield with machine learning and deep learning for sustainable agriculture | potatoes are an important crop in the world; they are the main source of food for a large number of people globally and also provide an income for many people. the true forecasting of potato yields is a determining factor for the rational use and maximization of agricultural practices, responsible management of the resources, and wider regions’ food security. the latest discoveries in machine learning and deep learning provide new directions to yield prediction models more accurately and sparingly. from the study, we evaluated different types of predictive models, including k-nearest neighbors (knn), gradient boosting, xgboost, and multilayer perceptron that use machine learning, as well as graph neural networks (gnns), gated recurrent units (grus), and long short-term memory networks (lstm), which are popular in deep learning models. these models are evaluated on the basis of some performance measures like mean squared error (mse), root mean squared error (rmse), and mean absolute error (mae) to know how much they accurately predict the potato yields. the terminal results show that although gradient boosting and xgboost algorithms are good at potato yield prediction, gnns and lstms not only have the advantage of high accuracy but also capture the complex spatial and temporal patterns in the data. gradient boosting resulted in an mse of 0.03438 and an r2 of 0.49168, while xgboost had an mse of 0.03583 and an r2 of 0.35106. out of all deep learning models, gnns displayed an mse of 0.02363 and an r2 of 0.51719, excelling in the overall performance. lstms and grus were reported to be very promising as well, with lstms comprehending an mse of 0.03177 and grus grabbing an mse of 0.03150. 
these findings underscore the potential of advanced predictive models to support sustainable agricultural practices and informed decision-making in the context of potato farming. | [
"potatoes",
"an important crop",
"the world",
"they",
"the main source",
"food",
"a large number",
"people",
"an income",
"many people",
"the true forecasting",
"potato yields",
"a determining factor",
"the rational use",
"maximization",
"agricultural practices",
"responsible management",
"the resources",
"wider regions’ food security",
"the latest discoveries",
"machine learning",
"deep learning",
"new directions",
"prediction models",
"the study",
"we",
"different types",
"predictive models",
"k-nearest neighbors",
"gradient boosting",
"xgboost",
"multilayer perceptron",
"that",
"machine learning",
"graph neural networks",
"gnns",
"grus",
"lstm",
"which",
"deep learning models",
"these models",
"the basis",
"some performance measures",
"mean squared error",
"mse",
"root mean squared error",
"rmse",
"absolute error",
"mae",
"they",
"the potato yields",
"the terminal results",
"gradient boosting",
"xgboost algorithms",
"potato yield prediction",
"gnns",
"the advantage",
"high accuracy",
"the complex spatial and temporal patterns",
"the data",
"an mse",
"an r2",
"xgboost",
"an mse",
"an r2",
"all deep learning models",
"gnns",
"an mse",
"an r2",
"the overall performance",
"grus",
"an mse",
"an mse",
"these findings",
"the potential",
"advanced predictive models",
"sustainable agricultural practices",
"informed decision-making",
"the context",
"potato farming",
"0.03438",
"0.49168",
"0.03583",
"0.02363",
"0.51719",
"0.03177",
"0.03150"
] |
Prediction of glycopeptide fragment mass spectra by deep learning | [
"Yi Yang",
"Qun Fang"
] | Deep learning has achieved a notable success in mass spectrometry-based proteomics and is now emerging in glycoproteomics. While various deep learning models can predict fragment mass spectra of peptides with good accuracy, they cannot cope with the non-linear glycan structure in an intact glycopeptide. Herein, we present DeepGlyco, a deep learning-based approach for the prediction of fragment spectra of intact glycopeptides. Our model adopts tree-structured long-short term memory networks to process the glycan moiety and a graph neural network architecture to incorporate potential fragmentation pathways of a specific glycan structure. This feature is beneficial to model explainability and differentiation ability of glycan structural isomers. We further demonstrate that predicted spectral libraries can be used for data-independent acquisition glycoproteomics as a supplement for library completeness. We expect that this work will provide a valuable deep learning resource for glycoproteomics. | 10.1038/s41467-024-46771-1 | prediction of glycopeptide fragment mass spectra by deep learning | deep learning has achieved a notable success in mass spectrometry-based proteomics and is now emerging in glycoproteomics. while various deep learning models can predict fragment mass spectra of peptides with good accuracy, they cannot cope with the non-linear glycan structure in an intact glycopeptide. herein, we present deepglyco, a deep learning-based approach for the prediction of fragment spectra of intact glycopeptides. our model adopts tree-structured long-short term memory networks to process the glycan moiety and a graph neural network architecture to incorporate potential fragmentation pathways of a specific glycan structure. this feature is beneficial to model explainability and differentiation ability of glycan structural isomers. 
we further demonstrate that predicted spectral libraries can be used for data-independent acquisition glycoproteomics as a supplement for library completeness. we expect that this work will provide a valuable deep learning resource for glycoproteomics. | [
"deep learning",
"a notable success",
"mass spectrometry-based proteomics",
"glycoproteomics",
"various deep learning models",
"fragment mass spectra",
"peptides",
"good accuracy",
"they",
"the non-linear glycan structure",
"an intact glycopeptide",
"we",
"deepglyco",
"a deep learning-based approach",
"the prediction",
"fragment spectra",
"intact glycopeptides",
"our model",
"tree-structured long-short term memory networks",
"the glycan moiety",
"a graph neural network architecture",
"potential fragmentation pathways",
"a specific glycan structure",
"this feature",
"model explainability",
"differentiation ability",
"glycan structural isomers",
"we",
"spectral libraries",
"data-independent acquisition glycoproteomics",
"a supplement",
"library completeness",
"we",
"this work",
"a valuable deep learning resource",
"glycoproteomics"
] |
Deep learning application in diagnosing breast cancer recurrence | [
"Zeinab Jam",
"Amir Albadvi",
"Alireza Atashi"
] | Patients' lives can always be saved when diseases, especially special diseases, are detected early. The chances of a patient surviving can be increased by early detection. Breast cancer is one of the deadliest and most common cancers. After recovering from breast cancer, patients are always worried about recurrence and return. The use of modern technology, however, can help predict disease recurrence at an early stage, allowing patients to receive treatment sooner. Significant strides have been achieved in deep learning, demonstrating strong performance in handling unstructured data challenges. However, when it comes to predicting tabular data, deep learning hasn't quite matched its success with unstructured data. Presently, ensemble models relying on gradient-boosted decision trees (GBDT) are frequently favored for tabular data prediction tasks. Typically, these GBDT-based models outshine deep learning approaches. Many novel deep learning techniques are emerging for handling tabular data. TabNet, for instance, mirrors decision tree feature selection within a neural network framework. AutoInt addresses high dimensionality by condensing data through embedding layers. Tab Transformer adapts the transformer model, generating text representations for categorical attributes. Despite their innovation, these methods remain less recognized compared to those for image and text data processing. In this study, 158 different characteristics of 5142 breast cancer patients from 1997 to 2019 were examined. We aim to evaluate the effectiveness of deep learning techniques in detecting breast cancer recurrence. Through examination of evaluation metrics, it becomes evident that deep learning approaches applied to tabular data surpass traditional machine learning algorithms, even when dealing with imbalanced datasets. Ultimately, the results derived from each algorithm are analyzed, and the study concludes with a review and comparison of the findings. 
| 10.1007/s11042-024-19423-1 | deep learning application in diagnosing breast cancer recurrence | patients' lives can always be saved when diseases, especially special diseases, are detected early. the chances of a patient surviving can be increased by early detection. breast cancer is one of the deadliest and common cancers. after recovering from breast cancer, patients are always worried about recurrence and return. the use of modern technology, however, can help predict disease recurrence at an early stage, allowing patients to receive treatment sooner. significant strides have been achieved in deep learning, demonstrating strong performance in handling unstructured data challenges. however, when it comes to predicting tabular data, deep learning hasn't quite matched its success with unstructured data. presently, ensemble models relying on gradient-boosted decision trees (gbdt) are frequently favored for tabular data prediction tasks. typically, these gbdt-based models outshine deep learning approaches. many novel deep learning techniques are emerging for handling tabular data. tabnet, for instance, mirrors decision tree feature selection within a neural network framework. autoint addresses high dimensionality by condensing data through embedding layers. tab transformer adapts the transformer model, generating text representations for categorical attributes. despite their innovation, these methods remain less recognized compared to those for image and text data processing. in this study, 158 different characteristics of 5142 breast cancer patients from 1997 to 2019 were examined. we aim to evaluate deep learning techniques effectiveness in detecting breast cancer recurrence. through examination of evaluation metrics, it becomes evident that deep learning approaches applied to tabular data surpass traditional machine learning algorithms, even when dealing with imbalanced datasets. 
ultimately, the results derived from each algorithm are analyzed, and the study concludes with a review and comparison of the findings. | [
"patients' lives",
"diseases",
"especially special diseases",
"the chances",
"a patient surviving",
"early detection",
"breast cancer",
"the deadliest and common cancers",
"breast cancer",
"patients",
"recurrence",
"return",
"the use",
"modern technology",
"disease recurrence",
"an early stage",
"patients",
"treatment sooner.significant strides",
"deep learning",
"strong performance",
"unstructured data challenges",
"it",
"tabular data",
"deep learning",
"its success",
"unstructured data",
"ensemble models",
"gradient-boosted decision trees",
"gbdt",
"tabular data prediction tasks",
"these gbdt-based models",
"approaches.many novel deep learning techniques",
"tabular data",
"tabnet",
"instance",
"mirrors decision tree",
"a neural network framework",
"autoint addresses high dimensionality",
"data",
"embedding layers",
"tab transformer",
"the transformer model",
"text representations",
"categorical attributes",
"their innovation",
"these methods",
"those",
"image",
"text data",
"158 different characteristics",
"5142 breast cancer patients",
"we",
"deep learning techniques effectiveness",
"breast cancer recurrence",
"examination",
"evaluation metrics",
"it",
"deep learning approaches",
"data surpass traditional machine learning algorithms",
"imbalanced datasets",
"the results",
"each algorithm",
"a review",
"comparison",
"the findings",
"one",
"sooner.significant",
"autoint",
"158",
"5142",
"from 1997 to 2019"
] |
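Several rows in this preview evaluate models with the regression metrics MSE, RMSE, and MAE (see the potato-yield abstract above). As a quick reference for readers of the dataset, here is a minimal stdlib-only Python sketch of those three formulas; the sample arrays are illustrative and do not come from any row:

```python
import math

def mse(y_true, y_pred):
    # Mean squared error: the average of the squared residuals.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean squared error: sqrt(MSE), in the same units as the target.
    return math.sqrt(mse(y_true, y_pred))

def mae(y_true, y_pred):
    # Mean absolute error: the average magnitude of the residuals.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative values only (not taken from the dataset).
y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.5, 5.0, 4.0, 8.0]
print(mse(y_true, y_pred))   # 0.875
print(mae(y_true, y_pred))   # 0.75
```

Lower is better for all three; RMSE penalizes large errors more heavily than MAE, which is why abstracts in this dataset often report both.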
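The Deep-kidney row above fuses three base classifiers (CNN, LSTM, and LSTM-BLSTM) with a majority-voting technique. A minimal sketch of that fusion step, assuming each base model's per-sample class predictions are already available as lists — the three prediction lists here are hypothetical, not reproduced from the paper:

```python
from collections import Counter

def majority_vote(*model_preds):
    # Fuse per-sample predictions from several classifiers:
    # for each sample, keep the most common predicted label.
    fused = []
    for sample_preds in zip(*model_preds):
        label, _count = Counter(sample_preds).most_common(1)[0]
        fused.append(label)
    return fused

# Hypothetical per-sample outputs of three base models.
cnn_preds   = [1, 0, 1, 1]
lstm_preds  = [1, 1, 0, 1]
blstm_preds = [0, 1, 1, 1]
print(majority_vote(cnn_preds, lstm_preds, blstm_preds))  # [1, 1, 1, 1]
```

With an odd number of binary classifiers, as in the Deep-kidney setup, no tie can occur; with an even number, `Counter.most_common` breaks ties by insertion order, so a tie-breaking rule would need to be chosen explicitly.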