title (stringlengths 31-206) | authors (sequencelengths 1-85) | abstract (stringlengths 428-3.21k) | doi (stringlengths 21-31) | cleaned_title (stringlengths 31-206) | cleaned_abstract (stringlengths 428-3.21k) | key_phrases (sequencelengths 19-150)
---|---|---|---|---|---|---|
Deep transfer learning for tool condition monitoring under different processing conditions | [
"Yongqing Wang",
"Mengmeng Niu",
"Kuo Liu",
"Haibo Liu",
"Bo Qin",
"Yiming Cui"
] | Deep learning methods have developed rapidly in the field of tool condition monitoring, but due to the complexity and diversity of working conditions, it is difficult to ensure the high accuracy, strong generalization performance, and wide applicability of monitoring models. Therefore, this article proposes a deep transfer learning method with center loss for tool condition monitoring under different processing conditions. Firstly, a Deep Extreme Learning Machine (DELM) is used to extract sample features and perform tool condition monitoring, and Deep CORAL is integrated into the last feature extraction layer of the DELM for domain adaptation. Secondly, the center loss is introduced into the transfer learning model to improve intra-class compactness by minimizing the center loss, thereby obtaining a broader decision boundary. A tool wear experiment was conducted on a milling machine. Research shows that the proposed method can achieve tool condition monitoring under different processing conditions. The introduction of center loss is beneficial for promoting the separation of samples from different categories in the target domain and effectively improves the model’s applicability. Compared with other domain adaptation methods, this method has better accuracy and generalization ability. | 10.1007/s00170-024-13713-6 | deep transfer learning for tool condition monitoring under different processing conditions | deep learning methods have developed rapidly in the field of tool condition monitoring, but due to the complexity and diversity of working conditions, it is difficult to ensure the high accuracy, strong generalization performance, and wide applicability of monitoring models. therefore, this article proposes a deep transfer learning method with center loss for tool condition monitoring under different processing conditions. firstly, a deep extreme learning machine (delm) is used to extract sample features and perform tool condition monitoring, and deep coral is integrated into the last feature extraction layer of the delm for domain adaptation. secondly, the center loss is introduced into the transfer learning model to improve intra-class compactness by minimizing the center loss, thereby obtaining a broader decision boundary. a tool wear experiment was conducted on a milling machine. research shows that the proposed method can achieve tool condition monitoring under different processing conditions. the introduction of center loss is beneficial for promoting the separation of samples from different categories in the target domain and effectively improves the model’s applicability. compared with other domain adaptation methods, this method has better accuracy and generalization ability. | [
"deep learning methods",
"the field",
"tool condition monitoring",
"the complexity",
"diversity",
"working conditions",
"it",
"the high accuracy, strong generalization performance",
"wide applicability",
"monitoring models",
"this article",
"a deep transfer learning method",
"center loss",
"tool condition monitoring",
"different processing conditions",
"deep extreme learning machine",
"(delm",
"sample features",
"tool condition monitoring",
"deep coral",
"the last feature extraction layer",
"delm",
"domain adaptation",
"the center loss",
"the transfer learning model",
"the intra-class compactness",
"the center loss",
"a broader decision boundary",
"a tool wear experiment",
"a milling machine",
"research",
"the proposed method",
"the tool condition monitoring",
"different processing conditions",
"the introduction",
"central loss",
"the separation",
"samples",
"different categories",
"the target domain",
"the model’s applicability",
"other domain adaptation methods",
"this method",
"better accuracy",
"generalization ability",
"firstly",
"secondly"
] |
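The two auxiliary losses named in this abstract, Deep CORAL for domain adaptation and center loss for intra-class compactness, are standard and easy to reproduce. Below is a minimal PyTorch sketch of both, not the authors' DELM implementation; the feature dimension, class count, and loss weight are illustrative assumptions.

```python
import torch

def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Frobenius-norm distance between source and target feature covariances."""
    d = source.size(1)
    def cov(x):
        x = x - x.mean(dim=0, keepdim=True)
        return (x.t() @ x) / (x.size(0) - 1)
    return ((cov(source) - cov(target)) ** 2).sum() / (4 * d * d)

def center_loss(features: torch.Tensor, labels: torch.Tensor,
                centers: torch.Tensor) -> torch.Tensor:
    """Mean squared distance of each feature to its class center."""
    return ((features - centers[labels]) ** 2).sum(dim=1).mean() / 2

# Illustrative usage: 3 wear states, 64-d features from the last extraction layer.
centers = torch.zeros(3, 64, requires_grad=True)
src_feat, tgt_feat = torch.randn(32, 64), torch.randn(32, 64)
src_labels = torch.randint(0, 3, (32,))
loss = coral_loss(src_feat, tgt_feat) + 0.1 * center_loss(src_feat, src_labels, centers)
```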
Breast Mammograms Diagnosis Using Deep Learning: State of Art Tutorial Review | [
"Osama Bin Naeem",
"Yasir Saleem",
"M. Usman Ghani Khan",
"Amjad Rehman Khan",
"Tanzila Saba",
"Saeed Ali Bahaj",
"Noor Ayesha"
] | Usually, screening (mostly mammography) is used by radiologists to manually detect breast cancer. The likelihood of identifying suspected cases as false positives or false negatives is significant, contingent on the experience of the radiologist and the kind of imaging screening device/method utilized. The type of tumour seen by the radiologist is confirmed by histological investigation (microscopic analysis) of a biopsy, through which the tumour's grade and stage, used in the latter stages of treatment, are ascertained. However, a secondary issue with the cancer detection process is that only 15 to 30% of instances that are referred for biopsy result in malignant findings. Since deep learning has demonstrated remarkable performance in visual recognition challenges, it has been widely applied to a variety of tasks. Similar examples include deep learning applications in healthcare, which are gaining a lot of interest from the research community. Deep learning is used to identify and categorize tumours, and breast cancer is a significant global health concern. The medical sciences can now make more accurate diagnoses and detections due to recent advancements in machine learning techniques. Hence, due to such systems' potential accuracy, they could offer promising outcomes when used to read malignant images. In imaging domains, deep learning-based methods have achieved remarkable success in constituent segmentation (UNet), localization (DenseNet), and classification (VGG-19). This study examines how deep learning methods are assisting in the highly accurate diagnosis of benign or malignant tumours based on screened images. In contrast to a mammogram, which is covered in detail, this paper briefly discusses imaging methods for cancer detection. Early detection and cost effectiveness are two main benefits of applying machine learning and deep learning techniques to mammograms. | 10.1007/s11831-023-10052-9 | breast mammograms diagnosis using deep learning: state of art tutorial review | usually, screening (mostly mammography) is used by radiologists to manually detect breast cancer. the likelihood of identifying suspected cases as false positives or false negatives is significant, contingent on the experience of the radiologist and the kind of imaging screening device/method utilized. the type of tumour seen by the radiologist is confirmed by histological investigation (microscopic analysis) of a biopsy, through which the tumour's grade and stage, used in the latter stages of treatment, are ascertained. however, a secondary issue with the cancer detection process is that only 15 to 30% of instances that are referred for biopsy result in malignant findings. since deep learning has demonstrated remarkable performance in visual recognition challenges, it has been widely applied to a variety of tasks. similar examples include deep learning applications in healthcare, which are gaining a lot of interest from the research community. deep learning is used to identify and categorize tumours, and breast cancer is a significant global health concern. the medical sciences can now make more accurate diagnoses and detections due to recent advancements in machine learning techniques. hence, due to such systems' potential accuracy, they could offer promising outcomes when used to read malignant images. 
in imaging domains, deep learning-based methods have achieved remarkable success in constituent segmentation (unet), localization (densenet), and classification (vgg-19). this study examines how deep learning methods are assisting in the highly accurate diagnosis of benign or malignant tumours based on screened images. in contrast to a mammogram, which is covered in detail, this paper briefly discusses imaging methods for cancer detection. early detection and cost effectiveness are two main benefits of applying machine learning and deep learning techniques to mammograms. | [
"(mostly mammography",
"radiologists",
"breast cancer",
"the likelihood",
"suspected cases",
"false positives",
"false negatives",
"the experience",
"the radiologist",
"the kind",
"device/method",
"the confirmation",
"the type",
"tumour",
"the radiologist",
"histological investigation",
"microscopic analysis",
"a biopsy",
"the tumor's grade",
"stage",
"which",
"the latter stages",
"treatment",
"biopsy",
"a secondary issue",
"the cancer detection process",
"only 15 to 30%",
"instances",
"that",
"biopsy result",
"malignant findings",
"deep learning",
"remarkable performance",
"visual recognition challenges",
"it",
"a variety",
"tasks",
"similar examples",
"deep learning applications",
"healthcare",
"which",
"a lot",
"interest",
"the research community",
"deep learning",
"categories tumours",
"breast cancer",
"a significant global health concern",
"the medical sciences",
"more accurate diagnoses",
"detections",
"recent advancements",
"machine learning techniques",
"systems potential accuracy",
"it",
"optimistic outcomes",
"malignant images",
"imaging",
"domains",
"deep learning-based methods",
"remarkable success",
"constituent segmentation",
"unet",
"localization",
"densenet",
"classification",
"vgg-19",
"this study examines",
"deep learning methods",
"the highly accurate diagnosis",
"benign or malignant tumours",
"screened images",
"contrast",
"a mammogram",
"which",
"detail",
"this paper",
"imaging methods",
"cancer detection",
"early detection and cost effectiveness",
"two main benefits",
"machine learning",
"deep learning techniques",
"mammograms",
"only 15 to",
"two"
] |
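VGG-19, one of the classification backbones this review names, is typically adapted to mammograms by transfer learning. A minimal, hedged torchvision sketch; the frozen-feature strategy and the two-class head are assumptions, not the reviewed papers' exact setups.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained VGG-19 (API names follow torchvision >= 0.13).
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False              # keep pretrained convolutional filters
model.classifier[6] = nn.Linear(4096, 2)  # benign vs. malignant head

x = torch.randn(1, 3, 224, 224)           # a mammogram patch resized to 224x224
logits = model(x)                          # fine-tune only the classifier in practice
```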
A Survey on ensemble learning under the era of deep learning | [
"Yongquan Yang",
"Haijun Lv",
"Ning Chen"
] | Due to the dominant position of deep learning (mostly deep neural networks) in various artificial intelligence applications, ensemble learning based on deep neural networks (ensemble deep learning) has recently shown significant performance in improving the generalization of learning systems. However, since modern deep neural networks usually have millions to billions of parameters, the time and space overheads for training multiple base deep learners and testing with the ensemble deep learner are far greater than those of traditional ensemble learning. Though several algorithms for fast ensemble deep learning have been proposed to promote the deployment of ensemble deep learning in some applications, further advances still need to be made for many applications in specific fields, where the development time and computing resources are usually restricted or the data to be processed is of large dimensionality. An urgent problem that needs to be solved is how to retain the significant advantages of ensemble deep learning while reducing the required expenses, so that many more applications in specific fields can benefit from it. For the alleviation of this problem, it is essential to know how ensemble learning has developed under the era of deep learning. Thus, in this article, we present discussions focusing on data analyses of published works, methodologies, recent advances, and the unattainability of traditional ensemble learning and ensemble deep learning. We hope this article will be helpful for realizing the intrinsic problems and technical challenges faced by future developments of ensemble learning under the era of deep learning. | 10.1007/s10462-022-10283-5 | a survey on ensemble learning under the era of deep learning | due to the dominant position of deep learning (mostly deep neural networks) in various artificial intelligence applications, ensemble learning based on deep neural networks (ensemble deep learning) has recently shown significant performance in improving the generalization of learning systems. however, since modern deep neural networks usually have millions to billions of parameters, the time and space overheads for training multiple base deep learners and testing with the ensemble deep learner are far greater than those of traditional ensemble learning. though several algorithms for fast ensemble deep learning have been proposed to promote the deployment of ensemble deep learning in some applications, further advances still need to be made for many applications in specific fields, where the development time and computing resources are usually restricted or the data to be processed is of large dimensionality. an urgent problem that needs to be solved is how to retain the significant advantages of ensemble deep learning while reducing the required expenses, so that many more applications in specific fields can benefit from it. for the alleviation of this problem, it is essential to know how ensemble learning has developed under the era of deep learning. thus, in this article, we present discussions focusing on data analyses of published works, methodologies, recent advances, and the unattainability of traditional ensemble learning and ensemble deep learning. we hope this article will be helpful for realizing the intrinsic problems and technical challenges faced by future developments of ensemble learning under the era of deep learning. | [
"the dominant position",
"deep learning",
"mostly deep neural networks",
"various artificial intelligence applications",
", recently, ensemble learning",
"deep neural networks",
"ensemble deep learning",
"significant performances",
"the generalization",
"learning system",
"modern deep neural networks",
"millions",
"billions",
"parameters",
"the time and space overheads",
"multiple base deep learners",
"the ensemble deep learner",
"that",
"traditional ensemble learning",
"several algorithms",
"fast ensemble deep learning",
"the deployment",
"ensemble deep learning",
"some applications",
"further advances",
"many applications",
"specific fields",
"the developing time",
"computing resources",
"the data",
"large dimensionality",
"an urgent problem",
"the significant advantages",
"ensemble deep learning",
"the required expenses",
"many more applications",
"specific fields",
"it",
"the alleviation",
"this problem",
"it",
"how ensemble learning",
"the era",
"deep learning",
"this article",
"we",
"discussions",
"data analyses",
"published works",
"methodologies",
"recent advances",
"unattainability",
"traditional ensemble learning",
"ensemble deep learning",
"we",
"this article",
"the intrinsic problems",
"technical challenges",
"future developments",
"ensemble learning",
"the era",
"deep learning",
"millions to billions"
] |
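The basic pattern this survey builds on can be stated in a few lines: combine the outputs of several independently trained deep learners, at the cost of one extra forward pass per member, which is exactly the overhead the survey highlights. A minimal soft-voting sketch with placeholder models:

```python
import torch

def ensemble_predict(models, x):
    """Soft voting: average the class probabilities of all base learners."""
    probs = [torch.softmax(m(x), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0)

# Placeholder base learners; in practice each is trained separately.
models = [torch.nn.Sequential(torch.nn.Linear(10, 5)) for _ in range(3)]
x = torch.randn(4, 10)
avg_probs = ensemble_predict(models, x)
prediction = avg_probs.argmax(dim=1)   # final ensemble decision
```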
Realistic fault detection of li-ion battery via dynamical deep learning | [
"Jingzhao Zhang",
"Yanan Wang",
"Benben Jiang",
"Haowei He",
"Shaobo Huang",
"Chen Wang",
"Yang Zhang",
"Xuebing Han",
"Dongxu Guo",
"Guannan He",
"Minggao Ouyang"
] | Accurate evaluation of Li-ion battery (LiB) safety conditions can reduce unexpected cell failures, facilitate battery deployment, and promote low-carbon economies. Despite the recent progress in artificial intelligence, anomaly detection methods are not customized for or validated in realistic battery settings due to the complex failure mechanisms and the lack of real-world testing frameworks with large-scale datasets. Here, we develop a realistic deep-learning framework for electric vehicle (EV) LiB anomaly detection. It features a dynamical autoencoder tailored for dynamical systems and configured by social and financial factors. We test our detection algorithm on released datasets comprising over 690,000 LiB charging snippets from 347 EVs. Our model overcomes the limitations of state-of-the-art fault detection models, including deep learning ones. Moreover, it reduces the expected direct EV battery fault and inspection costs. Our work highlights the potential of deep learning in improving LiB safety and the significance of social and financial information in designing deep learning models. | 10.1038/s41467-023-41226-5 | realistic fault detection of li-ion battery via dynamical deep learning | accurate evaluation of li-ion battery (lib) safety conditions can reduce unexpected cell failures, facilitate battery deployment, and promote low-carbon economies. despite the recent progress in artificial intelligence, anomaly detection methods are not customized for or validated in realistic battery settings due to the complex failure mechanisms and the lack of real-world testing frameworks with large-scale datasets. here, we develop a realistic deep-learning framework for electric vehicle (ev) lib anomaly detection. it features a dynamical autoencoder tailored for dynamical systems and configured by social and financial factors. we test our detection algorithm on released datasets comprising over 690,000 lib charging snippets from 347 evs. our model overcomes the limitations of state-of-the-art fault detection models, including deep learning ones. moreover, it reduces the expected direct ev battery fault and inspection costs. our work highlights the potential of deep learning in improving lib safety and the significance of social and financial information in designing deep learning models. | [
"accurate evaluation",
"li-ion battery (lib) safety conditions",
"unexpected cell failures",
"facilitate battery deployment",
"low-carbon economies",
"the recent progress",
"artificial intelligence",
"anomaly detection methods",
"realistic battery settings",
"the complex failure mechanisms",
"the lack",
"real-world testing frameworks",
"large-scale datasets",
"we",
"a realistic deep-learning framework",
"electric vehicle (ev) lib anomaly detection",
"it",
"a dynamical autoencoder",
"dynamical systems",
"social and financial factors",
"we",
"our detection algorithm",
"released datasets",
"over 690,000 lib",
"snippets",
"347 evs",
"our model",
"the limitations",
"the-art",
"deep learning ones",
"it",
"the expected direct ev battery fault and inspection costs",
"our work",
"the potential",
"deep learning",
"lib safety",
"the significance",
"social and financial information",
"deep learning models",
"690,000",
"347"
] |
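The paper's dynamical autoencoder is not reproduced here, but the underlying detection principle, flagging charging snippets whose reconstruction error is unusually high, can be sketched with a generic GRU autoencoder. Feature count, sequence length, and threshold below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):
        _, h = self.encoder(x)                          # summarize the snippet
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)  # broadcast over time
        dec, _ = self.decoder(z)
        return self.out(dec)

model = SeqAutoencoder()
snippet = torch.randn(8, 100, 4)            # batch of charging snippets
recon = model(snippet)
error = ((recon - snippet) ** 2).mean(dim=(1, 2))
is_anomaly = error > 0.5                    # threshold tuned on validation data
```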
A Deep Learning-Based Object Representation Algorithm for Smart Retail Management | [
"Bin Liu"
] | This study underscores the vital role of object representation and detection in smart retail management systems for optimizing customer experiences and operational efficiency. The literature review reveals a preference for deep learning techniques, citing their superior accuracy compared to traditional methods. While acknowledging the challenges of achieving high accuracy and low computation costs simultaneously in deep learning-based object representation, the paper proposes a solution using the YOLOv7 framework. In order to navigate the ever-changing landscape of smart retail technologies, the study clarifies the potential scalability and flexibility of deep learning approaches. The method employs a custom dataset, and experimental results demonstrate the model’s efficacy, showcasing accurate results and enhanced performance in various experiments and analyses. | 10.1007/s40031-024-01051-w | a deep learning-based object representation algorithm for smart retail management | this study underscores the vital role of object representation and detection in smart retail management systems for optimizing customer experiences and operational efficiency. the literature review reveals a preference for deep learning techniques, citing their superior accuracy compared to traditional methods. while acknowledging the challenges of achieving high accuracy and low computation costs simultaneously in deep learning-based object representation, the paper proposes a solution using the yolov7 framework. in order to navigate the ever-changing landscape of smart retail technologies, the study clarifies the potential scalability and flexibility of deep learning approaches. the method employs a custom dataset, and experimental results demonstrate the model’s efficacy, showcasing accurate results and enhanced performance in various experiments and analyses. | [
"this study",
"the vital role",
"object representation",
"detection",
"smart retail management systems",
"customer experiences",
"operational efficiency",
"the literature review",
"a preference",
"deep learning techniques",
"their superior accuracy",
"traditional methods",
"the challenges",
"high accuracy",
"low computation costs",
"deep learning-based object representation",
"the paper",
"a solution",
"the yolov7 framework",
"order",
"the ever-changing landscape",
"smart retail technologies",
"the study",
"the potential scalability",
"flexibility",
"deep learning approaches",
"the method",
"a custom dataset",
"experimental results",
"the model’s efficacy",
"accurate results",
"enhanced performance",
"various experiments",
"analyses"
] |
Optimization of Fed-Batch Baker’s Yeast Fermentation Using Deep Reinforcement Learning | [
"Wan Ying Chai",
"Min Keng Tan",
"Kenneth Tze Kin Teo",
"Heng Jin Tham"
] | Fermentation is widely used in chemical industries to produce valuable products. It consumes less energy and has a lesser environmental impact compared to conventional chemical processes. However, the inherent nonlinearity of the fermentation process and limited comprehension of its metabolic mechanisms present challenges for control and optimization, particularly in minimizing the formation of by-products. Reinforcement learning is a machine learning method where an agent learns through exploration and experience. By receiving feedback in the form of rewards, it computes an optimal policy which produces the maximum cumulative reward. The integration of deep learning with reinforcement learning has further improved the efficiency of classical reinforcement learning, particularly in continuous control. This paper focuses on optimizing the substrate feeding rate in a simulated fed-batch baker’s yeast fermentation using deep reinforcement learning. An artificial neural network (ANN) was applied as the function approximator to estimate the state-action function for determining the substrate feeding rate within a large state space, which includes substrate concentration, yeast concentration, and the change in ethanol concentration. The deep reinforcement learning algorithm was formulated based on the optimization objective of maximizing yeast production while minimizing ethanol formation. The performance of the feeding strategy proposed using deep reinforcement learning was compared to a commonly used pre-determined exponential feeding profile. The results show that the proposed feeding strategy outperformed the exponential feeding strategy, increasing the yeast yield by 25.66% with negligible ethanol production. In addition, the proposed algorithm exhibits effective handling of various initial conditions compared to the exponential feeding approach. | 10.1007/s41660-024-00406-6 | optimization of fed-batch baker’s yeast fermentation using deep reinforcement learning | fermentation is widely used in chemical industries to produce valuable products. it consumes less energy and has a lesser environmental impact compared to conventional chemical processes. however, the inherent nonlinearity of the fermentation process and limited comprehension of its metabolic mechanisms present challenges for control and optimization, particularly in minimizing the formation of by-products. reinforcement learning is a machine learning method where an agent learns through exploration and experience. by receiving feedback in the form of rewards, it computes an optimal policy which produces the maximum cumulative reward. the integration of deep learning with reinforcement learning has further improved the efficiency of classical reinforcement learning, particularly in continuous control. this paper focuses on optimizing the substrate feeding rate in a simulated fed-batch baker’s yeast fermentation using deep reinforcement learning. an artificial neural network (ann) was applied as the function approximator to estimate the state-action function for determining the substrate feeding rate within a large state space, which includes substrate concentration, yeast concentration, and the change in ethanol concentration. the deep reinforcement learning algorithm was formulated based on the optimization objective of maximizing yeast production while minimizing ethanol formation. the performance of the feeding strategy proposed using deep reinforcement learning was compared to a commonly used pre-determined exponential feeding profile. 
the results show that the proposed feeding strategy outperformed the exponential feeding strategy, increasing the yeast yield by 25.66% with negligible ethanol production. in addition, the proposed algorithm exhibits effective handling of various initial conditions compared to the exponential feeding approach. | [
"fermentation",
"chemical industries",
"valuable products",
"it",
"less energy",
"a lesser environmental impact",
"conventional chemical processes",
"however, the inherent nonlinearity",
"the fermentation process",
"limited comprehension",
"its metabolic mechanisms",
"present challenges",
"control",
"optimization",
"the formation",
"products",
"reinforcement learning",
"a machine learning method",
"an agent",
"exploration",
"experience",
"feedback",
"the form",
"rewards",
"it",
"an optimal policy",
"which",
"maximum cumulative reward",
"the integration",
"deep learning",
"reinforcement learning",
"the efficiency",
"classical reinforcement learning",
"continuous control",
"this paper",
"substrate feeding rate",
"a simulated fed-batch baker’s yeast fermentation",
"deep reinforcement learning",
"artificial neural network",
"ann",
"the function approximator",
"the state-action function",
"the substrate feeding rate",
"a large state space",
"which",
"substrate concentration",
"yeast concentration",
"the change",
"ethanol concentration",
"the deep reinforcement learning algorithm",
"the optimization objective",
"yeast production",
"ethanol formation",
"the performance",
"the feeding strategy",
"deep reinforcement learning",
"a commonly used pre-determined exponential feeding profile",
"the results",
"the proposed feeding strategy",
"the exponential feeding strategy",
"the yeast yield",
"25.66%",
"negligible ethanol production",
"addition",
"the proposed algorithm",
"effective handling",
"various initial conditions",
"the exponential feeding approach",
"baker",
"25.66%"
] |
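The abstract's ANN function approximator over a state of substrate concentration, yeast concentration, and change in ethanol can be sketched as a small Q-network over discretized feeding rates. The architecture, rate grid, units, and epsilon-greedy policy below are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

FEED_RATES = torch.linspace(0.0, 0.5, steps=11)   # L/h, illustrative grid

# Maps [substrate, yeast, delta-ethanol] to a value per candidate feed rate.
q_net = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, len(FEED_RATES)),
)

def choose_feed_rate(state, epsilon=0.1):
    """Epsilon-greedy action selection over the discretized rate grid."""
    if torch.rand(()) < epsilon:
        idx = torch.randint(0, len(FEED_RATES), ())
    else:
        with torch.no_grad():
            idx = q_net(state).argmax()
    return FEED_RATES[idx]

state = torch.tensor([5.0, 12.0, -0.02])  # toy state vector
rate = choose_feed_rate(state)             # feed rate applied for the next step
```

The reward would be shaped from the paper's stated objective, rewarding yeast growth and penalizing ethanol formation, with the network updated by a standard temporal-difference rule.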
Deep residual learning with Anscombe transformation for low-dose digital tomosynthesis | [
"Youngjin Lee",
"Seungwan Lee",
"Chanrok Park"
] | Deep learning-based convolutional neural networks (CNNs) have been proposed for enhancing the quality of digital tomosynthesis (DTS) images. However, the direct applications of the conventional CNNs for low-dose DTS imaging are limited to provide acceptable image quality due to the inaccurate recognition of complex texture patterns. In this study, a deep residual learning network combined with the Anscombe transformation was proposed for simplifying the complex texture and restoring the low-dose DTS image quality. The proposed network consisted of convolution layers, max-pooling layers, up-sampling layers, and skip connections. The network training was performed to learn the residual images between the ground-truth and low-dose projections, which were converted using the Anscombe transformation. As a result, the proposed network enhanced the quantitative accuracy and noise characteristic of DTS images by 1.01–1.27 and 1.14–1.71 times, respectively, in comparison to low-dose DTS images and other deep learning networks. The spatial resolution of the DTS image restored using the proposed network was 1.12 times higher than that obtained using a deep image learning network. In conclusion, the proposed network can restore the low-dose DTS image quality and provide an optimal model for low-dose DTS imaging. | 10.1007/s40042-024-01117-4 | deep residual learning with anscombe transformation for low-dose digital tomosynthesis | deep learning-based convolutional neural networks (cnns) have been proposed for enhancing the quality of digital tomosynthesis (dts) images. however, the direct applications of the conventional cnns for low-dose dts imaging are limited to provide acceptable image quality due to the inaccurate recognition of complex texture patterns. in this study, a deep residual learning network combined with the anscombe transformation was proposed for simplifying the complex texture and restoring the low-dose dts image quality. the proposed network consisted of convolution layers, max-pooling layers, up-sampling layers, and skip connections. the network training was performed to learn the residual images between the ground-truth and low-dose projections, which were converted using the anscombe transformation. as a result, the proposed network enhanced the quantitative accuracy and noise characteristic of dts images by 1.01–1.27 and 1.14–1.71 times, respectively, in comparison to low-dose dts images and other deep learning networks. the spatial resolution of the dts image restored using the proposed network was 1.12 times higher than that obtained using a deep image learning network. in conclusion, the proposed network can restore the low-dose dts image quality and provide an optimal model for low-dose dts imaging. | [
"deep learning-based convolutional neural networks",
"cnns",
"the quality",
"digital tomosynthesis (dts) images",
"the direct applications",
"the conventional cnns",
"low-dose dts imaging",
"acceptable image quality",
"the inaccurate recognition",
"complex texture patterns",
"this study",
"a deep residual learning network",
"the anscombe transformation",
"the complex texture",
"the low-dose dts image quality",
"the proposed network",
"convolution layers",
"max-pooling layers",
"up-sampling layers",
"skip connections",
"the network training",
"the residual images",
"the ground-truth and low-dose projections",
"which",
"the anscombe transformation",
"a result",
"the proposed network",
"the quantitative accuracy",
"dts images",
"1.01–1.27 and 1.14–1.71 times",
"comparison",
"low-dose dts images",
"other deep learning networks",
"the spatial resolution",
"the dts image",
"the proposed network",
"that",
"a deep image learning network",
"conclusion",
"the proposed network",
"the low-dose dts image quality",
"an optimal model",
"low-dose dts imaging",
"max",
"1.01–1.27",
"1.14–1.71",
"1.12"
] |
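The Anscombe transformation used here is a standard variance stabilizer for Poisson noise, A(x) = 2*sqrt(x + 3/8). A minimal NumPy sketch with the direct algebraic inverse; the paper may use a refined unbiased inverse.

```python
import numpy as np

def anscombe(x):
    """Map Poisson-distributed counts to approximately unit-variance data."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Direct algebraic inverse; biased for very low counts."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

counts = np.random.poisson(lam=20.0, size=(64, 64))  # toy low-dose projection
stabilized = anscombe(counts)        # what the residual network would see
restored = inverse_anscombe(stabilized)
```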
An integrated deep-learning model for smart waste classification | [
"Shivendu Mishra",
"Ritika Yaduvanshi",
"Prince Rajpoot",
"Sharad Verma",
"Amit Kumar Pandey",
"Digvijay Pandey"
] | Efficient waste management is essential for human well-being and environmental health, as neglecting proper disposal practices can lead to financial losses and the depletion of natural resources. Given the rapid urbanization and population growth, developing an automated, innovative waste classification model becomes imperative. To address this need, our paper introduces a novel and robust solution — a smart waste classification model that leverages a hybrid deep learning model (Optimized DenseNet-121 + SVM) to categorize waste items using the TrashNet datasets. Our proposed approach uses the advanced deep learning model DenseNet-121, optimized for superior performance, to extract meaningful features from an expanded TrashNet dataset. These features are subsequently fed into a support vector machine (SVM) for precise classification. Employing data augmentation techniques further enhances classification accuracy while mitigating the risk of overfitting, especially when working with limited TrashNet data. The results of our experimental evaluation of this hybrid deep learning model are highly promising, with an impressive accuracy rate of 99.84%. This accuracy surpasses similar existing models, affirming the efficacy and potential of our approach to revolutionizing waste classification for a sustainable and cleaner future. | 10.1007/s10661-024-12410-x | an integrated deep-learning model for smart waste classification | efficient waste management is essential for human well-being and environmental health, as neglecting proper disposal practices can lead to financial losses and the depletion of natural resources. given the rapid urbanization and population growth, developing an automated, innovative waste classification model becomes imperative. to address this need, our paper introduces a novel and robust solution — a smart waste classification model that leverages a hybrid deep learning model (optimized densenet-121 + svm) to categorize waste items using the trashnet datasets. our proposed approach uses the advanced deep learning model densenet-121, optimized for superior performance, to extract meaningful features from an expanded trashnet dataset. these features are subsequently fed into a support vector machine (svm) for precise classification. employing data augmentation techniques further enhances classification accuracy while mitigating the risk of overfitting, especially when working with limited trashnet data. the results of our experimental evaluation of this hybrid deep learning model are highly promising, with an impressive accuracy rate of 99.84%. this accuracy surpasses similar existing models, affirming the efficacy and potential of our approach to revolutionizing waste classification for a sustainable and cleaner future. | [
"efficient waste management",
"human well-being and environmental health",
"proper disposal practices",
"financial losses",
"the depletion",
"natural resources",
"the rapid urbanization",
"population growth",
"an automated, innovative waste classification model",
"this need",
"our paper",
"a novel and robust solution",
"a smart waste classification model",
"that",
"a hybrid deep learning model",
"svm",
"waste items",
"the trashnet datasets",
"our proposed approach",
"the advanced deep learning model densenet-121",
"superior performance",
"meaningful features",
"an expanded trashnet dataset",
"these features",
"a support vector machine",
"svm",
"precise classification",
"data augmentation",
"classification accuracy",
"the risk",
"overfitting",
"limited trashnet data",
"the results",
"our experimental evaluation",
"this hybrid deep learning model",
"an impressive accuracy rate",
"99.84%",
"this accuracy",
"similar existing models",
"the efficacy",
"potential",
"our approach",
"waste classification",
"a sustainable and cleaner future",
"99.84%"
] |
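The hybrid pipeline named in this abstract, pretrained DenseNet-121 features fed to an SVM, follows a common pattern. A minimal sketch; the pooling step and RBF kernel are assumptions, and the paper's DenseNet optimization and data augmentation are omitted.

```python
import torch
from torchvision import models
from sklearn.svm import SVC

# ImageNet-pretrained DenseNet-121 as a frozen feature extractor.
backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.eval()

def extract_features(images):
    """Global-average-pool the last DenseNet feature map into 1024-d vectors."""
    with torch.no_grad():
        fmap = backbone.features(images)          # (N, 1024, 7, 7) for 224x224 input
        return torch.relu(fmap).mean(dim=(2, 3)).numpy()

train_images = torch.randn(16, 3, 224, 224)       # stand-in for TrashNet images
train_labels = [i % 4 for i in range(16)]          # toy waste-class ids
clf = SVC(kernel="rbf").fit(extract_features(train_images), train_labels)
pred = clf.predict(extract_features(torch.randn(2, 3, 224, 224)))
```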
Deep learning based multiclass classification for citrus anomaly detection in agriculture | [
"Ebru Ergün"
] | In regions where citrus crops are threatened by diseases caused by fungi, bacteria, pests and viruses, growers are actively seeking automated technologies that can accurately detect citrus anomalies to minimize economic losses. Recent advances in deep learning techniques have shown potential in automating and improving the accuracy of citrus anomaly categorization. This research explores the use of deep learning methods, specifically DenseNet, to construct robust models capable of accurately distinguishing between different types of citrus anomalies. The dataset used in this study consists of high-resolution images of different orange leaves of the species Citrus sinensis Osbeck, collected from orange groves in the states of Tamaulipas and San Luis Potosi in northeastern Mexico. Experimental results demonstrated the effectiveness of the proposed deep learning models in simultaneously identifying 12 different classes of citrus anomalies. Evaluation metrics, including accuracy, recall, precision and the confusion matrix, underscore the discriminative power of the models. Among the convolutional neural network architectures used, DenseNet achieved the highest classification accuracy at 99.50%. The study concluded by highlighting the potential for scalable and effective citrus anomaly classification and management using deep learning-based systems. | 10.1007/s11760-024-03452-2 | deep learning based multiclass classification for citrus anomaly detection in agriculture | in regions where citrus crops are threatened by diseases caused by fungi, bacteria, pests and viruses, growers are actively seeking automated technologies that can accurately detect citrus anomalies to minimize economic losses. recent advances in deep learning techniques have shown potential in automating and improving the accuracy of citrus anomaly categorization. this research explores the use of deep learning methods, specifically densenet, to construct robust models capable of accurately distinguishing between different types of citrus anomalies. the dataset used in this study consists of high-resolution images of different orange leaves of the species citrus sinensis osbeck, collected from orange groves in the states of tamaulipas and san luis potosi in northeastern mexico. experimental results demonstrated the effectiveness of the proposed deep learning models in simultaneously identifying 12 different classes of citrus anomalies. evaluation metrics, including accuracy, recall, precision and the confusion matrix, underscore the discriminative power of the models. among the convolutional neural network architectures used, densenet achieved the highest classification accuracy at 99.50%. the study concluded by highlighting the potential for scalable and effective citrus anomaly classification and management using deep learning-based systems. | [
"regions",
"citrus crops",
"diseases",
"fungi",
"bacteria",
"pests",
"viruses",
"growers",
"automated technologies",
"that",
"citrus anomalies",
"economic losses",
"recent advances",
"deep learning techniques",
"potential",
"the accuracy",
"citrus anomaly categorization",
"this research",
"the use",
"deep learning methods",
"specifically densenet",
"robust models",
"different types",
"citrus anomalies",
"the dataset",
"high-resolution images",
"different orange leaves",
"the species citrus sinensis osbeck",
"orange groves",
"the states",
"tamaulipas",
"san luis potosi",
"northeastern mexico",
"study",
"experimental results",
"the effectiveness",
"the proposed deep learning models",
"12 different classes",
"citrus anomalies",
"evaluation metrics",
"accuracy",
"recall",
"precision",
"the confusion matrix",
"the discriminative power",
"the models",
"the convolutional neural network architectures",
"densenet",
"the highest classification accuracy",
"99.50%",
"the study",
"the potential",
"scalable and effective citrus anomaly classification",
"management",
"deep learning-based systems",
"san luis",
"mexico",
"12",
"99.50%"
] |
Skin cancer detection using ensemble of machine learning and deep learning techniques | [
"Jitendra V. Tembhurne",
"Nachiketa Hebbar",
"Hemprasad Y. Patil",
"Tausif Diwan"
] | Skin cancer is one of the most common forms of cancer, which makes it pertinent to be able to diagnose it accurately. In particular, melanoma is a form of skin cancer that is fatal and accounts for 6 of every 7 skin cancer related deaths. Moreover, in hospitals where dermatologists have to diagnose multiple cases of skin cancer, there are high possibilities of false negatives in diagnosis. To avoid such incidents, exhaustive research has been conducted by the research community all over the world to build highly accurate automated tools for skin cancer detection. In this paper, we introduce a novel approach of combining machine learning and deep learning techniques to solve the problem of skin cancer detection. The deep learning model uses state-of-the-art neural networks to extract features from images, whereas the machine learning model processes image features which are obtained after applying techniques such as the Contourlet Transform and Local Binary Pattern Histogram. Meaningful feature extraction is crucial for any image classification problem. As a result, by combining the manual and automated features, our designed model achieves a higher accuracy of 93%, with individual recall scores of 99.7% and 86% for the benign and malignant forms of cancer, respectively. We benchmarked the model on a publicly available Kaggle dataset containing processed images from the ISIC Archive dataset. The proposed ensemble outperforms both expert dermatologists and other state-of-the-art deep learning and machine learning methods. Thus, this novel method can be of high assistance to dermatologists to help prevent any misdiagnosis. | 10.1007/s11042-023-14697-3 | skin cancer detection using ensemble of machine learning and deep learning techniques | skin cancer is one of the most common forms of cancer, which makes it pertinent to be able to diagnose it accurately. in particular, melanoma is a form of skin cancer that is fatal and accounts for 6 of every 7 skin cancer related deaths. moreover, in hospitals where dermatologists have to diagnose multiple cases of skin cancer, there are high possibilities of false negatives in diagnosis. to avoid such incidents, exhaustive research has been conducted by the research community all over the world to build highly accurate automated tools for skin cancer detection. in this paper, we introduce a novel approach of combining machine learning and deep learning techniques to solve the problem of skin cancer detection. the deep learning model uses state-of-the-art neural networks to extract features from images, whereas the machine learning model processes image features which are obtained after applying techniques such as the contourlet transform and local binary pattern histogram. meaningful feature extraction is crucial for any image classification problem. as a result, by combining the manual and automated features, our designed model achieves a higher accuracy of 93%, with individual recall scores of 99.7% and 86% for the benign and malignant forms of cancer, respectively. we benchmarked the model on a publicly available kaggle dataset containing processed images from the isic archive dataset. the proposed ensemble outperforms both expert dermatologists and other state-of-the-art deep learning and machine learning methods. thus, this novel method can be of high assistance to dermatologists to help prevent any misdiagnosis. | [
"skin cancer",
"the most common forms",
"cancer",
"which",
"it",
"it",
"melanoma",
"a form",
"skin cancer",
"that",
"every 7-skin cancer related death",
"hospitals",
"dermatologists",
"multiple cases",
"skin cancer",
"high possibilities",
"false negatives",
"diagnosis",
"such incidents",
"exhaustive research",
"the research community",
"the world",
"highly accurate automated tools",
"skin cancer detection",
"this paper",
"we",
"a novel approach",
"machine learning",
"deep learning techniques",
"the problem",
"skin cancer detection",
"the deep learning model",
"the-art",
"features",
"images",
"the machine learning model",
"image features",
"which",
"the techniques",
"contourlet transform",
"local binary pattern histogram",
"meaningful feature extraction",
"any image classification roblem",
"a result",
"the manual and automated features",
"our designed model",
"a higher accuracy",
"93%",
"an individual recall score",
"99.7%",
"86%",
"the benign and malignant forms",
"cancer",
"we",
"the model",
"publicly available kaggle dataset",
"processed images",
"isic archive dataset",
"the proposed ensemble outperforms",
"both expert dermatologists",
"the-art",
"this novel method",
"high assistance",
"dermatologists",
"any misdiagnosis",
"one",
"6",
"7",
"93%",
"99.7% and",
"86%"
] |
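One of the hand-crafted branches this abstract names, the Local Binary Pattern histogram, is a compact texture descriptor. A hedged scikit-image sketch; radius, point count, and binning are common defaults, not necessarily the paper's settings, and the Contourlet branch is omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    """Uniform LBP codes binned into a normalized texture histogram."""
    codes = local_binary_pattern(gray_image, points, radius, method="uniform")
    n_bins = points + 2                    # uniform patterns plus the "other" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()

lesion = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in lesion image
features = lbp_histogram(lesion)           # 10-d descriptor for a classical classifier
```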
Prediction of fiber Rayleigh scattering responses based on deep learning | [
"Yongxin Liang",
"Jianhui Sun",
"Jialei Zhang",
"Yuyao Wang",
"Anchi Wan",
"Shibo Zhang",
"Zhenyu Ye",
"Shengtao Lin",
"Zinan Wang"
] | Distributed acoustic sensing (DAS) is a fiber sensing technology based on Rayleigh scattering, which transforms optical fiber into a series of sensing units. It has become an indispensable part in the field of seismic monitoring, vehicle tracking, and pipeline monitoring. Fiber Rayleigh scattering responses lie at the core of DAS. However, there are few in-depth studies aimed at acquiring fiber Rayleigh scattering responses. In this paper, we establish a deep learning framework based on the bidirectional gated recurrent unit which, to the best of our knowledge, is the first to predict fiber Rayleigh scattering responses. The deep learning framework is trained with a numerical simulation dataset only, but it can process experimental data successfully. Moreover, since the responses could have a wider effective bandwidth than the experimental probing pulses, a finer spatial resolution could be obtained after demodulation. This work indicates that the deep learning framework can capture the characteristics of the fiber Rayleigh scattering responses effectively, which paves the way for intelligent DAS. | 10.1007/s11432-022-3734-0 | prediction of fiber rayleigh scattering responses based on deep learning | distributed acoustic sensing (das) is a fiber sensing technology based on rayleigh scattering, which transforms optical fiber into a series of sensing units. it has become an indispensable part in the field of seismic monitoring, vehicle tracking, and pipeline monitoring. fiber rayleigh scattering responses lie at the core of das. however, there are few in-depth studies aimed at acquiring fiber rayleigh scattering responses. in this paper, we establish a deep learning framework based on the bidirectional gated recurrent unit which, to the best of our knowledge, is the first to predict fiber rayleigh scattering responses. the deep learning framework is trained with a numerical simulation dataset only, but it can process experimental data successfully. moreover, since the responses could have a wider effective bandwidth than the experimental probing pulses, a finer spatial resolution could be obtained after demodulation. this work indicates that the deep learning framework can capture the characteristics of the fiber rayleigh scattering responses effectively, which paves the way for intelligent das. | [
"acoustic sensing",
"das",
"a fiber sensing technology",
"rayleigh scattering",
"which",
"optical fiber",
"a series",
"sensing units",
"it",
"an indispensable part",
"the field",
"seismic monitoring",
"vehicle tracking",
"pipeline monitoring",
"fiber rayleigh scattering responses",
"the core",
"das",
"few in-depth studies",
"the purpose",
"fiber rayleigh scattering responses",
"this paper",
"we",
"a deep learning framework",
"the bidirectional gated recurrent unit",
"which",
"the first time",
"the fiber rayleigh scattering responses",
"our knowledge",
"the deep learning framework",
"a numerical simulation",
"it",
"experimental data",
"the responses",
"the experimental probing pulses",
"a finer spatial resolution",
"demodulation",
"this work",
"the deep learning framework",
"the characteristics",
"the fiber rayleigh scattering responses",
"which",
"the way",
"intelligent das",
"das",
"das",
"first"
] |
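The record above trains a bidirectional GRU purely on simulated traces and then applies it to experimental data. A minimal PyTorch sketch of that setup (not the authors' code; layer sizes, sequence length, and the random stand-in data are assumptions):

```python
# Minimal sketch: a bidirectional GRU mapping a probing trace to a
# Rayleigh scattering response, trained on simulated data only.
import torch
import torch.nn as nn

class BiGRURegressor(nn.Module):
    def __init__(self, in_dim=1, hidden=64, out_dim=1):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, out_dim)  # 2x: both directions

    def forward(self, x):            # x: (batch, seq_len, in_dim)
        h, _ = self.gru(x)           # h: (batch, seq_len, 2*hidden)
        return self.head(h)          # per-sample-point regression

model = BiGRURegressor()
sim_in = torch.randn(8, 256, 1)      # stand-in for simulated probe traces
sim_out = torch.randn(8, 256, 1)     # stand-in for simulated responses
loss = nn.MSELoss()(model(sim_in), sim_out)
loss.backward()                      # one simulated-data training step
```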
Machine learning, deep learning and hernia surgery. Are we pushing the limits of abdominal core health? A qualitative systematic review | [
"D. L. Lima",
"J. Kasakewitch",
"D. Q. Nguyen",
"R. Nogueira",
"L. T. Cavazzola",
"B. T. Heniford",
"F. Malcher"
] | IntroductionThis systematic review aims to evaluate the use of machine learning and artificial intelligence in hernia surgery.MethodsThe PRISMA guidelines were followed throughout this systematic review. The ROBINS—I and Rob 2 tools were used to perform qualitative assessment of all studies included in this review. Recommendations were then summarized for the following pre-defined key items: protocol, research question, search strategy, study eligibility, data extraction, study design, risk of bias, publication bias, and statistical analysis.ResultsA total of 13 articles were ultimately included for this review, describing the use of machine learning and deep learning for hernia surgery. All studies were published from 2020 to 2023. Articles varied regarding the population studied, type of machine learning or Deep Learning Model (DLM) used, and hernia type. Of the thirteen included studies, all included either inguinal, ventral, or incisional hernias. Four studies evaluated recognition of surgical steps during inguinal hernia repair videos. Two studies predicted outcomes using image-based DMLs. Seven studies developed and validated deep learning algorithms to predict outcomes and identify factors associated with postoperative complications.ConclusionThe use of ML for abdominal wall reconstruction has been shown to be a promising tool for predicting outcomes and identifying factors that could lead to postoperative complications. | 10.1007/s10029-024-03069-x | machine learning, deep learning and hernia surgery. are we pushing the limits of abdominal core health? a qualitative systematic review | introductionthis systematic review aims to evaluate the use of machine learning and artificial intelligence in hernia surgery.methodsthe prisma guidelines were followed throughout this systematic review. the robins—i and rob 2 tools were used to perform qualitative assessment of all studies included in this review. recommendations were then summarized for the following pre-defined key items: protocol, research question, search strategy, study eligibility, data extraction, study design, risk of bias, publication bias, and statistical analysis.resultsa total of 13 articles were ultimately included for this review, describing the use of machine learning and deep learning for hernia surgery. all studies were published from 2020 to 2023. articles varied regarding the population studied, type of machine learning or deep learning model (dlm) used, and hernia type. of the thirteen included studies, all included either inguinal, ventral, or incisional hernias. four studies evaluated recognition of surgical steps during inguinal hernia repair videos. two studies predicted outcomes using image-based dmls. seven studies developed and validated deep learning algorithms to predict outcomes and identify factors associated with postoperative complications.conclusionthe use of ml for abdominal wall reconstruction has been shown to be a promising tool for predicting outcomes and identifying factors that could lead to postoperative complications. | [
"introductionthis systematic review",
"the use",
"machine learning",
"artificial intelligence",
"hernia surgery.methodsthe prisma guidelines",
"this systematic review",
"the robins",
"i",
"rob",
"2 tools",
"qualitative assessment",
"all studies",
"this review",
"recommendations",
"the following pre-defined key items",
"protocol",
"research question",
"search strategy",
"study eligibility",
"data extraction",
"study design",
"risk",
"bias",
"publication bias",
"statistical analysis.resultsa total",
"13 articles",
"this review",
"the use",
"machine learning",
"deep learning",
"hernia surgery",
"all studies",
"articles",
"the population",
"type",
"machine learning",
"deep learning model",
"dlm",
"hernia type",
"the thirteen included studies",
"all",
"either inguinal, ventral, or incisional hernias",
"four studies",
"recognition",
"surgical steps",
"inguinal hernia repair videos",
"two studies",
"outcomes",
"image-based dmls",
"seven studies",
"deep learning algorithms",
"outcomes",
"factors",
"postoperative complications.conclusionthe use",
"ml",
"abdominal wall reconstruction",
"a promising tool",
"outcomes",
"factors",
"that",
"postoperative complications",
"hernia",
"2",
"13",
"2020",
"thirteen",
"four",
"two",
"seven"
] |
Big data and deep learning for RNA biology | [
"Hyeonseo Hwang",
"Hyeonseong Jeon",
"Nagyeong Yeo",
"Daehyun Baek"
] | The exponential growth of big data in RNA biology (RB) has led to the development of deep learning (DL) models that have driven crucial discoveries. As constantly evidenced by DL studies in other fields, the successful implementation of DL in RB depends heavily on the effective utilization of large-scale datasets from public databases. In achieving this goal, data encoding methods, learning algorithms, and techniques that align well with biological domain knowledge have played pivotal roles. In this review, we provide guiding principles for applying these DL concepts to various problems in RB by demonstrating successful examples and associated methodologies. We also discuss the remaining challenges in developing DL models for RB and suggest strategies to overcome these challenges. Overall, this review aims to illuminate the compelling potential of DL for RB and ways to apply this powerful technology to investigate the intriguing biology of RNA more effectively. | 10.1038/s12276-024-01243-w | big data and deep learning for rna biology | the exponential growth of big data in rna biology (rb) has led to the development of deep learning (dl) models that have driven crucial discoveries. as constantly evidenced by dl studies in other fields, the successful implementation of dl in rb depends heavily on the effective utilization of large-scale datasets from public databases. in achieving this goal, data encoding methods, learning algorithms, and techniques that align well with biological domain knowledge have played pivotal roles. in this review, we provide guiding principles for applying these dl concepts to various problems in rb by demonstrating successful examples and associated methodologies. we also discuss the remaining challenges in developing dl models for rb and suggest strategies to overcome these challenges. overall, this review aims to illuminate the compelling potential of dl for rb and ways to apply this powerful technology to investigate the intriguing biology of rna more effectively. | [
"the exponential growth",
"big data",
"rna biology",
"rb",
"the development",
"deep learning (dl) models",
"that",
"crucial discoveries",
"dl studies",
"other fields",
"the successful implementation",
"dl",
"the effective utilization",
"large-scale datasets",
"public databases",
"this goal",
"data encoding methods",
"algorithms",
"techniques",
"that",
"biological domain knowledge",
"pivotal roles",
"this review",
"we",
"guiding principles",
"these dl concepts",
"various problems",
"rb",
"successful examples",
"associated methodologies",
"we",
"the remaining challenges",
"dl models",
"rb",
"strategies",
"these challenges",
"this review",
"the compelling potential",
"dl",
"ways",
"this powerful technology",
"the intriguing biology",
"rna"
] |
Classification of hazelnut varieties based on bigtransfer deep learning model | [
"Emrah Dönmez",
"Serhat Kılıçarslan",
"Aykut Diker"
] | Hazelnut is an agricultural product that contributes greatly to the economy of the countries where it is grown. The human factor plays a major role in hazelnut classification. The typical approach involves manual inspection of each sample by experts, a process that is both labor-intensive and time-consuming, and often suffers from limited sensitivity. The deep learning techniques are extremely important in the classification and detection of agricultural products. Deep learning has great potential in the agricultural sector. This technology can improve product quality, increase productivity, and offer farmers the ability to classify and detect their produce more effectively. This is important for sustainability and efficiency in the agricultural industry. In this paper aims to the application of deep learning algorithms to streamline hazelnut classification, reducing the need for manual labor, time, and cost in the sorting process. The study utilized hazelnut images from three different varieties: Giresun, Ordu, and Van, comprising a dataset of 1165 images for Giresun, 1324 for Ordu, and 1138 for Van hazelnuts. This dataset is an open-access dataset. In the study, experiments were carried out on the determination of hazelnut varieties with BigTransfer (BiT)-M R50 × 1, BiT-M R101 × 3 and BiT-M R152 × 4 models. Deep learning models, including big transfer was employed for classification. The classification task involved 3627 nut images and resulted in a remarkable accuracy of 99.49% with the BiT-M R152 × 4 model. These innovative methods can also lead to patentable products and devices in various industries, thereby boosting the economic value of the country. | 10.1007/s00217-024-04468-1 | classification of hazelnut varieties based on bigtransfer deep learning model | hazelnut is an agricultural product that contributes greatly to the economy of the countries where it is grown. the human factor plays a major role in hazelnut classification. the typical approach involves manual inspection of each sample by experts, a process that is both labor-intensive and time-consuming, and often suffers from limited sensitivity. the deep learning techniques are extremely important in the classification and detection of agricultural products. deep learning has great potential in the agricultural sector. this technology can improve product quality, increase productivity, and offer farmers the ability to classify and detect their produce more effectively. this is important for sustainability and efficiency in the agricultural industry. in this paper aims to the application of deep learning algorithms to streamline hazelnut classification, reducing the need for manual labor, time, and cost in the sorting process. the study utilized hazelnut images from three different varieties: giresun, ordu, and van, comprising a dataset of 1165 images for giresun, 1324 for ordu, and 1138 for van hazelnuts. this dataset is an open-access dataset. in the study, experiments were carried out on the determination of hazelnut varieties with bigtransfer (bit)-m r50 × 1, bit-m r101 × 3 and bit-m r152 × 4 models. deep learning models, including big transfer was employed for classification. the classification task involved 3627 nut images and resulted in a remarkable accuracy of 99.49% with the bit-m r152 × 4 model. these innovative methods can also lead to patentable products and devices in various industries, thereby boosting the economic value of the country. | [
"hazelnut",
"an agricultural product",
"that",
"the economy",
"the countries",
"it",
"the human factor",
"a major role",
"hazelnut classification",
"the typical approach",
"manual inspection",
"each sample",
"experts",
"a process",
"that",
"limited sensitivity",
"the deep learning techniques",
"the classification",
"detection",
"agricultural products",
"deep learning",
"great potential",
"the agricultural sector",
"this technology",
"product quality",
"productivity",
"farmers",
"the ability",
"their produce",
"this",
"sustainability",
"efficiency",
"the agricultural industry",
"this paper",
"the application",
"deep learning algorithms",
"hazelnut classification",
"the need",
"manual labor",
"time",
"cost",
"the sorting process",
"the study",
"hazelnut images",
"three different varieties",
"giresun",
"ordu",
"van",
"a dataset",
"1165 images",
"giresun",
"ordu",
"van hazelnuts",
"this dataset",
"an open-access dataset",
"the study",
"experiments",
"the determination",
"hazelnut varieties",
"bigtransfer",
"r101",
"r152 × 4 models",
"deep learning models",
"big transfer",
"classification",
"the classification task",
"3627 nut images",
"a remarkable accuracy",
"99.49%",
"the bit-m",
"r152",
"4 model",
"these innovative methods",
"patentable products",
"devices",
"various industries",
"the economic value",
"the country",
"three",
"van",
"1165",
"1324",
"1138",
"bit)-m",
"× 4",
"3627",
"99.49%",
"× 4"
] |
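As a hedged illustration of the transfer-learning recipe this record describes, the sketch below fine-tunes a generic ImageNet ResNet-50 head for the three hazelnut classes; the paper uses BigTransfer (BiT-M R50x1/R101x3/R152x4) checkpoints, for which this backbone is only a stand-in:

```python
# Sketch of the fine-tuning setup (torchvision ResNet-50 as a stand-in
# for the BiT-M backbones; hyperparameters are illustrative).
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights="IMAGENET1K_V2")
backbone.fc = nn.Linear(backbone.fc.in_features, 3)  # Giresun / Ordu / Van

optimizer = torch.optim.SGD(backbone.parameters(), lr=3e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # images: (batch, 3, H, W) hazelnut photos; labels: (batch,) in {0,1,2}
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```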
Deep learning solutions for smart city challenges in urban development | [
"Pengjun Wu",
"Zhanzhi Zhang",
"Xueyi Peng",
"Ran Wang"
] | In the realm of urban planning, the integration of deep learning technologies has emerged as a transformative force, promising to revolutionize the way cities are designed, managed, and optimized. This research embarks on a multifaceted exploration that combines the power of deep learning with Bayesian regularization techniques to enhance the performance and reliability of neural networks tailored for urban planning applications. Deep learning, characterized by its ability to extract complex patterns from vast urban datasets, has the potential to offer unprecedented insights into urban dynamics, transportation networks, and environmental sustainability. However, the complexity of these models often leads to challenges such as overfitting and limited interpretability. To address these issues, Bayesian regularization methods are employed to imbue neural networks with a principled framework that enhances generalization while quantifying predictive uncertainty. This research unfolds with the practical implementation of Bayesian regularization within neural networks, focusing on applications ranging from traffic prediction, urban infrastructure, data privacy, safety and security. By integrating Bayesian regularization, the aim is to, not only improve model performance in terms of accuracy and reliability but also to provide planners and decision-makers with probabilistic insights into the outcomes of various urban interventions. In tandem with quantitative assessments, graphical analysis is wielded as a crucial tool to visualize the inner workings of deep learning models in the context of urban planning. Through graphical representations, network visualizations, and decision boundary analysis, we uncover how Bayesian regularization influences neural network architecture and enhances interpretability. | 10.1038/s41598-024-55928-3 | deep learning solutions for smart city challenges in urban development | in the realm of urban planning, the integration of deep learning technologies has emerged as a transformative force, promising to revolutionize the way cities are designed, managed, and optimized. this research embarks on a multifaceted exploration that combines the power of deep learning with bayesian regularization techniques to enhance the performance and reliability of neural networks tailored for urban planning applications. deep learning, characterized by its ability to extract complex patterns from vast urban datasets, has the potential to offer unprecedented insights into urban dynamics, transportation networks, and environmental sustainability. however, the complexity of these models often leads to challenges such as overfitting and limited interpretability. to address these issues, bayesian regularization methods are employed to imbue neural networks with a principled framework that enhances generalization while quantifying predictive uncertainty. this research unfolds with the practical implementation of bayesian regularization within neural networks, focusing on applications ranging from traffic prediction, urban infrastructure, data privacy, safety and security. by integrating bayesian regularization, the aim is to, not only improve model performance in terms of accuracy and reliability but also to provide planners and decision-makers with probabilistic insights into the outcomes of various urban interventions. 
in tandem with quantitative assessments, graphical analysis is wielded as a crucial tool to visualize the inner workings of deep learning models in the context of urban planning. through graphical representations, network visualizations, and decision boundary analysis, we uncover how bayesian regularization influences neural network architecture and enhances interpretability. | [
"the realm",
"urban planning",
"the integration",
"deep learning technologies",
"a transformative force",
"the way",
"cities",
"this research embarks",
"a multifaceted exploration",
"that",
"the power",
"deep learning",
"bayesian regularization techniques",
"the performance",
"reliability",
"neural networks",
"urban planning applications",
"deep learning",
"its ability",
"complex patterns",
"vast urban datasets",
"the potential",
"unprecedented insights",
"urban dynamics",
"transportation networks",
"environmental sustainability",
"the complexity",
"these models",
"challenges",
"limited interpretability",
"these issues",
"bayesian regularization methods",
"imbue neural networks",
"a principled framework",
"that",
"generalization",
"predictive uncertainty",
"this research",
"the practical implementation",
"bayesian regularization",
"neural networks",
"applications",
"traffic prediction",
"urban infrastructure",
"data privacy",
"safety",
"security",
"bayesian regularization",
"the aim",
"model performance",
"terms",
"accuracy",
"reliability",
"planners",
"decision-makers",
"probabilistic insights",
"the outcomes",
"various urban interventions",
"tandem",
"quantitative assessments",
"graphical analysis",
"a crucial tool",
"the inner workings",
"deep learning models",
"the context",
"urban planning",
"graphical representations",
"network visualizations",
"decision boundary analysis",
"we",
"bayesian regularization",
"neural network architecture",
"interpretability",
"bayesian"
] |
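One plausible minimal reading of the Bayesian-regularization idea in this record is MAP training with a Gaussian weight prior (i.e., L2 weight decay) plus Monte Carlo dropout to quantify predictive uncertainty; the paper publishes no code, so everything below is an assumption-laden sketch:

```python
# Sketch only: L2 weight decay as the Gaussian-prior regularizer, and
# MC dropout for predictive uncertainty. Sizes are illustrative.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                    nn.Dropout(p=0.2), nn.Linear(64, 1))
# weight_decay implements the Gaussian-prior / L2 regularizer
opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)

def predict_with_uncertainty(x, n_samples=50):
    net.train()                          # keep dropout active at test time
    preds = torch.stack([net(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)   # mean forecast and its spread

x = torch.randn(4, 16)                   # e.g. traffic-feature vectors
mean, std = predict_with_uncertainty(x)
```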
Age transformation based on deep learning: a survey | [
"Yingchun Guo",
"Xin Su",
"Gang Yan",
"Ye Zhu",
"Xueqi Lv"
] | Age transformation aims to preserve personalized facial information while altering a given face to appear at a target age. This technique finds extensive applications in fields such as face recognition, movie special effects, and social entertainment, among others. With the advancement of deep learning, particularly Generative Adversarial Networks (GANs), research on age transformation has made significant progress, leading to the emergence of a diverse range of deep learning-based methods. However, a comprehensive and systematic literature review of these methods is currently lacking. In this survey, we provide an all-encompassing review of deep learning methods for facial aging. Firstly, we summarize the key aspects of feature preservation during the age transformation process. Subsequently, we present a comprehensive overview of facial age transformation techniques, categorized according to various deep learning network architectures. Additionally, we conduct an analysis and comparison of commonly used face image datasets, offering recommendations for dataset selection. Furthermore, we consolidate the qualitative and quantitative evaluation metrics commonly employed in age transformation methodologies through experimental assessment. Finally, we address potential areas of future research in age transformation methods, based on the current challenges and limitations. | 10.1007/s00521-023-09376-1 | age transformation based on deep learning: a survey | age transformation aims to preserve personalized facial information while altering a given face to appear at a target age. this technique finds extensive applications in fields such as face recognition, movie special effects, and social entertainment, among others. with the advancement of deep learning, particularly generative adversarial networks (gans), research on age transformation has made significant progress, leading to the emergence of a diverse range of deep learning-based methods. however, a comprehensive and systematic literature review of these methods is currently lacking. in this survey, we provide an all-encompassing review of deep learning methods for facial aging. firstly, we summarize the key aspects of feature preservation during the age transformation process. subsequently, we present a comprehensive overview of facial age transformation techniques, categorized according to various deep learning network architectures. additionally, we conduct an analysis and comparison of commonly used face image datasets, offering recommendations for dataset selection. furthermore, we consolidate the qualitative and quantitative evaluation metrics commonly employed in age transformation methodologies through experimental assessment. finally, we address potential areas of future research in age transformation methods, based on the current challenges and limitations. | [
"age transformation",
"personalized facial information",
"a given face",
"a target age",
"this technique",
"extensive applications",
"fields",
"face recognition",
"movie special effects",
"social entertainment",
"others",
"the advancement",
"deep learning",
"particularly generative adversarial networks",
"gans",
"age transformation",
"significant progress",
"the emergence",
"a diverse range",
"deep learning-based methods",
"a comprehensive and systematic literature review",
"these methods",
"this survey",
"we",
"an all-encompassing review",
"deep learning methods",
"facial aging",
"we",
"the key aspects",
"feature preservation",
"the age transformation process",
"we",
"a comprehensive overview",
"facial age transformation techniques",
"various deep learning network architectures",
"we",
"an analysis",
"comparison",
"commonly used face image datasets",
"recommendations",
"dataset selection",
"we",
"the qualitative and quantitative evaluation metrics",
"age transformation methodologies",
"experimental assessment",
"we",
"potential areas",
"future research",
"age transformation methods",
"the current challenges",
"limitations",
"firstly"
] |
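Although this record is a survey, the core conditioning trick it covers, injecting a target-age code into the generator while keeping identity features, can be sketched in a few lines (illustrative only; the class name, embedding size, and age-group count are invented):

```python
# Illustrative sketch (not from the survey): condition a generator on a
# target-age embedding alongside identity-bearing face features.
import torch
import torch.nn as nn

class AgeConditionedGenerator(nn.Module):
    def __init__(self, n_age_groups=10, feat_dim=128):
        super().__init__()
        self.age_embed = nn.Embedding(n_age_groups, feat_dim)
        self.decode = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim),
                                    nn.ReLU(), nn.Linear(feat_dim, feat_dim))

    def forward(self, face_code, target_age):
        # face_code: (batch, feat_dim) identity features from an encoder
        z = torch.cat([face_code, self.age_embed(target_age)], dim=1)
        return self.decode(z)      # aged feature, fed to an image decoder

g = AgeConditionedGenerator()
out = g(torch.randn(2, 128), torch.tensor([1, 7]))  # two target age groups
```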
Advances of Pipeline Model Parallelism for Deep Learning Training: An Overview | [
"Lei Guan \n (关 磊)",
"Dong-Sheng Li \n (李东升)",
"Ji-Ye Liang \n (梁吉业)",
"Wen-Jian Wang \n (王文剑)",
"Ke-Shi Ge \n (葛可适)",
"Xi-Cheng Lu \n (卢锡城)"
] | Deep learning has become the cornerstone of artificial intelligence, playing an increasingly important role in human production and lifestyle. However, as the complexity of problem-solving increases, deep learning models become increasingly intricate, resulting in a proliferation of large language models with an astonishing number of parameters. Pipeline model parallelism (PMP) has emerged as one of the mainstream approaches to addressing the significant challenge of training “big models”. This paper presents a comprehensive review of PMP. It covers the basic concepts and main challenges of PMP. It also comprehensively compares synchronous and asynchronous pipeline schedules for PMP approaches, and discusses the main techniques to achieve load balance for both intra-node and inter-node training. Furthermore, the main techniques to optimize computation, storage, and communication are presented, with potential research directions being discussed. | 10.1007/s11390-024-3872-3 | advances of pipeline model parallelism for deep learning training: an overview | deep learning has become the cornerstone of artificial intelligence, playing an increasingly important role in human production and lifestyle. however, as the complexity of problem-solving increases, deep learning models become increasingly intricate, resulting in a proliferation of large language models with an astonishing number of parameters. pipeline model parallelism (pmp) has emerged as one of the mainstream approaches to addressing the significant challenge of training “big models”. this paper presents a comprehensive review of pmp. it covers the basic concepts and main challenges of pmp. it also comprehensively compares synchronous and asynchronous pipeline schedules for pmp approaches, and discusses the main techniques to achieve load balance for both intra-node and inter-node training. furthermore, the main techniques to optimize computation, storage, and communication are presented, with potential research directions being discussed. | [
"deep learning",
"the cornerstone",
"artificial intelligence",
"an increasingly important role",
"human production",
"lifestyle",
"the complexity",
"problem-solving increases",
"deep learning models",
"a proliferation",
"large language models",
"an astonishing number",
"parameters",
"pipeline model parallelism",
"the mainstream approaches",
"the significant challenge",
"“big models",
"this paper",
"a comprehensive review",
"pmp",
"it",
"the basic concepts",
"main challenges",
"pmp",
"it",
"synchronous and asynchronous pipeline schedules",
"pmp approaches",
"the main techniques",
"load balance",
"both intra-node and inter-node training",
"the main techniques",
"computation",
"storage",
"communication",
"potential research directions"
] |
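A toy sketch of the synchronous (GPipe-style) micro-batch scheduling this record surveys; real pipeline model parallelism places the stages on separate devices and overlaps their execution, which this single-process illustration only mimics:

```python
# Toy sketch of synchronous pipeline model parallelism: split the model
# into stages and feed micro-batches so stage work can overlap.
import torch
import torch.nn as nn

stage1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # would live on GPU 0
stage2 = nn.Sequential(nn.Linear(64, 10))             # would live on GPU 1

def pipeline_forward(batch, n_micro=4):
    micro = batch.chunk(n_micro)          # split into micro-batches
    acts = [stage1(m) for m in micro]     # stage-1 passes (overlappable)
    outs = [stage2(a) for a in acts]      # stage-2 passes
    return torch.cat(outs)

y = pipeline_forward(torch.randn(16, 32))
```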
Deep learning-based channel estimation for wireless ultraviolet MIMO communication systems | [
"Taifei Zhao",
"Yuxin Sun",
"Xinzhe Lü",
"Shuang Zhang"
] | To solve the problems of pulse broadening and channel fading caused by atmospheric scattering and turbulence, multiple-input multiple-output (MIMO) technology is a valid way. A wireless ultraviolet (UV) MIMO channel estimation approach based on deep learning is provided in this paper. The deep learning is used to convert the channel estimation into the image processing. By combining convolutional neural network (CNN) and attention mechanism (AM), the learning model is designed to extract the depth features of channel state information (CSI). The simulation results show that the approach proposed in this paper can perform channel estimation effectively for UV MIMO communication and can better suppress the fading caused by scattering and turbulence in the MIMO scattering channel. | 10.1007/s11801-024-3069-6 | deep learning-based channel estimation for wireless ultraviolet mimo communication systems | to solve the problems of pulse broadening and channel fading caused by atmospheric scattering and turbulence, multiple-input multiple-output (mimo) technology is a valid way. a wireless ultraviolet (uv) mimo channel estimation approach based on deep learning is provided in this paper. the deep learning is used to convert the channel estimation into the image processing. by combining convolutional neural network (cnn) and attention mechanism (am), the learning model is designed to extract the depth features of channel state information (csi). the simulation results show that the approach proposed in this paper can perform channel estimation effectively for uv mimo communication and can better suppress the fading caused by scattering and turbulence in the mimo scattering channel. | [
"the problems",
"pulse",
"channel fading",
"atmospheric scattering",
"turbulence",
"multiple-input multiple-output (mimo) technology",
"a valid way",
"a wireless ultraviolet (uv) mimo channel estimation approach",
"deep learning",
"this paper",
"the deep learning",
"the channel estimation",
"the image processing",
"convolutional neural network",
"cnn",
"attention mechanism",
"am",
"the learning model",
"the depth features",
"channel state information",
"csi",
"the simulation results",
"the approach",
"this paper",
"channel estimation",
"uv mimo communication",
"the fading",
"scattering",
"turbulence",
"the mimo scattering channel",
"mimo channel estimation",
"cnn"
] |
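A minimal sketch of the CNN-plus-attention estimator this record describes, treating the received signal as a two-channel image (real and imaginary parts); the layer sizes and the squeeze-and-excitation-style attention block are assumptions, not the paper's exact design:

```python
# Sketch: regress the MIMO channel matrix from a pilot grid treated as
# an image, with channel-wise attention re-weighting CNN features.
import torch
import torch.nn as nn

class ChannelEstimator(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.out = nn.Conv2d(ch, 2, 3, padding=1)  # real + imaginary parts

    def forward(self, x):            # x: (batch, 2, n_rx, n_tx) pilot grid
        h = self.conv(x)
        return self.out(h * self.attn(h))  # attention-weighted features

est = ChannelEstimator()(torch.randn(4, 2, 8, 8))
```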
Imbalcbl: addressing deep learning challenges with small and imbalanced datasets | [
"Saqib ul Sabha",
"Assif Assad",
"Sadaf Shafi",
"Nusrat Mohi Ud Din",
"Rayees Ahmad Dar",
"Muzafar Rasool Bhat"
] | Deep learning, while transformative for computer vision, frequently falters when confronted with small and imbalanced datasets. Despite substantial progress in this domain, prevailing models often underachieve under these constraints. Addressing this, we introduce an innovative contrast-based learning strategy for small and imbalanced data that significantly bolsters the proficiency of deep learning architectures on these challenging datasets. By ingeniously concatenating training images, the effective training dataset expands from n to \(n^2\), affording richer data for model training, even when n is very small. Remarkably, our solution remains indifferent to specific loss functions or network architectures, endorsing its adaptability for diverse classification scenarios. Rigorously benchmarked against four benchmark datasets, our approach was juxtaposed with state-of-the-art oversampling paradigms. The empirical evidence underscores our method’s superior efficacy, outshining contemporaries across metrics like Balanced accuracy, F1 score, and Geometric mean. Noteworthy increments include 7–16% on the Covid-19 dataset, 4–20% for Honey bees, 1–6% on CIFAR-10, and 1–9% on FashionMNIST. In essence, our proposed method offers a potent remedy for the perennial issues stemming from scanty and skewed data in deep learning. | 10.1007/s13198-024-02346-3 | imbalcbl: addressing deep learning challenges with small and imbalanced datasets | deep learning, while transformative for computer vision, frequently falters when confronted with small and imbalanced datasets. despite substantial progress in this domain, prevailing models often underachieve under these constraints. addressing this, we introduce an innovative contrast-based learning strategy for small and imbalanced data that significantly bolsters the proficiency of deep learning architectures on these challenging datasets. by ingeniously concatenating training images, the effective training dataset expands from n to \(n^2\), affording richer data for model training, even when n is very small. remarkably, our solution remains indifferent to specific loss functions or network architectures, endorsing its adaptability for diverse classification scenarios. rigorously benchmarked against four benchmark datasets, our approach was juxtaposed with state-of-the-art oversampling paradigms. the empirical evidence underscores our method’s superior efficacy, outshining contemporaries across metrics like balanced accuracy, f1 score, and geometric mean. noteworthy increments include 7–16% on the covid-19 dataset, 4–20% for honey bees, 1–6% on cifar-10, and 1–9% on fashionmnist. in essence, our proposed method offers a potent remedy for the perennial issues stemming from scanty and skewed data in deep learning. | [
"deep learning",
"computer vision",
"small and imbalanced datasets",
"substantial progress",
"this domain",
"prevailing models",
"these constraints",
"this",
"we",
"an innovative contrast-based learning strategy",
"small and imbalanced data",
"that",
"the proficiency",
"deep learning architectures",
"these challenging datasets",
"training images",
"the effective training dataset",
"n",
"\\(n^2\\",
"richer data",
"model training",
"n",
"our solution",
"specific loss functions",
"network architectures",
"its adaptability",
"diverse classification scenarios",
"four benchmark datasets",
"our approach",
"the-art",
"the empirical evidence",
"our method’s superior efficacy",
"contemporaries",
"metrics",
"balanced accuracy",
"f1 score",
"geometric mean",
"noteworthy increments",
"7–16%",
"the covid-19 dataset",
"4–20%",
"honey bees",
"1–6%",
"cifar-10",
"1–9%",
"fashionmnist",
"essence",
"our proposed method",
"a potent remedy",
"the perennial issues",
"scanty",
"skewed data",
"deep learning",
"four",
"noteworthy",
"7–16%",
"covid-19",
"4–20%",
"1–6%",
"cifar-10",
"1–9%"
] |
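The n-to-n² expansion this record describes can be sketched directly: concatenate every training image with every other. The pair-label rule below (label of the first image) is an assumed simplification for illustration:

```python
# Sketch of the contrast-based expansion: pairing every image with every
# other yields n^2 concatenated training samples.
import torch

def concat_pairs(images, labels):
    # images: (n, C, H, W); returns (n*n, C, 2H, W) stacked pairs
    n = images.size(0)
    pairs, pair_labels = [], []
    for i in range(n):
        for j in range(n):
            pairs.append(torch.cat([images[i], images[j]], dim=1))
            pair_labels.append(labels[i])   # assumed labeling rule
    return torch.stack(pairs), torch.stack(pair_labels)

x = torch.randn(5, 3, 32, 32)               # n = 5 original images
y = torch.randint(0, 2, (5,))
px, py = concat_pairs(x, y)                 # 25 = 5^2 training samples
```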
ConvDepthTransEnsembleNet: An Improved Deep Learning Approach for Rice Crop Leaf Disease Classification | [
"Kavita Bathe",
"Nita Patil",
"Sanjay Patil",
"Devanand Bathe",
"Kuldeep Kumar"
] | Agricultural sector, in association with its allied sectors, plays pivotal role in the progress of a country. In spite of notable contribution, Agricultural sector suffers from several challenges. Crop loss due to diseases is considered as one of the major challenges. Rice is one of the major cereal crops in India. Its production is highly impacted due to crop diseases. Early and accurate detection of rice crop leaf diseases is essential for aforementioned issue. Over the years, conventional methods of crop leaf disease detection are used but with its own limitations. Recent technological advancement in computer vision, deep learning has created new pathways in agricultural sector. Deep learning models require huge data which is quite a great challenge. In such case, building up of a robust deep learning model could work with limited and unbalanced dataset, with good generalization performances. In this study, weighted deep ensemble learning approach is used for performance improvement of rice crop leaf disease classification task. The ensemble method ConvDepthTransEnsembleNet is proposed. To show the effectiveness of proposed work, experiments are conducted on diverse datasets. ConvDepthTransEnsembleNet is a lightweight model which has achieved the accuracy of 96.88% on limited and unbalanced dataset. The experimental results show that proposed model outperforms the individual classifiers based on conventional methods and transfer learning approach in terms of significant reduction in parameters and improved upon generalization performance. The proposed model is highly useful for implementing deep learning models with resources constrained devices. | 10.1007/s42979-024-02783-8 | convdepthtransensemblenet: an improved deep learning approach for rice crop leaf disease classification | agricultural sector, in association with its allied sectors, plays pivotal role in the progress of a country. in spite of notable contribution, agricultural sector suffers from several challenges. crop loss due to diseases is considered as one of the major challenges. rice is one of the major cereal crops in india. its production is highly impacted due to crop diseases. early and accurate detection of rice crop leaf diseases is essential for aforementioned issue. over the years, conventional methods of crop leaf disease detection are used but with its own limitations. recent technological advancement in computer vision, deep learning has created new pathways in agricultural sector. deep learning models require huge data which is quite a great challenge. in such case, building up of a robust deep learning model could work with limited and unbalanced dataset, with good generalization performances. in this study, weighted deep ensemble learning approach is used for performance improvement of rice crop leaf disease classification task. the ensemble method convdepthtransensemblenet is proposed. to show the effectiveness of proposed work, experiments are conducted on diverse datasets. convdepthtransensemblenet is a lightweight model which has achieved the accuracy of 96.88% on limited and unbalanced dataset. the experimental results show that proposed model outperforms the individual classifiers based on conventional methods and transfer learning approach in terms of significant reduction in parameters and improved upon generalization performance. the proposed model is highly useful for implementing deep learning models with resources constrained devices. | [
"agricultural sector",
"association",
"its allied sectors",
"pivotal role",
"the progress",
"a country",
"spite",
"notable contribution",
"agricultural sector",
"several challenges",
"crop loss",
"diseases",
"the major challenges",
"rice",
"the major cereal crops",
"india",
"its production",
"crop diseases",
"early and accurate detection",
"rice crop leaf diseases",
"aforementioned issue",
"the years",
"conventional methods",
"crop leaf disease detection",
"its own limitations",
"recent technological advancement",
"computer vision",
"deep learning",
"new pathways",
"agricultural sector",
"deep learning models",
"huge data",
"which",
"quite a great challenge",
"such case",
"a robust deep learning model",
"limited and unbalanced dataset",
"good generalization performances",
"this study",
"deep ensemble learning approach",
"performance improvement",
"rice crop leaf disease classification task",
"the ensemble method convdepthtransensemblenet",
"the effectiveness",
"proposed work",
"experiments",
"diverse datasets",
"convdepthtransensemblenet",
"a lightweight model",
"which",
"the accuracy",
"96.88%",
"limited and unbalanced dataset",
"the experimental results",
"proposed model",
"the individual classifiers",
"conventional methods",
"learning approach",
"terms",
"significant reduction",
"parameters",
"generalization performance",
"the proposed model",
"deep learning models",
"resources constrained devices",
"rice",
"one",
"india",
"the years",
"deep",
"96.88%"
] |
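A hedged sketch of the weighted ensembling step this record describes: mix member softmax outputs with fixed weights. The stand-in members and the weight values are placeholders, not ConvDepthTransEnsembleNet's actual components:

```python
# Sketch of weighted deep ensembling: combine member class probabilities
# with fixed weights, then take the argmax as the disease prediction.
import torch
import torch.nn as nn

members = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 4))
           for _ in range(3)]             # stand-ins for base classifiers
weights = torch.tensor([0.5, 0.3, 0.2])   # e.g. validation-accuracy based

def ensemble_predict(x):
    probs = [torch.softmax(m(x), dim=1) for m in members]
    mix = sum(w * p for w, p in zip(weights, probs))
    return mix.argmax(dim=1)              # predicted leaf-disease class

pred = ensemble_predict(torch.randn(2, 3, 64, 64))
```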
Transfer learning based cascaded deep learning network and mask recognition for COVID-19 | [
"Fengyin Li",
"Xiaojiao Wang",
"Yuhong Sun",
"Tao Li",
"Junrong Ge"
] | The COVID-19 is still spreading today, and it has caused great harm to human beings. The system at the entrance of public places such as shopping malls and stations should check whether pedestrians are wearing masks. However, pedestrians often pass the system inspection by wearing cotton masks, scarves, etc. Therefore, the detection system not only needs to check whether pedestrians are wearing masks, but also needs to detect the type of masks. Based on the lightweight network architecture MobilenetV3, this paper proposes a cascaded deep learning network based on transfer learning, and then designs a mask recognition system based on the cascaded deep learning network. By modifying the activation function of the MobilenetV3 output layer and the structure of the model, two MobilenetV3 networks suitable for cascading are obtained. By introducing transfer learning into the training process of two modified MobilenetV3 networks and a multi-task convolutional neural network, the ImagNet underlying parameters of the network models are obtained in advance, which reduces the computational load of the models. The cascaded deep learning network consists of a multi-task convolutional neural network cascaded with these two modified MobilenetV3 networks. A multi-task convolutional neural network is used to detect faces in images, and two modified MobilenetV3 networks are used as the backbone network to extract the features of masks. After comparing with the classification results of the modified MobilenetV3 neural network before cascading, the classification accuracy of the cascading learning network is improved by 7%, and the excellent performance of the cascading network can be seen. | 10.1007/s11280-023-01149-z | transfer learning based cascaded deep learning network and mask recognition for covid-19 | the covid-19 is still spreading today, and it has caused great harm to human beings. the system at the entrance of public places such as shopping malls and stations should check whether pedestrians are wearing masks. however, pedestrians often pass the system inspection by wearing cotton masks, scarves, etc. therefore, the detection system not only needs to check whether pedestrians are wearing masks, but also needs to detect the type of masks. based on the lightweight network architecture mobilenetv3, this paper proposes a cascaded deep learning network based on transfer learning, and then designs a mask recognition system based on the cascaded deep learning network. by modifying the activation function of the mobilenetv3 output layer and the structure of the model, two mobilenetv3 networks suitable for cascading are obtained. by introducing transfer learning into the training process of two modified mobilenetv3 networks and a multi-task convolutional neural network, the imagnet underlying parameters of the network models are obtained in advance, which reduces the computational load of the models. the cascaded deep learning network consists of a multi-task convolutional neural network cascaded with these two modified mobilenetv3 networks. a multi-task convolutional neural network is used to detect faces in images, and two modified mobilenetv3 networks are used as the backbone network to extract the features of masks. after comparing with the classification results of the modified mobilenetv3 neural network before cascading, the classification accuracy of the cascading learning network is improved by 7%, and the excellent performance of the cascading network can be seen. | [
"the covid-19",
"it",
"great harm",
"human beings",
"the system",
"the entrance",
"public places",
"shopping malls",
"stations",
"pedestrians",
"masks",
"pedestrians",
"the system inspection",
"cotton masks",
"scarves",
"the detection system",
"pedestrians",
"masks",
"the type",
"masks",
"the lightweight network architecture",
"this paper",
"a cascaded deep learning network",
"transfer learning",
"a mask recognition system",
"the cascaded deep learning network",
"the activation function",
"the mobilenetv3 output layer",
"the structure",
"the model",
"two mobilenetv3 networks",
"transfer",
"the training process",
"two modified mobilenetv3 networks",
"a multi-task convolutional neural network",
"the imagnet underlying parameters",
"the network models",
"advance",
"which",
"the computational load",
"the models",
"the cascaded deep learning network",
"a multi-task convolutional neural network",
"these two modified mobilenetv3 networks",
"a multi-task convolutional neural network",
"faces",
"images",
"two modified mobilenetv3 networks",
"the backbone network",
"the features",
"masks",
"the classification results",
"the modified mobilenetv3 neural network",
"the classification accuracy",
"the cascading learning network",
"7%",
"the excellent performance",
"the cascading network",
"covid-19",
"today",
"mobilenetv3",
"mobilenetv3",
"two",
"mobilenetv3",
"two",
"mobilenetv3",
"two",
"mobilenetv3",
"two",
"mobilenetv3",
"mobilenetv3",
"7%"
] |
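The cascade in this record, a face detector feeding crops to a modified MobileNetV3 mask-type classifier, might be sketched as below; torchvision's stock MobileNetV3 stands in for the paper's modified networks, and the three class labels are assumed:

```python
# Sketch of the cascade's second stage: a face detector (e.g. an MTCNN)
# would produce the crops; a MobileNetV3 head classifies mask type.
import torch
import torch.nn as nn
from torchvision import models

mask_net = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
mask_net.classifier[-1] = nn.Linear(mask_net.classifier[-1].in_features, 3)
# classes (assumed): no mask / qualified mask / cotton mask or scarf

def classify_faces(face_crops):
    # face_crops: (n_faces, 3, 224, 224), as produced by the detector
    with torch.no_grad():
        return mask_net(face_crops).argmax(dim=1)

labels = classify_faces(torch.randn(2, 3, 224, 224))
```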
Efficient screening framework for organic solar cells with deep learning and ensemble learning | [
"Hongshuai Wang",
"Jie Feng",
"Zhihao Dong",
"Lujie Jin",
"Miaomiao Li",
"Jianyu Yuan",
"Youyong Li"
] | Organic photovoltaics have attracted worldwide interest due to their unique advantages in developing low-cost, lightweight, and flexible power sources. Functional molecular design and synthesis have been put forward to accelerate the discovery of ideal organic semiconductors. However, it is extremely expensive to conduct experimental screening of the wide organic compound space. Here we develop a framework by combining a deep learning model (graph neural network) and an ensemble learning model (Light Gradient Boosting Machine), which enables rapid and accurate screening of organic photovoltaic molecules. This framework establishes the relationship between molecular structure, molecular properties, and device efficiency. Our framework evaluates the chemical structure of the organic photovoltaic molecules directly and accurately. Since it does not involve density functional theory calculations, it makes fast predictions. The reliability of our framework is verified with data from previous reports and our newly synthesized organic molecules. Our work provides an efficient method for developing new organic optoelectronic materials. | 10.1038/s41524-023-01155-9 | efficient screening framework for organic solar cells with deep learning and ensemble learning | organic photovoltaics have attracted worldwide interest due to their unique advantages in developing low-cost, lightweight, and flexible power sources. functional molecular design and synthesis have been put forward to accelerate the discovery of ideal organic semiconductors. however, it is extremely expensive to conduct experimental screening of the wide organic compound space. here we develop a framework by combining a deep learning model (graph neural network) and an ensemble learning model (light gradient boosting machine), which enables rapid and accurate screening of organic photovoltaic molecules. this framework establishes the relationship between molecular structure, molecular properties, and device efficiency. our framework evaluates the chemical structure of the organic photovoltaic molecules directly and accurately. since it does not involve density functional theory calculations, it makes fast predictions. the reliability of our framework is verified with data from previous reports and our newly synthesized organic molecules. our work provides an efficient method for developing new organic optoelectronic materials. | [
"organic photovoltaics",
"worldwide interest",
"their unique advantages",
"low-cost, lightweight, and flexible power sources",
"functional molecular design",
"synthesis",
"the discovery",
"ideal organic semiconductors",
"it",
"experimental screening",
"the wide organic compound space",
"we",
"a framework",
"a deep learning model",
"graph neural network",
"an ensemble learning model",
"light gradient boosting machine",
"which",
"rapid and accurate screening",
"organic photovoltaic molecules",
"this framework",
"the relationship",
"molecular structure",
"molecular properties",
"device efficiency",
"our framework",
"the chemical structure",
"the organic photovoltaic molecules",
"it",
"density functional theory calculations",
"it",
"fast predictions",
"the reliability",
"our framework",
"data",
"previous reports",
"our newly synthesized organic molecules",
"our work",
"an efficient method",
"new organic optoelectronic materials",
"the organic photovoltaic molecules"
] |
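A minimal sketch of the two-stage screening pipeline this record describes, with random vectors standing in for the molecular embeddings a graph neural network would produce:

```python
# Sketch: neural molecular features -> LightGBM regressor predicting
# device efficiency; stand-in data replaces the GNN embeddings.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))      # stand-in for learned molecule embeddings
y = rng.normal(loc=10.0, size=200)  # stand-in for measured efficiency (%)

model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X[:160], y[:160])
pred = model.predict(X[160:])       # screen unseen candidate molecules
```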
Identifying strawberry appearance quality based on unsupervised deep learning | [
"Hongfei Zhu",
"Xingyu Liu",
"Hao Zheng",
"Lianhe Yang",
"Xuchen Li",
"Zhongzhi Han"
] | The strawberry appearance is an essential standard for judging the quality, so it is crucial to accurately identify the strawberry appearance quality for intelligent picking. This study proposed a new strawberry appearance quality detection based on unsupervised deep learning. Firstly, using deep learning (Resnet18, Resnet50, and Resnet101) to extract the strawberry image feature information. And using the t-SNE (t-distribution stochastic neighbor embedding) to reduce the feature vectors’ dimension. Finally, the unsupervised learning method (Gaussian Mixture Model) was used to cluster strawberries’ feature points. The results showed that: (1) the clustering performance based on Resnet101 was effective in 2-dimensional space, the cluster accuracy was 94.89%, and the validation accuracy was 91.79%. (2) The clustering method based on Resnet50 had good performance in the 3-dimensional space, the cluster accuracy was 96.10%, and the validation accuracy was 93.08%. (3) The accuracy of deep features plus RF (random forest) was 95.00% under limited data. Thus this method will promote intelligent picking strawberry equipment and it will overcome the supervised learning drawback that divides image datasets according to prior knowledge. | 10.1007/s11119-023-10085-x | identifying strawberry appearance quality based on unsupervised deep learning | the strawberry appearance is an essential standard for judging the quality, so it is crucial to accurately identify the strawberry appearance quality for intelligent picking. this study proposed a new strawberry appearance quality detection based on unsupervised deep learning. firstly, using deep learning (resnet18, resnet50, and resnet101) to extract the strawberry image feature information. and using the t-sne (t-distribution stochastic neighbor embedding) to reduce the feature vectors’ dimension. finally, the unsupervised learning method (gaussian mixture model) was used to cluster strawberries’ feature points. the results showed that: (1) the clustering performance based on resnet101 was effective in 2-dimensional space, the cluster accuracy was 94.89%, and the validation accuracy was 91.79%. (2) the clustering method based on resnet50 had good performance in the 3-dimensional space, the cluster accuracy was 96.10%, and the validation accuracy was 93.08%. (3) the accuracy of deep features plus rf (random forest) was 95.00% under limited data. thus this method will promote intelligent picking strawberry equipment and it will overcome the supervised learning drawback that divides image datasets according to prior knowledge. | [
"the strawberry appearance",
"an essential standard",
"the quality",
"it",
"the strawberry appearance quality",
"intelligent picking",
"this study",
"a new strawberry appearance quality detection",
"unsupervised deep learning",
"deep learning",
"resnet18",
"resnet50",
"resnet101",
"the strawberry image feature information",
"(t-distribution",
"the feature vectors’ dimension",
"the unsupervised learning method",
"gaussian mixture model",
"strawberries",
"feature points",
"the results",
"the clustering performance",
"resnet101",
"2-dimensional space",
"the cluster accuracy",
"94.89%",
"the validation accuracy",
"91.79%",
"(2) the clustering method",
"resnet50",
"good performance",
"the 3-dimensional space",
"the cluster accuracy",
"96.10%",
"the validation accuracy",
"93.08%",
"(3) the accuracy",
"deep features",
"rf (random forest",
"95.00%",
"limited data",
"this method",
"intelligent picking strawberry equipment",
"it",
"the supervised learning drawback",
"that",
"image datasets",
"prior knowledge",
"strawberry",
"firstly",
"resnet18",
"resnet50",
"strawberry",
"gaussian",
"1",
"2",
"94.89%",
"91.79%",
"2",
"resnet50",
"3",
"96.10%",
"93.08%",
"3",
"95.00%"
] |
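The unsupervised pipeline in this record, deep features reduced by t-SNE and clustered by a Gaussian mixture model, can be sketched almost verbatim with scikit-learn; the random feature matrix stands in for ResNet18/50/101 activations:

```python
# Sketch of the pipeline: deep features -> t-SNE to 2-D -> GMM clustering.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.mixture import GaussianMixture

feats = np.random.default_rng(0).normal(size=(300, 512))  # ResNet features
low_d = TSNE(n_components=2, perplexity=30).fit_transform(feats)
gmm = GaussianMixture(n_components=2).fit(low_d)  # e.g. good vs. defective
clusters = gmm.predict(low_d)
```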
A survey on advanced machine learning and deep learning techniques assisting in renewable energy generation | [
"Sri Revathi B."
] | The sustainability of the earth depends on renewable energy. Forecasting the output of renewable energy has a big impact on how we operate and manage our power networks. Accurate forecasting of renewable energy generation is crucial to ensuring grid dependability and permanence and reducing the risk and cost of the energy market and infrastructure. Although there are several approaches to forecasting solar radiation on a global scale, the two most common ones are machine learning algorithms and cloud pictures combined with physical models. The objective is to present a summary of machine learning-based techniques for solar irradiation forecasting in this context. Renewable energy is being used more and more in the world’s energy grid. Numerous strategies, including hybrids, physical models, statistical approaches, and artificial intelligence techniques, have been developed to anticipate the use of renewable energy. This paper examines methods for forecasting renewable energy based on deep learning and machine learning. Review and analysis of deep learning and machine learning forecasts for renewable energy come first. The second paragraph describes metaheuristic optimization techniques for renewable energy. The third topic was the open issue of projecting renewable energy. I will wrap up with a few potential future job objectives. | 10.1007/s11356-023-29064-w | a survey on advanced machine learning and deep learning techniques assisting in renewable energy generation | the sustainability of the earth depends on renewable energy. forecasting the output of renewable energy has a big impact on how we operate and manage our power networks. accurate forecasting of renewable energy generation is crucial to ensuring grid dependability and permanence and reducing the risk and cost of the energy market and infrastructure. although there are several approaches to forecasting solar radiation on a global scale, the two most common ones are machine learning algorithms and cloud pictures combined with physical models. the objective is to present a summary of machine learning-based techniques for solar irradiation forecasting in this context. renewable energy is being used more and more in the world’s energy grid. numerous strategies, including hybrids, physical models, statistical approaches, and artificial intelligence techniques, have been developed to anticipate the use of renewable energy. this paper examines methods for forecasting renewable energy based on deep learning and machine learning. review and analysis of deep learning and machine learning forecasts for renewable energy come first. the second paragraph describes metaheuristic optimization techniques for renewable energy. the third topic was the open issue of projecting renewable energy. i will wrap up with a few potential future job objectives. | [
"the sustainability",
"the earth",
"renewable energy",
"the output",
"renewable energy",
"a big impact",
"we",
"our power networks",
"accurate forecasting",
"renewable energy generation",
"grid dependability",
"permanence",
"the risk",
"cost",
"the energy market",
"infrastructure",
"several approaches",
"solar radiation",
"a global scale",
"the two most common ones",
"algorithms",
"cloud pictures",
"physical models",
"the objective",
"a summary",
"machine learning-based techniques",
"solar irradiation forecasting",
"this context",
"renewable energy",
"the world’s energy grid",
"numerous strategies",
"hybrids",
"physical models",
"statistical approaches",
"artificial intelligence techniques",
"the use",
"renewable energy",
"this paper",
"methods",
"renewable energy",
"deep learning",
"machine learning",
"review",
"analysis",
"deep learning",
"machine",
"forecasts",
"renewable energy",
"the second paragraph",
"metaheuristic optimization techniques",
"renewable energy",
"the third topic",
"the open issue",
"renewable energy",
"i",
"a few potential future job objectives",
"earth",
"two",
"first",
"second",
"third"
] |
Varicocele detection in ultrasound images using deep learning | [
"Omar AlZoubi",
"Mohammad Abu Awad",
"Ayman M. Abdalla",
"Laaly Samrraie"
] | Varicocele is a disease exhibited by an abnormal dilation of the scrotal venous pampiniform plexus. It is a common cause of infertility and in some cases may cause pain or discomfort. This paper presents a new approach to automatic varicocele classification by employing convolutional neural networks with deep learning. The available dataset consists of images converted into different color modes. Each color mode dataset is partitioned, augmented, and trained by employing a deep learning network. Experiments were run with all possible combinations of four different pre-trained models and over three color modes to determine the best combination that achieves the highest performance with and without augmentation. The implementation results of training and testing were evaluated with several metrics. The analysis of the results demonstrated the efficacy of the proposed system as it identified and classified varicocele with a relatively high accuracy that surpassed the previous works with a significant improvement. | 10.1007/s11042-023-17865-7 | varicocele detection in ultrasound images using deep learning | varicocele is a disease exhibited by an abnormal dilation of the scrotal venous pampiniform plexus. it is a common cause of infertility and in some cases may cause pain or discomfort. this paper presents a new approach to automatic varicocele classification by employing convolutional neural networks with deep learning. the available dataset consists of images converted into different color modes. each color mode dataset is partitioned, augmented, and trained by employing a deep learning network. experiments were run with all possible combinations of four different pre-trained models and over three color modes to determine the best combination that achieves the highest performance with and without augmentation. the implementation results of training and testing were evaluated with several metrics. the analysis of the results demonstrated the efficacy of the proposed system as it identified and classified varicocele with a relatively high accuracy that surpassed the previous works with a significant improvement. | [
"varicocele",
"a disease",
"an abnormal dilation",
"the scrotal venous pampiniform plexus",
"it",
"a common cause",
"infertility",
"some cases",
"pain",
"discomfort",
"this paper",
"a new approach",
"automatic varicocele classification",
"convolutional neural networks",
"deep learning",
"the available dataset",
"images",
"different color modes",
"each color mode dataset",
"a deep learning network",
"experiments",
"all possible combinations",
"four different pre-trained models",
"over three color modes",
"the best combination",
"that",
"the highest performance",
"augmentation",
"the implementation results",
"training",
"testing",
"several metrics",
"the analysis",
"the results",
"the efficacy",
"the proposed system",
"it",
"a relatively high accuracy",
"that",
"the previous works",
"a significant improvement",
"four",
"three"
] |
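The varicocele record above describes fine-tuning pre-trained CNNs on augmented image data. A minimal PyTorch sketch of that recipe follows; the backbone choice (ResNet-18), the folder layout, the epoch count, and the specific augmentation steps are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Augmentation + normalization for training; the paper compares runs with
# and without augmentation, so the flip/rotation below are one plausible
# choice, not the authors' exact pipeline.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/{varicocele,normal}/*.png
train_ds = datasets.ImageFolder("data/train", transform=train_tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# One of several possible pre-trained backbones; replace the final
# fully connected layer with a 2-way head (varicocele vs. normal).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```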
Enhancing image retrieval through entropy-based deep metric learning | [
"Kambiz Rahbar",
"Fatemeh Taheri"
] | The increasing demand for effective retrieval from image datasets has been driven by the rapid growth of digital images. Image retrieval is a method for creating a structured database based on searching for similar images to a user's query image. Effective image representation and extraction of distinctive features are among the challenges faced by image retrieval systems. The proposed approach utilizes a triplet loss function based on binary cross-entropy to train a Siamese network, allowing for deep metric learning and creating a discriminative feature space with maximum discrimination between classes and minimum intra-class distance. In this approach, a pre-trained neural network is serialized with a Siamese network. Initially, image features are extracted using a pre-trained convolutional neural network. Then, a Siamese network is trained to create a discriminative feature space using deep metric learning. Learning in this method is controlled by a triplet loss function based on binary cross-entropy with anchor, positive, and negative samples. The transformation from the initial feature space to the discriminative feature space is achieved through deep metric learning in the Siamese network. The proposed approach proves to be effective in discriminating features of diverse classes in the dataset. Visualization of the feature space is also demonstrated using the t-SNE statistical technique. Furthermore, the explainability of the proposed approach is presented by examining the Shapley value. The proposed approach was examined on the Corel10K, Caltech101, and Caltech256 datasets. The best reported results with the precision metric were 0.988 for the Corel10K dataset and 0.951 and 0.902 for the Caltech101 and Caltech256 datasets, respectively. Experimental results show that the proposed approach effectively improves the retrieval results of similar samples to the query image, creating discrimination in the feature vectors, even with dimensionality reduction. | 10.1007/s11042-024-19296-4 | enhancing image retrieval through entropy-based deep metric learning | the increasing demand for effective retrieval from image datasets has been driven by the rapid growth of digital images. image retrieval is a method for creating a structured database based on searching for similar images to a user's query image. effective image representation and extraction of distinctive features are among the challenges faced by image retrieval systems. the proposed approach utilizes a triplet loss function based on binary cross-entropy to train a siamese network, allowing for deep metric learning and creating a discriminative feature space with maximum discrimination between classes and minimum intra-class distance. in this approach, a pre-trained neural network is serialized with a siamese network. initially, image features are extracted using a pre-trained convolutional neural network. then, a siamese network is trained to create a discriminative feature space using deep metric learning. learning in this method is controlled by a triplet loss function based on binary cross-entropy with anchor, positive, and negative samples. the transformation from the initial feature space to the discriminative feature space is achieved through deep metric learning in the siamese network. the proposed approach proves to be effective in discriminating features of diverse classes in the dataset. visualization of the feature space is also demonstrated using the t-sne statistical technique. 
furthermore, the explainability of the proposed approach is presented by examining the shapley value. the proposed approach was examined on the corel10k, caltech101, and caltech256 datasets. the best reported results with the precision metric were 0.988 for the corel10k dataset and 0.951 and 0.902 for the caltech101 and caltech256 datasets, respectively. experimental results show that the proposed approach effectively improves the retrieval results of similar samples to the query image, creating discrimination in the feature vectors, even with dimensionality reduction. | [
"the increasing demand",
"effective retrieval",
"image datasets",
"the rapid growth",
"digital images",
"image retrieval",
"a method",
"a structured database",
"similar images",
"a user's query image",
"effective image representation",
"extraction",
"distinctive features",
"the challenges",
"image retrieval systems",
"the proposed approach",
"a triplet loss function",
"binary cross",
"-",
"a siamese network",
"deep metric learning",
"a discriminative feature space",
"maximum discrimination",
"classes",
"minimum intra-class distance",
"this approach",
"a pre-trained neural network",
"siamese network",
"image features",
"a pre-trained convolutional neural network",
"a siamese network",
"a discriminative feature space",
"deep metric learning",
"this method",
"a triplet loss function",
"binary cross",
"-",
"anchor",
"negative samples",
"the transformation",
"the initial feature space",
"the discriminative feature space",
"deep metric learning",
"the siamese network",
"the proposed approach",
"features",
"diverse classes",
"the dataset",
"visualization",
"the feature space",
"the t-sne statistical technique",
"the explainability",
"the proposed approach",
"the shapley value",
"the proposed approach",
"the corel10k",
"caltech101",
"caltech256 datasets",
"the best reported results",
"the precision metric",
"the corel10k dataset",
"the caltech101",
"caltech256",
"datasets",
"experimental results",
"the proposed approach",
"the retrieval results",
"similar samples",
"the query image",
"discrimination",
"the feature vectors",
"dimensionality reduction",
"siamese",
"corel10k",
"0.988",
"corel10k",
"0.951",
"0.902"
] |
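The retrieval record above trains a Siamese network with a triplet loss based on binary cross-entropy. The sketch below is one plausible reading of that loss, pushing anchor-positive similarity toward 1 and anchor-negative similarity toward 0; the paper's exact formulation and embedding network are not specified here, so all shapes and layers are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BCETripletLoss(nn.Module):
    """One plausible reading of a 'triplet loss based on binary
    cross-entropy': treat rescaled anchor-positive similarity as a
    probability pushed toward 1, and anchor-negative toward 0."""
    def forward(self, anchor, positive, negative):
        sim_pos = F.cosine_similarity(anchor, positive)
        sim_neg = F.cosine_similarity(anchor, negative)
        # Rescale cosine similarity from [-1, 1] to (0, 1).
        p_pos = (sim_pos + 1) / 2
        p_neg = (sim_neg + 1) / 2
        eps = 1e-7
        # Binary cross-entropy with target 1 for positives, 0 for negatives.
        loss = -(torch.log(p_pos + eps) + torch.log(1 - p_neg + eps))
        return loss.mean()

# Usage with a shared (Siamese) embedding network over pre-extracted features:
embed = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 64))
a, p, n = (torch.randn(16, 512) for _ in range(3))
loss = BCETripletLoss()(embed(a), embed(p), embed(n))
loss.backward()
```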
Legal sentence boundary detection using hybrid deep learning and statistical models | [
"Reshma Sheik",
"Sneha Rao Ganta",
"S. Jaya Nirmala"
] | Sentence boundary detection (SBD) represents an important first step in natural language processing since accurately identifying sentence boundaries significantly impacts downstream applications. Nevertheless, detecting sentence boundaries within legal texts poses a unique and challenging problem due to their distinct structural and linguistic features. Our approach utilizes deep learning models to leverage delimiter and surrounding context information as input, enabling precise detection of sentence boundaries in English legal texts. We evaluate various deep learning models, including domain-specific transformer models like LegalBERT and CaseLawBERT. To assess the efficacy of our deep learning models, we compare them with a state-of-the-art domain-specific statistical conditional random field (CRF) model. After considering model size, F1-score, and inference time, we identify the Convolutional Neural Network Model (CNN) as the top-performing deep learning model. To further enhance performance, we integrate the features of the CNN model into the subsequent CRF model, creating a hybrid architecture that combines the strengths of both models. Our experiments demonstrate that the hybrid model outperforms the baseline model, achieving a 4% improvement in the F1-score. Additional experiments showcase the superiority of the hybrid model over SBD open-source libraries when confronted with an out-of-domain test set. These findings underscore the importance of efficient SBD in legal texts and emphasize the advantages of employing deep learning models and hybrid architectures to achieve optimal performance. | 10.1007/s10506-024-09394-x | legal sentence boundary detection using hybrid deep learning and statistical models | sentence boundary detection (sbd) represents an important first step in natural language processing since accurately identifying sentence boundaries significantly impacts downstream applications. nevertheless, detecting sentence boundaries within legal texts poses a unique and challenging problem due to their distinct structural and linguistic features. our approach utilizes deep learning models to leverage delimiter and surrounding context information as input, enabling precise detection of sentence boundaries in english legal texts. we evaluate various deep learning models, including domain-specific transformer models like legalbert and caselawbert. to assess the efficacy of our deep learning models, we compare them with a state-of-the-art domain-specific statistical conditional random field (crf) model. after considering model size, f1-score, and inference time, we identify the convolutional neural network model (cnn) as the top-performing deep learning model. to further enhance performance, we integrate the features of the cnn model into the subsequent crf model, creating a hybrid architecture that combines the strengths of both models. our experiments demonstrate that the hybrid model outperforms the baseline model, achieving a 4% improvement in the f1-score. additional experiments showcase the superiority of the hybrid model over sbd open-source libraries when confronted with an out-of-domain test set. these findings underscore the importance of efficient sbd in legal texts and emphasize the advantages of employing deep learning models and hybrid architectures to achieve optimal performance. | [
"boundary detection",
"sbd",
"an important first step",
"natural language processing",
"sentence boundaries",
"downstream applications",
"sentence boundaries",
"legal texts",
"a unique and challenging problem",
"their distinct structural and linguistic features",
"our approach",
"deep learning models",
"delimiter",
"context information",
"input",
"precise detection",
"sentence boundaries",
"english legal texts",
"we",
"various deep learning models",
"domain-specific transformer models",
"legalbert",
"caselawbert",
"the efficacy",
"our deep learning models",
"we",
"them",
"the-art",
"crf",
"model size, f1-score, and inference time",
"we",
"the convolutional neural network model",
"cnn",
"the top-performing deep learning model",
"performance",
"we",
"the features",
"the cnn model",
"the subsequent crf model",
"a hybrid architecture",
"that",
"the strengths",
"both models",
"our experiments",
"the hybrid model",
"the baseline model",
"a 4% improvement",
"the f1-score",
"additional experiments",
"the superiority",
"the hybrid model",
"sbd open-source libraries",
"domain",
"these findings",
"the importance",
"efficient sbd",
"legal texts",
"the advantages",
"deep learning models",
"hybrid architectures",
"optimal performance",
"first",
"english",
"cnn",
"cnn",
"4%"
] |
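The legal sentence-boundary record above feeds delimiter and surrounding-context information to a CNN. Below is a minimal sketch of that idea; the character window, vocabulary size, and layer widths are hypothetical, and the CRF stage of the hybrid is omitted.

```python
import torch
import torch.nn as nn

class BoundaryCNN(nn.Module):
    """Minimal sketch of the delimiter-context idea: embed a fixed
    window of characters around a candidate delimiter ('.', ';', ...)
    and let a 1D CNN decide boundary vs. non-boundary. Window size,
    vocabulary, and layer widths are illustrative, not the paper's
    configuration."""
    def __init__(self, vocab_size=128, embed_dim=32, window=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1)
        self.head = nn.Linear(64, 2)  # boundary / not a boundary

    def forward(self, x):                        # x: (batch, window) char ids
        h = self.embed(x).transpose(1, 2)        # (batch, embed_dim, window)
        h = torch.relu(self.conv(h)).max(dim=2).values  # global max pool
        return self.head(h)

# A batch of candidate delimiters, each with 10 characters of context per side:
window = torch.randint(0, 128, (8, 20))
logits = BoundaryCNN()(window)
```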
Deep Learning Methods for Vibration-Based Structural Health Monitoring: A Review | [
"Hao Wang",
"Baoli Wang",
"Caixia Cui"
] | The vibration signal is an effective diagnostic tool in structural health monitoring (SHM) fields that is closely related to abnormal states. Deep learning methods have achieved remarkable success in utilizing vibration signals for damage detection. This paper presents a systematic review of deep learning methods for SHM, focusing on the utilization of vibration signal data from different model perspectives. In recent years, there has been a significant increase in research on deep learning for vibration-based SHM. The accuracy of such works is equivalent to that of traditional machine learning approaches, and better results could be achieved by integrating multiple approaches. Furthermore, we found that transfer learning methods yield promising results when limited data are available to train the model. This paper aims to comprehensively review deep learning research on health monitoring using vibration signal data from multiple perspectives, with a particular emphasis on transfer learning methods for SHM. It fills the gap that existing reviews lack in the discussion of transfer learning for SHM. Finally, we analyze the challenges faced by current research and provide recommendations for future work. | 10.1007/s40996-023-01287-4 | deep learning methods for vibration-based structural health monitoring: a review | the vibration signal is an effective diagnostic tool in structural health monitoring (shm) fields that is closely related to abnormal states. deep learning methods have achieved remarkable success in utilizing vibration signals for damage detection. this paper presents a systematic review of deep learning methods for shm, focusing on the utilization of vibration signal data from different model perspectives. in recent years, there has been a significant increase in research on deep learning for vibration-based shm. the accuracy of such works is equivalent to that of traditional machine learning approaches, and better results could be achieved by integrating multiple approaches. furthermore, we found that transfer learning methods yield promising results when limited data are available to train the model. this paper aims to comprehensively review deep learning research on health monitoring using vibration signal data from multiple perspectives, with a particular emphasis on transfer learning methods for shm. it fills the gap that existing reviews lack in the discussion of transfer learning for shm. finally, we analyze the challenges faced by current research and provide recommendations for future work. | [
"the vibration signal",
"an effective diagnostic tool",
"structural health monitoring",
"shm) fields",
"that",
"abnormal states",
"deep learning methods",
"remarkable success",
"vibration signals",
"damage detection",
"this paper",
"a systematic review",
"deep learning methods",
"shm",
"the utilization",
"vibration signal data",
"different model perspectives",
"recent years",
"a significant increase",
"research",
"deep learning",
"vibration-based shm",
"the accuracy",
"such works",
"that",
"traditional machine learning approaches",
"better results",
"multiple approaches",
"we",
"transfer learning methods",
"promising results",
"limited data",
"the model",
"this paper",
"deep learning research",
"health monitoring",
"vibration signal data",
"multiple perspectives",
"a particular emphasis",
"transfer learning methods",
"shm",
"it",
"the gap",
"existing reviews",
"the discussion",
"shm",
"we",
"the challenges",
"current research",
"recommendations",
"future work",
"recent years"
] |
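The SHM review above highlights transfer learning when target data are limited. A generic sketch of that step follows: freeze a 1D-CNN feature extractor trained on a source structure and retrain only the head. The architecture, signal length, and class counts are placeholders, not drawn from any single surveyed paper.

```python
import torch
import torch.nn as nn

# Illustrative 1D CNN for windowed vibration signals; the review covers
# many architectures, so treat this as a generic stand-in.
class VibrationNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, 1, signal_length)
        return self.head(self.features(x).squeeze(-1))

model = VibrationNet(n_classes=4)         # pretend it was trained on a source structure

# Transfer-learning step emphasized by the review: with limited target
# data, freeze the feature extractor and retrain only the classifier head.
for p in model.features.parameters():
    p.requires_grad = False
model.head = nn.Linear(32, 3)             # new damage classes on the target structure
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)

logits = model(torch.randn(8, 1, 2048))   # dummy target-domain windows
```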
Hyperbolic Deep Learning in Computer Vision: A Survey | [
"Pascal Mettes",
"Mina Ghadimi Atigh",
"Martin Keller-Ressel",
"Jeffrey Gu",
"Serena Yeung"
] | Deep representation learning is a ubiquitous part of modern computer vision. While Euclidean space has been the de facto standard manifold for learning visual representations, hyperbolic space has recently gained rapid traction for learning in computer vision. Specifically, hyperbolic learning has shown a strong potential to embed hierarchical structures, learn from limited samples, quantify uncertainty, add robustness, limit error severity, and more. In this paper, we provide a categorization and in-depth overview of current literature on hyperbolic learning for computer vision. We research both supervised and unsupervised literature and identify three main research themes in each direction. We outline how hyperbolic learning is performed in all themes and discuss the main research problems that benefit from current advances in hyperbolic learning for computer vision. Moreover, we provide a high-level intuition behind hyperbolic geometry and outline open research questions to further advance research in this direction. | 10.1007/s11263-024-02043-5 | hyperbolic deep learning in computer vision: a survey | deep representation learning is a ubiquitous part of modern computer vision. while euclidean space has been the de facto standard manifold for learning visual representations, hyperbolic space has recently gained rapid traction for learning in computer vision. specifically, hyperbolic learning has shown a strong potential to embed hierarchical structures, learn from limited samples, quantify uncertainty, add robustness, limit error severity, and more. in this paper, we provide a categorization and in-depth overview of current literature on hyperbolic learning for computer vision. we research both supervised and unsupervised literature and identify three main research themes in each direction. we outline how hyperbolic learning is performed in all themes and discuss the main research problems that benefit from current advances in hyperbolic learning for computer vision. moreover, we provide a high-level intuition behind hyperbolic geometry and outline open research questions to further advance research in this direction. | [
"deep representation learning",
"a ubiquitous part",
"modern computer vision",
"euclidean space",
"the de facto standard manifold",
"visual representations",
"hyperbolic space",
"rapid traction",
"computer vision",
"hyperbolic learning",
"a strong potential",
"hierarchical structures",
"limited samples",
"quantify uncertainty",
"robustness",
"error severity",
"this paper",
"we",
"a categorization",
"-depth",
"current literature",
"hyperbolic learning",
"computer vision",
"we",
"literature",
"three main research themes",
"each direction",
"we",
"hyperbolic learning",
"all themes",
"the main research problems",
"that",
"current advances",
"hyperbolic learning",
"computer vision",
"we",
"a high-level intuition",
"hyperbolic geometry",
"open research questions",
"research",
"this direction",
"three"
] |
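The hyperbolic-learning survey above centers on hyperbolic geometry, and the Poincaré ball is the most common model in the vision work it covers. The sketch below implements the standard Poincaré distance to show the intuition behind the survey's hierarchy-embedding claims: distances blow up near the boundary.

```python
import torch

def poincare_distance(x, y, eps=1e-7):
    """Geodesic distance on the Poincare ball (curvature -1):
        d(x, y) = arccosh(1 + 2*|x - y|^2 / ((1 - |x|^2) * (1 - |y|^2)))
    Points must lie strictly inside the unit ball."""
    sq_diff = (x - y).pow(2).sum(dim=-1)
    denom = (1 - x.pow(2).sum(dim=-1)) * (1 - y.pow(2).sum(dim=-1))
    return torch.acosh(1 + 2 * sq_diff / denom.clamp_min(eps))

# The same Euclidean step costs far more near the boundary than near the
# origin, which is what lets hyperbolic space embed tree-like hierarchies.
near_origin = poincare_distance(torch.tensor([[0.0, 0.0]]), torch.tensor([[0.1, 0.0]]))
near_edge = poincare_distance(torch.tensor([[0.9, 0.0]]), torch.tensor([[0.99, 0.0]]))
print(near_origin.item(), near_edge.item())  # ~0.2 vs. ~2.3
```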
From distributed machine to distributed deep learning: a comprehensive survey | [
"Mohammad Dehghani",
"Zahra Yazdanparast"
] | Artificial intelligence has made remarkable progress in handling complex tasks, thanks to advances in hardware acceleration and machine learning algorithms. However, to acquire more accurate outcomes and solve more complex issues, algorithms should be trained with more data. Processing this huge amount of data could be time-consuming and require a great deal of computation. To address these issues, distributed machine learning has been proposed, which involves distributing the data and algorithm across several machines. There has been considerable effort put into developing distributed machine learning algorithms, and different methods have been proposed so far. We divide these algorithms into classification and clustering (traditional machine learning), deep learning, and deep reinforcement learning groups. Distributed deep learning has gained more attention in recent years and most of the studies have focused on this approach. Therefore, we mostly concentrate on this category. Based on the investigation of the mentioned algorithms, we highlighted the limitations that should be addressed in future research. | 10.1186/s40537-023-00829-x | from distributed machine to distributed deep learning: a comprehensive survey | artificial intelligence has made remarkable progress in handling complex tasks, thanks to advances in hardware acceleration and machine learning algorithms. however, to acquire more accurate outcomes and solve more complex issues, algorithms should be trained with more data. processing this huge amount of data could be time-consuming and require a great deal of computation. to address these issues, distributed machine learning has been proposed, which involves distributing the data and algorithm across several machines. there has been considerable effort put into developing distributed machine learning algorithms, and different methods have been proposed so far. we divide these algorithms into classification and clustering (traditional machine learning), deep learning, and deep reinforcement learning groups. distributed deep learning has gained more attention in recent years and most of the studies have focused on this approach. therefore, we mostly concentrate on this category. based on the investigation of the mentioned algorithms, we highlighted the limitations that should be addressed in future research. | [
"artificial intelligence",
"remarkable progress",
"complex tasks",
"thanks",
"advances",
"hardware acceleration",
"machine learning",
"more accurate outcomes",
"more complex issues",
"algorithms",
"more data",
"this huge amount",
"data",
"a great deal",
"computation",
"these issues",
"distributed machine learning",
"which",
"the data",
"algorithm",
"several machines",
"considerable effort",
"developing distributed machine learning algorithms",
"different methods",
"we",
"these algorithms",
"classification",
"clustering",
"(traditional machine learning",
"deep learning",
"deep reinforcement learning groups",
"distributed deep learning",
"more attention",
"recent years",
"the studies",
"this approach",
"we",
"this category",
"the investigation",
"the mentioned algorithms",
"we",
"the limitations",
"that",
"future research",
"recent years"
] |
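The distributed-learning survey above covers data-parallel training, where each machine holds a model replica and gradients are averaged by all-reduce. A minimal PyTorch DistributedDataParallel sketch of that one family follows; the backend, toy data, and linear model are placeholders.

```python
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP

# Launch with `torchrun --nproc_per_node=N this_script.py`; each process
# becomes one worker whose gradients are all-reduced during backward().
def main():
    dist.init_process_group(backend="gloo")   # "nccl" on GPU clusters
    rank = dist.get_rank()

    model = DDP(torch.nn.Linear(10, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # Stand-in for this worker's shard of the training data.
    x = torch.randn(32, 10)
    y = torch.randn(32, 1)
    for _ in range(100):
        optimizer.zero_grad()
        loss = F.mse_loss(model(x), y)
        loss.backward()                       # gradients averaged across workers here
        optimizer.step()

    if rank == 0:
        print("final loss:", loss.item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```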
Deep learning-assisted characterization of nanoparticle growth processes: unveiling SAXS structure evolution | [
"Yikun Li",
"Lunyang Liu",
"Xiaoning Zhao",
"Shuming Zhou",
"Xuehui Wu",
"Yuecheng Lai",
"Zhongjun Chen",
"Jizhong Chen",
"Xueqing Xing"
] | Purpose: The purpose of this study is to explore deep learning methods for processing high-throughput small-angle X-ray scattering (SAXS) experimental data. Methods: The deep learning algorithm was trained and validated using simulated SAXS data, which were generated in batches based on the theoretical SAXS formula using Python code. Our self-developed SAXSNET, a convolutional neural network based on PyTorch, was employed to classify SAXS data for various shapes of nanoparticles. Additionally, we conducted comparative analysis of classification algorithms including ResNet-18, ResNet-34 and Vision Transformer. Random Forest and XGboost regression algorithms were used for the nanoparticle size prediction. Finally, we evaluated the aforementioned shape classification and numerical regression methods using actual experimental data. A pipeline segment is established for the processing of SAXS data, incorporating deep learning classification algorithms and numerical regression algorithms. Results: After being trained with simulated data, the four deep learning algorithms achieved a prediction accuracy of over 96% on the validation set. The fine-tuned deep learning model demonstrated robust generalization capabilities for predicting the shapes of experimental data, enabling rapid and accurate identification of morphological changes in nanoparticles during experiments. The Random Forest and XGboost regression algorithms can simultaneously provide faster and more accurate predictions of nanoparticle size. Conclusion: The pipeline segment constructed in this study, integrating deep learning classification and regression algorithms, enables real-time processing of high-throughput SAXS data. It aims to effectively mitigate the impact of human factors on data processing results and enhance the standardization, automation, and intelligence of synchrotron radiation experiments. | 10.1007/s41605-024-00471-y | deep learning-assisted characterization of nanoparticle growth processes: unveiling saxs structure evolution | purpose: the purpose of this study is to explore deep learning methods for processing high-throughput small-angle x-ray scattering (saxs) experimental data. methods: the deep learning algorithm was trained and validated using simulated saxs data, which were generated in batches based on the theoretical saxs formula using python code. our self-developed saxsnet, a convolutional neural network based on pytorch, was employed to classify saxs data for various shapes of nanoparticles. additionally, we conducted comparative analysis of classification algorithms including resnet-18, resnet-34 and vision transformer. random forest and xgboost regression algorithms were used for the nanoparticle size prediction. finally, we evaluated the aforementioned shape classification and numerical regression methods using actual experimental data. a pipeline segment is established for the processing of saxs data, incorporating deep learning classification algorithms and numerical regression algorithms. results: after being trained with simulated data, the four deep learning algorithms achieved a prediction accuracy of over 96% on the validation set. the fine-tuned deep learning model demonstrated robust generalization capabilities for predicting the shapes of experimental data, enabling rapid and accurate identification of morphological changes in nanoparticles during experiments. 
the random forest and xgboost regression algorithms can simultaneously provide faster and more accurate predictions of nanoparticle size. conclusion: the pipeline segment constructed in this study, integrating deep learning classification and regression algorithms, enables real-time processing of high-throughput saxs data. it aims to effectively mitigate the impact of human factors on data processing results and enhance the standardization, automation, and intelligence of synchrotron radiation experiments. | [
"purposethe purpose",
"this study",
"deep learning methods",
"saxs",
"data.methodsthe deep learning algorithm",
"simulated saxs data",
"which",
"batches",
"the theoretical saxs formula",
"python code",
"our self-developed saxsnet",
"a convolutional neural network",
"pytorch",
"saxs data",
"various shapes",
"nanoparticles",
"we",
"comparative analysis",
"classification algorithms",
"resnet-18",
"resnet-34",
"vision transformer",
"random forest",
"xgboost regression algorithms",
"the nanoparticle size prediction",
"we",
"the aforementioned shape classification and numerical regression methods",
"actual experimental data",
"a pipeline segment",
"the processing",
"saxs data",
"deep learning classification algorithms",
"numerical regression",
"simulated data",
"the four deep learning algorithms",
"a prediction accuracy",
"over 96%",
"the validation set",
"the fine-tuned deep learning model",
"robust generalization capabilities",
"the shapes",
"experimental data",
"rapid and accurate identification",
"morphological changes",
"nanoparticles",
"experiments",
"the random forest",
"xgboost regression algorithms",
"faster and more accurate predictions",
"nanoparticle",
"size.conclusionthe pipeline segment",
"this study",
"deep learning classification and regression algorithms",
"real-time processing",
"high-throughput saxs data",
"it",
"the impact",
"human factors",
"data processing results",
"the standardization",
"automation",
"intelligence",
"synchrotron radiation experiments",
"resnet-18",
"resnet-34",
"four",
"over 96%"
] |
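The SAXS record above pairs simulated training curves with Random Forest size regression. The sketch below mirrors that pipeline on synthetic data using the textbook sphere form factor; the q-range, radius range, and noise-free curves are simplifying assumptions rather than the paper's simulation setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Simulated 1D scattering curves from a theoretical form factor, here the
# sphere form factor P(q) = [3(sin(qR) - qR cos(qR)) / (qR)^3]^2.
q = np.linspace(0.01, 0.5, 200)            # scattering vector, illustrative range

def sphere_curve(radius):
    qr = q * radius
    return (3 * (np.sin(qr) - qr * np.cos(qr)) / qr**3) ** 2

rng = np.random.default_rng(0)
radii = rng.uniform(20, 100, size=2000)    # nanoparticle radii (arbitrary units)
# Log-intensity features, since SAXS intensities span several decades.
X = np.log10(np.stack([sphere_curve(r) for r in radii]) + 1e-12)

X_train, X_test, y_train, y_test = train_test_split(X, radii, random_state=0)
reg = RandomForestRegressor(n_estimators=200, random_state=0)
reg.fit(X_train, y_train)
print("R^2 on held-out simulated curves:", reg.score(X_test, y_test))
```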
Machine learning, deep learning and hernia surgery. Are we pushing the limits of abdominal core health? A qualitative systematic review | [
"D. L. Lima",
"J. Kasakewitch",
"D. Q. Nguyen",
"R. Nogueira",
"L. T. Cavazzola",
"B. T. Heniford",
"F. Malcher"
] | Introduction: This systematic review aims to evaluate the use of machine learning and artificial intelligence in hernia surgery. Methods: The PRISMA guidelines were followed throughout this systematic review. The ROBINS-I and RoB 2 tools were used to perform qualitative assessment of all studies included in this review. Recommendations were then summarized for the following pre-defined key items: protocol, research question, search strategy, study eligibility, data extraction, study design, risk of bias, publication bias, and statistical analysis. Results: A total of 13 articles were ultimately included for this review, describing the use of machine learning and deep learning for hernia surgery. All studies were published from 2020 to 2023. Articles varied regarding the population studied, type of machine learning or Deep Learning Model (DLM) used, and hernia type. Of the thirteen included studies, all included either inguinal, ventral, or incisional hernias. Four studies evaluated recognition of surgical steps during inguinal hernia repair videos. Two studies predicted outcomes using image-based DLMs. Seven studies developed and validated deep learning algorithms to predict outcomes and identify factors associated with postoperative complications. Conclusion: The use of ML for abdominal wall reconstruction has been shown to be a promising tool for predicting outcomes and identifying factors that could lead to postoperative complications. | 10.1007/s10029-024-03069-x | machine learning, deep learning and hernia surgery. are we pushing the limits of abdominal core health? a qualitative systematic review | introduction: this systematic review aims to evaluate the use of machine learning and artificial intelligence in hernia surgery. methods: the prisma guidelines were followed throughout this systematic review. the robins-i and rob 2 tools were used to perform qualitative assessment of all studies included in this review. recommendations were then summarized for the following pre-defined key items: protocol, research question, search strategy, study eligibility, data extraction, study design, risk of bias, publication bias, and statistical analysis. results: a total of 13 articles were ultimately included for this review, describing the use of machine learning and deep learning for hernia surgery. all studies were published from 2020 to 2023. articles varied regarding the population studied, type of machine learning or deep learning model (dlm) used, and hernia type. of the thirteen included studies, all included either inguinal, ventral, or incisional hernias. four studies evaluated recognition of surgical steps during inguinal hernia repair videos. two studies predicted outcomes using image-based dlms. seven studies developed and validated deep learning algorithms to predict outcomes and identify factors associated with postoperative complications. conclusion: the use of ml for abdominal wall reconstruction has been shown to be a promising tool for predicting outcomes and identifying factors that could lead to postoperative complications. | [
"introductionthis systematic review",
"the use",
"machine learning",
"artificial intelligence",
"hernia surgery.methodsthe prisma guidelines",
"this systematic review",
"the robins",
"i",
"rob",
"2 tools",
"qualitative assessment",
"all studies",
"this review",
"recommendations",
"the following pre-defined key items",
"protocol",
"research question",
"search strategy",
"study eligibility",
"data extraction",
"study design",
"risk",
"bias",
"publication bias",
"statistical analysis.resultsa total",
"13 articles",
"this review",
"the use",
"machine learning",
"deep learning",
"hernia surgery",
"all studies",
"articles",
"the population",
"type",
"machine learning",
"deep learning model",
"dlm",
"hernia type",
"the thirteen included studies",
"all",
"either inguinal, ventral, or incisional hernias",
"four studies",
"recognition",
"surgical steps",
"inguinal hernia repair videos",
"two studies",
"outcomes",
"image-based dmls",
"seven studies",
"deep learning algorithms",
"outcomes",
"factors",
"postoperative complications.conclusionthe use",
"ml",
"abdominal wall reconstruction",
"a promising tool",
"outcomes",
"factors",
"that",
"postoperative complications",
"hernia",
"2",
"13",
"2020",
"thirteen",
"four",
"two",
"seven"
] |
A survey: evolutionary deep learning | [
"Yifan Li",
"Jing Liu"
] | Deep learning (DL) has made remarkable progress on various real-world tasks, but its construction pipeline strongly relies on human scientists. Furthermore, evolutionary computing (EC), as an optimization tool based on the biological evolution mechanism, has good performance on complex optimization problems. It provides a new way to construct DL models and has generated many sparks in the DL field, especially in automatic machine learning (AutoML). Although many reviews have been conducted on AutoML, in recent years, few comprehensive works have studied the application of EC in DL, which is called evolutionary deep learning (EDL). After a thorough investigation, we think that EDL can be divided into four parts: (1) learning rule optimization, (2) hyperparameter optimization, (3) neural architecture search, and (4) other EDL-related works. In this work, we introduce the classic optimization methods and the challenges of EDL with respect to these four parts, review the related work, and then present the future research prospects. This work clearly and comprehensively reviews the concept and research content of EDL, which can help readers quickly find the intersection between EC and DL and seek their inspiration. | 10.1007/s00500-023-08316-4 | a survey: evolutionary deep learning | deep learning (dl) has made remarkable progress on various real-world tasks, but its construction pipeline strongly relies on human scientists. furthermore, evolutionary computing (ec), as an optimization tool based on the biological evolution mechanism, has good performance on complex optimization problems. it provides a new way to construct dl models and has generated many sparks in the dl field, especially in automatic machine learning (automl). although many reviews have been conducted on automl, in recent years, few comprehensive works have studied the application of ec in dl, which is called evolutionary deep learning (edl). after a thorough investigation, we think that edl can be divided into four parts: (1) learning rule optimization, (2) hyperparameter optimization, (3) neural architecture search, and (4) other edl-related works. in this work, we introduce the classic optimization methods and the challenges of edl with respect to these four parts, review the related work, and then present the future research prospects. this work clearly and comprehensively reviews the concept and research content of edl, which can help readers quickly find the intersection between ec and dl and seek their inspiration. | [
"deep learning",
"dl",
"remarkable progress",
"various real-world tasks",
"its construction pipeline",
"human scientists",
"evolutionary computing",
"ec",
"an optimization tool",
"the biological evolution mechanism",
"good performance",
"complex optimization problems",
"it",
"a new way",
"dl models",
"many sparks",
"the dl field",
"automatic machine learning",
"automl",
"many reviews",
"automl",
"recent years",
"few comprehensive works",
"the application",
"ec",
"dl",
"which",
"evolutionary deep learning",
"edl",
"a thorough investigation",
"we",
"edl",
"four parts",
"rule optimization",
"(2) hyperparameter optimization",
"3) neural architecture search",
"(4) other edl-related works",
"this work",
"we",
"the classic optimization methods",
"the challenges",
"edl",
"respect",
"these four parts",
"the related work",
"the future research prospects",
"this work",
"the concept",
"research content",
"edl",
"which",
"readers",
"the intersection",
"ec",
"dl",
"their inspiration",
"ec",
"recent years",
"ec",
"four",
"1",
"2",
"3",
"4",
"four",
"ec",
"dl"
] |
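The EDL survey above lists hyperparameter optimization as one of its four parts. The toy sketch below runs a (mu + lambda)-style evolutionary search over two hyperparameters of a small classifier; the genome encoding, mutation sizes, and population counts are arbitrary illustrations of the idea, not any surveyed algorithm.

```python
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Genome = (hidden units, log10 learning rate); fitness = validation accuracy.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def fitness(genome):
    hidden, log_lr = genome
    clf = MLPClassifier(hidden_layer_sizes=(hidden,),
                        learning_rate_init=10.0 ** log_lr,
                        max_iter=200, random_state=0)
    clf.fit(X_tr, y_tr)
    return clf.score(X_val, y_val)

def mutate(genome):
    hidden, log_lr = genome
    hidden = max(4, hidden + random.choice([-8, 8]))
    log_lr = min(-1.0, max(-5.0, log_lr + random.gauss(0.0, 0.3)))
    return hidden, log_lr

random.seed(0)
population = [(random.randrange(8, 64), random.uniform(-4, -2)) for _ in range(6)]
for generation in range(5):
    parents = sorted(population, key=fitness, reverse=True)[:3]   # selection
    population = parents + [mutate(random.choice(parents)) for _ in range(3)]

print("best genome:", max(population, key=fitness))
```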
Impact of the Preprocessing Steps in Deep Learning-Based Image Classifications | [
"H. James Deva Koresh"
] | Deep learning software systems are designed using artificial neural networks for various applications by training and testing them with an appropriate dataset. The raw image samples available in the dataset may contain noisy and unclear information due to radiation, heat, and poor lighting conditions. Therefore, researchers try to filter and enhance such noisy images through preprocessing steps to provide valid feature information to the neural network layers included in the deep learning software. However, certain claims circulate among researchers, such as the claim that an image may lose some useful information when it is not preprocessed with an appropriate filter or enhancement technique. Hence, the work reviews the efficacy of the methodologies that are designed with and without a preprocessing step. Also, the work summarizes the common reasons and statements highlighted by researchers for using and avoiding preprocessing steps when designing a deep learning approach. The study is conducted to provide clarity toward the requirement and non-requirement of a preprocessing step in deep learning software. | 10.1007/s40009-023-01372-2 | impact of the preprocessing steps in deep learning-based image classifications | deep learning software systems are designed using artificial neural networks for various applications by training and testing them with an appropriate dataset. the raw image samples available in the dataset may contain noisy and unclear information due to radiation, heat, and poor lighting conditions. therefore, researchers try to filter and enhance such noisy images through preprocessing steps to provide valid feature information to the neural network layers included in the deep learning software. however, certain claims circulate among researchers, such as the claim that an image may lose some useful information when it is not preprocessed with an appropriate filter or enhancement technique. hence, the work reviews the efficacy of the methodologies that are designed with and without a preprocessing step. also, the work summarizes the common reasons and statements highlighted by researchers for using and avoiding preprocessing steps when designing a deep learning approach. the study is conducted to provide clarity toward the requirement and non-requirement of a preprocessing step in deep learning software. | [
"deep learning softwares",
"artificial neural networks",
"various applications",
"training",
"them",
"an appropriate dataset",
"the raw image samples",
"the dataset",
"noisy and unclear information",
"radiation",
"heat",
"poor lighting conditions",
"the researchers",
"such noisy images",
"steps",
"a valid feature information",
"the neural network layers",
"the deep learning software",
"certain claims",
"roam",
"the researchers",
"an image",
"some useful information",
"it",
"an appropriate filter",
"enhancement technique",
"the work",
"the efficacy",
"the methodologies",
"that",
"a preprocessing step",
"the work",
"the common reasons",
"statements",
"the researchers",
"the preprocessing steps",
"a deep learning approach",
"the study",
"a clarity",
"the requirement",
"-",
"requirement",
"step",
"a deep learning software"
] |
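The preprocessing record above weighs whether filtering and enhancement help or hurt downstream training. The sketch below shows two typical candidate steps of the kind such studies compare, median filtering and CLAHE, applied to a synthetic noisy image; whether to keep them is exactly the empirical question the study poses.

```python
import cv2
import numpy as np

# Synthetic noisy, low-contrast grayscale image standing in for a raw sample.
rng = np.random.default_rng(0)
image = np.clip(rng.normal(100, 25, (256, 256)), 0, 255).astype(np.uint8)

# Median filtering to suppress impulse-like sensor noise.
denoised = cv2.medianBlur(image, ksize=3)

# CLAHE (contrast-limited adaptive histogram equalization) for poor lighting.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(denoised)

# Feed either `image` (raw) or `enhanced` (preprocessed) to the network and
# compare validation accuracy to judge whether the preprocessing step pays off.
```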
Deep learning applications in games: a survey from a data perspective | [
"Zhipeng Hu",
"Yu Ding",
"Runze Wu",
"Lincheng Li",
"Rongsheng Zhang",
"Yujing Hu",
"Feng Qiu",
"Zhimeng Zhang",
"Kai Wang",
"Shiwei Zhao",
"Yongqiang Zhang",
"Ji Jiang",
"Yadong Xi",
"Jiashu Pu",
"Wei Zhang",
"Suzhen Wang",
"Ke Chen",
"Tianze Zhou",
"Jiarui Chen",
"Yan Song",
"Tangjie Lv",
"Changjie Fan"
] | This paper presents a comprehensive review of deep learning applications in the video game industry, focusing on how these techniques can be utilized in game development, experience, and operation. Since games rely on computation techniques, the game world can be viewed as an integration of various complex data. This review examines the use of deep learning in processing various types of game data. The paper classifies the game data into asset data, interaction data, and player data, according to their utilization in game development, experience, and operation, respectively. Specifically, this paper discusses deep learning applications in generating asset data such as object images, 3D scenes, avatar models, and facial animations; enhancing interaction data through improved text-based conversations and decision-making behaviors; and analyzing player data for cheat detection and match-making purposes. Although this review may not cover all existing applications of deep learning, it aims to provide a thorough presentation of the current state of deep learning in the gaming industry and its potential to revolutionize game production by reducing costs and improving the overall player experience. | 10.1007/s10489-023-05094-2 | deep learning applications in games: a survey from a data perspective | this paper presents a comprehensive review of deep learning applications in the video game industry, focusing on how these techniques can be utilized in game development, experience, and operation. since games rely on computation techniques, the game world can be viewed as an integration of various complex data. this review examines the use of deep learning in processing various types of game data. the paper classifies the game data into asset data, interaction data, and player data, according to their utilization in game development, experience, and operation, respectively. specifically, this paper discusses deep learning applications in generating asset data such as object images, 3d scenes, avatar models, and facial animations; enhancing interaction data through improved text-based conversations and decision-making behaviors; and analyzing player data for cheat detection and match-making purposes. although this review may not cover all existing applications of deep learning, it aims to provide a thorough presentation of the current state of deep learning in the gaming industry and its potential to revolutionize game production by reducing costs and improving the overall player experience. | [
"this paper",
"a comprehensive review",
"deep learning applications",
"the video game industry",
"these techniques",
"game development",
"experience",
"operation",
"computation techniques",
"the game world",
"an integration",
"various complex data",
"this",
"the use",
"deep learning",
"various types",
"game data",
"the paper",
"the game data",
"asset data",
"interaction data",
"player data",
"their utilization",
"game development",
"experience",
"operation",
"this paper",
"deep learning applications",
"asset data",
"object images",
"3d scenes",
"avatar models",
"facial animations",
"interaction data",
"improved text-based conversations",
"decision-making behaviors",
"player data",
"cheat detection",
"match-making purposes",
"this review",
"all existing applications",
"deep learning",
"it",
"a thorough presentation",
"the current state",
"deep learning",
"the gaming industry",
"its potential",
"game production",
"costs",
"the overall player experience",
"3d"
] |
Classification of hazelnut varieties based on bigtransfer deep learning model | [
"Emrah Dönmez",
"Serhat Kılıçarslan",
"Aykut Diker"
] | Hazelnut is an agricultural product that contributes greatly to the economy of the countries where it is grown. The human factor plays a major role in hazelnut classification. The typical approach involves manual inspection of each sample by experts, a process that is both labor-intensive and time-consuming, and often suffers from limited sensitivity. Deep learning techniques are extremely important in the classification and detection of agricultural products. Deep learning has great potential in the agricultural sector. This technology can improve product quality, increase productivity, and offer farmers the ability to classify and detect their produce more effectively. This is important for sustainability and efficiency in the agricultural industry. This paper aims to apply deep learning algorithms to streamline hazelnut classification, reducing the need for manual labor, time, and cost in the sorting process. The study utilized hazelnut images from three different varieties: Giresun, Ordu, and Van, comprising a dataset of 1165 images for Giresun, 1324 for Ordu, and 1138 for Van hazelnuts. This dataset is an open-access dataset. In the study, experiments were carried out on the determination of hazelnut varieties with BigTransfer (BiT-M) R50 × 1, BiT-M R101 × 3 and BiT-M R152 × 4 models. Deep learning models, including BigTransfer, were employed for classification. The classification task involved 3627 nut images and resulted in a remarkable accuracy of 99.49% with the BiT-M R152 × 4 model. These innovative methods can also lead to patentable products and devices in various industries, thereby boosting the economic value of the country. | 10.1007/s00217-024-04468-1 | classification of hazelnut varieties based on bigtransfer deep learning model | hazelnut is an agricultural product that contributes greatly to the economy of the countries where it is grown. the human factor plays a major role in hazelnut classification. the typical approach involves manual inspection of each sample by experts, a process that is both labor-intensive and time-consuming, and often suffers from limited sensitivity. deep learning techniques are extremely important in the classification and detection of agricultural products. deep learning has great potential in the agricultural sector. this technology can improve product quality, increase productivity, and offer farmers the ability to classify and detect their produce more effectively. this is important for sustainability and efficiency in the agricultural industry. this paper aims to apply deep learning algorithms to streamline hazelnut classification, reducing the need for manual labor, time, and cost in the sorting process. the study utilized hazelnut images from three different varieties: giresun, ordu, and van, comprising a dataset of 1165 images for giresun, 1324 for ordu, and 1138 for van hazelnuts. this dataset is an open-access dataset. in the study, experiments were carried out on the determination of hazelnut varieties with bigtransfer (bit-m) r50 × 1, bit-m r101 × 3 and bit-m r152 × 4 models. deep learning models, including bigtransfer, were employed for classification. the classification task involved 3627 nut images and resulted in a remarkable accuracy of 99.49% with the bit-m r152 × 4 model. these innovative methods can also lead to patentable products and devices in various industries, thereby boosting the economic value of the country. | [
"hazelnut",
"an agricultural product",
"that",
"the economy",
"the countries",
"it",
"the human factor",
"a major role",
"hazelnut classification",
"the typical approach",
"manual inspection",
"each sample",
"experts",
"a process",
"that",
"limited sensitivity",
"the deep learning techniques",
"the classification",
"detection",
"agricultural products",
"deep learning",
"great potential",
"the agricultural sector",
"this technology",
"product quality",
"productivity",
"farmers",
"the ability",
"their produce",
"this",
"sustainability",
"efficiency",
"the agricultural industry",
"this paper",
"the application",
"deep learning algorithms",
"hazelnut classification",
"the need",
"manual labor",
"time",
"cost",
"the sorting process",
"the study",
"hazelnut images",
"three different varieties",
"giresun",
"ordu",
"van",
"a dataset",
"1165 images",
"giresun",
"ordu",
"van hazelnuts",
"this dataset",
"an open-access dataset",
"the study",
"experiments",
"the determination",
"hazelnut varieties",
"bigtransfer",
"r101",
"r152 × 4 models",
"deep learning models",
"big transfer",
"classification",
"the classification task",
"3627 nut images",
"a remarkable accuracy",
"99.49%",
"the bit-m",
"r152",
"4 model",
"these innovative methods",
"patentable products",
"devices",
"various industries",
"the economic value",
"the country",
"three",
"van",
"1165",
"1324",
"1138",
"bit)-m",
"× 4",
"3627",
"99.49%",
"× 4"
] |
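The hazelnut record above fine-tunes BigTransfer (BiT-M) models. Below is a minimal fine-tuning sketch for BiT-M R50x1 on the three varieties (Giresun, Ordu, Van), assuming a timm build that ships the BiT-M weights under the name used here; the dummy batch, learning rate, and input size are placeholders, not the paper's schedule.

```python
import timm
import torch

# BiT-M R50x1 with a fresh 3-way head for the three hazelnut varieties.
# Assumption: timm exposes the pretrained weights as "resnetv2_50x1_bitm".
model = timm.create_model("resnetv2_50x1_bitm", pretrained=True, num_classes=3)

optimizer = torch.optim.SGD(model.parameters(), lr=3e-3, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

# Dummy batch standing in for preprocessed hazelnut images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```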
A Novel Cyber Security Model Using Deep Transfer Learning | [
"Ünal Çavuşoğlu",
"Devrim Akgun",
"Selman Hizal"
] | Preventing attackers from interrupting or totally stopping critical services in cloud systems is a vital and challenging task. Today, machine learning-based algorithms and models are widely used, especially for the intelligent detection of zero-day attacks. Recently, deep learning methods that provide automatic feature extraction are designed to detect attacks automatically. In this study, we constructed a new deep learning model based on transfer learning for detecting and protecting cloud systems from malicious attacks. The developed deep transfer learning-based IDS converts network traffic into 2D preprocessed feature maps. Then the feature maps are processed with the transferred and fine-tuned convolutional layers of the deep learning model before the dense layer for detection and classification of traffic data. The results computed using the NSL-KDD test dataset reveal that the developed models achieve 89.74% multiclass and 92.58% binary classification accuracy. We performed another evaluation using only 20% of the training dataset as test data, and 80% for training. In this case, the model achieved 99.83% and 99.85% multiclass and binary classification accuracy, respectively. | 10.1007/s13369-023-08092-1 | a novel cyber security model using deep transfer learning | preventing attackers from interrupting or totally stopping critical services in cloud systems is a vital and challenging task. today, machine learning-based algorithms and models are widely used, especially for the intelligent detection of zero-day attacks. recently, deep learning methods that provide automatic feature extraction are designed to detect attacks automatically. in this study, we constructed a new deep learning model based on transfer learning for detecting and protecting cloud systems from malicious attacks. the developed deep transfer learning-based ids converts network traffic into 2d preprocessed feature maps. then the feature maps are processed with the transferred and fine-tuned convolutional layers of the deep learning model before the dense layer for detection and classification of traffic data. the results computed using the nsl-kdd test dataset reveal that the developed models achieve 89.74% multiclass and 92.58% binary classification accuracy. we performed another evaluation using only 20% of the training dataset as test data, and 80% for training. in this case, the model achieved 99.83% and 99.85% multiclass and binary classification accuracy, respectively. | [
"attackers",
"critical services",
"cloud systems",
"a vital and challenging task",
"machine learning-based algorithms",
"models",
"the intelligent detection",
"zero-day attacks",
"deep learning methods",
"that",
"automatic feature extraction",
"attacks",
"this study",
"we",
"a new deep learning model",
"cloud systems",
"malicious attacks",
"the developed deep transfer learning-based ids converts network traffic",
"the feature maps",
"the transferred and fine-tuned convolutional layers",
"the deep learning model",
"the dense layer",
"detection",
"classification",
"traffic data",
"the results",
"the nsl-kdd test",
"the developed models",
"89.74% multiclass",
"92.58%",
"binary classification accuracy",
"we",
"another evaluation",
"only 20%",
"the training dataset",
"test data",
"80%",
"training",
"this case",
"the model",
"99.83%",
"99.85% multiclass and binary classification accuracy",
"today",
"zero-day",
"2d",
"89.74%",
"92.58%",
"only 20%",
"80%",
"99.83%",
"99.85%"
] |
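The IDS record above converts network traffic into 2D preprocessed feature maps before convolutional layers. The sketch below shows one plausible version of that tabular-to-image step for the 41 NSL-KDD features; the 8x8 layout and the small CNN are assumptions, not the paper's map construction or transferred backbone.

```python
import torch
import torch.nn as nn

# Pad the 41 NSL-KDD features to 64 values and reshape to an 8x8
# single-channel "feature map" so a convolutional backbone can process it.
def to_feature_map(batch):                    # batch: (n, 41)
    padded = nn.functional.pad(batch, (0, 64 - batch.shape[1]))
    return padded.view(-1, 1, 8, 8)

# Small stand-in for the transferred and fine-tuned convolutional layers.
classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 2),                 # binary: normal vs. attack
)

records = torch.randn(16, 41)                 # preprocessed NSL-KDD rows
logits = classifier(to_feature_map(records))
```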
Software fault prediction using deep learning techniques | [
"Iqra Batool",
"Tamim Ahmed Khan"
] | Software fault prediction (SFP) techniques identify faults at the early stages of the software development life cycle (SDLC). We find machine learning techniques commonly used for SFP compared to deep learning methods, which can produce more accurate results. Deep learning offers exceptional results in various domains, such as computer vision, natural language processing, and speech recognition. In this study, we use three deep learning methods, namely, long short-term memory (LSTM), bidirectional LSTM (BILSTM), and radial basis function network (RBFN) to predict software faults and compare our results with existing models to show how our results are more accurate. Our study uses Chidamber and Kemerer (CK) metrics-based datasets to conduct experiments and test our proposed algorithm. We conclude that LSTM and BILSTM perform better, whereas RBFN is faster in producing the required results. We use k-fold cross-validation to do the model evaluation. Our proposed models provide software developers with a more accurate and efficient SFP mechanism. | 10.1007/s11219-023-09642-4 | software fault prediction using deep learning techniques | software fault prediction (sfp) techniques identify faults at the early stages of the software development life cycle (sdlc). we find machine learning techniques commonly used for sfp compared to deep learning methods, which can produce more accurate results. deep learning offers exceptional results in various domains, such as computer vision, natural language processing, and speech recognition. in this study, we use three deep learning methods, namely, long short-term memory (lstm), bidirectional lstm (bilstm), and radial basis function network (rbfn) to predict software faults and compare our results with existing models to show how our results are more accurate. our study uses chidamber and kemerer (ck) metrics-based datasets to conduct experiments and test our proposed algorithm. we conclude that lstm and bilstm perform better, whereas rbfn is faster in producing the required results. we use k-fold cross-validation to do the model evaluation. our proposed models provide software developers with a more accurate and efficient sfp mechanism. | [
"software fault prediction",
"(sfp) techniques",
"faults",
"the early stages",
"the software development life cycle",
"sdlc",
"we",
"techniques",
"sfp",
"deep learning methods",
"which",
"more accurate results",
"deep learning",
"exceptional results",
"various domains",
"computer vision",
"natural language processing",
"speech recognition",
"this study",
"we",
"three deep learning methods",
"namely, long short-term memory",
"lstm",
"bidirectional lstm",
"bilstm",
"radial basis function network",
"rbfn",
"software faults",
"our results",
"existing models",
"our results",
"our study",
"chidamber",
"kemerer (ck) metrics-based datasets",
"experiments",
"our proposed algorithm",
"we",
"lstm",
"bilstm",
"rbfn",
"the required results",
"we",
"fold cross",
"-",
"validation",
"the model evaluation",
"our proposed models",
"software developers",
"a more accurate and efficient sfp mechanism",
"three"
] |
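The fault-prediction record above trains LSTM variants on CK metrics-based datasets. A minimal BiLSTM sketch follows, treating the six classic CK metrics of a module as a short sequence; this sequence framing and all layer sizes are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FaultBiLSTM(nn.Module):
    """Sketch of the BiLSTM variant: the six CK metrics of a module
    (WMC, DIT, NOC, CBO, RFC, LCOM) form a length-6 sequence of scalars,
    and the last bidirectional state is classified as faulty or not."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)   # faulty / not faulty

    def forward(self, x):                      # x: (batch, 6, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # last time step

metrics = torch.randn(32, 6, 1)                # one row of CK metrics per module
logits = FaultBiLSTM()(metrics)
```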
Software fault prediction using deep learning techniques | [
"Iqra Batool",
"Tamim Ahmed Khan"
] | Software fault prediction (SFP) techniques identify faults at the early stages of the software development life cycle (SDLC). We find machine learning techniques commonly used for SFP compared to deep learning methods, which can produce more accurate results. Deep learning offers exceptional results in various domains, such as computer vision, natural language processing, and speech recognition. In this study, we use three deep learning methods, namely, long short-term memory (LSTM), bidirectional LSTM (BILSTM), and radial basis function network (RBFN) to predict software faults and compare our results with existing models to show how our results are more accurate. Our study uses Chidamber and Kemerer (CK) metrics-based datasets to conduct experiments and test our proposed algorithm. We conclude that LSTM and BILSTM perform better, whereas RBFN is faster in producing the required results. We use k-fold cross-validation to do the model evaluation. Our proposed models provide software developers with a more accurate and efficient SFP mechanism. | 10.1007/s11219-023-09642-4 | software fault prediction using deep learning techniques | software fault prediction (sfp) techniques identify faults at the early stages of the software development life cycle (sdlc). we find machine learning techniques commonly used for sfp compared to deep learning methods, which can produce more accurate results. deep learning offers exceptional results in various domains, such as computer vision, natural language processing, and speech recognition. in this study, we use three deep learning methods, namely, long short-term memory (lstm), bidirectional lstm (bilstm), and radial basis function network (rbfn) to predict software faults and compare our results with existing models to show how our results are more accurate. our study uses chidamber and kemerer (ck) metrics-based datasets to conduct experiments and test our proposed algorithm. we conclude that lstm and bilstm perform better, whereas rbfn is faster in producing the required results. we use k-fold cross-validation to do the model evaluation. our proposed models provide software developers with a more accurate and efficient sfp mechanism. | [
"software fault prediction",
"(sfp) techniques",
"faults",
"the early stages",
"the software development life cycle",
"sdlc",
"we",
"techniques",
"sfp",
"deep learning methods",
"which",
"more accurate results",
"deep learning",
"exceptional results",
"various domains",
"computer vision",
"natural language processing",
"speech recognition",
"this study",
"we",
"three deep learning methods",
"namely, long short-term memory",
"lstm",
"bidirectional lstm",
"bilstm",
"radial basis function network",
"rbfn",
"software faults",
"our results",
"existing models",
"our results",
"our study",
"chidamber",
"kemerer (ck) metrics-based datasets",
"experiments",
"our proposed algorithm",
"we",
"lstm",
"bilstm",
"rbfn",
"the required results",
"we",
"fold cross",
"-",
"validation",
"the model evaluation",
"our proposed models",
"software developers",
"a more accurate and efficient sfp mechanism",
"three"
] |
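The record above names LSTM/BiLSTM classifiers evaluated with k-fold cross-validation on CK-metric data. A minimal sketch of that setup, assuming synthetic CK-style features and arbitrary hyperparameters (the study's actual data and architecture are not reproduced here):

```python
# Illustrative LSTM fault predictor with 5-fold cross-validation.
# The 6-metric feature matrix and labels are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import KFold
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6)).astype("float32")   # 6 CK metrics per module
y = (X[:, 0] + X[:, 3] > 0.5).astype("float32")   # synthetic fault labels
X_seq = X[:, :, None]                             # treat metrics as a length-6 sequence

def build_model():
    model = keras.Sequential([
        keras.layers.Input(shape=(6, 1)),
        keras.layers.LSTM(32),   # wrap in keras.layers.Bidirectional(...) for BiLSTM
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

accs = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_seq):
    model = build_model()
    model.fit(X_seq[train_idx], y[train_idx], epochs=10, batch_size=32, verbose=0)
    _, acc = model.evaluate(X_seq[test_idx], y[test_idx], verbose=0)
    accs.append(acc)

print(f"5-fold mean accuracy: {np.mean(accs):.3f}")
```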
Imbalcbl: addressing deep learning challenges with small and imbalanced datasets | [
"Saqib ul Sabha",
"Assif Assad",
"Sadaf Shafi",
"Nusrat Mohi Ud Din",
"Rayees Ahmad Dar",
"Muzafar Rasool Bhat"
] | Deep learning, while transformative for computer vision, frequently falters when confronted with small and imbalanced datasets. Despite substantial progress in this domain, prevailing models often underachieve under these constraints. Addressing this, we introduce an innovative contrast-based learning strategy for small and imbalanced data that significantly bolsters the proficiency of deep learning architectures on these challenging datasets. By ingeniously concatenating training images, the effective training dataset expands from n to \(n^2\), affording richer data for model training, even when n is very small. Remarkably, our solution remains indifferent to specific loss functions or network architectures, endorsing its adaptability for diverse classification scenarios. Rigorously benchmarked against four benchmark datasets, our approach was juxtaposed with state-of-the-art oversampling paradigms. The empirical evidence underscores our method’s superior efficacy, outshining contemporaries across metrics like Balanced accuracy, F1 score, and Geometric mean. Noteworthy increments include 7–16% on the Covid-19 dataset, 4–20% for Honey bees, 1–6% on CIFAR-10, and 1–9% on FashionMNIST. In essence, our proposed method offers a potent remedy for the perennial issues stemming from scanty and skewed data in deep learning. | 10.1007/s13198-024-02346-3 | imbalcbl: addressing deep learning challenges with small and imbalanced datasets | deep learning, while transformative for computer vision, frequently falters when confronted with small and imbalanced datasets. despite substantial progress in this domain, prevailing models often underachieve under these constraints. addressing this, we introduce an innovative contrast-based learning strategy for small and imbalanced data that significantly bolsters the proficiency of deep learning architectures on these challenging datasets. by ingeniously concatenating training images, the effective training dataset expands from n to \(n^2\), affording richer data for model training, even when n is very small. remarkably, our solution remains indifferent to specific loss functions or network architectures, endorsing its adaptability for diverse classification scenarios. rigorously benchmarked against four benchmark datasets, our approach was juxtaposed with state-of-the-art oversampling paradigms. the empirical evidence underscores our method’s superior efficacy, outshining contemporaries across metrics like balanced accuracy, f1 score, and geometric mean. noteworthy increments include 7–16% on the covid-19 dataset, 4–20% for honey bees, 1–6% on cifar-10, and 1–9% on fashionmnist. in essence, our proposed method offers a potent remedy for the perennial issues stemming from scanty and skewed data in deep learning. | [
"deep learning",
"computer vision",
"small and imbalanced datasets",
"substantial progress",
"this domain",
"prevailing models",
"these constraints",
"this",
"we",
"an innovative contrast-based learning strategy",
"small and imbalanced data",
"that",
"the proficiency",
"deep learning architectures",
"these challenging datasets",
"training images",
"the effective training dataset",
"n",
"\\(n^2\\",
"richer data",
"model training",
"n",
"our solution",
"specific loss functions",
"network architectures",
"its adaptability",
"diverse classification scenarios",
"four benchmark datasets",
"our approach",
"the-art",
"the empirical evidence",
"our method’s superior efficacy",
"contemporaries",
"metrics",
"balanced accuracy",
"f1 score",
"geometric mean",
"noteworthy increments",
"7–16%",
"the covid-19 dataset",
"4–20%",
"honey bees",
"1–6%",
"cifar-10",
"1–9%",
"fashionmnist",
"essence",
"our proposed method",
"a potent remedy",
"the perennial issues",
"scanty",
"skewed data",
"deep learning",
"four",
"noteworthy",
"7–16%",
"covid-19",
"4–20%",
"1–6%",
"cifar-10",
"1–9%"
] |
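The Imbalcbl abstract describes growing n training images into n^2 composites by concatenation. A minimal sketch of that expansion step; the side-by-side layout and the pair-label scheme are assumptions, since the abstract does not fix them:

```python
# Pairwise image concatenation: every ordered pair (including self-pairs)
# of n images yields one composite, so n samples become n*n.
import numpy as np

def pairwise_concat(images, labels):
    """images: (n, H, W, C) array -> (n*n, H, 2W, C) composites plus pair labels."""
    n = len(images)
    left = np.repeat(np.arange(n), n)    # 0,0,...,1,1,...
    right = np.tile(np.arange(n), n)     # 0,1,...,0,1,...
    composites = np.concatenate([images[left], images[right]], axis=2)  # side by side
    pair_labels = np.stack([labels[left], labels[right]], axis=1)
    return composites, pair_labels

images = np.zeros((5, 32, 32, 3), dtype=np.float32)
labels = np.array([0, 1, 0, 1, 1])
X, Y = pairwise_concat(images, labels)
print(X.shape, Y.shape)  # (25, 32, 64, 3) (25, 2)
```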
Deep Learning-Based Speed Breaker Detection | [
"Mohamed Anas VT",
"Mohd Omar",
"Jameel Ahamad",
"Khaleel Ahmad",
"Mohd Anas Khan"
] | Traffic operation factors include vehicle and cargo damage, environmental effects, access for emergency vehicles, transit routes, and traffic speeds and volumes. In India, traffic rule violations and high-speed vehicles contribute to a large number of accidents and fatalities on the road. To mitigate this issue, the government has implemented speed breakers as a safety measure. In most cases, speed humps are only advised for use on streets with a speed restriction of 30 mph (50 km/h) or less. However, many speed breakers lack proper signboards, and drivers often do not see them, leading to accidents. In light of this, a speed breaker detection system using deep learning is relevant and necessary in the current scenario. This research proposes a system that uses deep learning image detection algorithms to detect speed breakers in real-time, thereby increasing their visibility and reducing the risk of accidents. The system is trained using an Indian dataset and will be tested in various driving scenarios to evaluate its performance. The results are expected to show that the proposed system will outperform existing methods and have the potential to significantly improve road safety in India. | 10.1007/s42979-024-02891-5 | deep learning-based speed breaker detection | traffic operation factors include vehicle and cargo damage, environmental effects, access for emergency vehicles, transit routes, and traffic speeds and volumes. in india, traffic rule violations and high-speed vehicles contribute to a large number of accidents and fatalities on the road. to mitigate this issue, the government has implemented speed breakers as a safety measure. in most cases, speed humps are only advised for use on streets with a speed restriction of 30 mph (50 km/h) or less. however, many speed breakers lack proper signboards, and drivers often do not see them, leading to accidents. in light of this, a speed breaker detection system using deep learning is relevant and necessary in the current scenario. this research proposes a system that uses deep learning image detection algorithms to detect speed breakers in real-time, thereby increasing their visibility and reducing the risk of accidents. the system is trained using an indian dataset and will be tested in various driving scenarios to evaluate its performance. the results are expected to show that the proposed system will outperform existing methods and have the potential to significantly improve road safety in india. | [
"traffic operation factors",
"vehicle and cargo damage",
"environmental effects",
"access",
"emergency vehicles",
"transit routes",
"traffic speeds",
"volumes",
"india",
"traffic rule violations",
"high-speed vehicles",
"a large number",
"accidents",
"fatalities",
"the road",
"this issue",
"the government",
"speed breakers",
"a safety measure",
"most cases",
"speed humps",
"use",
"streets",
"a speed restriction",
"30 mph",
"50 km/h",
"many speed breakers",
"proper signboards",
"drivers",
"them",
"accidents",
"light",
"this",
"a speed breaker detection system",
"deep learning",
"the current scenario",
"this research",
"a system",
"that",
"deep learning image detection algorithms",
"speed breakers",
"real-time",
"their visibility",
"the risk",
"accidents",
"the system",
"an indian dataset",
"various driving scenarios",
"its performance",
"the results",
"the proposed system",
"existing methods",
"the potential",
"road safety",
"india",
"india",
"30 mph",
"50 km",
"indian",
"india"
] |
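The speed-breaker abstract describes real-time detection but does not name a detector. A hedged sketch of the per-frame inference step, with a generic pretrained torchvision Faster R-CNN standing in; a model fine-tuned on an Indian speed-breaker dataset would replace it in practice:

```python
# Per-frame detection loop sketch; the pretrained COCO detector is a
# placeholder for a speed-breaker-specific model.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

weights = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights).eval()

def detect(frame: Image.Image, score_threshold: float = 0.5):
    """Return boxes and scores above the threshold for one video frame."""
    with torch.no_grad():
        output = model([to_tensor(frame)])[0]
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["scores"][keep]

boxes, scores = detect(Image.new("RGB", (640, 480)))
print(len(boxes), "detections")
```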
Manifold learning by a deep Gaussian process autoencoder | [
"Francesco Camastra",
"Angelo Casolaro",
"Gennaro Iannuzzo"
] | The paper presents a novel manifold learning algorithm, the deep Gaussian process autoencoder (DGPA), based on deep Gaussian processes. The deep Gaussian process autoencoder algorithm has two main characteristics: the former is a bottleneck structure borrowed from variational autoencoders, and the latter is based on the so-called doubly stochastic variational inference (DSVI) architecture for deep Gaussian processes. The main novelties of the paper consist in the DGPA algorithm and the experimental protocol for evaluating it. In fact, to the best of our knowledge, deep Gaussian processes algorithms have not been applied to manifold learning yet. Besides, an experimental protocol is introduced, the so-called manifold learning performance protocol (MLPP), to quantitatively compare the preserved geometric properties of manifold learning projections of the proposed deep Gaussian process autoencoder with the ones of state-of-the-art manifold learning algorithms. Extensive experimental tests on eleven synthetic and five real datasets show that the deep Gaussian process autoencoder compares favorably with the other manifold learning competitors. | 10.1007/s00521-023-08536-7 | manifold learning by a deep gaussian process autoencoder | the paper presents a novel manifold learning algorithm, the deep gaussian process autoencoder (dgpa), based on deep gaussian processes. the deep gaussian process autoencoder algorithm has two main characteristics: the former is a bottleneck structure borrowed from variational autoencoders, and the latter is based on the so-called doubly stochastic variational inference (dsvi) architecture for deep gaussian processes. the main novelties of the paper consist in the dgpa algorithm and the experimental protocol for evaluating it. in fact, to the best of our knowledge, deep gaussian processes algorithms have not been applied to manifold learning yet. besides, an experimental protocol is introduced, the so-called manifold learning performance protocol (mlpp), to quantitatively compare the preserved geometric properties of manifold learning projections of the proposed deep gaussian process autoencoder with the ones of state-of-the-art manifold learning algorithms. extensive experimental tests on eleven synthetic and five real datasets show that the deep gaussian process autoencoder compares favorably with the other manifold learning competitors. | [
"the paper",
"a novel manifold learning algorithm",
"the deep gaussian process autoencoder",
"dpga",
"deep gaussian processes",
"deep gaussian process",
"autoencoder algorithm",
"the following two main characteristics",
"a bottleneck structure",
"variational autoencoders",
"the so-called doubly stochastic variational inference",
"deep gaussian processes architecture",
"the main novelties",
"the paper consist",
"dgpa algorithm",
"the experimental protocol",
"it",
"fact",
"our knowledge",
"deep gaussian processes algorithms",
"manifold learning",
"an experimental protocol",
"mlpp",
"the geometric preserved properties",
"manifold learning projections",
"the proposed deep gaussian process",
"the ones",
"the-art",
"extensive experimental tests",
"eleven synthetic",
"five real datasets",
"deep gaussian process autoencoder",
"the other manifold learning competitors",
"deep gaussian",
"two",
"deep gaussian",
"eleven",
"five",
"deep gaussian"
] |
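The DGPA abstract leans on doubly stochastic variational inference (DSVI). For reference, a minimal sketch of the standard DSVI evidence lower bound for an L-layer deep GP (Salimbeni and Deisenroth, 2017), on which the DGPA builds; any autoencoder-specific reconstruction terms of the paper are not shown and would be assumptions:

```latex
% DSVI ELBO for an L-layer deep Gaussian process with inducing
% variables u^l at each layer; f_i^L is the final-layer function
% value for input i. The DGPA's own objective may add terms.
\mathcal{L} \;=\; \sum_{i=1}^{N} \mathbb{E}_{q\left(f_i^{L}\right)}\!\left[ \log p\!\left( y_i \mid f_i^{L} \right) \right]
\;-\; \sum_{l=1}^{L} \mathrm{KL}\!\left[ \, q\!\left(\mathbf{u}^{l}\right) \,\middle\|\, p\!\left(\mathbf{u}^{l}\right) \right]
```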
A survey of deep learning-based 3D shape generation | [
"Qun-Ce Xu",
"Tai-Jiang Mu",
"Yong-Liang Yang"
] | Deep learning has been successfully used for tasks in the 2D image domain. Research on 3D computer vision and deep geometry learning has also attracted attention. Considerable achievements have been made regarding feature extraction and discrimination of 3D shapes. Following recent advances in deep generative models such as generative adversarial networks, effective generation of 3D shapes has become an active research topic. Unlike 2D images with a regular grid structure, 3D shapes have various representations, such as voxels, point clouds, meshes, and implicit functions. For deep learning of 3D shapes, shape representation has to be taken into account as there is no unified representation that can cover all tasks well. Factors such as the representativeness of geometry and topology often largely affect the quality of the generated 3D shapes. In this survey, we comprehensively review works on deep-learning-based 3D shape generation by classifying and discussing them in terms of the underlying shape representation and the architecture of the shape generator. The advantages and disadvantages of each class are further analyzed. We also consider the 3D shape datasets commonly used for shape generation. Finally, we present several potential research directions that hopefully can inspire future works on this topic. | 10.1007/s41095-022-0321-5 | a survey of deep learning-based 3d shape generation | deep learning has been successfully used for tasks in the 2d image domain. research on 3d computer vision and deep geometry learning has also attracted attention. considerable achievements have been made regarding feature extraction and discrimination of 3d shapes. following recent advances in deep generative models such as generative adversarial networks, effective generation of 3d shapes has become an active research topic. unlike 2d images with a regular grid structure, 3d shapes have various representations, such as voxels, point clouds, meshes, and implicit functions. for deep learning of 3d shapes, shape representation has to be taken into account as there is no unified representation that can cover all tasks well. factors such as the representativeness of geometry and topology often largely affect the quality of the generated 3d shapes. in this survey, we comprehensively review works on deep-learning-based 3d shape generation by classifying and discussing them in terms of the underlying shape representation and the architecture of the shape generator. the advantages and disadvantages of each class are further analyzed. we also consider the 3d shape datasets commonly used for shape generation. finally, we present several potential research directions that hopefully can inspire future works on this topic. | [
"deep learning",
"tasks",
"the 2d image domain",
"research",
"3d computer vision",
"deep geometry learning",
"attention",
"considerable achievements",
"feature extraction",
"discrimination",
"3d shapes",
"recent advances",
"deep generative models",
"generative adversarial networks",
"effective generation",
"3d shapes",
"an active research topic",
"2d images",
"a regular grid structure",
"3d shapes",
"various representations",
"voxels",
"point clouds",
"meshes",
"implicit functions",
"deep learning",
"3d shapes",
"shape representation",
"account",
"no unified representation",
"that",
"all tasks",
"factors",
"the representativeness",
"geometry",
"topology",
"the quality",
"the generated 3d shapes",
"this survey",
"we",
"deep-learning-based 3d shape generation",
"them",
"terms",
"the underlying shape representation",
"the architecture",
"the shape generator",
"the advantages",
"disadvantages",
"each class",
"we",
"the 3d shape datasets",
"shape generation",
"we",
"several potential research directions",
"that",
"future works",
"this topic",
"2d",
"3d",
"3d",
"3d",
"2d",
"3d",
"3d",
"3d",
"3d",
"3d"
] |
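The survey above contrasts shape representations (voxels, point clouds, meshes, implicit functions). A small sketch of moving between two of them, voxelizing a point cloud on a fixed grid; the resolution and unit-cube normalization are arbitrary illustrative choices:

```python
# Convert a point cloud to a boolean voxel occupancy grid.
import numpy as np

def voxelize(points: np.ndarray, resolution: int = 32) -> np.ndarray:
    """points: (N, 3) float array -> (R, R, R) boolean occupancy grid."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    normalized = (points - mins) / np.maximum(maxs - mins, 1e-9)  # into [0, 1]^3
    idx = np.clip((normalized * resolution).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

cloud = np.random.default_rng(0).uniform(-1, 1, size=(2048, 3))
print(voxelize(cloud).sum(), "occupied voxels")
```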
Accelerated cardiac magnetic resonance imaging using deep learning for volumetric assessment in children | [
"Melina Koechli",
"Fraser M. Callaghan",
"Barbara E. U. Burkhardt",
"Maélène Lohézic",
"Xucheng Zhu",
"Beate Rücker",
"Emanuela R. Valsangiacomo Buechel",
"Christian J. Kellenberger",
"Julia Geiger"
] | Background: Ventricular volumetry using a short-axis stack of two-dimensional (2-D) cine balanced steady-state free precession (bSSFP) sequences is crucial in any cardiac magnetic resonance imaging (MRI) examination. This task becomes particularly challenging in children due to multiple breath-holds. Objective: To assess the diagnostic performance of accelerated 3-RR cine MRI sequences using deep learning reconstruction compared with standard 2-D cine bSSFP sequences. Material and methods: Twenty-nine consecutive patients (mean age 11 ± 5, median 12, range 1–17 years) undergoing cardiac MRI were scanned with a conventional segmented 2-D cine and a deep learning accelerated cine (three heartbeats) acquisition on a 1.5-tesla scanner. Short-axis volumetrics were performed (semi-)automatically in both datasets retrospectively by two experienced readers who visually assessed image quality employing a 4-point grading scale. Scan times and image quality were compared using the Wilcoxon rank-sum test. Volumetrics were assessed with linear regression and Bland–Altman analyses, and measurement agreement with intraclass correlation coefficient (ICC). Results: Mean acquisition time was significantly reduced with the 3-RR deep learning cine compared to the standard cine sequence (45.5 ± 13.8 s vs. 218.3 ± 44.8 s; P < 0.001). No significant differences in biventricular volumetrics were found. Left ventricular (LV) mass was increased in the deep learning cine compared with the standard cine sequence (71.4 ± 33.1 g vs. 69.9 ± 32.5 g; P < 0.05). All volumetric measurements had an excellent agreement with ICC > 0.9 except for ejection fraction (EF) (LVEF 0.81, RVEF 0.73). The image quality of deep learning cine images was decreased for end-diastolic and end-systolic contours, papillary muscles, and valve depiction (2.9 ± 0.5 vs. 3.5 ± 0.4; P < 0.05). Conclusion: Deep learning cine volumetrics did not differ significantly from standard cine results except for LV mass, which was slightly overestimated with deep learning cine. Deep learning cine sequences result in a significant reduction in scan time with only slightly lower image quality. Graphical Abstract | 10.1007/s00247-024-05978-6 | accelerated cardiac magnetic resonance imaging using deep learning for volumetric assessment in children | background: ventricular volumetry using a short-axis stack of two-dimensional (2-d) cine balanced steady-state free precession (bssfp) sequences is crucial in any cardiac magnetic resonance imaging (mri) examination. this task becomes particularly challenging in children due to multiple breath-holds. objective: to assess the diagnostic performance of accelerated 3-rr cine mri sequences using deep learning reconstruction compared with standard 2-d cine bssfp sequences. material and methods: twenty-nine consecutive patients (mean age 11 ± 5, median 12, range 1–17 years) undergoing cardiac mri were scanned with a conventional segmented 2-d cine and a deep learning accelerated cine (three heartbeats) acquisition on a 1.5-tesla scanner. short-axis volumetrics were performed (semi-)automatically in both datasets retrospectively by two experienced readers who visually assessed image quality employing a 4-point grading scale. scan times and image quality were compared using the wilcoxon rank-sum test. 
volumetrics were assessed with linear regression and bland–altman analyses, and measurement agreement with intraclass correlation coefficient (icc). results: mean acquisition time was significantly reduced with the 3-rr deep learning cine compared to the standard cine sequence (45.5 ± 13.8 s vs. 218.3 ± 44.8 s; p < 0.001). no significant differences in biventricular volumetrics were found. left ventricular (lv) mass was increased in the deep learning cine compared with the standard cine sequence (71.4 ± 33.1 g vs. 69.9 ± 32.5 g; p < 0.05). all volumetric measurements had an excellent agreement with icc > 0.9 except for ejection fraction (ef) (lvef 0.81, rvef 0.73). the image quality of deep learning cine images was decreased for end-diastolic and end-systolic contours, papillary muscles, and valve depiction (2.9 ± 0.5 vs. 3.5 ± 0.4; p < 0.05). conclusion: deep learning cine volumetrics did not differ significantly from standard cine results except for lv mass, which was slightly overestimated with deep learning cine. deep learning cine sequences result in a significant reduction in scan time with only slightly lower image quality. graphical abstract | [
"backgroundventricular volumetry",
"a short-axis stack",
"two-dimensional (d) cine",
"steady-state free precession (bssfp) sequences",
"imaging",
"this task",
"children",
"multiple breath-holds.objectiveto",
"the diagnostic performance",
"accelerated 3-rr cine mri sequences",
"deep learning reconstruction",
"standard 2-d cine bssfp sequences.material and methodstwenty-nine consecutive patients",
"(mean age",
"median",
"range 1–17 years",
"cardiac mri",
"a conventional segmented 2-d cine",
"a deep learning",
"cine",
"(three heartbeats) acquisition",
"a 1.5-tesla scanner",
"short-axis volumetrics",
"both datasets",
"two experienced readers",
"who",
"image quality",
"a 4-point grading scale",
"scan times",
"image quality",
"the wilcoxon rank-sum test",
"volumetrics",
"linear regression",
"bland–altman analyses",
"measurement agreement",
"intraclass correlation",
"coefficient",
"icc).resultsmean acquisition time",
"the 3-rr deep learning cine",
"the standard cine sequence",
"45.5 ± 13.8 s",
"218.3 ±",
"44.8 s",
"no significant differences",
"biventricular volumetrics",
"ventricular (lv) mass",
"the deep learning cine",
"the standard cine sequence",
"33.1 g",
"69.9 ±",
"p",
"all volumetric measurements",
"an excellent agreement",
"icc",
"ejection fraction",
"ef",
"lvef",
"rvef",
"the image quality",
"deep learning cine images",
"end-diastolic and end-systolic contours",
"papillary muscles",
"valve depiction",
"2.9 ±",
"3.5 ±",
"cine volumetrics",
"standard cine results",
"lv mass",
"which",
"deep learning cine",
"cine sequences",
"a significant reduction",
"scan time",
"only slightly lower image",
"backgroundventricular volumetry",
"two",
"breath-holds.objectiveto",
"3",
"2",
"age 11 ± 5",
"12",
"1–17 years",
"2",
"three",
"1.5",
"two",
"4",
"scan times",
"3",
"218.3",
"44.8",
"p < 0.001",
"ventricular",
"33.1",
"69.9",
"32.5",
"0.9",
"0.81",
"0.73",
"2.9 ± 0.5",
"3.5",
"0.4"
] |
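The cardiac MRI record reports Bland–Altman agreement and rank-sum comparisons. A minimal sketch of both statistics on synthetic paired volume data (the study's measurements are not reproduced):

```python
# Bland-Altman bias with 95% limits of agreement, plus a Wilcoxon
# rank-sum comparison as used for scan times. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
standard = rng.normal(100.0, 15.0, size=30)           # e.g. LV volumes, standard cine
deep_learning = standard + rng.normal(1.5, 4.0, 30)   # accelerated cine (synthetic)

diff = deep_learning - standard
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"bias {bias:.2f}, limits of agreement "
      f"[{bias - half_width:.2f}, {bias + half_width:.2f}]")

stat, p = stats.ranksums(deep_learning, standard)     # Wilcoxon rank-sum test
print(f"rank-sum p = {p:.3f}")
```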
Comparative Analysis of Diabetic Retinopathy Classification Approaches Using Machine Learning and Deep Learning Techniques | [
"Ruchika Bala",
"Arun Sharma",
"Nidhi Goel"
] | Diabetic retinopathy (DR) is an eye disease caused by an excess of sugar in the retinal blood vessels, which obstructs vision. Regular and timely diagnosis can prevent the severity of diabetic retinopathy at an initial stage. Manual diagnosis of diabetic retinopathy is time-consuming, and thus a plethora of work has been done by researchers to automate the classification of diabetic retinopathy using machine learning and deep learning techniques. The present review pivots around the research papers covering recent and effective automated DR classification techniques from 2011 to 2022. A comparative analysis of these papers highlights the summary of DR classification datasets, pre-processing techniques, various advanced classification algorithms, and their performance. Along with the summary and analysis of the classification techniques, the present paper demonstrates the experimentation of eight pre-trained convolution neural network models on DR benchmark classification datasets. This is to help researchers choose the best pre-trained classification model for the corresponding DR dataset. The use of deep learning and machine learning algorithms demonstrated excellent performance; however, researchers still need to explore the design constraints of the classification models to have effective results. Attention mechanisms and vision transformers are recent breakthroughs that can be used to solve classification challenges. The objective of this article is to provide a single platform for researchers to access state-of-the-art work in the classification of diabetic retinopathy, the results of various pre-trained models on DR benchmark classification datasets, and future research prospects for the researchers working in this challenging area. | 10.1007/s11831-023-10002-5 | comparative analysis of diabetic retinopathy classification approaches using machine learning and deep learning techniques | diabetic retinopathy (dr) is an eye disease caused by an excess of sugar in the retinal blood vessels, which obstructs vision. regular and timely diagnosis can prevent the severity of diabetic retinopathy at an initial stage. manual diagnosis of diabetic retinopathy is time-consuming, and thus a plethora of work has been done by researchers to automate the classification of diabetic retinopathy using machine learning and deep learning techniques. the present review pivots around the research papers covering recent and effective automated dr classification techniques from 2011 to 2022. a comparative analysis of these papers highlights the summary of dr classification datasets, pre-processing techniques, various advanced classification algorithms, and their performance. along with the summary and analysis of the classification techniques, the present paper demonstrates the experimentation of eight pre-trained convolution neural network models on dr benchmark classification datasets. this is to help researchers choose the best pre-trained classification model for the corresponding dr dataset. the use of deep learning and machine learning algorithms demonstrated excellent performance; however, researchers still need to explore the design constraints of the classification models to have effective results. attention mechanisms and vision transformers are recent breakthroughs that can be used to solve classification challenges. 
the objective of this article is to provide a single platform for researchers to access state-of-the-art work in the classification of diabetic retinopathy, the results of various pre-trained models on dr benchmark classification datasets, and future research prospects for the researchers working in this challenging area. | [
"diabetic retinopathy",
"dr",
"an eye disease",
"excess",
"sugar",
"retinal blood vessels",
"vision",
"regular and timely diagnosis",
"the severity",
"diabetic retinopathy",
"an initial stage",
"manual diagnosis",
"diabetic retinopathy",
"a plethora",
"work",
"researchers",
"the classification",
"machine learning",
"deep learning techniques",
"the present review pivots",
"the research papers",
"recent and effective automated dr classification techniques",
"a comparative analysis",
"these papers",
"the summary",
"dr classification datasets",
"pre-processing techniques",
"various advanced classification algorithms",
"their performance",
"the summary",
"analysis",
"the classification techniques",
"the present paper",
"the experimentation",
"eight pre-trained convolution neural network models",
"dr benchmark classification datasets",
"this",
"the researchers",
"the best pre-trained classification model",
"the corresponding dr dataset",
"the use",
"deep learning and machine learning algorithms",
"excellent performance",
"researchers",
"the design constraints",
"the classification models",
"effective results",
"attention mechanisms",
"vision transformers",
"recent breakthroughs",
"that",
"classification challenges",
"the objective",
"this article",
"a single platform",
"researchers",
"the-art",
"the classification",
"diabetic retinopathy",
"the results",
"various pre-trained models",
"dr benchmark classification datasets",
"future research prospects",
"the researchers",
"this challenging area",
"2011 to 2022",
"eight"
] |
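The review above benchmarks eight pretrained CNNs on DR datasets. A hedged sketch of the usual recipe, freezing a pretrained backbone and retraining only the classifier head; the 5-class head is an assumption (many DR datasets grade severity 0-4), and dataset wiring is omitted:

```python
# Transfer-learning setup for a DR-grade classifier on a pretrained backbone.
import torch
import torchvision

weights = torchvision.models.ResNet50_Weights.DEFAULT
model = torchvision.models.resnet50(weights=weights)
for p in model.parameters():
    p.requires_grad = False                              # keep pretrained features
model.fc = torch.nn.Linear(model.fc.in_features, 5)      # new DR-grade head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                          # stand-in fundus batch
y = torch.randint(0, 5, (4,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```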
Heart disease prediction using machine learning, deep learning and optimization techniques - A semantic review | [
"Girish Shrikrushnarao Bhavekar",
"Agam Das Goswami",
"Chafle Pratiksha Vasantrao",
"Amit K. Gaikwad",
"Amol V. Zade",
"Harsha Vyawahare"
] | Cardiovascular disease is the foremost cause of death worldwide. Heart Disease Prediction (HDP) is a difficult task, as it requires advanced knowledge and extensive experience. Moreover, it encounters numerous significant challenges in clinical data analysis. While many researchers have focused on predicting heart disease, the performance metric, namely prediction accuracy, remains suboptimal. Accurate HDP can help a person avoid life-threatening events, while inaccurate prediction can prove fatal. To address these issues, this review discusses several Deep Learning (DL), Machine Learning (ML), and optimization-based HDP techniques. In recent times, many researchers have been utilizing different DL and ML algorithms to help professionals and the healthcare industry predict heart disease. Further, it discusses various optimization-based algorithms and their performance. Therefore, this review paper suggests that the optimization-based HDP algorithm could assist doctors in predicting the occurrence of heart disease in advance and offering suitable treatment. | 10.1007/s11042-024-19680-0 | heart disease prediction using machine learning, deep learning and optimization techniques - a semantic review | cardiovascular disease is the foremost cause of death worldwide. heart disease prediction (hdp) is a difficult task, as it requires advanced knowledge and extensive experience. moreover, it encounters numerous significant challenges in clinical data analysis. while many researchers have focused on predicting heart disease, the performance metric, namely prediction accuracy, remains suboptimal. accurate hdp can help a person avoid life-threatening events, while inaccurate prediction can prove fatal. to address these issues, this review discusses several deep learning (dl), machine learning (ml), and optimization-based hdp techniques. in recent times, many researchers have been utilizing different dl and ml algorithms to help professionals and the healthcare industry predict heart disease. further, it discusses various optimization-based algorithms and their performance. therefore, this review paper suggests that the optimization-based hdp algorithm could assist doctors in predicting the occurrence of heart disease in advance and offering suitable treatment. | [
"cardiovascular disease",
"the position",
"the foremost cause",
"death",
"heart disease prediction",
"hdp",
"a difficult task",
"it",
"advanced knowledge",
"better experience",
"it",
"numerous significant challenges",
"clinical data analysis",
"many researchers",
"heart disease",
"the performance",
"namely prediction accuracy",
"the accurate hdp",
"the person",
"himself",
"life threats",
"the same time",
"inaccurate prediction",
"these issues",
"this review",
"several deep learning",
"dl",
"machine learning",
"optimization",
"hdp techniques",
"recent times",
"many researchers",
"different dl",
"ml",
"the professionals",
"health care industry",
"the prediction",
"heart disease",
"it",
"various optimization-based algorithms",
"its performance analysis",
"this review paper",
"the optimization-based hdp algorithm",
"doctors",
"the occurrence",
"heart disease",
"advance",
"suitable treatment"
] |
A domain knowledge-based interpretable deep learning system for improving clinical breast ultrasound diagnosis | [
"Lin Yan",
"Zhiying Liang",
"Hao Zhang",
"Gaosong Zhang",
"Weiwei Zheng",
"Chunguang Han",
"Dongsheng Yu",
"Hanqi Zhang",
"Xinxin Xie",
"Chang Liu",
"Wenxin Zhang",
"Hui Zheng",
"Jing Pei",
"Dinggang Shen",
"Xuejun Qian"
] | Background: Though deep learning has consistently demonstrated advantages in the automatic interpretation of breast ultrasound images, its black-box nature hinders potential interactions with radiologists, posing obstacles for clinical deployment. Methods: We proposed a domain knowledge-based interpretable deep learning system for improving breast cancer risk prediction via paired multimodal ultrasound images. The deep learning system was developed on 4320 multimodal breast ultrasound images of 1440 biopsy-confirmed lesions from 1348 prospectively enrolled patients across two hospitals between August 2019 and December 2022. The lesions were allocated to 70% training cohort, 10% validation cohort, and 20% test cohort based on case recruitment date. Results: Here, we show that the interpretable deep learning system can predict breast cancer risk as accurately as experienced radiologists, with an area under the receiver operating characteristic curve of 0.902 (95% confidence interval = 0.882 – 0.921), sensitivity of 75.2%, and specificity of 91.8% on the test cohort. With the aid of the deep learning system, particularly its inherent explainable features, junior radiologists tend to achieve better clinical outcomes, while senior radiologists experience increased confidence levels. Multimodal ultrasound images augmented with domain knowledge-based reasoning cues enable an effective human-machine collaboration at a high level of prediction performance. Conclusions: Such a clinically applicable deep learning system may be incorporated into future breast cancer screening and support assisted or second-read workflows. | 10.1038/s43856-024-00518-7 | a domain knowledge-based interpretable deep learning system for improving clinical breast ultrasound diagnosis | background: though deep learning has consistently demonstrated advantages in the automatic interpretation of breast ultrasound images, its black-box nature hinders potential interactions with radiologists, posing obstacles for clinical deployment. methods: we proposed a domain knowledge-based interpretable deep learning system for improving breast cancer risk prediction via paired multimodal ultrasound images. the deep learning system was developed on 4320 multimodal breast ultrasound images of 1440 biopsy-confirmed lesions from 1348 prospectively enrolled patients across two hospitals between august 2019 and december 2022. the lesions were allocated to 70% training cohort, 10% validation cohort, and 20% test cohort based on case recruitment date. results: here, we show that the interpretable deep learning system can predict breast cancer risk as accurately as experienced radiologists, with an area under the receiver operating characteristic curve of 0.902 (95% confidence interval = 0.882 – 0.921), sensitivity of 75.2%, and specificity of 91.8% on the test cohort. with the aid of the deep learning system, particularly its inherent explainable features, junior radiologists tend to achieve better clinical outcomes, while senior radiologists experience increased confidence levels. multimodal ultrasound images augmented with domain knowledge-based reasoning cues enable an effective human-machine collaboration at a high level of prediction performance. conclusions: such a clinically applicable deep learning system may be incorporated into future breast cancer screening and support assisted or second-read workflows. | [
"deep learning",
"advantages",
"the automatic interpretation",
"breast ultrasound images",
"its black-box nature",
"potential interactions",
"radiologists",
"obstacles",
"clinical deployment.methodswe",
"a domain knowledge-based interpretable deep learning system",
"breast cancer risk prediction",
"paired multimodal ultrasound images",
"the deep learning system",
"4320 multimodal breast ultrasound images",
"1440 biopsy-confirmed lesions",
"prospectively enrolled patients",
"two hospitals",
"august",
"december",
"the lesions",
"70% training cohort",
"10% validation cohort",
"20% test cohort",
"case recruitment",
"we",
"the interpretable deep learning system",
"breast cancer risk",
"experienced radiologists",
"an area",
"the receiver operating characteristic curve",
"(95% confidence interval",
"75.2%",
"91.8%",
"the test cohort",
"the aid",
"the deep learning system",
"particularly its inherent explainable features",
"junior radiologists",
"better clinical outcomes",
"senior radiologists",
"increased confidence levels",
"multimodal ultrasound images",
"domain knowledge-based reasoning cues",
"an effective human-machine collaboration",
"a high level",
"prediction",
"a clinically applicable deep learning system",
"future breast cancer screening",
"support",
"second-read workflows",
"4320",
"1440",
"1348",
"two",
"august 2019",
"december 2022",
"70%",
"10%",
"20%",
"0.902",
"95%",
"0.882",
"0.921",
"75.2%",
"91.8%",
"second"
] |
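The breast-ultrasound record reports AUC, sensitivity, and specificity on a held-out test cohort. A minimal sketch of computing those metrics; the scores and labels are synthetic stand-ins, and the 0.5 threshold is an assumed operating point:

```python
# ROC AUC plus sensitivity/specificity at a fixed decision threshold.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, 200), 0, 1)

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, (y_score >= 0.5).astype(int)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC {auc:.3f}, sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
```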
Deep learning for osteoporosis screening using an anteroposterior hip radiograph image | [
"Artit Boonrod",
"Prarinthorn Piyaprapaphan",
"Nut Kittipongphat",
"Daris Theerakulpisut",
"Arunnit Boonrod"
] | Purpose: Osteoporosis is a common bone disorder characterized by decreased bone mineral density (BMD) and increased bone fragility, which can lead to fractures and eventually cause morbidity and mortality. It is of great concern that the one-year mortality rate for osteoporotic hip fractures could be as high as 22%, regardless of the treatment. Currently, BMD measurement is the standard method for osteoporosis diagnosis, but it is costly and requires special equipment. While a plain radiograph can be obtained more simply and inexpensively, it is not used for diagnosis. Deep learning technologies had been applied to various medical contexts, yet few to osteoporosis unless they were trained on the advanced investigative images, such as computed tomography. The purpose of this study was to develop a deep learning model using the anteroposterior hip radiograph images and measure its diagnostic accuracy for osteoporosis. Methods: We retrospectively collected all anteroposterior hip radiograph images of patients from 2013 to 2021 at a tertiary care hospital. The BMD measurements of the included patients were reviewed, and the radiograph images that had a time interval of more than two years from the measurements were excluded. All images were randomized using a computer-generated unequal allocation into two datasets, i.e., 80% of images were used for the training dataset and the remaining 20% for the test dataset. The T score of BMD obtained from the ipsilateral femoral neck of the same patient closest to the date of the performed radiograph was chosen. The T score cutoff value of − 2.5 was used to diagnose osteoporosis. Five deep learning models were trained on the training dataset, and their diagnostic performances were evaluated using the test dataset. Finally, the best model was determined by the area under the curve (AUC). Results: A total of 363 anteroposterior hip radiograph images were identified. The average time interval between the performed radiograph and the BMD measurement was 6.6 months. Two-hundred-thirteen images were labeled as non-osteoporosis (T score > − 2.5), and the other 150 images as osteoporosis (T score ≤ − 2.5). The best-selected deep learning model achieved an AUC of 0.91 and accuracy of 0.82. Conclusions: This study demonstrates the potential of deep learning for osteoporosis screening using anteroposterior hip radiographs. The results suggest that the deep learning model might potentially be used as a screening tool to find patients at risk for osteoporosis to perform further BMD measurement. | 10.1007/s00590-024-04032-3 | deep learning for osteoporosis screening using an anteroposterior hip radiograph image | purpose: osteoporosis is a common bone disorder characterized by decreased bone mineral density (bmd) and increased bone fragility, which can lead to fractures and eventually cause morbidity and mortality. it is of great concern that the one-year mortality rate for osteoporotic hip fractures could be as high as 22%, regardless of the treatment. currently, bmd measurement is the standard method for osteoporosis diagnosis, but it is costly and requires special equipment. while a plain radiograph can be obtained more simply and inexpensively, it is not used for diagnosis. deep learning technologies had been applied to various medical contexts, yet few to osteoporosis unless they were trained on the advanced investigative images, such as computed tomography. 
the purpose of this study was to develop a deep learning model using the anteroposterior hip radiograph images and measure its diagnostic accuracy for osteoporosis. methods: we retrospectively collected all anteroposterior hip radiograph images of patients from 2013 to 2021 at a tertiary care hospital. the bmd measurements of the included patients were reviewed, and the radiograph images that had a time interval of more than two years from the measurements were excluded. all images were randomized using a computer-generated unequal allocation into two datasets, i.e., 80% of images were used for the training dataset and the remaining 20% for the test dataset. the t score of bmd obtained from the ipsilateral femoral neck of the same patient closest to the date of the performed radiograph was chosen. the t score cutoff value of − 2.5 was used to diagnose osteoporosis. five deep learning models were trained on the training dataset, and their diagnostic performances were evaluated using the test dataset. finally, the best model was determined by the area under the curve (auc). results: a total of 363 anteroposterior hip radiograph images were identified. the average time interval between the performed radiograph and the bmd measurement was 6.6 months. two-hundred-thirteen images were labeled as non-osteoporosis (t score > − 2.5), and the other 150 images as osteoporosis (t score ≤ − 2.5). the best-selected deep learning model achieved an auc of 0.91 and accuracy of 0.82. conclusions: this study demonstrates the potential of deep learning for osteoporosis screening using anteroposterior hip radiographs. the results suggest that the deep learning model might potentially be used as a screening tool to find patients at risk for osteoporosis to perform further bmd measurement. | [
"purposeosteoporosis",
"a common bone disorder",
"decreased bone mineral density",
"bmd",
"increased bone fragility",
"which",
"fractures",
"morbidity",
"mortality",
"it",
"great concern",
"the one-year mortality rate",
"osteoporotic hip fractures",
"22%",
"the treatment",
"bmd measurement",
"the standard method",
"osteoporosis diagnosis",
"it",
"special equipment",
"a plain radiograph",
"it",
"diagnosis",
"deep learning technologies",
"various medical contexts",
"they",
"the advanced investigative images",
"computed tomography",
"the purpose",
"this study",
"a deep learning model",
"the anteroposterior hip radiograph images",
"its diagnostic accuracy",
"osteoporosis.methodswe",
"all anteroposterior hip radiograph images",
"patients",
"a tertiary care hospital",
"the bmd measurements",
"the included patients",
"the radiograph images",
"that",
"a time interval",
"more than two years",
"the measurements",
"all images",
"a computer-generated unequal allocation",
"two datasets",
"80%",
"images",
"the training dataset",
"the remaining 20%",
"the test dataset",
"the t score",
"bmd",
"the ipsilateral femoral neck",
"the same patient",
"the date",
"the performed radiograph",
"the t",
"cutoff value",
"−",
"osteoporosis",
"five deep learning models",
"the training dataset",
"their diagnostic performances",
"the test dataset",
"the best model",
"the area",
"the curves",
"auc).resultsa total",
"363 anteroposterior hip radiograph images",
"the average time interval",
"the performed radiograph",
"the bmd measurement",
"6.6 months",
"two-hundred-thirteen images",
"osteoporosis",
"t",
"−",
"the other 150 images",
"osteoporosis",
"t",
"−",
"the best-selected deep learning model",
"an auc",
"accuracy",
"0.82.conclusionsthis study",
"the potential",
"deep learning",
"anteroposterior hip radiographs",
"the results",
"the deep learning model",
"a screening tool",
"patients",
"risk",
"osteoporosis",
"further bmd measurement",
"one-year",
"22%",
"2013",
"2021",
"tertiary",
"more than two years",
"two",
"80%",
"the remaining 20%",
"− 2.5",
"five",
"363",
"6.6 months",
"two-hundred-thirteen",
"2.5",
"150",
"2.5",
"0.91"
] |
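The osteoporosis record binarizes labels at a T score of − 2.5 and splits images 80/20. A small sketch of that preparation step; the column names are hypothetical, and the stratified split is an addition (the study describes a computer-generated unequal allocation):

```python
# Label hip radiographs by T-score cutoff and make an 80/20 split.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "image_path": [f"hip_{i:03d}.png" for i in range(363)],      # hypothetical paths
    "t_score": np.random.default_rng(3).normal(-1.8, 1.2, 363),  # synthetic T scores
})
df["osteoporosis"] = (df["t_score"] <= -2.5).astype(int)         # cutoff of -2.5

train_df, test_df = train_test_split(
    df, test_size=0.20, stratify=df["osteoporosis"], random_state=0
)
print(len(train_df), "train /", len(test_df), "test;",
      round(df["osteoporosis"].mean(), 3), "positive fraction")
```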
Deep learning-based solution for smart contract vulnerabilities detection | [
"Xueyan Tang",
"Yuying Du",
"Alan Lai",
"Ze Zhang",
"Lingzhi Shi"
] | This paper aims to explore the application of deep learning in smart contract vulnerabilities detection. Smart contracts are an essential part of blockchain technology and are crucial for developing decentralized applications. However, smart contract vulnerabilities can cause financial losses and system crashes. Static analysis tools are frequently used to detect vulnerabilities in smart contracts, but they often result in false positives and false negatives because of their high reliance on predefined rules and lack of semantic analysis capabilities. Furthermore, these predefined rules quickly become obsolete and fail to adapt or generalize to new data. In contrast, deep learning methods do not require predefined detection rules and can learn the features of vulnerabilities during the training process. In this paper, we introduce a solution called Lightning Cat which is based on deep learning techniques. We train three deep learning models for detecting vulnerabilities in smart contract: Optimized-CodeBERT, Optimized-LSTM, and Optimized-CNN. Experimental results show that, in the Lightning Cat we propose, Optimized-CodeBERT model surpasses other methods, achieving an f1-score of 93.53%. To precisely extract vulnerability features, we acquire segments of vulnerable code functions to retain critical vulnerability features. Using the CodeBERT pre-training model for data preprocessing, we could capture the syntax and semantics of the code more accurately. To demonstrate the feasibility of our proposed solution, we evaluate its performance using the SolidiFI-benchmark dataset, which consists of 9369 vulnerable contracts injected with vulnerabilities from seven different types. | 10.1038/s41598-023-47219-0 | deep learning-based solution for smart contract vulnerabilities detection | this paper aims to explore the application of deep learning in smart contract vulnerabilities detection. smart contracts are an essential part of blockchain technology and are crucial for developing decentralized applications. however, smart contract vulnerabilities can cause financial losses and system crashes. static analysis tools are frequently used to detect vulnerabilities in smart contracts, but they often result in false positives and false negatives because of their high reliance on predefined rules and lack of semantic analysis capabilities. furthermore, these predefined rules quickly become obsolete and fail to adapt or generalize to new data. in contrast, deep learning methods do not require predefined detection rules and can learn the features of vulnerabilities during the training process. in this paper, we introduce a solution called lightning cat which is based on deep learning techniques. we train three deep learning models for detecting vulnerabilities in smart contract: optimized-codebert, optimized-lstm, and optimized-cnn. experimental results show that, in the lightning cat we propose, optimized-codebert model surpasses other methods, achieving an f1-score of 93.53%. to precisely extract vulnerability features, we acquire segments of vulnerable code functions to retain critical vulnerability features. using the codebert pre-training model for data preprocessing, we could capture the syntax and semantics of the code more accurately. to demonstrate the feasibility of our proposed solution, we evaluate its performance using the solidifi-benchmark dataset, which consists of 9369 vulnerable contracts injected with vulnerabilities from seven different types. | [
"this paper",
"the application",
"deep learning",
"smart contract vulnerabilities detection",
"smart contracts",
"an essential part",
"blockchain technology",
"decentralized applications",
"smart contract vulnerabilities",
"financial losses",
"system crashes",
"static analysis tools",
"vulnerabilities",
"smart contracts",
"they",
"false positives",
"false negatives",
"their high reliance",
"predefined rules",
"lack",
"semantic analysis capabilities",
"these predefined rules",
"new data",
"contrast",
"deep learning methods",
"predefined detection rules",
"the features",
"vulnerabilities",
"the training process",
"this paper",
"we",
"a solution",
"lightning cat",
"which",
"deep learning techniques",
"we",
"three deep learning models",
"vulnerabilities",
"smart contract",
"optimized-codebert",
"optimized-lstm",
"cnn",
"experimental results",
"the lightning cat",
"we",
"optimized-codebert model",
"other methods",
"an f1-score",
"93.53%",
"vulnerability features",
"we",
"segments",
"vulnerable code functions",
"critical vulnerability features",
"the codebert pre-training model",
"data",
"we",
"the syntax",
"semantics",
"the code",
"the feasibility",
"our proposed solution",
"we",
"its performance",
"the solidifi-benchmark dataset",
"which",
"9369 vulnerable contracts",
"vulnerabilities",
"seven different types",
"three",
"93.53%",
"9369",
"seven"
] |
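The Lightning Cat record fine-tunes CodeBERT for vulnerability classification. A hedged sketch of that setup using the public microsoft/codebert-base checkpoint with a sequence-classification head; the label set and the example snippet are placeholders, not the paper's configuration:

```python
# CodeBERT with a classification head over a Solidity function snippet.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=4  # assumed vulnerability classes
)

snippet = 'function withdraw() public { msg.sender.call{value: balance[msg.sender]}(""); }'
inputs = tokenizer(snippet, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print("predicted class:", int(logits.argmax(dim=-1)))
```

In practice the head would be fine-tuned on labeled vulnerable-function segments before the logits are meaningful.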
Image deep learning in fault diagnosis of mechanical equipment | [
"Chuanhao Wang",
"Yongjian Sun",
"Xiaohong Wang"
] | With the development of industry, more and more critical mechanical machinery generates a widespread demand for effective fault diagnosis to ensure safe operation. Over the past few decades, researchers have explored and developed a variety of approaches. In recent years, fault diagnosis based on deep learning has developed rapidly and has achieved satisfactory results in the field of mechanical equipment fault diagnosis. However, there are few reviews that systematically summarize and sort out these special image deep learning methods. In order to fill this gap, this paper concentrates on comprehensively reviewing the development of special image deep learning for mechanical equipment fault diagnosis over the past 5 years. In general, a typical fault diagnosis based on image deep learning consists of data acquisition, signal processing, model construction, feature learning, and decision-making. Firstly, the method of signal preprocessing is introduced, and several common methods of converting signals into images are briefly compared and analyzed. Then, the principles and variants of deep learning models are expounded. Furthermore, the difficulties and challenges encountered at this stage are summarized. Last but not least, the future development and potential trends of the work are concluded, and it is hoped that this work will facilitate and inspire further exploration for researchers in this area. | 10.1007/s10845-023-02176-3 | image deep learning in fault diagnosis of mechanical equipment | with the development of industry, more and more critical mechanical machinery generates a widespread demand for effective fault diagnosis to ensure safe operation. over the past few decades, researchers have explored and developed a variety of approaches. in recent years, fault diagnosis based on deep learning has developed rapidly and has achieved satisfactory results in the field of mechanical equipment fault diagnosis. however, there are few reviews that systematically summarize and sort out these special image deep learning methods. in order to fill this gap, this paper concentrates on comprehensively reviewing the development of special image deep learning for mechanical equipment fault diagnosis over the past 5 years. in general, a typical fault diagnosis based on image deep learning consists of data acquisition, signal processing, model construction, feature learning, and decision-making. firstly, the method of signal preprocessing is introduced, and several common methods of converting signals into images are briefly compared and analyzed. then, the principles and variants of deep learning models are expounded. furthermore, the difficulties and challenges encountered at this stage are summarized. last but not least, the future development and potential trends of the work are concluded, and it is hoped that this work will facilitate and inspire further exploration for researchers in this area. | [
"the development",
"industry",
"more and more crucial mechanical machinery",
"wildness demand",
"effective fault diagnosis",
"the safe operation",
"the past few decades",
"researchers",
"a variety",
"approaches",
"recent years",
"diagnosis",
"deep learning",
"which",
"satisfied results",
"mechanical equipment fault diagnosis",
"few review",
"these special image deep learning methods",
"order",
"this gap",
"this paper",
"the development",
"special image",
"deep learning",
"mechanical equipment fault diagnosis",
"past 5 years",
"in general, a typical image fault diagnosis",
"fault image",
"deep learning",
"data acquisition",
"signal processing",
"model construction",
"feature learning",
"decision-making",
"the method",
"signal preprocessing",
"several common methods",
"signals",
"images",
"the principles",
"variants",
"deep learning models",
"the difficulties",
"challenges",
"this stage",
"the future development",
"potential trends",
"the work",
"it",
"this work",
"further exploration",
"researchers",
"this area",
"the past few decades",
"recent years",
"past 5 years",
"firstly"
] |
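The fault-diagnosis review surveys methods that convert 1-D signals into images before applying CNNs. A sketch of the simplest such conversion, reshaping a vibration record into a grayscale matrix after min-max scaling; other conversions (Gramian angular fields, spectrograms) follow the same pattern with a different transform:

```python
# Reshape a 1-D vibration signal into an NxN uint8 grayscale image.
import numpy as np

def signal_to_image(signal: np.ndarray, n: int = 64) -> np.ndarray:
    """Take the first n*n samples and return an (n, n) uint8 image."""
    seg = signal[: n * n]
    seg = (seg - seg.min()) / max(seg.max() - seg.min(), 1e-12)  # scale to [0, 1]
    return (seg.reshape(n, n) * 255).astype(np.uint8)

t = np.linspace(0, 1, 8192)
vibration = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.default_rng(4).normal(size=t.size)
img = signal_to_image(vibration)
print(img.shape, img.dtype)  # (64, 64) uint8
```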
A deep learning framework for non-functional requirement classification | [
"Kiramat Rahman",
"Anwar Ghani",
"Sanjay Misra",
"Arif Ur Rahman"
] | Analyzing, identifying, and classifying nonfunctional requirements from requirement documents is time-consuming and challenging. Machine learning-based approaches have been proposed to minimize analysts’ efforts, labor, and stress. However, the traditional approach of supervised machine learning necessitates manual feature extraction, which is time-consuming. This study presents a novel deep-learning framework for NFR classification to overcome these limitations. The framework leverages a more profound architecture that naturally captures feature structures, possesses enhanced representational power, and efficiently captures a broader context than shallower structures. To evaluate the effectiveness of the proposed method, an experiment was conducted on two widely-used datasets, encompassing 914 NFR instances. Performance analysis was performed on the applied models, and the results were evaluated using various metrics. Notably, the DReqANN model outperforms the other models in classifying NFR, achieving precision between 81 and 99.8%, recall between 74 and 89%, and F1-score between 83 and 89%. These significant results highlight the exceptional efficacy of the proposed deep learning framework in addressing NFR classification tasks, showcasing its potential for advancing the field of NFR analysis and classification. | 10.1038/s41598-024-52802-0 | a deep learning framework for non-functional requirement classification | analyzing, identifying, and classifying nonfunctional requirements from requirement documents is time-consuming and challenging. machine learning-based approaches have been proposed to minimize analysts’ efforts, labor, and stress. however, the traditional approach of supervised machine learning necessitates manual feature extraction, which is time-consuming. this study presents a novel deep-learning framework for nfr classification to overcome these limitations. the framework leverages a more profound architecture that naturally captures feature structures, possesses enhanced representational power, and efficiently captures a broader context than shallower structures. to evaluate the effectiveness of the proposed method, an experiment was conducted on two widely-used datasets, encompassing 914 nfr instances. performance analysis was performed on the applied models, and the results were evaluated using various metrics. notably, the dreqann model outperforms the other models in classifying nfr, achieving precision between 81 and 99.8%, recall between 74 and 89%, and f1-score between 83 and 89%. these significant results highlight the exceptional efficacy of the proposed deep learning framework in addressing nfr classification tasks, showcasing its potential for advancing the field of nfr analysis and classification. | [
"nonfunctional requirements",
"requirement documents",
"machine learning-based approaches",
"analysts’ efforts",
"labor",
"stress",
"however, the traditional approach",
"supervised machine learning necessitates manual feature extraction",
"which",
"this study",
"a novel deep-learning framework",
"nfr classification",
"these limitations",
"the framework",
"a more profound architecture",
"that",
"feature structures",
"enhanced representational power",
"a broader context",
"shallower structures",
"the effectiveness",
"the proposed method",
"an experiment",
"two widely-used datasets",
"914 nfr instances",
"performance analysis",
"the applied models",
"the results",
"various metrics",
"the dreqann model",
"the other models",
"precision",
"99.8%",
"74 and 89%",
"83 and 89%",
"these significant results",
"the exceptional efficacy",
"the proposed deep learning framework",
"nfr classification tasks",
"its potential",
"the field",
"nfr analysis",
"classification",
"two",
"914",
"between 81",
"99.8%",
"between 74 and",
"89%",
"89%"
] |
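The DReqANN architecture behind the headline numbers above is not described in the abstract, so the following is only a minimal sketch of a deep NFR classifier in the same spirit: TF-IDF features feeding a small multi-layer network via scikit-learn. The toy requirement sentences, label set, and hyperparameters are all illustrative assumptions, not the paper's setup.

```python
# Hedged sketch only: the paper's DReqANN internals are not given, so this is a
# generic TF-IDF + feed-forward NFR classifier on invented toy requirements.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

texts = [
    "The system shall respond to any query within 2 seconds.",
    "Search results shall be returned in under 500 ms.",
    "All stored passwords must be hashed and salted.",
    "Access to patient records requires two-factor authentication.",
    "The interface shall be usable by first-time visitors without training.",
    "Menus shall be reachable within three clicks from the home page.",
    "The service shall recover from a node failure within 30 seconds.",
    "Scheduled backups shall run nightly without manual intervention.",
]
labels = ["performance", "performance", "security", "security",
          "usability", "usability", "reliability", "reliability"]

X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=0
)

vec = TfidfVectorizer(ngram_range=(1, 2))
X_tr_v, X_te_v = vec.fit_transform(X_tr), vec.transform(X_te)

# Two hidden layers stand in for the "more profound architecture" the abstract mentions.
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=1000, random_state=0)
clf.fit(X_tr_v, y_tr)
print(classification_report(y_te, clf.predict(X_te_v), zero_division=0))
```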
Deep Learning-Enabled Image Classification for the Determination of Aluminum Ions | [
"Ce Wang",
"Zhaoliang Wang",
"Yifei Lu",
"Tingting Hao",
"Yufang Hu",
"Sui Wang",
"Zhiyong Guo"
] | In this work, an image classification based on deep learning for quantitative field determination of aluminum ions (Al3+) was developed. Carbon quantum dots with yellow fluorescence were synthesized by a one-pot hydrothermal method which could specifically recognize Al3+ and produce enhanced green fluorescence. Using the convolutional neural network model in deep learning, an image classification was constructed to classify Al3+ samples at different concentrations. Then, a fitting method for classification information was proposed for the first time which could convert discontinuous, semi-quantitative concentration classification information into continuous, quantitative, and accurate concentration information. Recoveries of 92.0–110.3% in the concentration range of 0.3–320 μM were obtained with a lower limit of detection of 0.3 μM, exhibiting excellent accuracy and sensitivity. It could be completed simply, in 2 min, without requiring large equipment. Thus, the deep learning-enabled image classification paves a new way for the determination of metal ions. | 10.1134/S1061934823110114 | deep learning-enabled image classification for the determination of aluminum ions | in this work, an image classification based on deep learning for quantitative field determination of aluminum ions (al3+) was developed. carbon quantum dots with yellow fluorescence were synthesized by a one-pot hydrothermal method which could specifically recognize al3+ and produce enhanced green fluorescence. using the convolutional neural network model in deep learning, an image classification was constructed to classify al3+ samples at different concentrations. then, a fitting method for classification information was proposed for the first time which could convert discontinuous, semi-quantitative concentration classification information into continuous, quantitative, and accurate concentration information. recoveries of 92.0–110.3% in the concentration range of 0.3–320 μm were obtained with a lower limit of detection of 0.3 μm, exhibiting excellent accuracy and sensitivity. it could be completed simply, in 2 min, without requiring large equipment. thus, the deep learning-enabled image classification paves a new way for the determination of metal ions. | [
"abstractin",
"this work",
"an image classification",
"deep learning",
"quantitative field determination",
"aluminum ions",
"al3",
"+",
"carbon quantum dots",
"yellow fluorescence",
"a one-pot hydrothermal method",
"which",
"al3",
"enhanced green fluorescence",
"the convolutional neural network model",
"deep learning",
"an image classification",
"al3",
"samples",
"different concentrations",
"a fitting method",
"classification information",
"the first time",
"which",
"discontinuous, semi-quantitative concentration classification information",
"continuous, quantitative, and accurate concentration information",
"recoveries",
"92.0–110.3%",
"the concentration range",
"0.3–320 μm",
"a lower limit",
"detection",
"0.3 μm",
"excellent accuracy",
"sensitivity",
"it",
"2 min",
"large equipment",
"the deep learning-enabled image classification",
"a new way",
"the determination",
"metal ions",
"abstractin",
"quantum",
"one",
"first",
"92.0–110.3%",
"0.3",
"2"
] |
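The abstract above does not spell out its "fitting method for classification information"; one plausible reading, sketched below purely as an assumption, is a probability-weighted average over the known class concentrations, taken in log space because the 0.3–320 μM range spans three decades. The class grid and probabilities are invented for illustration.

```python
# Assumed fitting scheme (the paper's exact method is not described): convert
# discrete concentration-class probabilities into one continuous estimate.
import numpy as np

class_concentrations_uM = np.array([0.3, 1, 3, 10, 32, 100, 320])  # assumed class labels

def continuous_estimate(softmax_probs: np.ndarray) -> float:
    """Probability-weighted average of class concentrations, in log space."""
    log_c = np.log10(class_concentrations_uM)
    return float(10 ** np.dot(softmax_probs, log_c))

# Example: a CNN that mostly believes "10 uM" with some mass on the neighbors.
probs = np.array([0.0, 0.05, 0.2, 0.6, 0.15, 0.0, 0.0])
print(f"estimated concentration: {continuous_estimate(probs):.2f} uM")
```

Other interpolation schemes (e.g., fitting a curve through the class scores) would serve the same purpose; the point is only that semi-quantitative class output can be mapped to a quantitative value.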
A comparative analysis of classical machine learning and deep learning techniques for predicting lung cancer survivability | [
"Shigao Huang",
"Ibrahim Arpaci",
"Mostafa Al-Emran",
"Serhat Kılıçarslan",
"Mohammed A. Al-Sharafi"
] | Patient survival rates for lung cancer, one of the deadliest forms of cancer, can improve significantly, by 60–70%, if the disease is detected in its early stages. The prediction of lung cancer patient survival has grown to be a popular area of research among medical and computer science experts. This study aims to predict the survival period of lung cancer patients using 12 demographic and clinical features. This is achieved through a comparative analysis between traditional machine learning and deep learning techniques, deviating from previous studies that primarily used CT or X-ray images. The dataset included 10,001 lung cancer patients, and the data attributes involved gender, age, race, T (tumor size), M (tumor dissemination to other organs), N (lymph node involvement), Chemo, DX-Bone, DX-Brain, DX-Liver, DX-Lung, and survival months. Six supervised machine learning and deep learning techniques were applied, including logistic-regression (Logistic), Bayes classifier (BayesNet), lazy-classifier (LWL), meta-classifier (AttributeSelectedClassifier (ASC)), rule-learner (OneR), decision-tree (J48), and deep neural network (DNN). The findings suggest that DNN surpassed the performance of the six traditional machine learning models in accurately predicting the survival duration of lung cancer patients, achieving an accuracy rate of 88.58%. This evidence is thought to assist healthcare experts in cost management and timely treatment provision. | 10.1007/s11042-023-16349-y | a comparative analysis of classical machine learning and deep learning techniques for predicting lung cancer survivability | patient survival rates for lung cancer, one of the deadliest forms of cancer, can improve significantly, by 60–70%, if the disease is detected in its early stages. the prediction of lung cancer patient survival has grown to be a popular area of research among medical and computer science experts. this study aims to predict the survival period of lung cancer patients using 12 demographic and clinical features. this is achieved through a comparative analysis between traditional machine learning and deep learning techniques, deviating from previous studies that primarily used ct or x-ray images. the dataset included 10,001 lung cancer patients, and the data attributes involved gender, age, race, t (tumor size), m (tumor dissemination to other organs), n (lymph node involvement), chemo, dx-bone, dx-brain, dx-liver, dx-lung, and survival months. six supervised machine learning and deep learning techniques were applied, including logistic-regression (logistic), bayes classifier (bayesnet), lazy-classifier (lwl), meta-classifier (attributeselectedclassifier (asc)), rule-learner (oner), decision-tree (j48), and deep neural network (dnn). the findings suggest that dnn surpassed the performance of the six traditional machine learning models in accurately predicting the survival duration of lung cancer patients, achieving an accuracy rate of 88.58%. this evidence is thought to assist healthcare experts in cost management and timely treatment provision. | [
"lung cancer",
"the deadliest forms",
"cancer",
"patient survival rates",
"60–70%",
"its early stages",
"the prediction",
"lung cancer patient survival",
"a popular area",
"research",
"medical and computer science experts",
"this study",
"the survival period",
"lung cancer patients",
"12 demographic and clinical features",
"this",
"a comparative analysis",
"traditional machine learning",
"deep learning techniques",
"previous studies",
"that",
"ct or x-ray images",
"the dataset",
"10,001 lung cancer patients",
"the data attributes",
"gender",
"age",
"race",
"t",
"tumor size",
"m",
"tumor dissemination",
"other organs",
"(lymph node involvement",
"chemo",
"dx",
"brain",
"dx-liver",
"dx-lung",
"survival months",
"six supervised machine learning",
"deep learning techniques",
"bayes classifier",
"lazy-classifier (lwl",
"meta-classifier",
"(attributeselectedclassifier",
"asc",
"rule-learner",
"(oner",
"decision-tree",
"(j48",
"deep neural network",
"dnn",
"the findings",
"dnn",
"the performance",
"the six traditional machine learning models",
"the survival duration",
"lung cancer patients",
"an accuracy rate",
"88.58%",
"this evidence",
"healthcare experts",
"cost management",
"timely treatment provision",
"one",
"60–70%",
"12",
"10,001",
"months",
"six",
"bayes classifier",
"meta-classifier",
"six",
"88.58%"
] |
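A self-contained illustration of the kind of comparison the abstract describes: one classical model against a small deep network on 12 tabular features. The synthetic data and the two scikit-learn models are generic stand-ins, not the 10,001-patient dataset or the named classifiers.

```python
# Illustrative classical-vs-deep comparison on tabular data; the 12 columns
# merely stand in for gender, age, race, T, M, N, and the other attributes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))                       # synthetic patient features
# Synthetic survival class driven by two of the features plus noise.
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [
    ("logistic", LogisticRegression(max_iter=1000)),
    ("dnn", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
]:
    model.fit(X_tr, y_tr)
    print(name, f"accuracy={accuracy_score(y_te, model.predict(X_te)):.3f}")
```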
A comprehensive review of deep learning power in steady-state visual evoked potentials | [
"Z. T. Al-Qaysi",
"A. S. Albahri",
"M. A. Ahmed",
"Rula A. Hamid",
"M. A. Alsalem",
"O. S. Albahri",
"A. H. Alamoodi",
"Raad Z. Homod",
"Ghadeer Ghazi Shayea",
"Ali M. Duhaim"
] | Brain–computer interfacing (BCI) research, fueled by deep learning, integrates insights from diverse domains. A notable focus is on steady-state visual evoked potential (SSVEP) in BCI applications, requiring in-depth assessment through deep learning. EEG research frequently employs SSVEPs, which are regarded as normal brain responses to visual stimuli, particularly in investigations of visual perception and attention. This paper tries to give an in-depth analysis of the implications of deep learning for SSVEP-adapted BCI. A systematic search across four stable databases (Web of Science, PubMed, ScienceDirect, and IEEE) was developed to assemble a vast reservoir of relevant theoretical and scientific knowledge. A comprehensive search yielded 177 papers that appeared between 2010 and 2023. Thence a strict screening method from predetermined inclusion criteria finally generated 39 records. These selected works were the basis of the study, presenting alternate views, obstacles, limitations and interesting ideas. By providing a systematic presentation of the material, it has made a key scholarly contribution. It focuses on the technical aspects of SSVEP-based BCI, EEG technologies and complex applications of deep learning technology in these areas. The study delivers more penetrating reporting on the latest deep learning pattern recognition techniques than its predecessors, together with progress in data acquisition and recording means suitable for SSVEP-based BCI devices. Especially in the realms of deep learning technology orchestration, pattern recognition techniques, and EEG data collection, it has effectively closed four important research gaps. To increase the accessibility of this critical material, the results of the study take the form of easy-to-read tables just generated. Applying deep learning techniques in SSVEP-based BCI applications, as the research shows, also has its downsides. The study concludes that a radical framework will be presented which includes intelligent decision-making tools for evaluation and benchmarking. Rather than just finding a comparable or similar analogy, this framework is intended to help guide future research and pragmatic applications, and to determine which SSVEP-based BCI applications have succeeded at what they set out to do. | 10.1007/s00521-024-10143-z | a comprehensive review of deep learning power in steady-state visual evoked potentials | brain–computer interfacing (bci) research, fueled by deep learning, integrates insights from diverse domains. a notable focus is on steady-state visual evoked potential (ssvep) in bci applications, requiring in-depth assessment through deep learning. eeg research frequently employs ssveps, which are regarded as normal brain responses to visual stimuli, particularly in investigations of visual perception and attention. this paper tries to give an in-depth analysis of the implications of deep learning for ssvep-adapted bci. a systematic search across four stable databases (web of science, pubmed, sciencedirect, and ieee) was developed to assemble a vast reservoir of relevant theoretical and scientific knowledge. a comprehensive search yielded 177 papers that appeared between 2010 and 2023. thence a strict screening method from predetermined inclusion criteria finally generated 39 records. these selected works were the basis of the study, presenting alternate views, obstacles, limitations and interesting ideas.
by providing a systematic presentation of the material, it has made a key scholarly contribution. it focuses on the technical aspects of ssvep-based bci, eeg technologies and complex applications of deep learning technology in these areas. the study delivers more penetrating reporting on the latest deep learning pattern recognition techniques than its predecessors, together with progress in data acquisition and recording means suitable for ssvep-based bci devices. especially in the realms of deep learning technology orchestration, pattern recognition techniques, and eeg data collection, it has effectively closed four important research gaps. to increase the accessibility of this critical material, the results of the study take the form of easy-to-read tables just generated. applying deep learning techniques in ssvep-based bci applications, as the research shows, also has its downsides. the study concludes that a radical framework will be presented which includes intelligent decision-making tools for evaluation and benchmarking. rather than just finding a comparable or similar analogy, this framework is intended to help guide future research and pragmatic applications, and to determine which ssvep-based bci applications have succeeded at what they set out to do. | [
"brain",
"computer",
"(bci) research",
"deep learning",
"insights",
"diverse domains",
"a notable focus",
"steady-state visual evoked potential",
"ssvep",
"bci applications",
"depth",
"deep learning",
"eeg research",
"ssveps",
"which",
"normal brain responses",
"visual stimuli",
"investigations",
"visual perception",
"attention",
"this paper",
"an in-depth analysis",
"the implications",
"deep learning",
"ssvep-adapted bci",
"a systematic search",
"four stable databases",
"web",
"science",
"sciencedirect",
"ieee",
"a vast reservoir",
"relevant theoretical and scientific knowledge",
"a comprehensive search",
"177 papers",
"that",
"a strict screening method",
"predetermined inclusion criteria",
"39 records",
"these selected works",
"the basis",
"the study",
"alternate views",
"obstacles",
"limitations",
"interesting ideas",
"a systematic presentation",
"the material",
"it",
"a key scholarly contribution",
"it",
"the technical aspects",
"ssvep-based bci",
"eeg technologies",
"complex applications",
"deep learning technology",
"these areas",
"the study",
"the latest deep learning pattern recognition techniques",
"its predecessors",
"progress",
"data acquisition",
"recording",
"ssvep-based bci devices",
"the realms",
"deep learning technology orchestration",
"pattern recognition techniques",
"eeg data collection",
"it",
"four important research gaps",
"the accessibility",
"this critical material",
"the results",
"the study",
"the form",
"read",
"deep learning techniques",
"ssvep-based bci applications",
"its downsides",
"the study",
"a radical framework",
"which",
"intelligent decision-making tools",
"evaluation",
"benchmarking",
"a comparable or similar analogy",
"this framework",
"future research and pragmatic applications",
"which",
"ssvep-based bci applications",
"responsibility",
"what",
"they",
"four",
"177",
"between 2010 and 2023",
"39",
"four"
] |
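As a concrete reference point for the SSVEP decoders the review surveys, the sketch below implements classical canonical correlation analysis (CCA) target identification, the standard non-deep baseline such systems are usually compared against; it is not one of the reviewed deep models, and the EEG, channel count, and stimulus frequencies are synthetic assumptions.

```python
# Classical CCA baseline for SSVEP target identification on synthetic signals.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, duration = 250, 2.0                      # sampling rate (Hz), window length (s)
t = np.arange(0, duration, 1 / fs)
stimulus_freqs = [8.0, 10.0, 12.0]           # assumed candidate flicker frequencies

def reference_signals(f: float, n_harmonics: int = 2) -> np.ndarray:
    """Sine/cosine templates at the stimulus frequency and its harmonics."""
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.stack(refs, axis=1)            # shape: (samples, 2 * n_harmonics)

# Synthetic 8-channel EEG dominated by a 10 Hz response plus noise.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10.0 * t)[:, None] + 0.5 * rng.normal(size=(t.size, 8))

scores = []
for f in stimulus_freqs:
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(eeg, reference_signals(f))
    scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))  # canonical correlation

print("detected frequency:", stimulus_freqs[int(np.argmax(scores))], "Hz")
```

Deep SSVEP decoders typically replace the hand-built templates with learned filters but are benchmarked against exactly this kind of correlation-based pipeline.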
Quantum deep learning-based anomaly detection for enhanced network security | [
"Moe Hdaib",
"Sutharshan Rajasegarar",
"Lei Pan"
] | Identifying and mitigating aberrant activities within the network traffic is important to prevent adverse consequences caused by cyber security incidents, which have been increasing significantly in recent times. Existing research mainly focuses on classical machine learning and deep learning-based approaches for detecting such attacks. However, exploiting the power of quantum deep learning to process complex correlation of features for anomaly detection is not well explored. Hence, in this paper, we investigate quantum machine learning and quantum deep learning-based anomaly detection methodologies to accurately detect network attacks. In particular, we propose three novel quantum auto-encoder-based anomaly detection frameworks. Our primary aim is to create hybrid models that leverage the strengths of both quantum and deep learning methodologies for efficient anomaly recognition. The three frameworks are formed by integrating the quantum autoencoder with a quantum one-class support vector machine, a quantum random forest, and a quantum k-nearest neighbor approach. The anomaly detection capability of the frameworks is evaluated using benchmark datasets comprising computer and Internet of Things network flows. Our evaluation demonstrates that all three frameworks have a high potential to detect the network traffic anomalies accurately, while the framework that integrates the quantum autoencoder with the quantum k-nearest neighbor yields the highest accuracy. This demonstrates the promising potential for the development of quantum frameworks for anomaly detection, underscoring their relevance for future advancements in network security. | 10.1007/s42484-024-00163-2 | quantum deep learning-based anomaly detection for enhanced network security | identifying and mitigating aberrant activities within the network traffic is important to prevent adverse consequences caused by cyber security incidents, which have been increasing significantly in recent times. existing research mainly focuses on classical machine learning and deep learning-based approaches for detecting such attacks. however, exploiting the power of quantum deep learning to process complex correlation of features for anomaly detection is not well explored. hence, in this paper, we investigate quantum machine learning and quantum deep learning-based anomaly detection methodologies to accurately detect network attacks. in particular, we propose three novel quantum auto-encoder-based anomaly detection frameworks. our primary aim is to create hybrid models that leverage the strengths of both quantum and deep learning methodologies for efficient anomaly recognition. the three frameworks are formed by integrating the quantum autoencoder with a quantum one-class support vector machine, a quantum random forest, and a quantum k-nearest neighbor approach. the anomaly detection capability of the frameworks is evaluated using benchmark datasets comprising computer and internet of things network flows. our evaluation demonstrates that all three frameworks have a high potential to detect the network traffic anomalies accurately, while the framework that integrates the quantum autoencoder with the quantum k-nearest neighbor yields the highest accuracy. this demonstrates the promising potential for the development of quantum frameworks for anomaly detection, underscoring their relevance for future advancements in network security. | [
"aberrant activities",
"the network traffic",
"adverse consequences",
"cyber security incidents",
"which",
"recent times",
"existing research",
"classical machine learning",
"deep learning-based approaches",
"such attacks",
"the power",
"quantum",
"complex correlation",
"features",
"anomaly detection",
"this paper",
"we",
"quantum machine learning",
"quantum",
"deep learning-based anomaly detection methodologies",
"network attacks",
"we",
"three novel quantum auto-encoder-based anomaly detection frameworks",
"our primary aim",
"hybrid models",
"that",
"the strengths",
"both quantum",
"deep learning methodologies",
"efficient anomaly recognition",
"the three frameworks",
"the quantum",
"autoencoder",
"a quantum one-class support vector machine",
"a quantum random forest",
"a quantum k-nearest neighbor approach",
"the anomaly detection capability",
"the frameworks",
"benchmark datasets",
"computer",
"internet",
"things network",
"our evaluation",
"all three frameworks",
"a high potential",
"the network traffic anomalies",
"the framework",
"that",
"the quantum",
"the quantum k-nearest neighbor",
"the highest accuracy",
"this",
"the promising potential",
"the development",
"quantum frameworks",
"anomaly detection",
"their relevance",
"future advancements",
"network security",
"anomaly detection methodologies",
"three",
"anomaly detection frameworks",
"anomaly recognition",
"three",
"one",
"quantum",
"the anomaly detection capability",
"three",
"quantum",
"quantum"
] |
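The hybrid structure described above (a quantum autoencoder feeding a quantum one-class detector) can be illustrated with classical stand-ins; the sketch below deliberately swaps in a classical autoencoder and a classical one-class SVM, so it shows only the pipeline shape, not the paper's quantum method. All traffic data is synthetic.

```python
# Classical analogue of the autoencoder + one-class-detector hybrid:
# compress flows trained on benign traffic, then flag outliers in latent space.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_flows = rng.normal(0, 1, size=(512, 16)).astype(np.float32)   # synthetic benign traffic
test_flows = np.vstack([rng.normal(0, 1, size=(8, 16)),              # benign-like
                        rng.normal(4, 1, size=(8, 16))]).astype(np.float32)  # anomalous

encoder = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 3))
decoder = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 16))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

x = torch.from_numpy(normal_flows)
for _ in range(200):                         # train the autoencoder on benign traffic only
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    loss.backward()
    opt.step()

with torch.no_grad():
    z_train = encoder(x).numpy()
    z_test = encoder(torch.from_numpy(test_flows)).numpy()

detector = OneClassSVM(nu=0.05).fit(z_train)  # +1 = normal, -1 = anomaly
print(detector.predict(z_test))
```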
An enhanced deep learning approach for vascular wall fracture analysis | [
"Alexandros Tragoudas",
"Marta Alloisio",
"Elsayed S. Elsayed",
"T. Christian Gasser",
"Fadi Aldakheel"
] | This work outlines an efficient deep learning approach for analyzing vascular wall fractures using experimental data with openly accessible source codes (https://doi.org/10.25835/weuhha72) for reproduction. Vascular disease remains the primary cause of death globally to this day. Tissue damage in these vascular disorders is closely tied to how the diseases develop, which requires careful study. Therefore, the scientific community has dedicated significant efforts to capture the properties of vessel wall fractures. The symmetry-constrained compact tension (symconCT) test combined with digital image correlation (DIC) enabled the study of tissue fracture in various aorta specimens under different conditions. The main purpose of the experiments was to investigate the displacement and strain field ahead of the crack tip. These experimental data were intended to support the development and verification of computational models. The FEM model used the DIC information for the material parameters identification. Traditionally, the analysis of fracture processes in biological tissues involves extensive computational and experimental efforts due to the complex nature of tissue behavior under stress. These high costs have posed significant challenges, demanding efficient solutions to accelerate research progress and reduce embedded costs. Deep learning techniques have shown promise in overcoming these challenges by learning to indicate patterns and relationships between the input and label data. In this study, we integrate deep learning methodologies with the attention residual U-Net architecture to predict fracture responses in porcine aorta specimens, enhanced with a Monte Carlo dropout technique. By training the network on a sufficient amount of data, the model learns to capture the features influencing fracture progression. These parameterized datasets consist of pictures describing the evolution of tissue fracture path along with the DIC measurements. The integration of deep learning should not only enhance the predictive accuracy, but also significantly reduce the computational and experimental burden, thereby enabling a more efficient analysis of fracture response. | 10.1007/s00419-024-02589-3 | an enhanced deep learning approach for vascular wall fracture analysis | this work outlines an efficient deep learning approach for analyzing vascular wall fractures using experimental data with openly accessible source codes (https://doi.org/10.25835/weuhha72) for reproduction. vascular disease remains the primary cause of death globally to this day. tissue damage in these vascular disorders is closely tied to how the diseases develop, which requires careful study. therefore, the scientific community has dedicated significant efforts to capture the properties of vessel wall fractures. the symmetry-constrained compact tension (symconct) test combined with digital image correlation (dic) enabled the study of tissue fracture in various aorta specimens under different conditions. the main purpose of the experiments was to investigate the displacement and strain field ahead of the crack tip. these experimental data were intended to support the development and verification of computational models. the fem model used the dic information for the material parameters identification. traditionally, the analysis of fracture processes in biological tissues involves extensive computational and experimental efforts due to the complex nature of tissue behavior under stress.
these high costs have posed significant challenges, demanding efficient solutions to accelerate research progress and reduce embedded costs. deep learning techniques have shown promise in overcoming these challenges by learning to indicate patterns and relationships between the input and label data. in this study, we integrate deep learning methodologies with the attention residual u-net architecture to predict fracture responses in porcine aorta specimens, enhanced with a monte carlo dropout technique. by training the network on a sufficient amount of data, the model learns to capture the features influencing fracture progression. these parameterized datasets consist of pictures describing the evolution of tissue fracture path along with the dic measurements. the integration of deep learning should not only enhance the predictive accuracy, but also significantly reduce the computational and experimental burden, thereby enabling a more efficient analysis of fracture response. | [
"this work",
"an efficient deep learning approach",
"vascular wall fractures",
"experimental data",
"openly accessible source codes",
"https://doi.org/10.25835/weuhha72",
"reproduction",
"vascular disease",
"the primary cause",
"death",
"this day",
"tissue damage",
"these vascular disorders",
"the diseases",
"which",
"careful study",
"the scientific community",
"significant efforts",
"the properties",
"vessel wall fractures",
"the symmetry-constrained compact tension",
"(symconct) test",
"digital image correlation",
"the study",
"tissue fracture",
"various aorta",
"specimens",
"different conditions",
"main purpose",
"the experiments",
"the displacement and strain field",
"the crack tip",
"these experimental data",
"the development",
"verification",
"computational models",
"the fem model",
"the dic information",
"the material parameters identification",
"the analysis",
"fracture processes",
"biological tissues",
"extensive computational and experimental efforts",
"the complex nature",
"tissue behavior",
"stress",
"these high costs",
"significant challenges",
"efficient solutions",
"research progress",
"embedded costs",
"deep learning techniques",
"promise",
"these challenges",
"patterns",
"relationships",
"the input and label data",
"this study",
"we",
"deep learning methodologies",
"the attention residual u-net architecture",
"fracture responses",
"porcine aorta specimens",
"a monte carlo dropout technique",
"the network",
"a sufficient amount",
"data",
"the model",
"the features",
"fracture progression",
"these parameterized datasets",
"pictures",
"the evolution",
"tissue fracture",
"path",
"the dic measurements",
"the integration",
"deep learning",
"the predictive accuracy",
"the computational and experimental burden",
"a more efficient analysis",
"fracture response",
"this day"
] |
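The Monte Carlo dropout technique mentioned above has a compact, generic form: leave dropout stochastic at inference and aggregate repeated forward passes into a mean prediction plus an uncertainty estimate. The sketch below assumes a toy regressor in place of the attention residual U-Net, and placeholder inputs in place of DIC-derived features.

```python
# Minimal Monte Carlo dropout sketch: repeated stochastic forward passes yield
# a mean prediction and a per-output uncertainty estimate.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))

def mc_dropout_predict(net: nn.Module, x: torch.Tensor, n_samples: int = 50):
    net.eval()
    for m in net.modules():                  # re-enable only the dropout layers
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        preds = torch.stack([net(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.randn(3, 4)                        # placeholder inputs (e.g., local strain features)
mean, std = mc_dropout_predict(model, x)
print("prediction:", mean.squeeze().tolist())
print("uncertainty:", std.squeeze().tolist())
```

The design choice here is that dropout is interpreted as an approximate Bayesian posterior, so the spread across passes serves as a cheap uncertainty signal without training an ensemble.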
Deep Learning-Based Image and Video Inpainting: A Survey | [
"Weize Quan",
"Jiaxi Chen",
"Yanli Liu",
"Dong-Ming Yan",
"Peter Wonka"
] | Image and video inpainting is a classic problem in computer vision and computer graphics, aiming to fill in the plausible and realistic content in the missing areas of images and videos. With the advance of deep learning, this problem has seen significant progress recently. The goal of this paper is to comprehensively review the deep learning-based methods for image and video inpainting. Specifically, we sort existing methods into different categories from the perspective of their high-level inpainting pipeline, present different deep learning architectures, including CNN, VAE, GAN, diffusion models, etc., and summarize techniques for module design. We review the training objectives and the common benchmark datasets. We present evaluation metrics for low-level pixel and high-level perceptional similarity, conduct a performance evaluation, and discuss the strengths and weaknesses of representative inpainting methods. We also discuss related real-world applications. Finally, we discuss open challenges and suggest potential future research directions. | 10.1007/s11263-023-01977-6 | deep learning-based image and video inpainting: a survey | image and video inpainting is a classic problem in computer vision and computer graphics, aiming to fill in the plausible and realistic content in the missing areas of images and videos. with the advance of deep learning, this problem has seen significant progress recently. the goal of this paper is to comprehensively review the deep learning-based methods for image and video inpainting. specifically, we sort existing methods into different categories from the perspective of their high-level inpainting pipeline, present different deep learning architectures, including cnn, vae, gan, diffusion models, etc., and summarize techniques for module design. we review the training objectives and the common benchmark datasets. we present evaluation metrics for low-level pixel and high-level perceptional similarity, conduct a performance evaluation, and discuss the strengths and weaknesses of representative inpainting methods. we also discuss related real-world applications. finally, we discuss open challenges and suggest potential future research directions. | [
"image",
"a classic problem",
"computer vision",
"computer graphics",
"the plausible and realistic content",
"the missing areas",
"images",
"videos",
"the advance",
"deep learning",
"this problem",
"significant progress",
"the goal",
"this paper",
"the deep learning-based methods",
"image",
"we",
"existing methods",
"different categories",
"the perspective",
"their high-level inpainting pipeline",
"different deep learning architectures",
"cnn",
"vae",
"gan",
"diffusion models",
"techniques",
"module design",
"we",
"the training objectives",
"the common benchmark datasets",
"we",
"evaluation metrics",
"low-level pixel",
"high-level perceptional similarity",
"a performance evaluation",
"the strengths",
"weaknesses",
"representative inpainting methods",
"we",
"related real-world applications",
"we",
"open challenges",
"potential future research directions",
"cnn",
"vae, gan, diffusion models"
] |
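Many of the surveyed training pipelines share one core step that is easy to show in isolation: the network receives the masked image together with its mask and is penalized only on the held-out pixels. The sketch below assumes a plain convolutional network and an L1 hole loss; no specific method from the survey is implied.

```python
# Toy inpainting training step: predict the full image from (masked image, mask)
# and compute an L1 loss restricted to the hole region.
import torch
import torch.nn as nn

net = nn.Sequential(                         # input: 3 image channels + 1 mask channel
    nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

image = torch.rand(8, 3, 64, 64)             # placeholder batch of ground-truth images
mask = (torch.rand(8, 1, 64, 64) > 0.25).float()   # 1 = known pixel, 0 = hole

masked = image * mask
pred = net(torch.cat([masked, mask], dim=1))

# L1 loss only over the holes: reconstruct what the network cannot see.
hole = 1.0 - mask
loss = (hole * (pred - image).abs()).sum() / hole.sum().clamp(min=1.0)
opt.zero_grad()
loss.backward()
opt.step()
print(f"hole reconstruction L1: {loss.item():.4f}")
```

GAN, VAE, and diffusion variants covered by the survey mainly differ in what replaces or augments this reconstruction objective.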
Deep learning for osteoporosis screening using an anteroposterior hip radiograph image | [
"Artit Boonrod",
"Prarinthorn Piyaprapaphan",
"Nut Kittipongphat",
"Daris Theerakulpisut",
"Arunnit Boonrod"
] | Purpose: Osteoporosis is a common bone disorder characterized by decreased bone mineral density (BMD) and increased bone fragility, which can lead to fractures and eventually cause morbidity and mortality. It is of great concern that the one-year mortality rate for osteoporotic hip fractures could be as high as 22%, regardless of the treatment. Currently, BMD measurement is the standard method for osteoporosis diagnosis, but it is costly and requires special equipment. While a plain radiograph can be obtained more simply and inexpensively, it is not used for diagnosis. Deep learning technologies have been applied to various medical contexts, yet rarely to osteoporosis unless they were trained on the advanced investigative images, such as computed tomography. The purpose of this study was to develop a deep learning model using the anteroposterior hip radiograph images and measure its diagnostic accuracy for osteoporosis. Methods: We retrospectively collected all anteroposterior hip radiograph images of patients from 2013 to 2021 at a tertiary care hospital. The BMD measurements of the included patients were reviewed, and the radiograph images that had a time interval of more than two years from the measurements were excluded. All images were randomized using a computer-generated unequal allocation into two datasets, i.e., 80% of images were used for the training dataset and the remaining 20% for the test dataset. The T score of BMD obtained from the ipsilateral femoral neck of the same patient closest to the date of the performed radiograph was chosen. The T score cutoff value of − 2.5 was used to diagnose osteoporosis. Five deep learning models were trained on the training dataset, and their diagnostic performances were evaluated using the test dataset. Finally, the best model was determined by the area under the curves (AUC). Results: A total of 363 anteroposterior hip radiograph images were identified. The average time interval between the performed radiograph and the BMD measurement was 6.6 months. Two-hundred-thirteen images were labeled as non-osteoporosis (T score > − 2.5), and the other 150 images as osteoporosis (T score ≤ − 2.5). The best-selected deep learning model achieved an AUC of 0.91 and accuracy of 0.82. Conclusions: This study demonstrates the potential of deep learning for osteoporosis screening using anteroposterior hip radiographs. The results suggest that the deep learning model might potentially be used as a screening tool to find patients at risk for osteoporosis to perform further BMD measurement. | 10.1007/s00590-024-04032-3 | deep learning for osteoporosis screening using an anteroposterior hip radiograph image | purpose: osteoporosis is a common bone disorder characterized by decreased bone mineral density (bmd) and increased bone fragility, which can lead to fractures and eventually cause morbidity and mortality. it is of great concern that the one-year mortality rate for osteoporotic hip fractures could be as high as 22%, regardless of the treatment. currently, bmd measurement is the standard method for osteoporosis diagnosis, but it is costly and requires special equipment. while a plain radiograph can be obtained more simply and inexpensively, it is not used for diagnosis. deep learning technologies have been applied to various medical contexts, yet rarely to osteoporosis unless they were trained on the advanced investigative images, such as computed tomography.
the purpose of this study was to develop a deep learning model using the anteroposterior hip radiograph images and measure its diagnostic accuracy for osteoporosis. methods: we retrospectively collected all anteroposterior hip radiograph images of patients from 2013 to 2021 at a tertiary care hospital. the bmd measurements of the included patients were reviewed, and the radiograph images that had a time interval of more than two years from the measurements were excluded. all images were randomized using a computer-generated unequal allocation into two datasets, i.e., 80% of images were used for the training dataset and the remaining 20% for the test dataset. the t score of bmd obtained from the ipsilateral femoral neck of the same patient closest to the date of the performed radiograph was chosen. the t score cutoff value of − 2.5 was used to diagnose osteoporosis. five deep learning models were trained on the training dataset, and their diagnostic performances were evaluated using the test dataset. finally, the best model was determined by the area under the curves (auc). results: a total of 363 anteroposterior hip radiograph images were identified. the average time interval between the performed radiograph and the bmd measurement was 6.6 months. two-hundred-thirteen images were labeled as non-osteoporosis (t score > − 2.5), and the other 150 images as osteoporosis (t score ≤ − 2.5). the best-selected deep learning model achieved an auc of 0.91 and accuracy of 0.82. conclusions: this study demonstrates the potential of deep learning for osteoporosis screening using anteroposterior hip radiographs. the results suggest that the deep learning model might potentially be used as a screening tool to find patients at risk for osteoporosis to perform further bmd measurement. | [
"purposeosteoporosis",
"a common bone disorder",
"decreased bone mineral density",
"bmd",
"increased bone fragility",
"which",
"fractures",
"morbidity",
"mortality",
"it",
"great concern",
"the one-year mortality rate",
"osteoporotic hip fractures",
"22%",
"the treatment",
"bmd measurement",
"the standard method",
"osteoporosis diagnosis",
"it",
"special equipment",
"a plain radiograph",
"it",
"diagnosis",
"deep learning technologies",
"various medical contexts",
"they",
"the advanced investigative images",
"computed tomography",
"the purpose",
"this study",
"a deep learning model",
"the anteroposterior hip radiograph images",
"its diagnostic accuracy",
"osteoporosis.methodswe",
"all anteroposterior hip radiograph images",
"patients",
"a tertiary care hospital",
"the bmd measurements",
"the included patients",
"the radiograph images",
"that",
"a time interval",
"more than two years",
"the measurements",
"all images",
"a computer-generated unequal allocation",
"two datasets",
"80%",
"images",
"the training dataset",
"the remaining 20%",
"the test dataset",
"the t score",
"bmd",
"the ipsilateral femoral neck",
"the same patient",
"the date",
"the performed radiograph",
"the t",
"cutoff value",
"−",
"osteoporosis",
"five deep learning models",
"the training dataset",
"their diagnostic performances",
"the test dataset",
"the best model",
"the area",
"the curves",
"auc).resultsa total",
"363 anteroposterior hip radiograph images",
"the average time interval",
"the performed radiograph",
"the bmd measurement",
"6.6 months",
"two-hundred-thirteen images",
"osteoporosis",
"t",
"−",
"the other 150 images",
"osteoporosis",
"t",
"−",
"the best-selected deep learning model",
"an auc",
"accuracy",
"0.82.conclusionsthis study",
"the potential",
"deep learning",
"anteroposterior hip radiographs",
"the results",
"the deep learning model",
"a screening tool",
"patients",
"risk",
"osteoporosis",
"further bmd measurement",
"one-year",
"22%",
"2013",
"2021",
"tertiary",
"more than two years",
"two",
"80%",
"the remaining 20%",
"− 2.5",
"five",
"363",
"6.6 months",
"two-hundred-thirteen",
"2.5",
"150",
"2.5",
"0.91"
] |
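The abstract does not name the five trained models, so the sketch below assumes a standard transfer-learning recipe: a ResNet-18 backbone with a two-class head matching the T score cutoff of −2.5. The input tensor is a placeholder for preprocessed anteroposterior hip radiographs.

```python
# Hedged transfer-learning sketch; ResNet-18 and all tensors are assumptions,
# not the study's actual models or data.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)        # load pretrained weights in practice
model.fc = nn.Linear(model.fc.in_features, 2)  # {non-osteoporosis, osteoporosis}

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(4, 3, 224, 224)         # placeholder batch of AP hip radiographs
labels = torch.tensor([0, 1, 0, 1])          # 1 = T score <= -2.5 per the cutoff above

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```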
Deep learning-based solution for smart contract vulnerabilities detection | [
"Xueyan Tang",
"Yuying Du",
"Alan Lai",
"Ze Zhang",
"Lingzhi Shi"
] | This paper aims to explore the application of deep learning in smart contract vulnerabilities detection. Smart contracts are an essential part of blockchain technology and are crucial for developing decentralized applications. However, smart contract vulnerabilities can cause financial losses and system crashes. Static analysis tools are frequently used to detect vulnerabilities in smart contracts, but they often result in false positives and false negatives because of their high reliance on predefined rules and lack of semantic analysis capabilities. Furthermore, these predefined rules quickly become obsolete and fail to adapt or generalize to new data. In contrast, deep learning methods do not require predefined detection rules and can learn the features of vulnerabilities during the training process. In this paper, we introduce a solution called Lightning Cat, which is based on deep learning techniques. We train three deep learning models for detecting vulnerabilities in smart contracts: Optimized-CodeBERT, Optimized-LSTM, and Optimized-CNN. Experimental results show that, in the Lightning Cat we propose, the Optimized-CodeBERT model surpasses other methods, achieving an f1-score of 93.53%. To precisely extract vulnerability features, we acquire segments of vulnerable code functions to retain critical vulnerability features. Using the CodeBERT pre-training model for data preprocessing, we could capture the syntax and semantics of the code more accurately. To demonstrate the feasibility of our proposed solution, we evaluate its performance using the SolidiFI-benchmark dataset, which consists of 9369 vulnerable contracts injected with vulnerabilities from seven different types. | 10.1038/s41598-023-47219-0 | deep learning-based solution for smart contract vulnerabilities detection | this paper aims to explore the application of deep learning in smart contract vulnerabilities detection. smart contracts are an essential part of blockchain technology and are crucial for developing decentralized applications. however, smart contract vulnerabilities can cause financial losses and system crashes. static analysis tools are frequently used to detect vulnerabilities in smart contracts, but they often result in false positives and false negatives because of their high reliance on predefined rules and lack of semantic analysis capabilities. furthermore, these predefined rules quickly become obsolete and fail to adapt or generalize to new data. in contrast, deep learning methods do not require predefined detection rules and can learn the features of vulnerabilities during the training process. in this paper, we introduce a solution called lightning cat, which is based on deep learning techniques. we train three deep learning models for detecting vulnerabilities in smart contracts: optimized-codebert, optimized-lstm, and optimized-cnn. experimental results show that, in the lightning cat we propose, the optimized-codebert model surpasses other methods, achieving an f1-score of 93.53%. to precisely extract vulnerability features, we acquire segments of vulnerable code functions to retain critical vulnerability features. using the codebert pre-training model for data preprocessing, we could capture the syntax and semantics of the code more accurately. to demonstrate the feasibility of our proposed solution, we evaluate its performance using the solidifi-benchmark dataset, which consists of 9369 vulnerable contracts injected with vulnerabilities from seven different types. | [
"this paper",
"the application",
"deep learning",
"smart contract vulnerabilities detection",
"smart contracts",
"an essential part",
"blockchain technology",
"decentralized applications",
"smart contract vulnerabilities",
"financial losses",
"system crashes",
"static analysis tools",
"vulnerabilities",
"smart contracts",
"they",
"false positives",
"false negatives",
"their high reliance",
"predefined rules",
"lack",
"semantic analysis capabilities",
"these predefined rules",
"new data",
"contrast",
"deep learning methods",
"predefined detection rules",
"the features",
"vulnerabilities",
"the training process",
"this paper",
"we",
"a solution",
"lightning cat",
"which",
"deep learning techniques",
"we",
"three deep learning models",
"vulnerabilities",
"smart contract",
"optimized-codebert",
"optimized-lstm",
"cnn",
"experimental results",
"the lightning cat",
"we",
"optimized-codebert model",
"other methods",
"an f1-score",
"93.53%",
"vulnerability features",
"we",
"segments",
"vulnerable code functions",
"critical vulnerability features",
"the codebert pre-training model",
"data",
"we",
"the syntax",
"semantics",
"the code",
"the feasibility",
"our proposed solution",
"we",
"its performance",
"the solidifi-benchmark dataset",
"which",
"9369 vulnerable contracts",
"vulnerabilities",
"seven different types",
"three",
"93.53%",
"9369",
"seven"
] |
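The "Optimized-CodeBERT" modifications are not specified in the abstract, so the following shows only the generic starting point such a model would be built from: fine-tuning the public microsoft/codebert-base checkpoint for multi-class vulnerability classification with Hugging Face transformers. The seven-label head mirrors the seven injected vulnerability types mentioned above, and the Solidity snippet is illustrative.

```python
# Generic CodeBERT fine-tuning starting point (not the paper's optimized model);
# requires downloading the public checkpoint on first run.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=7   # e.g., seven injected vulnerability types
)

# Illustrative reentrancy-prone Solidity fragment, tokenized like any other code.
snippet = 'function withdraw(uint amount) public { msg.sender.call{value: amount}(""); }'
inputs = tokenizer(snippet, return_tensors="pt", truncation=True, max_length=512)

labels = torch.tensor([3])                   # placeholder class index
outputs = model(**inputs, labels=labels)
outputs.loss.backward()                      # one optimizer step would follow in training
print("logits shape:", tuple(outputs.logits.shape))
```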
Image deep learning in fault diagnosis of mechanical equipment | [
"Chuanhao Wang",
"Yongjian Sun",
"Xiaohong Wang"
] | With the development of industry, more and more crucial mechanical machinery generates a widespread demand for effective fault diagnosis to ensure the safe operation. Over the past few decades, researchers have explored and developed a variety of approaches. In recent years, fault diagnosis based on deep learning has developed rapidly, which has achieved satisfactory results in the field of mechanical equipment fault diagnosis. However, there are few reviews that systematically summarize and sort out these special image deep learning methods. In order to fill this gap, this paper concentrates on comprehensively reviewing the development of special image deep learning for mechanical equipment fault diagnosis in the past 5 years. In general, a typical image fault diagnosis based on fault image deep learning consists of data acquisition, signal processing, model construction, feature learning and decision-making. Firstly, the method of signal preprocessing is introduced, and several common methods of converting signals into images are briefly compared and analyzed. Then, the principles and variants of deep learning models are expounded. Furthermore, the difficulties and challenges encountered at this stage are summarized. Last but not least, the future development and potential trends of the work are concluded, and it is hoped that this work will facilitate and inspire further exploration for researchers in this area. | 10.1007/s10845-023-02176-3 | image deep learning in fault diagnosis of mechanical equipment | with the development of industry, more and more crucial mechanical machinery generates a widespread demand for effective fault diagnosis to ensure the safe operation. over the past few decades, researchers have explored and developed a variety of approaches. in recent years, fault diagnosis based on deep learning has developed rapidly, which has achieved satisfactory results in the field of mechanical equipment fault diagnosis. however, there are few reviews that systematically summarize and sort out these special image deep learning methods. in order to fill this gap, this paper concentrates on comprehensively reviewing the development of special image deep learning for mechanical equipment fault diagnosis in the past 5 years. in general, a typical image fault diagnosis based on fault image deep learning consists of data acquisition, signal processing, model construction, feature learning and decision-making. firstly, the method of signal preprocessing is introduced, and several common methods of converting signals into images are briefly compared and analyzed. then, the principles and variants of deep learning models are expounded. furthermore, the difficulties and challenges encountered at this stage are summarized. last but not least, the future development and potential trends of the work are concluded, and it is hoped that this work will facilitate and inspire further exploration for researchers in this area. | [
"the development",
"industry",
"more and more crucial mechanical machinery",
"wildness demand",
"effective fault diagnosis",
"the safe operation",
"the past few decades",
"researchers",
"a variety",
"approaches",
"recent years",
"diagnosis",
"deep learning",
"which",
"satisfied results",
"mechanical equipment fault diagnosis",
"few review",
"these special image deep learning methods",
"order",
"this gap",
"this paper",
"the development",
"special image",
"deep learning",
"mechanical equipment fault diagnosis",
"past 5 years",
"in general, a typical image fault diagnosis",
"fault image",
"deep learning",
"data acquisition",
"signal processing",
"model construction",
"feature learning",
"decision-making",
"the method",
"signal preprocessing",
"several common methods",
"signals",
"images",
"the principles",
"variants",
"deep learning models",
"the difficulties",
"challenges",
"this stage",
"the future development",
"potential trends",
"the work",
"it",
"this work",
"further exploration",
"researchers",
"this area",
"the past few decades",
"recent years",
"past 5 years",
"firstly"
] |
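A typical first step in the image-based pipelines this review covers is converting a one-dimensional signal into a two-dimensional image; a common choice, sketched below on a synthetic vibration signal with assumed fault frequencies, is a time-frequency spectrogram that a 2-D CNN can then consume.

```python
# One common signal-to-image conversion: a spectrogram of a vibration signal.
# The bearing-like signal and its fault frequencies are synthetic assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 12_000                                  # sampling rate (Hz), typical for bearing rigs
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)

# 30 Hz rotation component, broadband noise, and sharp impulses near 105 Hz.
signal = np.sin(2 * np.pi * 30 * t) + 0.3 * rng.normal(size=t.size)
signal += 0.8 * (np.sin(2 * np.pi * 105 * t) > 0.99)

f, frames, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
image = np.log1p(Sxx)                        # log scaling; normalize before feeding a CNN
print("spectrogram image shape (freq bins x time frames):", image.shape)
```

Alternatives surveyed in this space include Gramian angular fields and simple amplitude reshaping; the spectrogram is shown here only because it is the most widely used.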
A deep learning framework for non-functional requirement classification | [
"Kiramat Rahman",
"Anwar Ghani",
"Sanjay Misra",
"Arif Ur Rahman"
] | Analyzing, identifying, and classifying nonfunctional requirements from requirement documents is time-consuming and challenging. Machine learning-based approaches have been proposed to minimize analysts’ efforts, labor, and stress. However, the traditional approach of supervised machine learning necessitates manual feature extraction, which is time-consuming. This study presents a novel deep-learning framework for NFR classification to overcome these limitations. The framework leverages a more profound architecture that naturally captures feature structures, possesses enhanced representational power, and efficiently captures a broader context than shallower structures. To evaluate the effectiveness of the proposed method, an experiment was conducted on two widely-used datasets, encompassing 914 NFR instances. Performance analysis was performed on the applied models, and the results were evaluated using various metrics. Notably, the DReqANN model outperforms the other models in classifying NFR, achieving precision between 81 and 99.8%, recall between 74 and 89%, and F1-score between 83 and 89%. These significant results highlight the exceptional efficacy of the proposed deep learning framework in addressing NFR classification tasks, showcasing its potential for advancing the field of NFR analysis and classification. | 10.1038/s41598-024-52802-0 | a deep learning framework for non-functional requirement classification | analyzing, identifying, and classifying nonfunctional requirements from requirement documents is time-consuming and challenging. machine learning-based approaches have been proposed to minimize analysts’ efforts, labor, and stress. however, the traditional approach of supervised machine learning necessitates manual feature extraction, which is time-consuming. this study presents a novel deep-learning framework for nfr classification to overcome these limitations. the framework leverages a more profound architecture that naturally captures feature structures, possesses enhanced representational power, and efficiently captures a broader context than shallower structures. to evaluate the effectiveness of the proposed method, an experiment was conducted on two widely-used datasets, encompassing 914 nfr instances. performance analysis was performed on the applied models, and the results were evaluated using various metrics. notably, the dreqann model outperforms the other models in classifying nfr, achieving precision between 81 and 99.8%, recall between 74 and 89%, and f1-score between 83 and 89%. these significant results highlight the exceptional efficacy of the proposed deep learning framework in addressing nfr classification tasks, showcasing its potential for advancing the field of nfr analysis and classification. | [
"nonfunctional requirements",
"requirement documents",
"machine learning-based approaches",
"analysts’ efforts",
"labor",
"stress",
"however, the traditional approach",
"supervised machine learning necessitates manual feature extraction",
"which",
"this study",
"a novel deep-learning framework",
"nfr classification",
"these limitations",
"the framework",
"a more profound architecture",
"that",
"feature structures",
"enhanced representational power",
"a broader context",
"shallower structures",
"the effectiveness",
"the proposed method",
"an experiment",
"two widely-used datasets",
"914 nfr instances",
"performance analysis",
"the applied models",
"the results",
"various metrics",
"the dreqann model",
"the other models",
"precision",
"99.8%",
"74 and 89%",
"83 and 89%",
"these significant results",
"the exceptional efficacy",
"the proposed deep learning framework",
"nfr classification tasks",
"its potential",
"the field",
"nfr analysis",
"classification",
"two",
"914",
"between 81",
"99.8%",
"between 74 and",
"89%",
"89%"
] |
Deep Learning-Enabled Image Classification for the Determination of Aluminum Ions | [
"Ce Wang",
"Zhaoliang Wang",
"Yifei Lu",
"Tingting Hao",
"Yufang Hu",
"Sui Wang",
"Zhiyong Guo"
] | AbstractIn this work, an image classification based on deep learning for quantitative field determination of aluminum ions (Al3+) was developed. Carbon quantum dots with yellow fluorescence were synthesized by a one-pot hydrothermal method which could specifically recognize Al3+ and produce enhanced green fluorescence. Using the convolutional neural network model in deep learning, an image classification was constructed to classify Al3+ samples at different concentrations. Then, a fitting method for classification information was proposed for the first time which could convert discontinuous, semi-quantitative concentration classification information into continuous, quantitative, and accurate concentration information. Recoveries of 92.0–110.3% in the concentration range of 0.3–320 μM were obtained with a lower limit of detection of 0.3 μM, exhibiting excellent accuracy and sensitivity. It could be completed in 2 min simply without requiring large equipment. Thus, the deep learning-enabled image classification paves a new way for the determination of metal ions. | 10.1134/S1061934823110114 | deep learning-enabled image classification for the determination of aluminum ions | abstractin this work, an image classification based on deep learning for quantitative field determination of aluminum ions (al3+) was developed. carbon quantum dots with yellow fluorescence were synthesized by a one-pot hydrothermal method which could specifically recognize al3+ and produce enhanced green fluorescence. using the convolutional neural network model in deep learning, an image classification was constructed to classify al3+ samples at different concentrations. then, a fitting method for classification information was proposed for the first time which could convert discontinuous, semi-quantitative concentration classification information into continuous, quantitative, and accurate concentration information. recoveries of 92.0–110.3% in the concentration range of 0.3–320 μm were obtained with a lower limit of detection of 0.3 μm, exhibiting excellent accuracy and sensitivity. it could be completed in 2 min simply without requiring large equipment. thus, the deep learning-enabled image classification paves a new way for the determination of metal ions. | [
"abstractin",
"this work",
"an image classification",
"deep learning",
"quantitative field determination",
"aluminum ions",
"al3",
"+",
"carbon quantum dots",
"yellow fluorescence",
"a one-pot hydrothermal method",
"which",
"al3",
"enhanced green fluorescence",
"the convolutional neural network model",
"deep learning",
"an image classification",
"al3",
"samples",
"different concentrations",
"a fitting method",
"classification information",
"the first time",
"which",
"discontinuous, semi-quantitative concentration classification information",
"continuous, quantitative, and accurate concentration information",
"recoveries",
"92.0–110.3%",
"the concentration range",
"0.3–320 μm",
"a lower limit",
"detection",
"0.3 μm",
"excellent accuracy",
"sensitivity",
"it",
"2 min",
"large equipment",
"the deep learning-enabled image classification",
"a new way",
"the determination",
"metal ions",
"abstractin",
"quantum",
"one",
"first",
"92.0–110.3%",
"0.3",
"2"
] |
A comparative analysis of classical machine learning and deep learning techniques for predicting lung cancer survivability | [
"Shigao Huang",
"Ibrahim Arpaci",
"Mostafa Al-Emran",
"Serhat Kılıçarslan",
"Mohammed A. Al-Sharafi"
] | Lung cancer, one of the deadliest forms of cancer, can significantly improve patient survival rates by 60–70% if detected in its early stages. The prediction of lung cancer patient survival has grown to be a popular area of research among medical and computer science experts. This study aims to predict the survival period of lung cancer patients using 12 demographic and clinical features. This is achieved through a comparative analysis between traditional machine learning and deep learning techniques, deviating from previous studies that primarily used CT or X-ray images. The dataset included 10,001 lung cancer patients, and the data attributes involved gender, age, race, T (tumor size), M (tumor dissemination to other organs), N (lymph node involvement), Chemo, DX-Bone, DX-Brain, DX-Liver, DX-Lung, and survival months. Six supervised machine learning and deep learning techniques were applied, including logistic-regression (Logistic), Bayes classifier (BayesNet), lazy-classifier (LWL), meta-classifier (AttributeSelectedClassifier (ASC)), rule-learner (OneR), decision-tree (J48), and deep neural network (DNN). The findings suggest that DNN surpassed the performance of the six traditional machine learning models in accurately predicting the survival duration of lung cancer patients, achieving an accuracy rate of 88.58%. This evidence is thought to assist healthcare experts in cost management and timely treatment provision. | 10.1007/s11042-023-16349-y | a comparative analysis of classical machine learning and deep learning techniques for predicting lung cancer survivability | lung cancer, one of the deadliest forms of cancer, can significantly improve patient survival rates by 60–70% if detected in its early stages. the prediction of lung cancer patient survival has grown to be a popular area of research among medical and computer science experts. this study aims to predict the survival period of lung cancer patients using 12 demographic and clinical features. this is achieved through a comparative analysis between traditional machine learning and deep learning techniques, deviating from previous studies that primarily used ct or x-ray images. the dataset included 10,001 lung cancer patients, and the data attributes involved gender, age, race, t (tumor size), m (tumor dissemination to other organs), n (lymph node involvement), chemo, dx-bone, dx-brain, dx-liver, dx-lung, and survival months. six supervised machine learning and deep learning techniques were applied, including logistic-regression (logistic), bayes classifier (bayesnet), lazy-classifier (lwl), meta-classifier (attributeselectedclassifier (asc)), rule-learner (oner), decision-tree (j48), and deep neural network (dnn). the findings suggest that dnn surpassed the performance of the six traditional machine learning models in accurately predicting the survival duration of lung cancer patients, achieving an accuracy rate of 88.58%. this evidence is thought to assist healthcare experts in cost management and timely treatment provision. | [
"lung cancer",
"the deadliest forms",
"cancer",
"patient survival rates",
"60–70%",
"its early stages",
"the prediction",
"lung cancer patient survival",
"a popular area",
"research",
"medical and computer science experts",
"this study",
"the survival period",
"lung cancer patients",
"12 demographic and clinical features",
"this",
"a comparative analysis",
"traditional machine learning",
"deep learning techniques",
"previous studies",
"that",
"ct or x-ray images",
"the dataset",
"10,001 lung cancer patients",
"the data attributes",
"gender",
"age",
"race",
"t",
"tumor size",
"m",
"tumor dissemination",
"other organs",
"(lymph node involvement",
"chemo",
"dx",
"brain",
"dx-liver",
"dx-lung",
"survival months",
"six supervised machine learning",
"deep learning techniques",
"bayes classifier",
"lazy-classifier (lwl",
"meta-classifier",
"(attributeselectedclassifier",
"asc",
"rule-learner",
"(oner",
"decision-tree",
"(j48",
"deep neural network",
"dnn",
"the findings",
"dnn",
"the performance",
"the six traditional machine learning models",
"the survival duration",
"lung cancer patients",
"an accuracy rate",
"88.58%",
"this evidence",
"healthcare experts",
"cost management",
"timely treatment provision",
"one",
"60–70%",
"12",
"10,001",
"months",
"six",
"bayes classifier",
"meta-classifier",
"six",
"88.58%"
] |
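A note on the record above: the winning model is a plain feed-forward DNN over 12 tabular clinical attributes. Below is a minimal sketch of that kind of tabular classifier in scikit-learn; the synthetic data, toy label rule, and (64, 32) layer sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 12))  # placeholder stand-ins for the 12 attributes
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)  # scale features before the MLP

clf = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                    max_iter=500, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(scaler.transform(X_te))))
```

On real records, categorical attributes such as race or the T/M/N stages would first need one-hot or ordinal encoding.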
A comprehensive review of deep learning power in steady-state visual evoked potentials | [
"Z. T. Al-Qaysi",
"A. S. Albahri",
"M. A. Ahmed",
"Rula A. Hamid",
"M. A. Alsalem",
"O. S. Albahri",
"A. H. Alamoodi",
"Raad Z. Homod",
"Ghadeer Ghazi Shayea",
"Ali M. Duhaim"
] | Brain–computer interfacing (BCI) research, fueled by deep learning, integrates insights from diverse domains. A notable focus is on steady-state visual evoked potential (SSVEP) in BCI applications, requiring in-depth assessment through deep learning. EEG research frequently employs SSVEPs, which are regarded as normal brain responses to visual stimuli, particularly in investigations of visual perception and attention. This paper tries to give an in-depth analysis of the implications of deep learning for SSVEP-adapted BCI. A systematic search across four stable databases (Web of Science, PubMed, ScienceDirect, and IEEE) was developed to assemble a vast reservoir of relevant theoretical and scientific knowledge. A comprehensive search yielded 177 papers that appeared between 2010 and 2023. Thence a strict screening method from predetermined inclusion criteria finally generated 39 records. These selected works were the basis of the study, presenting alternate views, obstacles, limitations and interesting ideas. By providing a systematic presentation of the material, it has made a key scholarly contribution. It focuses on the technical aspects of SSVEP-based BCI, EEG technologies and complex applications of deep learning technology in these areas. The study delivers more penetrating reporting on the latest deep learning pattern recognition techniques than its predecessors, together with progress in data acquisition and recording means suitable for SSVEP-based BCI devices. Especially in the realms of deep learning technology orchestration, pattern recognition techniques, and EEG data collection, it has effectively closed four important research gaps. To increase the accessibility of this critical material, the results of the study take the form of easy-to-read tables just generated. Applying deep learning techniques in SSVEP-based BCI applications, as the research shows, also has its downsides. The study concludes that a radical framework will be presented which, includes intelligent decision-making tools for evaluation and benchmarking. Rather than just finding a comparable or similar analogy, this framework is intended to help guide future research and pragmatic applications, and to determine which SSVEP-based BCI applications have succeeded at responsibility for what they set out with. | 10.1007/s00521-024-10143-z | a comprehensive review of deep learning power in steady-state visual evoked potentials | brain–computer interfacing (bci) research, fueled by deep learning, integrates insights from diverse domains. a notable focus is on steady-state visual evoked potential (ssvep) in bci applications, requiring in-depth assessment through deep learning. eeg research frequently employs ssveps, which are regarded as normal brain responses to visual stimuli, particularly in investigations of visual perception and attention. this paper tries to give an in-depth analysis of the implications of deep learning for ssvep-adapted bci. a systematic search across four stable databases (web of science, pubmed, sciencedirect, and ieee) was developed to assemble a vast reservoir of relevant theoretical and scientific knowledge. a comprehensive search yielded 177 papers that appeared between 2010 and 2023. thence a strict screening method from predetermined inclusion criteria finally generated 39 records. these selected works were the basis of the study, presenting alternate views, obstacles, limitations and interesting ideas. 
by providing a systematic presentation of the material, it makes a key scholarly contribution. it focuses on the technical aspects of ssvep-based bci, eeg technologies, and complex applications of deep learning technology in these areas. the study delivers more penetrating reporting on the latest deep learning pattern recognition techniques than its predecessors, together with progress in data acquisition and recording methods suitable for ssvep-based bci devices. it effectively closes four important research gaps, especially in the realms of deep learning technology orchestration, pattern recognition techniques, and eeg data collection. to increase the accessibility of this critical material, the results of the study are presented in newly generated, easy-to-read tables. as the research shows, applying deep learning techniques in ssvep-based bci applications also has its downsides. the study concludes by presenting a radical framework that includes intelligent decision-making tools for evaluation and benchmarking. rather than offering a merely comparable analogy, this framework is intended to guide future research and pragmatic applications and to determine which ssvep-based bci applications have delivered on what they set out to achieve. | [
"brain",
"computer",
"(bci) research",
"deep learning",
"insights",
"diverse domains",
"a notable focus",
"steady-state visual evoked potential",
"ssvep",
"bci applications",
"depth",
"deep learning",
"eeg research",
"ssveps",
"which",
"normal brain responses",
"visual stimuli",
"investigations",
"visual perception",
"attention",
"this paper",
"an in-depth analysis",
"the implications",
"deep learning",
"ssvep-adapted bci",
"a systematic search",
"four stable databases",
"web",
"science",
"sciencedirect",
"ieee",
"a vast reservoir",
"relevant theoretical and scientific knowledge",
"a comprehensive search",
"177 papers",
"that",
"a strict screening method",
"predetermined inclusion criteria",
"39 records",
"these selected works",
"the basis",
"the study",
"alternate views",
"obstacles",
"limitations",
"interesting ideas",
"a systematic presentation",
"the material",
"it",
"a key scholarly contribution",
"it",
"the technical aspects",
"ssvep-based bci",
"eeg technologies",
"complex applications",
"deep learning technology",
"these areas",
"the study",
"the latest deep learning pattern recognition techniques",
"its predecessors",
"progress",
"data acquisition",
"recording",
"ssvep-based bci devices",
"the realms",
"deep learning technology orchestration",
"pattern recognition techniques",
"eeg data collection",
"it",
"four important research gaps",
"the accessibility",
"this critical material",
"the results",
"the study",
"the form",
"read",
"deep learning techniques",
"ssvep-based bci applications",
"its downsides",
"the study",
"a radical framework",
"which",
"intelligent decision-making tools",
"evaluation",
"benchmarking",
"a comparable or similar analogy",
"this framework",
"future research and pragmatic applications",
"which",
"ssvep-based bci applications",
"responsibility",
"what",
"they",
"four",
"177",
"between 2010 and 2023",
"39",
"four"
] |
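Background for the review above: before deep models, SSVEP responses were typically scored by spectral power at the candidate flicker frequencies. A minimal numpy sketch of that classical baseline, with an assumed 250 Hz sampling rate, 2 s epoch, and 8/10/12 Hz stimuli:

```python
import numpy as np

fs = 250                          # sampling rate in Hz (assumed)
stim_freqs = [8.0, 10.0, 12.0]    # candidate stimulus frequencies (assumed)
t = np.arange(0, 2.0, 1 / fs)     # one 2-second EEG epoch
epoch = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.randn(t.size)  # toy signal

power = np.abs(np.fft.rfft(epoch)) ** 2
freqs = np.fft.rfftfreq(epoch.size, d=1 / fs)

def band_power(f0, half_width=0.5):
    # Sum spectral power in a narrow band around the stimulus frequency.
    band = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
    return power[band].sum()

scores = {f: band_power(f) for f in stim_freqs}
print("detected stimulus:", max(scores, key=scores.get))  # 10.0 for this toy epoch
```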
Quantum deep learning-based anomaly detection for enhanced network security | [
"Moe Hdaib",
"Sutharshan Rajasegarar",
"Lei Pan"
] | Identifying and mitigating aberrant activities within the network traffic is important to prevent adverse consequences caused by cyber security incidents, which have been increasing significantly in recent times. Existing research mainly focuses on classical machine learning and deep learning-based approaches for detecting such attacks. However, exploiting the power of quantum deep learning to process complex correlation of features for anomaly detection is not well explored. Hence, in this paper, we investigate quantum machine learning and quantum deep learning-based anomaly detection methodologies to accurately detect network attacks. In particular, we propose three novel quantum auto-encoder-based anomaly detection frameworks. Our primary aim is to create hybrid models that leverage the strengths of both quantum and deep learning methodologies for efficient anomaly recognition. The three frameworks are formed by integrating the quantum autoencoder with a quantum one-class support vector machine, a quantum random forest, and a quantum k-nearest neighbor approach. The anomaly detection capability of the frameworks is evaluated using benchmark datasets comprising computer and Internet of Things network flows. Our evaluation demonstrates that all three frameworks have a high potential to detect the network traffic anomalies accurately, while the framework that integrates the quantum autoencoder with the quantum k-nearest neighbor yields the highest accuracy. This demonstrates the promising potential for the development of quantum frameworks for anomaly detection, underscoring their relevance for future advancements in network security. | 10.1007/s42484-024-00163-2 | quantum deep learning-based anomaly detection for enhanced network security | identifying and mitigating aberrant activities within the network traffic is important to prevent adverse consequences caused by cyber security incidents, which have been increasing significantly in recent times. existing research mainly focuses on classical machine learning and deep learning-based approaches for detecting such attacks. however, exploiting the power of quantum deep learning to process complex correlation of features for anomaly detection is not well explored. hence, in this paper, we investigate quantum machine learning and quantum deep learning-based anomaly detection methodologies to accurately detect network attacks. in particular, we propose three novel quantum auto-encoder-based anomaly detection frameworks. our primary aim is to create hybrid models that leverage the strengths of both quantum and deep learning methodologies for efficient anomaly recognition. the three frameworks are formed by integrating the quantum autoencoder with a quantum one-class support vector machine, a quantum random forest, and a quantum k-nearest neighbor approach. the anomaly detection capability of the frameworks is evaluated using benchmark datasets comprising computer and internet of things network flows. our evaluation demonstrates that all three frameworks have a high potential to detect the network traffic anomalies accurately, while the framework that integrates the quantum autoencoder with the quantum k-nearest neighbor yields the highest accuracy. this demonstrates the promising potential for the development of quantum frameworks for anomaly detection, underscoring their relevance for future advancements in network security. | [
"aberrant activities",
"the network traffic",
"adverse consequences",
"cyber security incidents",
"which",
"recent times",
"existing research",
"classical machine learning",
"deep learning-based approaches",
"such attacks",
"the power",
"quantum",
"complex correlation",
"features",
"anomaly detection",
"this paper",
"we",
"quantum machine learning",
"quantum",
"deep learning-based anomaly detection methodologies",
"network attacks",
"we",
"three novel quantum auto-encoder-based anomaly detection frameworks",
"our primary aim",
"hybrid models",
"that",
"the strengths",
"both quantum",
"deep learning methodologies",
"efficient anomaly recognition",
"the three frameworks",
"the quantum",
"autoencoder",
"a quantum one-class support vector machine",
"a quantum random forest",
"a quantum k-nearest neighbor approach",
"the anomaly detection capability",
"the frameworks",
"benchmark datasets",
"computer",
"internet",
"things network",
"our evaluation",
"all three frameworks",
"a high potential",
"the network traffic anomalies",
"the framework",
"that",
"the quantum",
"the quantum k-nearest neighbor",
"the highest accuracy",
"this",
"the promising potential",
"the development",
"quantum frameworks",
"anomaly detection",
"their relevance",
"future advancements",
"network security",
"anomaly detection methodologies",
"three",
"anomaly detection frameworks",
"anomaly recognition",
"three",
"one",
"quantum",
"the anomaly detection capability",
"three",
"quantum",
"quantum"
] |
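To make the hybrid idea in the record above concrete, here is a classical analogue of one of the three frameworks: an autoencoder's reconstruction error combined with a k-nearest-neighbor distance over toy flow features. The quantum components are replaced by classical scikit-learn counterparts, and all data shapes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(500, 8))    # toy benign network-flow features
attacks = rng.normal(4, 1, size=(10, 8))    # toy anomalous flows
X_test = np.vstack([normal[:50], attacks])

# "Autoencoder": an MLP trained to reproduce its input through a bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=1)
ae.fit(normal, normal)
recon_err = ((ae.predict(X_test) - X_test) ** 2).mean(axis=1)

# k-NN distance to the benign training set as a second anomaly score.
nn = NearestNeighbors(n_neighbors=5).fit(normal)
dist, _ = nn.kneighbors(X_test)
knn_score = dist.mean(axis=1)

# Flows scoring high on both signals are flagged as anomalies.
score = recon_err / recon_err.mean() + knn_score / knn_score.mean()
print("top-10 anomaly indices:", np.argsort(score)[-10:])  # expect 50..59
```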
A pedagogical study on promoting students' deep learning through design-based learning | [
"Chunmeng Weng",
"Congying Chen",
"Xianfeng Ai"
] | This paper illustrates the design-based learning (DBL) approach to promoting the deep learning of students and improving the quality of teaching in engineering design education. We performed three aspects of research with students in a typical educational activity. The first study investigated students' deep learning before and after the DBL approach, both in terms of deep learning status and deep learning ability. The second study examined the effectiveness of the DBL approach by comparative research of a control class (traditional teaching method) and an experimental class (DBL method). The third study examined students' evaluations of the DBL approach. It is approved that the DBL approach has distinctively stimulated the students' motivation to learn, making them more actively engaged in study. The students' higher-order thinking and higher-order capabilities are enhanced, such as critical thinking ability and problem-solving ability. At the same time, they are satisfied with the DBL approach. These findings suggest that the DBL approach is effective in promoting students' deep learning and improving the quality of teaching and learning. | 10.1007/s10798-022-09789-4 | a pedagogical study on promoting students' deep learning through design-based learning | this paper illustrates the design-based learning (dbl) approach to promoting the deep learning of students and improving the quality of teaching in engineering design education. we performed three aspects of research with students in a typical educational activity. the first study investigated students' deep learning before and after the dbl approach, both in terms of deep learning status and deep learning ability. the second study examined the effectiveness of the dbl approach by comparative research of a control class (traditional teaching method) and an experimental class (dbl method). the third study examined students' evaluations of the dbl approach. it is approved that the dbl approach has distinctively stimulated the students' motivation to learn, making them more actively engaged in study. the students' higher-order thinking and higher-order capabilities are enhanced, such as critical thinking ability and problem-solving ability. at the same time, they are satisfied with the dbl approach. these findings suggest that the dbl approach is effective in promoting students' deep learning and improving the quality of teaching and learning. | [
"this paper",
"the design-based learning",
"(dbl) approach",
"the deep learning",
"students",
"the quality",
"teaching",
"engineering design education",
"we",
"three aspects",
"research",
"students",
"a typical educational activity",
"the first study",
"students' deep learning",
"the dbl approach",
"terms",
"deep learning status",
"deep learning ability",
"the second study",
"the effectiveness",
"the dbl approach",
"comparative research",
"a control class",
"traditional teaching method",
"an experimental class",
"dbl method",
"the third study",
"students' evaluations",
"the dbl approach",
"it",
"the dbl approach",
"the students' motivation",
"them",
"study",
"the students' higher-order thinking",
"higher-order capabilities",
"critical thinking ability",
"problem-solving ability",
"the same time",
"they",
"the dbl approach",
"these findings",
"the dbl approach",
"students' deep learning",
"the quality",
"teaching",
"three",
"first",
"second",
"third"
] |
Deep Learning-Based Image and Video Inpainting: A Survey | [
"Weize Quan",
"Jiaxi Chen",
"Yanli Liu",
"Dong-Ming Yan",
"Peter Wonka"
] | Image and video inpainting is a classic problem in computer vision and computer graphics, aiming to fill in the plausible and realistic content in the missing areas of images and videos. With the advance of deep learning, this problem has achieved significant progress recently. The goal of this paper is to comprehensively review the deep learning-based methods for image and video inpainting. Specifically, we sort existing methods into different categories from the perspective of their high-level inpainting pipeline, present different deep learning architectures, including CNN, VAE, GAN, diffusion models, etc., and summarize techniques for module design. We review the training objectives and the common benchmark datasets. We present evaluation metrics for low-level pixel and high-level perceptional similarity, conduct a performance evaluation, and discuss the strengths and weaknesses of representative inpainting methods. We also discuss related real-world applications. Finally, we discuss open challenges and suggest potential future research directions. | 10.1007/s11263-023-01977-6 | deep learning-based image and video inpainting: a survey | image and video inpainting is a classic problem in computer vision and computer graphics, aiming to fill in the plausible and realistic content in the missing areas of images and videos. with the advance of deep learning, this problem has achieved significant progress recently. the goal of this paper is to comprehensively review the deep learning-based methods for image and video inpainting. specifically, we sort existing methods into different categories from the perspective of their high-level inpainting pipeline, present different deep learning architectures, including cnn, vae, gan, diffusion models, etc., and summarize techniques for module design. we review the training objectives and the common benchmark datasets. we present evaluation metrics for low-level pixel and high-level perceptional similarity, conduct a performance evaluation, and discuss the strengths and weaknesses of representative inpainting methods. we also discuss related real-world applications. finally, we discuss open challenges and suggest potential future research directions. | [
"image",
"a classic problem",
"computer vision",
"computer graphics",
"the plausible and realistic content",
"the missing areas",
"images",
"videos",
"the advance",
"deep learning",
"this problem",
"significant progress",
"the goal",
"this paper",
"the deep learning-based methods",
"image",
"we",
"existing methods",
"different categories",
"the perspective",
"their high-level inpainting pipeline",
"different deep learning architectures",
"cnn",
"vae",
"gan",
"diffusion models",
"techniques",
"module design",
"we",
"the training objectives",
"the common benchmark datasets",
"we",
"evaluation metrics",
"low-level pixel",
"high-level perceptional similarity",
"a performance evaluation",
"the strengths",
"weaknesses",
"representative inpainting methods",
"we",
"related real-world applications",
"we",
"open challenges",
"potential future research directions",
"cnn",
"vae, gan, diffusion models"
] |
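For orientation on the surveyed pipelines: most deep inpainting networks receive the degraded image concatenated with a binary hole mask, then composite the prediction back into the hole. A minimal numpy sketch of that input construction, where the image size and hole placement are arbitrary assumptions:

```python
import numpy as np

image = np.random.rand(256, 256, 3).astype(np.float32)  # toy RGB image in [0, 1]
mask = np.ones((256, 256, 1), dtype=np.float32)         # 1 = known, 0 = missing
mask[96:160, 96:160, :] = 0.0                           # a square hole to fill

masked_image = image * mask                             # hide the missing region
model_input = np.concatenate([masked_image, mask], axis=-1)
print(model_input.shape)  # (256, 256, 4): masked RGB plus the mask channel

# A trained network predicts the full image; compositing keeps known pixels:
#   output = mask * image + (1 - mask) * prediction
```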
Brain tumour detection using machine and deep learning: a systematic review | [
"Novsheena Rasool",
"Javaid Iqbal Bhat"
] | Brain tumors rank as the 10th leading cause of mortality worldwide, accounting for 85% to 95% of all primary nervous system malignancies. The prevalence of this life-threatening disease is steadily increasing worldwide, highlighting the urgent need for an early and precise diagnosis. Timely identification is critical for initiating effective treatment and improving patient survival chances. Delayed diagnosis significantly elevates the risk of mortality. However, the heterogeneous nature of tumor cells poses challenges for radiologists, making manual diagnosis from magnetic resonance imaging (MRI) images time-consuming and complex. Machine learning (ML) and deep learning (DL) have become useful tools in medical image analysis. These techniques facilitate the automated extraction of intricate patterns and features from MRI images, thereby facilitating a more accurate and efficient tumor diagnosis. Furthermore, these algorithms have demonstrated the capability to handle the intricacy and variability of brain tumor characteristics, thereby improving the diagnostic process. A range of deep learning-based algorithms have been utilized to detect brain tumors, yielding impressive results. The purpose of this paper is to provide an exhaustive examination of the latest techniques used for diagnosing brain tumors from MRI imaging, utilizing machine and deep learning technologies. Moreover, it seeks to outline potential avenues for future exploration within this field. The profound insights gleaned from this comprehensive review are poised to offer invaluable guidance and support to both researchers and medical professionals in the healthcare industry. | 10.1007/s11042-024-19333-2 | brain tumour detection using machine and deep learning: a systematic review | brain tumors rank as the 10th leading cause of mortality worldwide, accounting for 85% to 95% of all primary nervous system malignancies. the prevalence of this life-threatening disease is steadily increasing worldwide, highlighting the urgent need for an early and precise diagnosis. timely identification is critical for initiating effective treatment and improving patient survival chances. delayed diagnosis significantly elevates the risk of mortality. however, the heterogeneous nature of tumor cells poses challenges for radiologists, making manual diagnosis from magnetic resonance imaging (mri) images time-consuming and complex. machine learning (ml) and deep learning (dl) have become useful tools in medical image analysis. these techniques facilitate the automated extraction of intricate patterns and features from mri images, thereby facilitating a more accurate and efficient tumor diagnosis. furthermore, these algorithms have demonstrated the capability to handle the intricacy and variability of brain tumor characteristics, thereby improving the diagnostic process. a range of deep learning-based algorithms have been utilized to detect brain tumors, yielding impressive results. the purpose of this paper is to provide an exhaustive examination of the latest techniques used for diagnosing brain tumors from mri imaging, utilizing machine and deep learning technologies. moreover, it seeks to outline potential avenues for future exploration within this field. the profound insights gleaned from this comprehensive review are poised to offer invaluable guidance and support to both researchers and medical professionals in the healthcare industry. | [
"brain tumors",
"the 1oth leading cause",
"mortality",
"85% to 95%",
"all primary nervous system malignancies",
"the prevalence",
"this life-threatening disease",
"the urgent need",
"an early and precise diagnosis",
"timely identification",
"effective treatment",
"patient survival chances",
"delayed diagnosis",
"the risk",
"mortality",
"the heterogeneous nature",
"tumor cells",
"challenges",
"radiologists",
"manual diagnosis",
"magnetic resonance imaging",
"(mri",
"machine learning",
"ml",
"deep learning",
"dl",
"useful tools",
"medical image analysis",
"these techniques",
"the automated extraction",
"intricate patterns",
"features",
"mri images",
"a more accurate and efficient tumor diagnosis",
"these algorithms",
"the capability",
"the intricacy",
"variability",
"brain tumor characteristics",
"the diagnostic process",
"a range",
"deep learning-based algorithms",
"brain tumors",
"impressive results",
"the purpose",
"this paper",
"an exhaustive examination",
"the latest techniques",
"brain tumors",
"mri imaging",
"machine",
"deep learning technologies",
"it",
"potential avenues",
"future exploration",
"this field",
"the profound insights",
"this comprehensive review",
"invaluable guidance",
"support",
"both researchers",
"medical professionals",
"the healthcare industry",
"85% to 95%"
] |
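As a concrete anchor for the review above, this is the kind of minimal CNN slice classifier many of the surveyed detection pipelines build on; the 128x128 input, layer sizes, and toy data are illustrative assumptions rather than any reviewed architecture.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),       # one grayscale MRI slice
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # tumor vs. no tumor
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.random.rand(16, 128, 128, 1).astype("float32")  # toy slices
y = np.random.randint(0, 2, size=16)                    # toy labels
model.fit(x, y, epochs=1, verbose=0)
```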
Deep learning-based streamflow prediction for western Himalayan river basins | [
"Tabasum Majeed",
"Riyaz Ahmad Mir",
"Rayees Ahmad Dar",
"Mohd Anul Haq",
"Shabana Nargis Rasool",
"Assif Assad"
] | Accurate streamflow (Qflow) forecasting plays a pivotal role in water resource monitoring and management, presenting a complex challenge for water managers and engineers. Effective streamflow prediction enables the optimized operation of water resource systems in alignment with technological, financial, ethical, and political objectives. Traditional data-driven models like the Autoregressive model and Autoregressive Moving Average model are widely used for water resource management. However, these models show limitations in handling intricate nonlinear hydrological phenomena. To address these limitations, Deep Learning models emerge as promising alternatives, given their inherent ability to handle nonlinearity. Nonlinearity in time series modeling is formidable due to factors like long-term trends, seasonal variations, cyclical oscillations, and external disturbances. This study proposes a deep neural network architecture based on Neural Basis Expansion Analysis for Time Series (N-BEATS) to predict the daily Qflow of western Himalayan river basins. The study employs datasets collected from the Rampur station of the Satluj basin and the Pandoh and Manali stations of the Beas basin. The experimental results unequivocally demonstrate the superiority of the proposed deep neural network model over benchmarked conventional deep learning models such as Long Short-Term Memory, Feedforward Neural Network, Gated Recurrent Unit, and Recurrent Neural Network. The proposed deep neural network model achieves remarkable accuracy, exhibiting a root mean square error below 0.05 m3/s when comparing actual and predicted Qflow values across all datasets. Consequently, the proposed deep neural network model based on N-BEATS emerges as an efficient and invaluable solution for precise Qflow prediction, empowering efficient water resource management and control. The results suggest that the proposed model can serve for streamflow prediction and water management in Himalayan river basins. | 10.1007/s13198-024-02403-x | deep learning-based streamflow prediction for western himalayan river basins | accurate streamflow (qflow) forecasting plays a pivotal role in water resource monitoring and management, presenting a complex challenge for water managers and engineers. effective streamflow prediction enables the optimized operation of water resource systems in alignment with technological, financial, ethical, and political objectives. traditional data-driven models like the autoregressive model and autoregressive moving average model are widely used for water resource management. however, these models show limitations in handling intricate nonlinear hydrological phenomena. to address these limitations, deep learning models emerge as promising alternatives, given their inherent ability to handle nonlinearity. nonlinearity in time series modeling is formidable due to factors like long-term trends, seasonal variations, cyclical oscillations, and external disturbances. this study proposes a deep neural network architecture based on neural basis expansion analysis for time series (n-beats) to predict the daily qflow of western himalayan river basins. the study employs datasets collected from the rampur station of the satluj basin and the pandoh and manali stations of the beas basin. 
the experimental results unequivocally demonstrate the superiority of the proposed deep neural network model over benchmarked conventional deep learning models such as long short-term memory, feedforward neural network, gated recurrent unit, and recurrent neural network. the proposed deep neural network model achieves remarkable accuracy, exhibiting a root mean square error below 0.05 m3/s when comparing actual and predicted qflow values across all datasets. consequently, the proposed deep neural network model based on n-beats emerges as an efficient and invaluable solution for precise qflow prediction, empowering efficient water resource management and control. the results suggest that the proposed model can serve as a practical tool for streamflow prediction and water management in himalayan river basins. | [
"accurate streamflow (qflow) forecasting",
"a pivotal role",
"water resource monitoring",
"management",
"a complex challenge",
"water managers",
"engineers",
"effective streamflow prediction",
"the optimized operation",
"water resource systems",
"alignment",
"technological, financial, ethical, and political objectives",
"traditional data-driven models",
"the autoregressive model",
"autoregressive moving average model",
"water resource management",
"these models",
"limitations",
"intricate nonlinear hydrological phenomena",
"these limitations",
"deep learning models",
"promising alternatives",
"their inherent ability",
"nonlinearity",
"nonlinearity",
"time series",
"modeling",
"factors",
"long-term trends",
"seasonal variations",
"cyclical oscillations",
"external disturbances",
"this study",
"a deep neural network architecture",
"neural basis expansion analysis",
"time series",
"-beats",
"the daily qflow",
"western himalayan river basins",
"the study",
"datasets",
"the rampur station",
"the satluj basin",
"the pandoh and manali stations",
"the beas basin",
"the experimental results",
"the superiority",
"the proposed deep neural network model",
"benchmarked conventional deep learning models",
"long short-term memory",
"feedforward neural network",
"gated recurrent unit",
"neural network",
"the proposed deep neural network model",
"remarkable accuracy",
"a root mean square error",
"0.05 m3",
"qflow values",
"all datasets",
"the proposed deep neural network model",
"n-beats",
"an efficient and invaluable solution",
"precise qflow prediction",
"efficient water resource management",
"control",
"the results",
"the proposed model",
"streamflow prediction",
"water management",
"himalayan river basins",
"daily",
"rampur",
"0.05",
"m3/s"
] |
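The record above forecasts daily flow with an N-BEATS-style network, which consumes a fixed lookback window and emits a multi-step forecast. Below is a sketch of that supervised framing plus the RMSE metric quoted in the abstract; the window lengths and synthetic series are assumptions.

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Slice a 1-D series into (lookback -> horizon) supervised pairs."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback:i + lookback + horizon])
    return np.asarray(X), np.asarray(y)

t = np.arange(2000)
qflow = 5 + np.sin(2 * np.pi * t / 365) + 0.1 * np.random.randn(t.size)  # toy daily flow

X, y = make_windows(qflow, lookback=30, horizon=7)
print(X.shape, y.shape)  # (1964, 30) inputs and (1964, 7) targets

# Persistence baseline any learned model should beat: repeat the last value.
pred = np.repeat(X[:, -1:], 7, axis=1)
rmse = np.sqrt(((pred - y) ** 2).mean())
print(f"persistence RMSE: {rmse:.3f} m3/s")
```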
Supervised deep learning for content-aware image retargeting with Fourier Convolutions | [
"MohammadHossein Givkashi",
"MohammadReza Naderi",
"Nader Karimi",
"Shahram Shirani",
"Shadrokh Samavi"
] | Image retargeting aims to alter the size of the image with attention to the contents. One of the main obstacles to training deep learning models for image retargeting is the need for a vast labeled dataset. Labeled datasets are unavailable for training deep learning models in the image retargeting tasks. As a result, we present a new supervised approach for training deep learning models. We use the original images as ground truth and create inputs for the model by resizing and cropping the original images. A second challenge is generating different image sizes in inference time. However, normal convolutional neural networks cannot generate images of different sizes than the input image. To address this issue, we introduced a new method for supervised learning. In our approach, a mask is generated to show the desired size and location of the object. Then the mask and the input image are fed to the network. Comparing image retargeting methods and our proposed method demonstrates the model’s ability to produce high-quality retargeted images. Afterward, we compute the image quality assessment score for each output image based on different techniques and illustrate the effectiveness of our approach. | 10.1007/s11042-024-18876-8 | supervised deep learning for content-aware image retargeting with fourier convolutions | image retargeting aims to alter the size of the image with attention to the contents. one of the main obstacles to training deep learning models for image retargeting is the need for a vast labeled dataset. labeled datasets are unavailable for training deep learning models in the image retargeting tasks. as a result, we present a new supervised approach for training deep learning models. we use the original images as ground truth and create inputs for the model by resizing and cropping the original images. a second challenge is generating different image sizes in inference time. however, normal convolutional neural networks cannot generate images of different sizes than the input image. to address this issue, we introduced a new method for supervised learning. in our approach, a mask is generated to show the desired size and location of the object. then the mask and the input image are fed to the network. comparing image retargeting methods and our proposed method demonstrates the model’s ability to produce high-quality retargeted images. afterward, we compute the image quality assessment score for each output image based on different techniques and illustrate the effectiveness of our approach. | [
"the size",
"the image",
"attention",
"the contents",
"the main obstacles",
"deep learning models",
"the need",
"a vast labeled dataset",
"labeled datasets",
"deep learning models",
"the image retargeting tasks",
"a result",
"we",
"a new supervised approach",
"deep learning models",
"we",
"the original images",
"ground truth",
"inputs",
"the model",
"the original images",
"a second challenge",
"different image sizes",
"inference time",
"normal convolutional neural networks",
"images",
"different sizes",
"the input image",
"this issue",
"we",
"a new method",
"supervised learning",
"our approach",
"a mask",
"the desired size",
"location",
"the object",
"the mask",
"the input image",
"the network",
"image retargeting methods",
"our proposed method",
"the model’s ability",
"high-quality retargeted images",
"we",
"the image quality assessment score",
"each output image",
"different techniques",
"the effectiveness",
"our approach",
"one",
"second",
"fed"
] |
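The key device in the record above is a mask that tells the network the desired size and location of the object in the output canvas, which lets one model emit arbitrary target sizes. A minimal sketch of that mask construction, with illustrative geometry:

```python
import numpy as np

def make_target_mask(out_h, out_w, box):
    """Binary mask marking where the object should sit in the retargeted output.

    box = (top, left, height, width) of the desired object placement.
    """
    mask = np.zeros((out_h, out_w), dtype=np.float32)
    top, left, h, w = box
    mask[top:top + h, left:left + w] = 1.0
    return mask

image = np.random.rand(256, 256, 3).astype(np.float32)   # toy source image
# Retarget to a 192x256 canvas, keeping a 96x96 object roughly centered.
mask = make_target_mask(192, 256, box=(48, 80, 96, 96))
print(mask.shape, int(mask.sum()))  # the network consumes (image, mask) pairs
```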
Identifying keystone species in microbial communities using deep learning | [
"Xu-Wen Wang",
"Zheng Sun",
"Huijue Jia",
"Sebastian Michel-Mata",
"Marco Tulio Angulo",
"Lei Dai",
"Xuesong He",
"Scott T. Weiss",
"Yang-Yu Liu"
] | Previous studies suggested that microbial communities can harbour keystone species whose removal can cause a dramatic shift in microbiome structure and functioning. Yet, an efficient method to systematically identify keystone species in microbial communities is still lacking. Here we propose a data-driven keystone species identification (DKI) framework based on deep learning to resolve this challenge. Our key idea is to implicitly learn the assembly rules of microbial communities from a particular habitat by training a deep-learning model using microbiome samples collected from this habitat. The well-trained deep-learning model enables us to quantify the community-specific keystoneness of each species in any microbiome sample from this habitat by conducting a thought experiment on species removal. We systematically validated this DKI framework using synthetic data and applied DKI to analyse real data. We found that those taxa with high median keystoneness across different communities display strong community specificity. The presented DKI framework demonstrates the power of machine learning in tackling a fundamental problem in community ecology, paving the way for the data-driven management of complex microbial communities. | 10.1038/s41559-023-02250-2 | identifying keystone species in microbial communities using deep learning | previous studies suggested that microbial communities can harbour keystone species whose removal can cause a dramatic shift in microbiome structure and functioning. yet, an efficient method to systematically identify keystone species in microbial communities is still lacking. here we propose a data-driven keystone species identification (dki) framework based on deep learning to resolve this challenge. our key idea is to implicitly learn the assembly rules of microbial communities from a particular habitat by training a deep-learning model using microbiome samples collected from this habitat. the well-trained deep-learning model enables us to quantify the community-specific keystoneness of each species in any microbiome sample from this habitat by conducting a thought experiment on species removal. we systematically validated this dki framework using synthetic data and applied dki to analyse real data. we found that those taxa with high median keystoneness across different communities display strong community specificity. the presented dki framework demonstrates the power of machine learning in tackling a fundamental problem in community ecology, paving the way for the data-driven management of complex microbial communities. | [
"previous studies",
"microbial communities",
"keystone species",
"whose removal",
"a dramatic shift",
"microbiome structure",
"functioning",
"an efficient method",
"keystone species",
"microbial communities",
"we",
"a data-driven keystone species identification",
"(dki) framework",
"deep learning",
"this challenge",
"our key idea",
"the assembly rules",
"microbial communities",
"a particular habitat",
"a deep-learning model",
"microbiome samples",
"this habitat",
"the well-trained deep-learning model",
"us",
"the community-specific keystoneness",
"each species",
"any microbiome sample",
"this habitat",
"a thought experiment",
"species removal",
"we",
"this dki framework",
"synthetic data",
"applied dki",
"analyse real data",
"we",
"those taxa",
"high median keystoneness",
"different communities",
"strong community specificity",
"the presented dki framework",
"the power",
"a fundamental problem",
"community ecology",
"the way",
"the data-driven management",
"complex microbial communities"
] |
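The removal thought experiment above is easy to state in code: delete taxon i, ask the trained model for the new composition, and score the shift among the surviving taxa. The sketch below substitutes a toy interaction-matrix model for the deep network and uses Bray-Curtis dissimilarity; every numeric choice is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n_taxa = 20
base = rng.dirichlet(np.ones(n_taxa))            # toy relative abundances
A = rng.normal(0.0, 0.3, size=(n_taxa, n_taxa))  # toy pairwise interactions
np.fill_diagonal(A, 0.0)

def predict_composition(presence):
    """Stand-in for the trained deep model: predicted relative abundances
    of the taxa marked present, with interactions shifting the rest."""
    z = presence * np.clip(base + A @ (presence * base), 1e-9, None)
    return z / z.sum()

def bray_curtis(p, q):
    return np.abs(p - q).sum() / (p + q).sum()

baseline = predict_composition(np.ones(n_taxa))
keystoneness = np.zeros(n_taxa)
for i in range(n_taxa):
    presence = np.ones(n_taxa)
    presence[i] = 0.0                            # the removal thought experiment
    perturbed = predict_composition(presence)
    rest = np.arange(n_taxa) != i                # compare surviving taxa only
    p = baseline[rest] / baseline[rest].sum()
    q = perturbed[rest] / perturbed[rest].sum()
    keystoneness[i] = bray_curtis(p, q)

print("most keystone taxon:", int(keystoneness.argmax()))
```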
Leveraging self-paced learning and deep sparse embedding for image clustering | [
"Yanming Liu",
"Jinglei Liu"
] | Deep clustering outperforms traditional methods by incorporating feature learning. However, some existing deep clustering methods overlook the suitability of the learned features for clustering, leading to insufficient feedback received by the clustering model and hampering the accuracy improvement. To tackle these issues, we propose a joint self-paced learning and deep sparse embedding for image clustering. Our method consists of two stages: pretraining and finetuning. In the pretraining stage, the autoencoder learns basic features and constructs the feature space. In the finetuning stage, method performs two tasks: feature learning and cluster assignment. Specifically, we finetune the encoder with both original and augmented data to preserve the local structure in the feature space. Self-paced learning guarantees that the most confident features are used for each iteration and mitigates the influence of boundary samples. Furthermore, sparse embedding ensures that the model encodes only key features in feature learning tasks, thereby avoiding incorrect calculations resulting from redundant features. Finally, we jointly optimize these two tasks to complete the feature learning for clustering. Extensive experiments on various datasets demonstrate that our approach outperforms existing solutions. | 10.1007/s00521-023-09335-w | leveraging self-paced learning and deep sparse embedding for image clustering | deep clustering outperforms traditional methods by incorporating feature learning. however, some existing deep clustering methods overlook the suitability of the learned features for clustering, leading to insufficient feedback received by the clustering model and hampering the accuracy improvement. to tackle these issues, we propose a joint self-paced learning and deep sparse embedding for image clustering. our method consists of two stages: pretraining and finetuning. in the pretraining stage, the autoencoder learns basic features and constructs the feature space. in the finetuning stage, method performs two tasks: feature learning and cluster assignment. specifically, we finetune the encoder with both original and augmented data to preserve the local structure in the feature space. self-paced learning guarantees that the most confident features are used for each iteration and mitigates the influence of boundary samples. furthermore, sparse embedding ensures that the model encodes only key features in feature learning tasks, thereby avoiding incorrect calculations resulting from redundant features. finally, we jointly optimize these two tasks to complete the feature learning for clustering. extensive experiments on various datasets demonstrate that our approach outperforms existing solutions. | [
"deep clustering",
"traditional methods",
"feature learning",
"some existing deep clustering methods",
"the suitability",
"the learned features",
"clustering",
"insufficient feedback",
"the clustering model",
"the accuracy improvement",
"these issues",
"we",
"a joint self-paced learning",
"deep sparse",
"image clustering",
"our method",
"two stages",
"the pretraining stage",
"the autoencoder",
"basic features",
"the feature space",
"the finetuning stage",
"method",
"two tasks",
"feature learning",
"cluster assignment",
"we",
"the encoder",
"both original and augmented data",
"the local structure",
"the feature space",
"self-paced learning guarantees",
"the most confident features",
"each iteration",
"the influence",
"boundary samples",
"sparse",
"the model",
"only key features",
"feature learning tasks",
"incorrect calculations",
"redundant features",
"we",
"these two tasks",
"the feature",
"extensive experiments",
"various datasets",
"our approach",
"existing solutions",
"two",
"two",
"two"
] |
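The self-paced component above reduces to a simple selection rule: each round trains only on samples whose loss falls below a threshold lambda, which then grows so harder samples are admitted later. A minimal sketch with a toy loss vector and an assumed schedule:

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced regime: admit sample i iff its loss is below lam."""
    return (losses < lam).astype(float)

rng = np.random.default_rng(3)
losses = rng.exponential(scale=1.0, size=12)   # toy per-sample losses

lam = np.quantile(losses, 0.3)                 # start from the easiest 30%
for step in range(4):
    w = self_paced_weights(losses, lam)
    # A real trainer would minimise (w * losses).sum() over model parameters,
    # then recompute the losses before the next round.
    print(f"round {step}: lambda={lam:.2f}, samples used={int(w.sum())}/12")
    lam *= 1.5                                  # relax the threshold
```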
Automated abnormalities detection in mammography using deep learning | [
"Ghada M. El-Banby",
"Nourhan S. Salem",
"Eman A. Tafweek",
"Essam N. Abd El-Azziz"
] | Breast cancer is the second most prevalent cause of cancer death and the most common malignancy among women, posing a life-threatening risk. Treatment for breast cancer can be highly effective, with a survival chance of 90% or higher, especially when the disease is detected early. This paper introduces a groundbreaking deep U-Net framework for mammography breast cancer images to perform automatic detection of abnormalities. The objective is to provide segmented images that show areas of tumors more accurately than other deep learning techniques. The proposed framework consists of three steps. The first step is image preprocessing using the Li algorithm to minimize the cross-entropy between the foreground and the background, contrast enhancement using contrast-limited adaptive histogram equalization (CLAHE), normalization, and median filtering. The second step involves data augmentation to mitigate overfitting and underfitting, and the final step is implementing a convolutional encoder-decoder network-based U-Net architecture, characterized by high precision in medical image analysis. The framework has been tested on two comprehensive public datasets, namely INbreast and CBIS-DDSM. Several metrics have been adopted for quantitative performance assessment, including the Dice score, sensitivity, Hausdorff distance, Jaccard coefficient, precision, and F1 score. Quantitative results on the INbreast dataset show an average Dice score of 85.61% and a sensitivity of 81.26%. On the CBIS-DDSM dataset, the average Dice score is 87.98%, and the sensitivity reaches 90.58%. The experimental results ensure earlier and more accurate abnormality detection. Furthermore, the success of the proposed deep learning framework in mammography shows promise for broader applications in medical imaging, potentially revolutionizing various radiological practices. | 10.1007/s40747-024-01532-x | automated abnormalities detection in mammography using deep learning | breast cancer is the second most prevalent cause of cancer death and the most common malignancy among women, posing a life-threatening risk. treatment for breast cancer can be highly effective, with a survival chance of 90% or higher, especially when the disease is detected early. this paper introduces a groundbreaking deep u-net framework for mammography breast cancer images to perform automatic detection of abnormalities. the objective is to provide segmented images that show areas of tumors more accurately than other deep learning techniques. the proposed framework consists of three steps. the first step is image preprocessing using the li algorithm to minimize the cross-entropy between the foreground and the background, contrast enhancement using contrast-limited adaptive histogram equalization (clahe), normalization, and median filtering. the second step involves data augmentation to mitigate overfitting and underfitting, and the final step is implementing a convolutional encoder-decoder network-based u-net architecture, characterized by high precision in medical image analysis. the framework has been tested on two comprehensive public datasets, namely inbreast and cbis-ddsm. several metrics have been adopted for quantitative performance assessment, including the dice score, sensitivity, hausdorff distance, jaccard coefficient, precision, and f1 score. quantitative results on the inbreast dataset show an average dice score of 85.61% and a sensitivity of 81.26%. 
on the cbis-ddsm dataset, the average dice score is 87.98%, and the sensitivity reaches 90.58%. the experimental results ensure earlier and more accurate abnormality detection. furthermore, the success of the proposed deep learning framework in mammography shows promise for broader applications in medical imaging, potentially revolutionizing various radiological practices. | [
"breast cancer",
"the second most prevalent cause",
"cancer death",
"the most common malignancy",
"women",
"a life-threatening risk",
"treatment",
"breast cancer",
"a survival chance",
"90%",
"the disease",
"this paper",
"a groundbreaking deep u-net framework",
"mammography breast cancer images",
"automatic detection",
"abnormalities",
"the objective",
"segmented images",
"that",
"areas",
"tumors",
"other deep learning techniques",
"the proposed framework",
"three steps",
"the first step",
"image",
"the li algorithm",
"the cross",
"-",
"entropy",
"the foreground",
"the background",
"contrast enhancement",
"contrast-limited adaptive histogram equalization",
"clahe",
"normalization",
"median filtering",
"the second step",
"data augmentation",
"overfitting",
"the final step",
"a convolutional encoder-decoder network-based u-net architecture",
"high precision",
"medical image analysis",
"the framework",
"two comprehensive public datasets",
"namely inbreast and cbis-ddsm",
"several metrics",
"quantitative performance assessment",
"the dice score",
"sensitivity",
"hausdorff distance",
"precision",
"f1 score",
"quantitative results",
"the inbreast dataset",
"an average dice score",
"85.61%",
"a sensitivity",
"81.26%",
"the cbis-ddsm dataset",
"the average dice score",
"87.98%",
"the sensitivity",
"90.58%",
"the experimental results",
"earlier and more accurate abnormality detection",
"the success",
"the proposed deep learning framework",
"mammography",
"promise",
"broader applications",
"medical imaging",
"various radiological practices",
"second",
"90%",
"three",
"first",
"li",
"second",
"two",
"85.61%",
"81.26%",
"87.98%",
"90.58%"
] |
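The preprocessing stage above is fully scriptable with standard imaging libraries; here is a sketch using scikit-image and SciPy, together with the Dice score used for evaluation. The toy image, CLAHE clip limit, and filter size are assumptions; only the sequence of steps follows the abstract.

```python
import numpy as np
from scipy import ndimage
from skimage import exposure, filters

img = np.random.rand(128, 128)                 # toy grayscale mammogram

t = filters.threshold_li(img)                  # Li: minimise cross-entropy
foreground = img > t                           # breast region vs. background

enhanced = exposure.equalize_adapthist(img, clip_limit=0.02)  # CLAHE
normalised = (enhanced - enhanced.min()) / (enhanced.max() - enhanced.min())
denoised = ndimage.median_filter(normalised, size=3)          # median filtering

def dice(pred, truth, eps=1e-7):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return (2 * inter + eps) / (pred.sum() + truth.sum() + eps)

truth = np.zeros((128, 128), bool); truth[40:80, 40:80] = True  # toy masks
pred = np.zeros((128, 128), bool); pred[45:85, 45:85] = True
print(f"dice = {dice(pred, truth):.3f}")
```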
Deep Learning Based Entropy Controlled Optimization for the Detection of Covid-19 | [
"Jiong Chen",
"Abdullah Alshammari",
"Mohammed Alonazi",
"Aisha M. Alqahtani",
"Sara A. Althubiti",
"Romi Fadillah Rahmat"
] | Emerging technological advancements open the door for employing deep learning-based methods in practically all spheres of human endeavor. Because of their accuracy, deep learning algorithms can be used in healthcare to categorize and identify different illnesses. The recent coronavirus (COVID-19) outbreak has significantly strained the global medical system. By using medical imaging and PCR testing, COVID-19 can be diagnosed. Since COVID-19 is highly transmissible, it is generally considered secure to analyze it with a chest X-ray. To distinguish COVID-19 infections from additional infections that are not COVID-19 infections, a deep learning-based entropy-controlled whale optimization (EWOA) with Transfer Learning is suggested in this paper. The created system comprises three stages: a preliminary processing phase to remove noise effects and resize the image, then a deep learning architecture using a pre-trained model to extract features from the pre-processed image. After extracting the features, optimization is carried out. EWOA is utilized to combine and optimize the optimum features. A softmax layer is used to reach the final categorization. Various activation functions, thresholds, and optimizers are used to assess the systems. Numerous metrics for performance are utilized to measure the performance of the offered methodologies for assessment. Through an accuracy of 97.95%, the suggested technique accurately categorizes four classes, including COVID-19, viral pneumonia, chest infection, and routine. Compared to current methodologies found in the literature, the proposed technique exhibits advantages regarding accuracy. | 10.1007/s10723-024-09766-2 | deep learning based entropy controlled optimization for the detection of covid-19 | emerging technological advancements open the door for employing deep learning-based methods in practically all spheres of human endeavor. because of their accuracy, deep learning algorithms can be used in healthcare to categorize and identify different illnesses. the recent coronavirus (covid-19) outbreak has significantly strained the global medical system. by using medical imaging and pcr testing, covid-19 can be diagnosed. since covid-19 is highly transmissible, it is generally considered secure to analyze it with a chest x-ray. to distinguish covid-19 infections from additional infections that are not covid-19 infections, a deep learning-based entropy-controlled whale optimization (ewoa) with transfer learning is suggested in this paper. the created system comprises three stages: a preliminary processing phase to remove noise effects and resize the image, then a deep learning architecture using a pre-trained model to extract features from the pre-processed image. after extracting the features, optimization is carried out. ewoa is utilized to combine and optimize the optimum features. a softmax layer is used to reach the final categorization. various activation functions, thresholds, and optimizers are used to assess the systems. numerous metrics for performance are utilized to measure the performance of the offered methodologies for assessment. through an accuracy of 97.95%, the suggested technique accurately categorizes four classes, including covid-19, viral pneumonia, chest infection, and routine. compared to current methodologies found in the literature, the proposed technique exhibits advantages regarding accuracy. | [
"emerging technological advancements",
"the door",
"deep learning-based methods",
"practically all spheres",
"human endeavor",
"their accuracy",
"deep learning algorithms",
"healthcare",
"different illnesses",
"the recent coronavirus (covid-19) outbreak",
"the global medical system",
"medical imaging",
"pcr testing",
"covid-19",
"covid-19",
"it",
"it",
"-",
"covid-19 infections",
"additional infections",
"that",
"covid-19 infections",
"a deep learning-based entropy-controlled whale optimization",
"(ewoa",
"transfer learning",
"this paper",
"the created system",
"three stages",
"a preliminary processing phase",
"noise effects",
"the image",
"then a deep learning architecture",
"a pre-trained model",
"features",
"the pre-processed image",
"the features",
"optimization",
"ewoa",
"the optimum features",
"a softmax layer",
"the final categorization",
"various activation functions",
"thresholds",
"optimizers",
"the systems",
"numerous metrics",
"performance",
"the performance",
"the offered methodologies",
"assessment",
"an accuracy",
"97.95%",
"the suggested technique",
"four classes",
"covid-19",
"viral pneumonia",
"chest infection",
"current methodologies",
"the literature",
"accuracy",
"covid-19",
"covid-19",
"covid-19",
"covid-19",
"covid-19",
"three",
"97.95%",
"four",
"covid-19"
] |
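Of the three stages described above, the pretrained feature extractor and the softmax head are straightforward to sketch; the EWOA feature-selection step sits between them and is omitted here. The MobileNetV2 backbone is an assumed stand-in, not the paper's stated choice, and weights=None merely keeps the sketch offline.

```python
import numpy as np
import tensorflow as tf

backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg", weights=None)
# In practice, load weights="imagenet" and keep the backbone frozen at first.

model = tf.keras.Sequential([
    backbone,
    # Four classes: COVID-19, viral pneumonia, chest infection, normal.
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(8, 224, 224, 3).astype("float32")  # toy chest X-ray batch
y = np.random.randint(0, 4, size=8)
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1], verbose=0).shape)  # (1, 4) class probabilities
```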
Data set terminology of deep learning in medicine: a historical review and recommendation | [
"Shannon L. Walston",
"Hiroshi Seki",
"Hirotaka Takita",
"Yasuhito Mitsuyama",
"Shingo Sato",
"Akifumi Hagiwara",
"Rintaro Ito",
"Shouhei Hanaoka",
"Yukio Miki",
"Daiju Ueda"
] | Medicine and deep learning-based artificial intelligence (AI) engineering represent two distinct fields each with decades of published history. The current rapid convergence of deep learning and medicine has led to significant advancements, yet it has also introduced ambiguity regarding data set terms common to both fields, potentially leading to miscommunication and methodological discrepancies. This narrative review aims to give historical context for these terms, accentuate the importance of clarity when these terms are used in medical deep learning contexts, and offer solutions to mitigate misunderstandings by readers from either field. Through an examination of historical documents, including articles, writing guidelines, and textbooks, this review traces the divergent evolution of terms for data sets and their impact. Initially, the discordant interpretations of the word ‘validation’ in medical and AI contexts are explored. We then show that in the medical field as well, terms traditionally used in the deep learning domain are becoming more common, with the data for creating models referred to as the ‘training set’, the data for tuning of parameters referred to as the ‘validation (or tuning) set’, and the data for the evaluation of models as the ‘test set’. Additionally, the test sets used for model evaluation are classified into internal (random splitting, cross-validation, and leave-one-out) sets and external (temporal and geographic) sets. This review then identifies often misunderstood terms and proposes pragmatic solutions to mitigate terminological confusion in the field of deep learning in medicine. We support the accurate and standardized description of these data sets and the explicit definition of data set splitting terminologies in each publication. These are crucial methods for demonstrating the robustness and generalizability of deep learning applications in medicine. This review aspires to enhance the precision of communication, thereby fostering more effective and transparent research methodologies in this interdisciplinary field. | 10.1007/s11604-024-01608-1 | data set terminology of deep learning in medicine: a historical review and recommendation | medicine and deep learning-based artificial intelligence (ai) engineering represent two distinct fields each with decades of published history. the current rapid convergence of deep learning and medicine has led to significant advancements, yet it has also introduced ambiguity regarding data set terms common to both fields, potentially leading to miscommunication and methodological discrepancies. this narrative review aims to give historical context for these terms, accentuate the importance of clarity when these terms are used in medical deep learning contexts, and offer solutions to mitigate misunderstandings by readers from either field. through an examination of historical documents, including articles, writing guidelines, and textbooks, this review traces the divergent evolution of terms for data sets and their impact. initially, the discordant interpretations of the word ‘validation’ in medical and ai contexts are explored. we then show that in the medical field as well, terms traditionally used in the deep learning domain are becoming more common, with the data for creating models referred to as the ‘training set’, the data for tuning of parameters referred to as the ‘validation (or tuning) set’, and the data for the evaluation of models as the ‘test set’. 
additionally, the test sets used for model evaluation are classified into internal (random splitting, cross-validation, and leave-one-out) sets and external (temporal and geographic) sets. this review then identifies often misunderstood terms and proposes pragmatic solutions to mitigate terminological confusion in the field of deep learning in medicine. we support the accurate and standardized description of these data sets and the explicit definition of data set splitting terminologies in each publication. these are crucial methods for demonstrating the robustness and generalizability of deep learning applications in medicine. this review aspires to enhance the precision of communication, thereby fostering more effective and transparent research methodologies in this interdisciplinary field. | [
"medicine",
"deep learning-based artificial intelligence",
"(ai) engineering",
"two distinct fields",
"each",
"decades",
"published history",
"the current rapid convergence",
"deep learning",
"medicine",
"significant advancements",
"it",
"ambiguity",
"data",
"terms",
"both fields",
"miscommunication",
"methodological discrepancies",
"this narrative review",
"historical context",
"these terms",
"the importance",
"clarity",
"these terms",
"medical deep learning contexts",
"solutions",
"misunderstandings",
"readers",
"either field",
"an examination",
"historical documents",
"articles",
"guidelines",
"textbooks",
"this review",
"the divergent evolution",
"terms",
"data sets",
"their impact",
"the discordant interpretations",
"the word",
"validation",
"ai contexts",
"we",
"the medical field",
"terms",
"the deep learning domain",
"the data",
"models",
"the ‘training set",
"the data",
"tuning",
"parameters",
"the ‘validation (or tuning) set",
"the evaluation",
"models",
"the test sets",
"model evaluation",
"internal (random splitting",
"validation",
"sets",
"external (temporal and geographic) sets",
"this review",
"terms",
"pragmatic solutions",
"terminological confusion",
"the field",
"deep learning",
"medicine",
"we",
"the accurate and standardized description",
"these data sets",
"the explicit definition",
"data",
"splitting terminologies",
"each publication",
"these",
"crucial methods",
"the robustness",
"generalizability",
"deep learning applications",
"medicine",
"this review",
"the precision",
"communication",
"more effective and transparent research methodologies",
"this interdisciplinary field",
"two",
"each with decades",
"one"
] |
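The terminology this review settles on — a training set for fitting, a validation (or tuning) set for parameter selection, and a test set for final evaluation — maps directly onto a standard three-way split. Below is a minimal scikit-learn sketch on synthetic data; the 60/20/20 ratio and all variable names are illustrative assumptions, not recommendations from the review.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 16)), rng.integers(0, 2, size=1000)

# First carve out the held-out test set (used once, for final evaluation).
X_tmp, X_test, y_tmp, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Then split the remainder into training (fitting) and validation (tuning) sets.
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.25, stratify=y_tmp, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 600 200 200
```

Keeping the test set untouched until the very end is exactly the discipline the review argues should be stated explicitly in each publication.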
A Study on Machine Learning and Deep Learning Techniques for Identifying Malicious Web Content | [
"Sarita Mohanty",
"Asha Ambhakar"
] | The rapid proliferation of internet usage has led to an exponential increase in cyber threats, particularly malicious websites that can compromise user data and system integrity. Traditional methods of web security are increasingly becoming obsolete, necessitating more dynamic and adaptive approaches. This research paper presents a comprehensive comparative study of Machine Learning (ML) and Deep Learning (DL) techniques for the detection of malicious websites. Utilizing a dataset of over 420,000 web URLs, categorized into various features such as domain, subdomain, and domain suffix, the study aims to evaluate the effectiveness, precision, and computational efficiency of multiple algorithms. Two Convolutional Neural Network (CNN) models were developed and compared against traditional ML algorithms including Decision Trees, Random Forests, AdaBoost, K-Nearest Neighbors (KNN), Stochastic Gradient Descent (SGD), Extra Trees, and Gaussian Naive Bayes. The models were rigorously evaluated based on metrics such as accuracy, precision, recall, and F1-score. Preliminary results indicate that CNN models outperform traditional ML algorithms, achieving an accuracy rate of up to 98%, thereby highlighting the potential of DL in cybersecurity applications. Moreover, the study addresses the challenges posed by high cardinality and class imbalance in the dataset. Various data preprocessing techniques were employed to mitigate these issues, including feature engineering and oversampling of minority classes. The research contributes to the field by providing a detailed analysis of each algorithm’s strengths and weaknesses, thereby offering valuable insights into the adaptability and scalability of ML and DL techniques in malicious web detection. | 10.1007/s42979-024-03099-3 | a study on machine learning and deep learning techniques for identifying malicious web content | the rapid proliferation of internet usage has led to an exponential increase in cyber threats, particularly malicious websites that can compromise user data and system integrity. traditional methods of web security are increasingly becoming obsolete, necessitating more dynamic and adaptive approaches. this research paper presents a comprehensive comparative study of machine learning (ml) and deep learning (dl) techniques for the detection of malicious websites. utilizing a dataset of over 420,000 web urls, categorized into various features such as domain, subdomain, and domain suffix, the study aims to evaluate the effectiveness, precision, and computational efficiency of multiple algorithms. two convolutional neural network (cnn) models were developed and compared against traditional ml algorithms including decision trees, random forests, adaboost, k-nearest neighbors (knn), stochastic gradient descent (sgd), extra trees, and gaussian naive bayes. the models were rigorously evaluated based on metrics such as accuracy, precision, recall, and f1-score. preliminary results indicate that cnn models outperform traditional ml algorithms, achieving an accuracy rate of up to 98%, thereby highlighting the potential of dl in cybersecurity applications. moreover, the study addresses the challenges posed by high cardinality and class imbalance in the dataset. various data preprocessing techniques were employed to mitigate these issues, including feature engineering and oversampling of minority classes. 
the research contributes to the field by providing a detailed analysis of each algorithm’s strengths and weaknesses, thereby offering valuable insights into the adaptability and scalability of ml and dl techniques in malicious web detection. | [
"the rapid proliferation",
"internet usage",
"an exponential increase",
"cyber threats",
"particularly malicious websites",
"that",
"user data and system integrity",
"traditional methods",
"web security",
"more dynamic and adaptive approaches",
"this research paper",
"a comprehensive comparative study",
"machine learning",
"ml",
"deep learning",
"(dl) techniques",
"the detection",
"malicious websites",
"a dataset",
"over 420,000 web urls",
"various features",
"domain",
"subdomain",
"domain suffix",
"the study",
"the effectiveness",
"precision",
"computational efficiency",
"multiple algorithms",
"two convolutional neural network (cnn) models",
"traditional ml algorithms",
"decision trees",
"random forests",
"adaboost, k-nearest neighbors",
", stochastic gradient descent",
"sgd",
"extra trees",
"gaussian naive bayes",
"the models",
"metrics",
"accuracy",
"precision",
"recall",
"f1-score",
"preliminary results",
"cnn models",
"an accuracy rate",
"up to 98%",
"the potential",
"dl",
"cybersecurity applications",
"the study",
"the challenges",
"high cardinality",
"class imbalance",
"the dataset",
"various data",
"techniques",
"these issues",
"feature engineering",
"oversampling",
"minority classes",
"the research",
"the field",
"a detailed analysis",
"each algorithm’s strengths",
"weaknesses",
"valuable insights",
"the adaptability",
"scalability",
"ml",
"dl techniques",
"malicious web detection",
"over 420,000",
"suffix",
"two",
"cnn",
"bayes",
"cnn",
"up to 98%"
] |
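The study above derives features such as domain, subdomain, and domain suffix from raw URLs before fitting classifiers. One hedged way to approximate that pipeline without hand-written parsing is character n-gram hashing feeding a random forest; the toy URLs, labels, and hyperparameters below are assumptions, not the paper's setup.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.pipeline import make_pipeline

# Toy URLs; a real study would use a large labelled corpus such as the
# 420,000-URL dataset described above (not bundled here).
urls = ["http://login-secure-update.example.biz/verify",
        "https://docs.python.org/3/library/",
        "http://free-prizes.win/claim/now",
        "https://en.wikipedia.org/wiki/Phishing"]
labels = [1, 0, 1, 0]  # 1 = malicious, 0 = benign

# Character n-grams capture domain/subdomain/suffix patterns without
# explicit URL parsing; the n-gram range is an illustrative choice.
model = make_pipeline(
    HashingVectorizer(analyzer="char_wb", ngram_range=(3, 5), n_features=2**18),
    RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(urls, labels)
print(model.predict(["http://secure-login.win/update"]))
```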
Federal learning-based a dual-branch deep learning model for colon polyp segmentation | [
"Xuguang Cao",
"Kefeng Fan",
"Huilin Ma"
] | The incidence of colon cancer occupies the top three places in gastrointestinal tumors, and colon polyps are an important causative factor in the development of colon cancer. Early screening for colon polyps and colon polypectomy can reduce the chances of colon cancer. The current means of colon polyp examination is through colonoscopy, taking images of the gastrointestinal tract, and then marking them manually, which is time-consuming and labor-intensive for doctors. Therefore, relying on advanced deep learning technology to automatically identify colon polyps in the gastrointestinal tract of the patient and segmenting the polyps is an important direction of research nowadays. Due to the privacy of medical data and the non-interoperability of disease information, this paper proposes a dual-branch colon polyp segmentation network based on federated learning, which makes it possible to achieve a better training effect under the guarantee of data independence, and secondly, the dual-branch colon polyp segmentation network proposed in this paper adopts the two different structures of convolutional neural network (CNN) and Transformer to form a dual-branch structure, and through layer-by-layer fusion embedding, the advantages between different structures are realized. In this paper, we also propose the Aggregated Attention Module (AAM) to preserve the high-dimensional semantic information and to complement the missing information in the lower layers. Ultimately, our approach achieves state of the art in Kvasir-SEG and CVC-ClinicDB datasets. | 10.1007/s11042-024-19197-6 | federal learning-based a dual-branch deep learning model for colon polyp segmentation | the incidence of colon cancer occupies the top three places in gastrointestinal tumors, and colon polyps are an important causative factor in the development of colon cancer. early screening for colon polyps and colon polypectomy can reduce the chances of colon cancer. the current means of colon polyp examination is through colonoscopy, taking images of the gastrointestinal tract, and then marking them manually, which is time-consuming and labor-intensive for doctors. therefore, relying on advanced deep learning technology to automatically identify colon polyps in the gastrointestinal tract of the patient and segmenting the polyps is an important direction of research nowadays. due to the privacy of medical data and the non-interoperability of disease information, this paper proposes a dual-branch colon polyp segmentation network based on federated learning, which makes it possible to achieve a better training effect under the guarantee of data independence, and secondly, the dual-branch colon polyp segmentation network proposed in this paper adopts the two different structures of convolutional neural network (cnn) and transformer to form a dual-branch structure, and through layer-by-layer fusion embedding, the advantages between different structures are realized. in this paper, we also propose the aggregated attention module (aam) to preserve the high-dimensional semantic information and to complement the missing information in the lower layers. ultimately, our approach achieves state of the art in kvasir-seg and cvc-clinicdb datasets. | [
"the incidence",
"colon cancer",
"the top three places",
"gastrointestinal tumors",
"colon polyps",
"an important causative factor",
"the development",
"colon cancer",
"colon polyps",
"colon polypectomy",
"the chances",
"colon cancer",
"the current means",
"colon polyp examination",
"colonoscopy",
"images",
"the gastrointestinal tract",
"them",
"which",
"doctors",
"advanced deep learning technology",
"colon polyps",
"the gastrointestinal tract",
"the patient",
"the polyps",
"an important direction",
"research",
"the privacy",
"medical data",
"the non",
"-",
"interoperability",
"disease information",
"this paper",
"a dual-branch colon polyp segmentation network",
"federated learning",
"which",
"it",
"a better training effect",
"the guarantee",
"data independence",
"the dual-branch colon polyp segmentation network",
"this paper",
"the two different structures",
"convolutional neural network",
"cnn",
"a dual-branch structure",
"layer",
"the advantages",
"different structures",
"this paper",
"we",
"the aggregated attention module",
"aam",
"the high-dimensional semantic information",
"the missing information",
"the lower layers",
"our approach",
"state",
"the art",
"kvasir-seg and cvc-clinicdb datasets",
"three",
"secondly",
"two",
"cnn"
] |
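Federated learning, as used above, lets each site train on its private colonoscopy data and share only model weights. Below is a minimal FedAvg round in PyTorch; the tiny fully connected network merely stands in for the paper's dual-branch CNN/Transformer segmentation model, and the client count, epochs, and learning rate are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn

def local_update(model, data, target, epochs=1, lr=1e-3):
    """One client's training round on its private data."""
    model = copy.deepcopy(model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), target).backward()
        opt.step()
    return model.state_dict()

def fed_avg(states):
    """Server step: average client weights (equal-sized clients assumed)."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
    return avg

# Stand-in for the segmentation network; each "hospital" keeps its own data.
global_model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))
clients = [(torch.randn(8, 32), torch.rand(8, 1)) for _ in range(3)]

for rnd in range(5):  # communication rounds
    states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fed_avg(states))
```

Only `state_dict` tensors cross the network here, which is what provides the data-independence guarantee the abstract refers to.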
Deep learning in pulmonary nodule detection and segmentation: a systematic review | [
"Chuan Gao",
"Linyu Wu",
"Wei Wu",
"Yichao Huang",
"Xinyue Wang",
"Zhichao Sun",
"Maosheng Xu",
"Chen Gao"
] | Objectives: The accurate detection and precise segmentation of lung nodules on computed tomography are key prerequisites for early diagnosis and appropriate treatment of lung cancer. This study was designed to compare detection and segmentation methods for pulmonary nodules using deep-learning techniques to fill methodological gaps and biases in the existing literature. Methods: This study utilized a systematic review with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, searching PubMed, Embase, Web of Science Core Collection, and the Cochrane Library databases up to May 10, 2023. The Quality Assessment of Diagnostic Accuracy Studies 2 criteria was used to assess the risk of bias and was adjusted with the Checklist for Artificial Intelligence in Medical Imaging. The study analyzed and extracted model performance, data sources, and task-focus information. Results: After screening, we included nine studies meeting our inclusion criteria. These studies were published between 2019 and 2023 and predominantly used public datasets, with the Lung Image Database Consortium Image Collection and Image Database Resource Initiative and Lung Nodule Analysis 2016 being the most common. The studies focused on detection, segmentation, and other tasks, primarily utilizing Convolutional Neural Networks for model development. Performance evaluation covered multiple metrics, including sensitivity and the Dice coefficient. Conclusions: This study highlights the potential power of deep learning in lung nodule detection and segmentation. It underscores the importance of standardized data processing, code and data sharing, the value of external test datasets, and the need to balance model complexity and efficiency in future research. Clinical relevance statement: Deep learning demonstrates significant promise in autonomously detecting and segmenting pulmonary nodules. Future research should address methodological shortcomings and variability to enhance its clinical utility. Key Points:
Deep learning shows potential in the detection and segmentation of pulmonary nodules.
There are methodological gaps and biases present in the existing literature.
Factors such as external validation and transparency affect the clinical application. | 10.1007/s00330-024-10907-0 | deep learning in pulmonary nodule detection and segmentation: a systematic review | objectives: the accurate detection and precise segmentation of lung nodules on computed tomography are key prerequisites for early diagnosis and appropriate treatment of lung cancer. this study was designed to compare detection and segmentation methods for pulmonary nodules using deep-learning techniques to fill methodological gaps and biases in the existing literature. methods: this study utilized a systematic review with the preferred reporting items for systematic reviews and meta-analyses guidelines, searching pubmed, embase, web of science core collection, and the cochrane library databases up to may 10, 2023. the quality assessment of diagnostic accuracy studies 2 criteria was used to assess the risk of bias and was adjusted with the checklist for artificial intelligence in medical imaging. the study analyzed and extracted model performance, data sources, and task-focus information. results: after screening, we included nine studies meeting our inclusion criteria. these studies were published between 2019 and 2023 and predominantly used public datasets, with the lung image database consortium image collection and image database resource initiative and lung nodule analysis 2016 being the most common. the studies focused on detection, segmentation, and other tasks, primarily utilizing convolutional neural networks for model development. performance evaluation covered multiple metrics, including sensitivity and the dice coefficient. conclusions: this study highlights the potential power of deep learning in lung nodule detection and segmentation. it underscores the importance of standardized data processing, code and data sharing, the value of external test datasets, and the need to balance model complexity and efficiency in future research. clinical relevance statement: deep learning demonstrates significant promise in autonomously detecting and segmenting pulmonary nodules. future research should address methodological shortcomings and variability to enhance its clinical utility. key points: deep learning shows potential in the detection and segmentation of pulmonary nodules. there are methodological gaps and biases present in the existing literature. factors such as external validation and transparency affect the clinical application. | [
"objectivesthe accurate detection",
"precise segmentation",
"lung nodules",
"computed tomography",
"key prerequisites",
"early diagnosis",
"appropriate treatment",
"lung cancer",
"this study",
"detection and segmentation methods",
"pulmonary nodules",
"deep-learning techniques",
"methodological gaps",
"biases",
"the existing literature.methodsthis study",
"a systematic review",
"the preferred reporting items",
"systematic reviews",
"meta-analyses guidelines",
"embase, web",
"science core collection",
"the cochrane library",
"may",
"the quality assessment",
"diagnostic accuracy studies",
"2 criteria",
"the risk",
"bias",
"the checklist",
"artificial intelligence",
"medical imaging",
"the study",
"model performance",
"data sources",
"task-focus information.resultsafter screening",
"we",
"nine studies",
"our inclusion criteria",
"these studies",
"public datasets",
"the lung image database consortium image collection and image database resource initiative",
"lung",
"the studies",
"detection",
"segmentation",
"other tasks",
"convolutional neural networks",
"model development",
"performance evaluation",
"multiple metrics",
"sensitivity",
"the dice coefficient.conclusionsthis study",
"the potential power",
"deep learning",
"lung nodule detection",
"segmentation",
"it",
"the importance",
"standardized data processing",
"code",
"data",
"sharing",
"the value",
"external test datasets",
"the need",
"model complexity",
"efficiency",
"future research.clinical relevance statementdeep learning",
"significant promise",
"segmenting pulmonary nodules",
"future research",
"methodological shortcomings",
"variability",
"its clinical utility.key points",
"deep learning",
"potential",
"the detection",
"segmentation",
"pulmonary nodules",
"methodological gaps",
"biases",
"the existing literature",
"factors",
"external validation",
"transparency",
"the clinical application",
"2",
"nine",
"between 2019 and 2023",
"2016"
] |
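The review distinguishes internal test sets built by random splitting, cross-validation, and leave-one-out. The latter two can be sketched with scikit-learn; the logistic-regression classifier and synthetic data below are placeholders for the reviewed nodule models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X, y = rng.normal(size=(40, 8)), rng.integers(0, 2, size=40)
clf = LogisticRegression(max_iter=1000)

# k-fold cross-validation: every sample serves once as internal test data.
cv_scores = cross_val_score(
    clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Leave-one-out: the limiting case with a single held-out sample per fold.
loo_scores = cross_val_score(clf, X, y, cv=LeaveOneOut())

print(cv_scores.mean(), loo_scores.mean())
```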
Towards optimized tensor code generation for deep learning on sunway many-core processor | [
"Mingzhen Li",
"Changxi Liu",
"Jianjin Liao",
"Xuegui Zheng",
"Hailong Yang",
"Rujun Sun",
"Jun Xu",
"Lin Gan",
"Guangwen Yang",
"Zhongzhi Luan",
"Depei Qian"
] | The flourish of deep learning frameworks and hardware platforms has been demanding an efficient compiler that can shield the diversity in both software and hardware in order to provide application portability. Among the existing deep learning compilers, TVM is well known for its efficiency in code generation and optimization across diverse hardware devices. In the meanwhile, the Sunway many-core processor renders itself as a competitive candidate for its attractive computational power in both scientific computing and deep learning workloads. This paper combines the trends in these two directions. Specifically, we propose swTVM that extends the original TVM to support ahead-of-time compilation for architecture requiring cross-compilation such as Sunway. In addition, we leverage the architecture features during the compilation such as core group for massive parallelism, DMA for high bandwidth memory transfer and local device memory for data locality, in order to generate efficient codes for deep learning workloads on Sunway. The experiment results show that the codes generated by swTVM achieve 1.79× improvement of inference latency on average compared to the state-of-the-art deep learning framework on Sunway, across eight representative benchmarks. This work is the first attempt from the compiler perspective to bridge the gap of deep learning and Sunway processor particularly with productivity and efficiency in mind. We believe this work will encourage more people to embrace the power of deep learning and Sunway many-core processor. | 10.1007/s11704-022-2440-7 | towards optimized tensor code generation for deep learning on sunway many-core processor | the flourish of deep learning frameworks and hardware platforms has been demanding an efficient compiler that can shield the diversity in both software and hardware in order to provide application portability. among the existing deep learning compilers, tvm is well known for its efficiency in code generation and optimization across diverse hardware devices. in the meanwhile, the sunway many-core processor renders itself as a competitive candidate for its attractive computational power in both scientific computing and deep learning workloads. this paper combines the trends in these two directions. specifically, we propose swtvm that extends the original tvm to support ahead-of-time compilation for architecture requiring cross-compilation such as sunway. in addition, we leverage the architecture features during the compilation such as core group for massive parallelism, dma for high bandwidth memory transfer and local device memory for data locality, in order to generate efficient codes for deep learning workloads on sunway. the experiment results show that the codes generated by swtvm achieve 1.79× improvement of inference latency on average compared to the state-of-the-art deep learning framework on sunway, across eight representative benchmarks. this work is the first attempt from the compiler perspective to bridge the gap of deep learning and sunway processor particularly with productivity and efficiency in mind. we believe this work will encourage more people to embrace the power of deep learning and sunway many-core processor. | [
"the flourish",
"deep learning frameworks",
"hardware platforms",
"an efficient compiler",
"that",
"the diversity",
"both software",
"hardware",
"order",
"application portability",
"the existing deep learning compilers",
"tvm",
"its efficiency",
"code generation",
"optimization",
"diverse hardware devices",
"the meanwhile",
"the sunway many-core processor",
"itself",
"a competitive candidate",
"its attractive computational power",
"both scientific computing",
"deep learning workloads",
"this paper",
"the trends",
"these two directions",
"we",
"swtvm",
"that",
"the original tvm",
"time",
"architecture",
"cross",
"-",
"compilation",
"sunway",
"addition",
"we",
"the architecture",
"the compilation",
"core group",
"massive parallelism",
"dma",
"high bandwidth memory transfer",
"local device memory",
"data locality",
"order",
"efficient codes",
"deep learning workloads",
"sunway",
"the experiment results",
"the codes",
"swtvm",
"1.79× improvement",
"inference latency",
"the-art",
"sunway",
"eight representative benchmarks",
"this work",
"the first attempt",
"the compiler perspective",
"the gap",
"deep learning and sunway processor",
"productivity",
"efficiency",
"mind",
"we",
"this work",
"more people",
"the power",
"deep learning",
"many-core processor",
"two",
"1.79×",
"eight",
"first"
] |
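swTVM's Sunway backend is not publicly reproducible here, but the tensor-expression workflow it extends can be illustrated with stock TVM on the `llvm` target. The sketch below assumes the classic `tvm.te` API (present through roughly TVM 0.12); swTVM's Sunway-specific passes — ahead-of-time C emission, core-group parallelism, DMA staging — are deliberately omitted.

```python
import numpy as np
import tvm
from tvm import te

# Declare a vector-add computation symbolically.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

# Schedule and compile; swTVM would instead emit ahead-of-time code
# targeting the Sunway core groups at this step.
s = te.create_schedule(C.op)
fadd = tvm.build(s, [A, B, C], target="llvm", name="vadd")

dev = tvm.cpu(0)
a = tvm.nd.array(np.arange(8, dtype="float32"), dev)
b = tvm.nd.array(np.ones(8, dtype="float32"), dev)
c = tvm.nd.array(np.zeros(8, dtype="float32"), dev)
fadd(a, b, c)
print(c.numpy())
```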
Discovering cholinesterase inhibitors from Chinese herbal medicine with deep learning models | [
"Fulu Pan",
"Yang Liu",
"Zhiqiang Luo",
"Guopeng Wang",
"Xueyan Li",
"Huining Liu",
"Shuang Yu",
"Dongying Qi",
"Xinyu Wang",
"Xiaoyu Chai",
"Qianqian Wang",
"Renfang Yin",
"Yanli Pan"
] | Traditional Chinese medicine (TCM) holds distinctive advantages in the management of Alzheimer’s disease. Nonetheless, a considerable gap remains in our understanding of its pharmacologically active constituents. In this study, we harnessed the potential of deep learning models to swiftly and precisely predict drug-target interactions. We conducted a systematic screening of cholinesterase (ChE) inhibitors from an extensive array of TCM ingredients, followed by rigorous validation through in vitro experiments. We constructed both a drug-target interactions (DTI) model and a blood-brain barrier permeability (BBBP) model, with both models achieving an AUPRC score exceeding 0.9. Subsequently, we conducted a screening process that identified six compounds for in vitro ChE inhibitory assay. Notably, all six compounds exhibited a robust inhibitory effect on acetylcholinesterase (AChE), while four of the six compounds demonstrated potent inhibitory activity against butyrylcholinesterase (BChE). Our findings underscore the promise of leveraging deep learning to discover inhibitors from TCM. | 10.1007/s00044-024-03238-8 | discovering cholinesterase inhibitors from chinese herbal medicine with deep learning models | traditional chinese medicine (tcm) holds distinctive advantages in the management of alzheimer’s disease. nonetheless, a considerable gap remains in our understanding of its pharmacologically active constituents. in this study, we harnessed the potential of deep learning models to swiftly and precisely predict drug-target interactions. we conducted a systematic screening of cholinesterase (che) inhibitors from an extensive array of tcm ingredients, followed by rigorous validation through in vitro experiments. we constructed both a drug-target interactions (dti) model and a blood-brain barrier permeability (bbbp) model, with both models achieving an auprc score exceeding 0.9. subsequently, we conducted a screening process that identified six compounds for in vitro che inhibitory assay. notably, all six compounds exhibited a robust inhibitory effect on acetylcholinesterase (ache), while four of the six compounds demonstrated potent inhibitory activity against butyrylcholinesterase (bche). our findings underscore the promise of leveraging deep learning to discover inhibitors from tcm. | [
"traditional chinese medicine",
"tcm",
"distinctive advantages",
"the management",
"alzheimer’s disease",
"a considerable gap",
"our understanding",
"its pharmacologically active constituents",
"this study",
"we",
"the potential",
"deep learning models",
"drug-target interactions",
"we",
"a systematic screening",
"cholinesterase (che) inhibitors",
"an extensive array",
"tcm ingredients",
"rigorous validation",
"vitro experiments",
"we",
"both a drug-target interactions",
"(dti) model",
"a blood-brain barrier permeability",
"(bbbp) model",
"both models",
"an auprc score",
"we",
"a screening process",
"that",
"six compounds",
"in vitro che inhibitory assay",
"all six compounds",
"a robust inhibitory effect",
"acetylcholinesterase",
"ache",
"the six compounds",
"potent inhibitory activity",
"butyrylcholinesterase",
"bche",
"our findings",
"the promise",
"deep learning",
"inhibitors",
"tcm",
"chinese",
"cholinesterase (che",
"dti",
"0.9",
"six",
"six",
"four",
"six"
] |
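A drug-target interaction (DTI) model of the kind screened here can be sketched as a two-tower network scoring compound and protein feature vectors. The input dimensions, fingerprint choices, and architecture below are illustrative assumptions, not the authors' published networks.

```python
import torch
import torch.nn as nn

class DTIScorer(nn.Module):
    """Score compound-target pairs from precomputed feature vectors."""
    def __init__(self, drug_dim=1024, prot_dim=400, hidden=128):
        super().__init__()
        self.drug_net = nn.Sequential(nn.Linear(drug_dim, hidden), nn.ReLU())
        self.prot_net = nn.Sequential(nn.Linear(prot_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)  # logit of interaction

    def forward(self, drug, prot):
        z = torch.cat([self.drug_net(drug), self.prot_net(prot)], dim=-1)
        return self.head(z).squeeze(-1)

model = DTIScorer()
drug = torch.randn(4, 1024)  # e.g. Morgan fingerprints of TCM compounds
prot = torch.randn(4, 400)   # e.g. sequence descriptors of AChE/BChE
prob = torch.sigmoid(model(drug, prot))
print(prob)
```

Screening then amounts to ranking candidate compounds by this interaction probability before in vitro confirmation, as the study does for its six hits.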
Predict customer churn using combination deep learning networks model | [
"Van-Hieu Vu"
] | Customer churn is an important issue that is of constant concern to banks and is put at the forefront of the bank’s policies. The fact that banks can identify customers who are intending to leave the service can help banks promptly make policies to retain customers. In this paper, we propose a combined deep learning network model to predict whether customers will leave or stay at the bank. The proposed model consists of two levels: Level 0 consists of three basic models using three Deep Learning Neural Networks, and Level 1 is a logistic regression model. The proposed model has obtained evaluation results with accuracy metrics of 96.60%, precision metrics of 90.26%, recall metrics of 91.91% and F1 score of 91.07% on the dataset “Bank Customer Churn Prediction”. | 10.1007/s00521-023-09327-w | predict customer churn using combination deep learning networks model | customer churn is an important issue that is of constant concern to banks and is put at the forefront of the bank’s policies. the fact that banks can identify customers who are intending to leave the service can help banks promptly make policies to retain customers. in this paper, we propose a combined deep learning network model to predict whether customers will leave or stay at the bank. the proposed model consists of two levels: level 0 consists of three basic models using three deep learning neural networks, and level 1 is a logistic regression model. the proposed model has obtained evaluation results with accuracy metrics of 96.60%, precision metrics of 90.26%, recall metrics of 91.91% and f1 score of 91.07% on the dataset “bank customer churn prediction”. | [
"customers",
"an important issue",
"that",
"banks",
"the forefront",
"the bank’s policies",
"the fact",
"banks",
"customers",
"who",
"the service",
"banks",
"policies",
"customers",
"this paper",
"we",
"a combined deep learning network models",
"customers",
"the bank",
"the proposed model",
"two levels",
"level",
"three basic models",
"three deep learning neural networks",
"level",
"a logistic regression model",
"the proposed model",
"evaluation results",
"accuracy metrics",
"96.60%",
"precision metrics",
"90.26%",
"recall metrics",
"91.91%",
"f1 score",
"91.07%",
"the dataset “bank customer churn prediction",
"two",
"0",
"three",
"three",
"1",
"96.60%",
"90.26%",
"91.91%",
"91.07%"
] |
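The two-level design described above — three Level-0 networks whose outputs feed a Level-1 logistic regression — is classic stacking. Below is a hedged scikit-learn sketch using small MLPs as stand-ins for the deep networks; all layer sizes and hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Level 0: three differently shaped networks (stand-ins for the deep models).
level0 = [(f"net{i}", MLPClassifier(hidden_layer_sizes=h, max_iter=500,
                                    random_state=i))
          for i, h in enumerate([(64,), (64, 32), (128, 64)])]

# Level 1: logistic regression combines the Level-0 predicted probabilities,
# learned out-of-fold to avoid leakage.
stack = StackingClassifier(estimators=level0,
                           final_estimator=LogisticRegression(),
                           stack_method="predict_proba", cv=5)
stack.fit(X, y)
print(stack.score(X, y))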
Genome analysis through image processing with deep learning models | [
"Yao-zhong Zhang",
"Seiya Imoto"
] | Genomic sequences are traditionally represented as strings of characters: A (adenine), C (cytosine), G (guanine), and T (thymine). However, an alternative approach involves depicting sequence-related information through image representations, such as Chaos Game Representation (CGR) and read pileup images. With rapid advancements in deep learning (DL) methods within computer vision and natural language processing, there is growing interest in applying image-based DL methods to genomic sequence analysis. These methods involve encoding genomic information as images or integrating spatial information from images into the analytical process. In this review, we summarize three typical applications that use image processing with DL models for genome analysis. We examine the utilization and advantages of these image-based approaches. | 10.1038/s10038-024-01275-0 | genome analysis through image processing with deep learning models | genomic sequences are traditionally represented as strings of characters: a (adenine), c (cytosine), g (guanine), and t (thymine). however, an alternative approach involves depicting sequence-related information through image representations, such as chaos game representation (cgr) and read pileup images. with rapid advancements in deep learning (dl) methods within computer vision and natural language processing, there is growing interest in applying image-based dl methods to genomic sequence analysis. these methods involve encoding genomic information as images or integrating spatial information from images into the analytical process. in this review, we summarize three typical applications that use image processing with dl models for genome analysis. we examine the utilization and advantages of these image-based approaches. | [
"genomic sequences",
"strings",
"characters",
"a (adenine",
"c",
"cytosine",
"g",
"guanine",
"t",
"(thymine",
"an alternative approach",
"sequence-related information",
"image representations",
"chaos game representation",
"pileup images",
"rapid advancements",
"deep learning",
"computer vision",
"natural language processing",
"interest",
"image-based dl methods",
"genomic sequence analysis",
"these methods",
"genomic information",
"images",
"spatial information",
"images",
"the analytical process",
"this review",
"we",
"three typical applications",
"that",
"image processing",
"dl models",
"genome analysis",
"we",
"the utilization",
"advantages",
"these image-based approaches",
"three"
] |
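Chaos Game Representation (CGR), mentioned above, turns a DNA string into an image by repeatedly moving halfway toward the corner assigned to each base. Below is a minimal numpy implementation; corner assignments and image resolution vary between papers, so the choices here are just one common convention.

```python
import numpy as np

CORNERS = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

def cgr_image(seq, size=64):
    """Rasterised Chaos Game Representation of a DNA sequence."""
    img = np.zeros((size, size))
    x, y = 0.5, 0.5                        # start at the centre of the unit square
    for base in seq.upper():
        if base not in CORNERS:
            continue                       # skip ambiguous bases such as N
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2, (y + cy) / 2  # move halfway toward the base's corner
        img[min(int(y * size), size - 1), min(int(x * size), size - 1)] += 1
    return img

img = cgr_image("ACGTACGTGGGTTTAACCGT" * 50)
print(img.shape, img.sum())  # (64, 64) and one count per valid base
```

The resulting matrix can then be fed to any image-based DL model exactly as the review describes.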
Prediction of mechanical properties for deep drawing steel by deep learning | [
"Gang Xu",
"Jinshan He",
"Zhimin Lü",
"Min Li",
"Jinwu Xu"
] | At present, iron and steel enterprises mainly use “after spot test ward” to control final product quality. However, it is impossible to realize on-line quality predetermining for all products by this traditional approach, hence claims and returns often occur, resulting in major economic losses of enterprises. In order to realize the on-line quality predetermining for steel products during manufacturing process, the prediction models of mechanical properties based on deep learning have been proposed in this work. First, the mechanical properties of deep drawing steels were predicted by using LSTM (long short team memory), GRU (gated recurrent unit) network, and GPR (Gaussian process regression) model, and prediction accuracy and learning efficiency for different models were also discussed. Then, on-line re-learning methods for transfer learning models and model parameters were proposed. The experimental results show that not only the prediction accuracy of optimized transfer learning models has been improved, but also predetermining time was shortened to meet real time requirements of on-line property predetermining. The industrial production data of interstitial-free (IF) steel was used to demonstrate that R2 value of GRU model in training stage reaches more than 0.99, and R2 value in testing stage is more than 0.96. | 10.1007/s12613-022-2547-8 | prediction of mechanical properties for deep drawing steel by deep learning | at present, iron and steel enterprises mainly use “after spot test ward” to control final product quality. however, it is impossible to realize on-line quality predetermining for all products by this traditional approach, hence claims and returns often occur, resulting in major economic losses of enterprises. in order to realize the on-line quality predetermining for steel products during manufacturing process, the prediction models of mechanical properties based on deep learning have been proposed in this work. first, the mechanical properties of deep drawing steels were predicted by using lstm (long short team memory), gru (gated recurrent unit) network, and gpr (gaussian process regression) model, and prediction accuracy and learning efficiency for different models were also discussed. then, on-line re-learning methods for transfer learning models and model parameters were proposed. the experimental results show that not only the prediction accuracy of optimized transfer learning models has been improved, but also predetermining time was shortened to meet real time requirements of on-line property predetermining. the industrial production data of interstitial-free (if) steel was used to demonstrate that r2 value of gru model in training stage reaches more than 0.99, and r2 value in testing stage is more than 0.96. | [
"iron and steel enterprises",
"spot test ward",
"final product quality",
"it",
"line",
"all products",
"this traditional approach",
"returns",
"major economic losses",
"enterprises",
"order",
"line",
"steel products",
"manufacturing process",
"the prediction models",
"mechanical properties",
"deep learning",
"this work",
"the mechanical properties",
"deep drawing steels",
"lstm",
"long short team memory",
"gru",
"gated recurrent unit",
"network",
"gpr",
"gaussian process regression",
"model",
"prediction accuracy",
"efficiency",
"different models",
"line",
"-learning methods",
"transfer learning models",
"model parameters",
"the experimental results",
"not only the prediction accuracy",
"optimized transfer learning models",
"predetermining time",
"real time requirements",
"line",
"the industrial production data",
") steel",
"r2 value",
"gru model",
"training stage",
"r2 value",
"testing stage",
"first",
"gpr",
"gaussian",
"more than 0.99"
] |
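A GRU regressor over process time series, followed by continued training on fresh data in the spirit of the paper's on-line re-learning, can be sketched in PyTorch. The sensor count, window length, and three-property output head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PropertyGRU(nn.Module):
    """Regress mechanical properties from a process-parameter time series."""
    def __init__(self, n_feats=12, hidden=32, n_props=3):
        super().__init__()
        self.gru = nn.GRU(n_feats, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_props)

    def forward(self, x):          # x: (batch, time, features)
        _, h = self.gru(x)
        return self.head(h[-1])    # e.g. strength, yield point, elongation

model = PropertyGRU()
x, y = torch.randn(16, 50, 12), torch.randn(16, 3)  # toy production windows
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    nn.functional.mse_loss(model(x), y).backward()
    opt.step()

# On-line re-learning in the spirit of the paper: keep the trained weights
# and continue optimisation on newly arriving production data.
x_new, y_new = torch.randn(4, 50, 12), torch.randn(4, 3)
for _ in range(10):
    opt.zero_grad()
    nn.functional.mse_loss(model(x_new), y_new).backward()
    opt.step()
```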
Supervised and Unsupervised Deep Learning Approaches for EEG Seizure Prediction | [
"Zakary Georgis-Yap",
"Milos R. Popovic",
"Shehroz S. Khan"
] | Epilepsy affects more than 50 million people worldwide, making it one of the world’s most prevalent neurological diseases. The main symptom of epilepsy is seizures, which occur abruptly and can cause serious injury or death. The ability to predict the occurrence of an epileptic seizure could alleviate many risks and stresses people with epilepsy face. We formulate the problem of detecting preictal (or pre-seizure) EEG, with reference to normal EEG, as a precursor to an incoming seizure. To this end, we developed several supervised deep learning models to identify preictal EEG from normal EEG. We further develop novel unsupervised deep learning approaches to train the models on only normal EEG and detect pre-seizure EEG as an anomalous event. These deep learning models were trained and evaluated on two large EEG seizure datasets in a person-specific manner. We found that both supervised and unsupervised approaches are feasible; however, their performance varies depending on the patient, approach and architecture. This new line of research has the potential to develop therapeutic interventions and save human lives. | 10.1007/s41666-024-00160-x | supervised and unsupervised deep learning approaches for eeg seizure prediction | epilepsy affects more than 50 million people worldwide, making it one of the world’s most prevalent neurological diseases. the main symptom of epilepsy is seizures, which occur abruptly and can cause serious injury or death. the ability to predict the occurrence of an epileptic seizure could alleviate many risks and stresses people with epilepsy face. we formulate the problem of detecting preictal (or pre-seizure) eeg, with reference to normal eeg, as a precursor to an incoming seizure. to this end, we developed several supervised deep learning models to identify preictal eeg from normal eeg. we further develop novel unsupervised deep learning approaches to train the models on only normal eeg and detect pre-seizure eeg as an anomalous event. these deep learning models were trained and evaluated on two large eeg seizure datasets in a person-specific manner. we found that both supervised and unsupervised approaches are feasible; however, their performance varies depending on the patient, approach and architecture. this new line of research has the potential to develop therapeutic interventions and save human lives. | [
"epilepsy",
"more than 50 million people",
"it",
"the world’s most prevalent neurological diseases",
"the main symptom",
"epilepsy",
"seizures",
"which",
"serious injury",
"death",
"the ability",
"the occurrence",
"an epileptic seizure",
"many risks",
"people",
"epilepsy face",
"we",
"the problem",
"reference",
"normal eeg",
"a precursor",
"incoming seizure",
"this end",
"we",
"several supervised deep learning approaches model",
"preictal eeg",
"normal eeg",
"we",
"novel unsupervised deep learning approaches",
"the models",
"only normal eeg",
"pre-seizure eeg",
"an anomalous event",
"these deep learning models",
"two large eeg seizure datasets",
"a person-specific manner",
"we",
"both",
"approaches",
"their performance",
"the patient",
"approach",
"architecture",
"this new line",
"research",
"the potential",
"therapeutic interventions",
"human lives",
"more than 50 million",
"two"
] |
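The unsupervised branch described above trains only on normal EEG and treats preictal windows as anomalies. Below is a minimal reconstruction-error autoencoder sketch in PyTorch; the window size, architecture, and quantile threshold are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Autoencoder trained on normal EEG windows only; preictal windows should
# reconstruct poorly and be flagged as anomalous.
ae = nn.Sequential(
    nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 16),              # encoder
    nn.ReLU(), nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 256))   # decoder

normal = torch.randn(512, 256)  # flattened per-channel EEG windows (toy data)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    nn.functional.mse_loss(ae(normal), normal).backward()
    opt.step()

def anomaly_score(x):
    with torch.no_grad():
        return ((ae(x) - x) ** 2).mean(dim=1)

# Threshold from normal data (a high quantile) -- an illustrative choice.
thr = anomaly_score(normal).quantile(0.99)
window = torch.randn(1, 256) * 3.0  # stand-in for a preictal window
print(bool(anomaly_score(window) > thr))
```

Training per patient, as the study does, would simply repeat this fit on each person's normal recordings.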
Leveraging deep learning-assisted attacks against image obfuscation via federated learning | [
"Jimmy Tekli",
"Bechara Al Bouna",
"Gilbert Tekli",
"Raphaël Couturier",
"Antoine Charbel"
] | Obfuscation techniques (e.g., blurring) are employed to protect sensitive information (SI) in images such as individuals’ faces. Recent works demonstrated that adversaries can perform deep learning-assisted (DL) attacks to re-identify obfuscated face images. Adversaries are modeled by their goals, knowledge (e.g., background knowledge), and capabilities (e.g., DL-assisted attacks). Nevertheless, enhancing the evaluation methodology of obfuscation techniques and improving the defense strategies against adversaries requires considering more “pessimistic” attacking scenarios, i.e., stronger adversaries. According to a 2019 article published by the European Union Agency for Cybersecurity (ENISA), adversaries tend to perform more sophisticated and dangerous attacks when collaborating together. To address these concerns, our paper investigates a novel privacy challenge in the context of image obfuscation. Specifically, we examine whether adversaries, when collaborating together, can amplify their DL-assisted attacks and cause additional privacy breaches against a target dataset of obfuscated images. We empirically demonstrate that federated learning (FL) can be used as a collaborative attack/adversarial strategy to (i) leverage the attacking capabilities of an adversary, (ii) increase the privacy breaches, and (iii) remedy the lack of background knowledge and data shortage without the need to share/disclose the local training datasets in a centralized location. To the best of our knowledge, we are the first to consider collaborative and more specifically FL-based attacks in the context of face obfuscation. | 10.1007/s00521-024-09703-0 | leveraging deep learning-assisted attacks against image obfuscation via federated learning | obfuscation techniques (e.g., blurring) are employed to protect sensitive information (si) in images such as individuals’ faces. recent works demonstrated that adversaries can perform deep learning-assisted (dl) attacks to re-identify obfuscated face images. adversaries are modeled by their goals, knowledge (e.g., background knowledge), and capabilities (e.g., dl-assisted attacks). nevertheless, enhancing the evaluation methodology of obfuscation techniques and improving the defense strategies against adversaries requires considering more “pessimistic” attacking scenarios, i.e., stronger adversaries. according to a 2019 article published by the european union agency for cybersecurity (enisa), adversaries tend to perform more sophisticated and dangerous attacks when collaborating together. to address these concerns, our paper investigates a novel privacy challenge in the context of image obfuscation. specifically, we examine whether adversaries, when collaborating together, can amplify their dl-assisted attacks and cause additional privacy breaches against a target dataset of obfuscated images. we empirically demonstrate that federated learning (fl) can be used as a collaborative attack/adversarial strategy to (i) leverage the attacking capabilities of an adversary, (ii) increase the privacy breaches, and (iii) remedy the lack of background knowledge and data shortage without the need to share/disclose the local training datasets in a centralized location. to the best of our knowledge, we are the first to consider collaborative and more specifically fl-based attacks in the context of face obfuscation. | [
"obfuscation techniques",
"e.g., blurring",
"sensitive information",
"images",
"recent works",
"adversaries",
"dl",
"obfuscated face images",
"adversaries",
"their goals",
"knowledge",
"capabilities",
"(e.g., dl-assisted attacks",
"the evaluation methodology",
"obfuscation techniques",
"the defense strategies",
"adversaries",
"scenario",
"i.e., stronger adversaries",
"a 2019 article",
"the european union agency",
"cybersecurity",
"enisa",
"adversaries",
"more sophisticated and dangerous attacks",
"these concerns",
"our paper",
"a novel privacy challenge",
"the context",
"image obfuscation",
"we",
"adversaries",
"their dl-assisted attacks",
"additional privacy breaches",
"a target dataset",
"obfuscated images",
"we",
"federated learning",
"a collaborative attack/adversarial strategy",
"(i",
"the attacking capabilities",
"an adversary",
"ii",
"the privacy breaches",
"(iii",
"the lack",
"background knowledge and data shortage",
"the need",
"the local training datasets",
"a centralized location",
"our knowledge",
"we",
"collaborative and more specifically fl-based attacks",
"the context",
"face obfuscation",
"2019",
"the european union agency",
"first"
] |
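The obfuscation under attack here is typically simple blurring of a face region. Below is a short Pillow sketch of that defence side; the bounding box and blur radius are hypothetical placeholders. The collaborative attack side would coordinate adversaries by averaging model weights, much like the FedAvg sketch given earlier in this document.

```python
from PIL import Image, ImageFilter

# Blur-based obfuscation of a face region; the radius controls the
# privacy/utility trade-off probed by the DL-assisted attacks above.
img = Image.new("RGB", (128, 128), "white")  # stand-in for a face crop
box = (32, 32, 96, 96)                       # hypothetical face bounding box
region = img.crop(box).filter(ImageFilter.GaussianBlur(radius=8))
img.paste(region, box)
img.save("obfuscated.png")
```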
Are deep learning classification results obtained on CT scans fair and interpretable? | [
"Mohamad M. A. Ashames",
"Ahmet Demir",
"Omer N. Gerek",
"Mehmet Fidan",
"M. Bilginer Gulmezoglu",
"Semih Ergin",
"Rifat Edizkan",
"Mehmet Koc",
"Atalay Barkana",
"Cuneyt Calisir"
] | Following the great success of various deep learning methods in image and object classification, the biomedical image processing society is also overwhelmed with their applications to various automatic diagnosis cases. Unfortunately, most of the deep learning-based classification attempts in the literature solely focus on the aim of extreme accuracy scores, without considering interpretability, or patient-wise separation of training and test data. For example, most lung nodule classification papers using deep learning randomly shuffle data and split it into training, validation, and test sets, causing certain images from the Computed Tomography (CT) scan of a person to be in the training set, while other images of the same person to be in the validation or testing image sets. This can result in reporting misleading accuracy rates and the learning of irrelevant features, ultimately reducing the real-life usability of these models. When the deep neural networks trained on the traditional, unfair data shuffling method are challenged with new patient images, it is observed that the trained models perform poorly. In contrast, deep neural networks trained with strict patient-level separation maintain their accuracy rates even when new patient images are tested. Heat map visualizations of the activations of the deep neural networks trained with strict patient-level separation indicate a higher degree of focus on the relevant nodules. We argue that the research question posed in the title has a positive answer only if the deep neural networks are trained with images of patients that are strictly isolated from the validation and testing patient sets. | 10.1007/s13246-024-01419-8 | are deep learning classification results obtained on ct scans fair and interpretable? | following the great success of various deep learning methods in image and object classification, the biomedical image processing society is also overwhelmed with their applications to various automatic diagnosis cases. unfortunately, most of the deep learning-based classification attempts in the literature solely focus on the aim of extreme accuracy scores, without considering interpretability, or patient-wise separation of training and test data. for example, most lung nodule classification papers using deep learning randomly shuffle data and split it into training, validation, and test sets, causing certain images from the computed tomography (ct) scan of a person to be in the training set, while other images of the same person to be in the validation or testing image sets. this can result in reporting misleading accuracy rates and the learning of irrelevant features, ultimately reducing the real-life usability of these models. when the deep neural networks trained on the traditional, unfair data shuffling method are challenged with new patient images, it is observed that the trained models perform poorly. in contrast, deep neural networks trained with strict patient-level separation maintain their accuracy rates even when new patient images are tested. heat map visualizations of the activations of the deep neural networks trained with strict patient-level separation indicate a higher degree of focus on the relevant nodules. we argue that the research question posed in the title has a positive answer only if the deep neural networks are trained with images of patients that are strictly isolated from the validation and testing patient sets. | [
"the great success",
"various deep learning methods",
"image and object classification",
"the biomedical image processing society",
"their applications",
"various automatic diagnosis cases",
"the deep learning-based classification attempts",
"the literature",
"the aim",
"extreme accuracy scores",
"interpretability",
"patient-wise separation",
"training",
"test data",
"example",
"most lung",
"classification papers",
"deep learning randomly shuffle data",
"it",
"training",
"validation",
"test sets",
"certain images",
"the computed tomography (ct) scan",
"a person",
"the training set",
"the same person",
"the validation or testing image sets",
"this",
"misleading accuracy rates",
"the learning",
"irrelevant features",
"the real-life usability",
"these models",
"the deep neural networks",
"the traditional, unfair data",
"method",
"new patient images",
"it",
"the trained models",
"contrast",
"deep neural networks",
"strict patient-level separation",
"their accuracy rates",
"new patient images",
"heat map visualizations",
"the activations",
"the deep neural networks",
"strict patient-level separation",
"a higher degree",
"focus",
"the relevant nodules",
"we",
"the research question",
"the title",
"a positive answer",
"the deep neural networks",
"images",
"patients",
"that",
"the validation and testing patient sets"
] |
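The fairness issue raised above is leakage from slice-level shuffling; the remedy is to split at the patient level so that no person contributes images to both sides. scikit-learn's GroupShuffleSplit does exactly this; the toy features and patient IDs below are placeholders.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_slices = 200
X = rng.normal(size=(n_slices, 32))            # one row per CT slice (toy features)
y = rng.integers(0, 2, size=n_slices)
patients = rng.integers(0, 20, size=n_slices)  # patient ID for every slice

# Group-aware split: all slices from one patient land on the same side,
# preventing the leakage criticised above.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups=patients))
assert not set(patients[train_idx]) & set(patients[test_idx])
```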
A Novel Approach Using Transfer Learning Architectural Models Based Deep Learning Techniques for Identification and Classification of Malignant Skin Cancer | [
"Balambigai Subramanian",
"Suresh Muthusamy",
"Kokilavani Thangaraj",
"Hitesh Panchal",
"Elavarasi Kasirajan",
"Abarna Marimuthu",
"Abinaya Ravi"
] | Melanoma, a form of skin cancer originating in melanocyte cells, poses a significant health risk, although it is less prevalent than other types of skin cancer. Its detection presents challenges, even under expert observation. To enhance the classification accuracy of skin lesions, a Deep Convolutional Neural Network, Visual Geometry Group model has been proposed. However, deep learning methods typically require substantial training time. To mitigate this, transfer learning techniques are employed, reducing training duration. Data sets sourced from the International Skin Imaging Collaboration are utilized to train the model within this proposed approach. Evaluation of classification performance involves metrics such as Accuracy, Positive Predictive Value, Negative Predictive Value, Specificity, and Sensitivity. The classifier’s performance on test data is depicted through a confusion matrix. The introduction of transfer learning techniques into the Deep Convolutional Neural Network has resulted in an improved classification accuracy of 85%, compared to the 81% achieved by a standard Convolutional Neural Network. | 10.1007/s11277-024-11006-5 | a novel approach using transfer learning architectural models based deep learning techniques for identification and classification of malignant skin cancer | melanoma, a form of skin cancer originating in melanocyte cells, poses a significant health risk, although it is less prevalent than other types of skin cancer. its detection presents challenges, even under expert observation. to enhance the classification accuracy of skin lesions, a deep convolutional neural network, visual geometry group model has been proposed. however, deep learning methods typically require substantial training time. to mitigate this, transfer learning techniques are employed, reducing training duration. data sets sourced from the international skin imaging collaboration are utilized to train the model within this proposed approach. evaluation of classification performance involves metrics such as accuracy, positive predictive value, negative predictive value, specificity, and sensitivity. the classifier’s performance on test data is depicted through a confusion matrix. the introduction of transfer learning techniques into the deep convolutional neural network has resulted in an improved classification accuracy of 85%, compared to the 81% achieved by a standard convolutional neural network. | [
"melanoma",
"a form",
"melanocyte cells",
"a significant health risk",
"it",
"other types",
"skin cancer",
"its detection",
"challenges",
"expert observation",
"the classification accuracy",
"skin lesions",
"a deep convolutional neural network",
"visual geometry group model",
"deep learning methods",
"substantial training time",
"this",
"transfer learning techniques",
"training duration",
"data sets",
"the international skin imaging collaboration",
"the model",
"this proposed approach",
"evaluation",
"classification performance",
"metrics",
"accuracy",
"positive predictive value",
"negative predictive value",
"specificity",
"sensitivity",
"the classifier’s performance",
"test data",
"a confusion matrix",
"the introduction",
"techniques",
"the deep convolutional neural network",
"an improved classification accuracy",
"85%",
"the 81%",
"a standard convolutional neural network",
"85%",
"the 81%"
] |
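Transfer learning with a frozen VGG backbone, as proposed above, can be sketched in Keras. The input size, head layout, and single-logit benign/malignant output are illustrative assumptions rather than the paper's exact configuration.

```python
import tensorflow as tf

# Frozen ImageNet VGG16 backbone with a small lesion-classification head.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: reuse pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # benign vs malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # ISIC-derived data
```

Freezing the backbone is what shortens training, as the abstract notes; unfreezing the top convolutional blocks for fine-tuning is a common follow-up step.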
Deep contrastive learning of molecular conformation for efficient property prediction | [
"Yang Jeong Park",
"HyunGi Kim",
"Jeonghee Jo",
"Sungroh Yoon"
] | Data-driven deep learning algorithms provide accurate prediction of high-level quantum-chemical molecular properties. However, their inputs must be constrained to the same quantum-chemical level of geometric relaxation as the training dataset, limiting their flexibility. Adopting alternative cost-effective conformation generative methods introduces domain-shift problems, deteriorating prediction accuracy. Here we propose a deep contrastive learning-based domain-adaptation method called Local Atomic environment Contrastive Learning (LACL). LACL learns to alleviate the disparities in distribution between the two geometric conformations by comparing different conformation-generation methods. We found that LACL forms a domain-agnostic latent space that encapsulates the semantics of an atom’s local atomic environment. LACL achieves quantum-chemical accuracy while circumventing the geometric relaxation bottleneck and could enable future application scenarios such as inverse molecular engineering and large-scale screening. Our approach is also generalizable from small organic molecules to long chains of biological and pharmacological molecules. | 10.1038/s43588-023-00560-w | deep contrastive learning of molecular conformation for efficient property prediction | data-driven deep learning algorithms provide accurate prediction of high-level quantum-chemical molecular properties. however, their inputs must be constrained to the same quantum-chemical level of geometric relaxation as the training dataset, limiting their flexibility. adopting alternative cost-effective conformation generative methods introduces domain-shift problems, deteriorating prediction accuracy. here we propose a deep contrastive learning-based domain-adaptation method called local atomic environment contrastive learning (lacl). lacl learns to alleviate the disparities in distribution between the two geometric conformations by comparing different conformation-generation methods. we found that lacl forms a domain-agnostic latent space that encapsulates the semantics of an atom’s local atomic environment. lacl achieves quantum-chemical accuracy while circumventing the geometric relaxation bottleneck and could enable future application scenarios such as inverse molecular engineering and large-scale screening. our approach is also generalizable from small organic molecules to long chains of biological and pharmacological molecules. | [
"data-driven deep learning algorithms",
"accurate prediction",
"high-level quantum-chemical molecular properties",
"their inputs",
"the same quantum-chemical level",
"geometric relaxation",
"the training dataset",
"their flexibility",
"alternative cost-effective conformation generative methods",
"domain-shift problems",
"deteriorating prediction accuracy",
"we",
"a deep contrastive learning-based domain-adaptation method",
"local atomic environment",
"contrastive learning",
"lacl",
"lacl",
"the disparities",
"distribution",
"the two geometric conformations",
"different conformation-generation methods",
"we",
"lacl",
"a domain-agnostic latent space",
"that",
"the semantics",
"an atom’s local atomic environment",
"lacl",
"quantum-chemical accuracy",
"the geometric relaxation bottleneck",
"future application scenarios",
"inverse molecular engineering and large-scale screening",
"our approach",
"small organic molecules",
"long chains",
"biological and pharmacological molecules",
"two"
] |
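The contrastive core of a LACL-style method can be illustrated compactly. The PyTorch sketch below is not the authors' code: the embedding dimension, temperature, and the generic InfoNCE objective are all assumptions. It shows the mechanism the abstract describes — embeddings of the same molecule's two conformations (e.g. a DFT-relaxed one and a cheaply generated one) are pulled together, while embeddings of different molecules are pushed apart.

```python
# Minimal InfoNCE-style contrastive alignment between two conformation views.
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # z_a, z_b: (batch, dim) embeddings of the SAME molecules under two
    # different conformation-generation methods; row i of each is a positive pair.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau            # cosine similarities scaled by temperature
    targets = torch.arange(z_a.size(0))     # matching conformations lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 molecules embedded to 64 dims by some shared encoder (not shown).
z_relaxed = torch.randn(8, 64)   # stand-in for embeddings of DFT-relaxed conformations
z_cheap = torch.randn(8, 64)     # stand-in for embeddings of cheaply generated ones
print(float(info_nce(z_relaxed, z_cheap)))
```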
A pedagogical study on promoting students' deep learning through design-based learning | [
"Chunmeng Weng",
"Congying Chen",
"Xianfeng Ai"
] | This paper illustrates the design-based learning (DBL) approach to promoting the deep learning of students and improving the quality of teaching in engineering design education. We performed three aspects of research with students in a typical educational activity. The first study investigated students' deep learning before and after the DBL approach, both in terms of deep learning status and deep learning ability. The second study examined the effectiveness of the DBL approach by comparative research of a control class (traditional teaching method) and an experimental class (DBL method). The third study examined students' evaluations of the DBL approach. It is approved that the DBL approach has distinctively stimulated the students' motivation to learn, making them more actively engaged in study. The students' higher-order thinking and higher-order capabilities are enhanced, such as critical thinking ability and problem-solving ability. At the same time, they are satisfied with the DBL approach. These findings suggest that the DBL approach is effective in promoting students' deep learning and improving the quality of teaching and learning. | 10.1007/s10798-022-09789-4 | a pedagogical study on promoting students' deep learning through design-based learning | this paper illustrates the design-based learning (dbl) approach to promoting the deep learning of students and improving the quality of teaching in engineering design education. we performed three aspects of research with students in a typical educational activity. the first study investigated students' deep learning before and after the dbl approach, both in terms of deep learning status and deep learning ability. the second study examined the effectiveness of the dbl approach by comparative research of a control class (traditional teaching method) and an experimental class (dbl method). the third study examined students' evaluations of the dbl approach. it is approved that the dbl approach has distinctively stimulated the students' motivation to learn, making them more actively engaged in study. the students' higher-order thinking and higher-order capabilities are enhanced, such as critical thinking ability and problem-solving ability. at the same time, they are satisfied with the dbl approach. these findings suggest that the dbl approach is effective in promoting students' deep learning and improving the quality of teaching and learning. | [
"this paper",
"the design-based learning",
"(dbl) approach",
"the deep learning",
"students",
"the quality",
"teaching",
"engineering design education",
"we",
"three aspects",
"research",
"students",
"a typical educational activity",
"the first study",
"students' deep learning",
"the dbl approach",
"terms",
"deep learning status",
"deep learning ability",
"the second study",
"the effectiveness",
"the dbl approach",
"comparative research",
"a control class",
"traditional teaching method",
"an experimental class",
"dbl method",
"the third study",
"students' evaluations",
"the dbl approach",
"it",
"the dbl approach",
"the students' motivation",
"them",
"study",
"the students' higher-order thinking",
"higher-order capabilities",
"critical thinking ability",
"problem-solving ability",
"the same time",
"they",
"the dbl approach",
"these findings",
"the dbl approach",
"students' deep learning",
"the quality",
"teaching",
"three",
"first",
"second",
"third"
] |
Machine and deep learning techniques for the prediction of diabetics: a review | [
"Sandip Kumar Singh Modak",
"Vijay Kumar Jha"
] | Diabetes has become one of the significant reasons for public sickness and death in worldwide. By 2019, diabetes had affected more than 463 million people worldwide. According to the International Diabetes Federation report, this figure is expected to rise to more than 700 million in 2040, so early screening and diagnosis of diabetes patients have great significance in detecting and treating diabetes on time. Diabetes is a multi factorial metabolic disease, its diagnostic criteria are difficult to cover all the ethology, damage degree, pathogenesis and other factors, so there is a situation for uncertainty and imprecision under various aspects of the medical diagnosis process. With the development of Data mining, researchers find that machine learning and deep learning, playing an important role in diabetes prediction research. This paper is an in-depth study on the application of machine learning and deep learning techniques in the prediction of diabetics. In addition, this paper also discusses the different methodology used in machine and deep learning for prediction of diabetics since last two decades and examines the methods used, to explore their successes and failure. This review would help researchers and practitioners understand the current state-of-the-art methods and identify gaps in the literature. | 10.1007/s11042-024-19766-9 | machine and deep learning techniques for the prediction of diabetics: a review | diabetes has become one of the significant reasons for public sickness and death in worldwide. by 2019, diabetes had affected more than 463 million people worldwide. according to the international diabetes federation report, this figure is expected to rise to more than 700 million in 2040, so early screening and diagnosis of diabetes patients have great significance in detecting and treating diabetes on time. diabetes is a multi factorial metabolic disease, its diagnostic criteria are difficult to cover all the ethology, damage degree, pathogenesis and other factors, so there is a situation for uncertainty and imprecision under various aspects of the medical diagnosis process. with the development of data mining, researchers find that machine learning and deep learning, playing an important role in diabetes prediction research. this paper is an in-depth study on the application of machine learning and deep learning techniques in the prediction of diabetics. in addition, this paper also discusses the different methodology used in machine and deep learning for prediction of diabetics since last two decades and examines the methods used, to explore their successes and failure. this review would help researchers and practitioners understand the current state-of-the-art methods and identify gaps in the literature. | [
"diabetes",
"the significant reasons",
"public sickness",
"death",
"diabetes",
"more than 463 million people",
"the international diabetes federation report",
"this figure",
"early screening",
"diagnosis",
"diabetes patients",
"great significance",
"diabetes",
"time",
"diabetes",
"a multi factorial metabolic disease",
"its diagnostic criteria",
"all the ethology",
"damage degree",
"pathogenesis",
"other factors",
"a situation",
"uncertainty",
"imprecision",
"various aspects",
"the medical diagnosis process",
"the development",
"data mining",
"researchers",
"an important role",
"diabetes prediction research",
"this paper",
"-depth",
"the application",
"machine learning",
"deep learning techniques",
"the prediction",
"diabetics",
"addition",
"this paper",
"the different methodology",
"machine",
"deep learning",
"prediction",
"diabetics",
"last two decades",
"the methods",
"their successes",
"failure",
"this review",
"researchers",
"practitioners",
"the-art",
"gaps",
"the literature",
"2019",
"more than 463 million",
"more than 700 million",
"2040",
"last two decades"
] |
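Reviews in this area usually benchmark classical ML baselines alongside the deep models. A self-contained sketch of one such baseline is given below; the tabular features are synthetic stand-ins (generated with scikit-learn) rather than a real diabetes dataset such as Pima, so the printed accuracy is illustrative only.

```python
# A classical ML baseline of the kind diabetes-prediction studies compare.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an 8-feature diabetes table (e.g. glucose, BMI, age ...).
X, y = make_classification(n_samples=768, n_features=8, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```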
Deep encoder–decoder-based shared learning for multi-criteria recommendation systems | [
"Salam Fraihat",
"Bushra Abu Tahon",
"Bushra Alhijawi",
"Arafat Awajan"
] | A recommendation system (RS) can help overcome information overload issues by offering personalized predictions for users. Typically, RS considers the overall ratings of users on items to generate recommendations for them. However, users may consider several aspects when evaluating items. Hence, a multi-criteria RS considers n-aspects of items to generate more accurate recommendations than a single-criteria RS. This research paper proposes two deep encoder–decoder models based on shared learning for a multi-criteria RS, multi-modal deep encoder–decoder-based shared learning (MMEDSL) and multi-criteria deep encoder–decoder-based shared learning (MCEDSL). MMEDSL employs the shared learning technique by concentrating on the multi-modality concept in deep learning, while MCEDSL focuses on the training process to apply the shared learning technique. The shared learning captures useful shared information during the learning process since the multi-criteria may have hidden inter-relationships. A set of experiments were conducted to compare the proposed models with recent baseline approaches. The Yahoo! Movies multi-criteria dataset was utilized. The results demonstrate that the proposed models outperform other algorithms. In addition, the results show that integrating the shared learning technique with the RS produces precise recommendation predictions. | 10.1007/s00521-023-09007-9 | deep encoder–decoder-based shared learning for multi-criteria recommendation systems | a recommendation system (rs) can help overcome information overload issues by offering personalized predictions for users. typically, rs considers the overall ratings of users on items to generate recommendations for them. however, users may consider several aspects when evaluating items. hence, a multi-criteria rs considers n-aspects of items to generate more accurate recommendations than a single-criteria rs. this research paper proposes two deep encoder–decoder models based on shared learning for a multi-criteria rs, multi-modal deep encoder–decoder-based shared learning (mmedsl) and multi-criteria deep encoder–decoder-based shared learning (mcedsl). mmedsl employs the shared learning technique by concentrating on the multi-modality concept in deep learning, while mcedsl focuses on the training process to apply the shared learning technique. the shared learning captures useful shared information during the learning process since the multi-criteria may have hidden inter-relationships. a set of experiments were conducted to compare the proposed models with recent baseline approaches. the yahoo! movies multi-criteria dataset was utilized. the results demonstrate that the proposed models outperform other algorithms. in addition, the results show that integrating the shared learning technique with the rs produces precise recommendation predictions. | [
"a recommendation system",
"information overload issues",
"personalized predictions",
"users",
"rs",
"the overall ratings",
"users",
"items",
"recommendations",
"them",
"users",
"several aspects",
"items",
"a multi-criteria rs",
"n",
"-aspects",
"items",
"more accurate recommendations",
"a single-criteria rs",
"this research paper",
"two deep encoder",
"decoder models",
"shared learning",
"a multi-criteria rs, multi-modal deep encoder",
"decoder-based shared learning",
"mmedsl",
"multi-criteria deep encoder",
"decoder-based shared learning",
"mcedsl",
"the shared learning technique",
"the multi-modality concept",
"deep learning",
"the training process",
"the shared learning technique",
"the shared learning captures",
"useful shared information",
"the learning process",
"-",
"criteria",
"inter",
"-",
"relationships",
"a set",
"experiments",
"the proposed models",
"recent baseline approaches",
"the yahoo",
"movies multi-criteria dataset",
"the results",
"the proposed models",
"other algorithms",
"addition",
"the results",
"the shared learning technique",
"the rs",
"precise recommendation predictions",
"two",
"yahoo"
] |
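The shared-learning idea — one encoder whose hidden representation is reused by several criterion-specific decoders — is small enough to sketch. The model below is an illustrative stand-in for the paper's MMEDSL/MCEDSL architectures, not a reimplementation: layer sizes, the number of criteria, and the use of the input as every decoder's target are simplifying assumptions (in practice each criterion has its own rating matrix).

```python
# Shared encoder feeding per-criterion decoders, autoencoder-style.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoderMultiDecoder(nn.Module):
    def __init__(self, n_items: int, hidden: int = 64, n_criteria: int = 4):
        super().__init__()
        # Shared layer: a single encoder used by every criterion.
        self.encoder = nn.Sequential(nn.Linear(n_items, hidden), nn.ReLU())
        # Task-specific layers: one decoder per rating criterion.
        self.decoders = nn.ModuleList(nn.Linear(hidden, n_items) for _ in range(n_criteria))

    def forward(self, ratings: torch.Tensor):
        z = self.encoder(ratings)                  # shared representation
        return [dec(z) for dec in self.decoders]   # per-criterion reconstructions

model = SharedEncoderMultiDecoder(n_items=100)
x = torch.rand(16, 100)                            # 16 users' item-rating vectors
loss = sum(F.mse_loss(out, x) for out in model(x)) # joint objective across criteria
loss.backward()
```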
A Deep Learning Based Approach for Biomedical Named Entity Recognition Using Multitasking Transfer Learning with BiLSTM, BERT and CRF | [
"H. Pooja",
"M. P. Prabhudev Jagadeesh"
] | The named entity recognition (NER) is a method for locating references to rigid designators in text that fall into well-established semantic categories like person, place, organisation, etc., Many natural language e applications, like summarization of text, question–answer models and machine translation, always include NER at their core. Early NER systems were quite successful in reaching high performance at the expense of using human engineers to create features and rules that were particular to a certain domain. Currently, the biomedical data is soaring expeditiously and extracting the useful information can help to facilitate the appropriate diagnosis. Therefore, these systems are widely adopted in biomedical domain. However, the traditional rule-based, dictionary based and machine learning based methods suffer from computational complexity and out-of-vocabulary (OOV)issues Deep learning has recently been used in NER systems, achieving the state-of-the-art outcome. The present work proposes a novel deep learning based approach which uses Bidirectional Long Short Term (BiLSTM), Bidirectional Encoder Representation (BERT) and Conditional Random Field mode (CRF) model along with transfer learning and multi-tasking model to solve the OOV problem in biomedical domain. The transfer learning architecture uses shared and task specific layers to achieve the multi-task transfer learning task. The shared layer consists of lexicon encoder and transformer encoder followed by embedding vectors. Finally, we define a training loss function based on the BERT model. The proposed Multi-task TLBBC approach is compared with numerous prevailing methods. The proposed Multi-task TLBBC approach realizes average accuracy as 97.30%, 97.20%, 96.80% and 97.50% for NCBI, BC5CDR, JNLPBA, and s800 dataset, respectively. | 10.1007/s42979-024-02835-z | a deep learning based approach for biomedical named entity recognition using multitasking transfer learning with bilstm, bert and crf | the named entity recognition (ner) is a method for locating references to rigid designators in text that fall into well-established semantic categories like person, place, organisation, etc., many natural language e applications, like summarization of text, question–answer models and machine translation, always include ner at their core. early ner systems were quite successful in reaching high performance at the expense of using human engineers to create features and rules that were particular to a certain domain. currently, the biomedical data is soaring expeditiously and extracting the useful information can help to facilitate the appropriate diagnosis. therefore, these systems are widely adopted in biomedical domain. however, the traditional rule-based, dictionary based and machine learning based methods suffer from computational complexity and out-of-vocabulary (oov)issues deep learning has recently been used in ner systems, achieving the state-of-the-art outcome. the present work proposes a novel deep learning based approach which uses bidirectional long short term (bilstm), bidirectional encoder representation (bert) and conditional random field mode (crf) model along with transfer learning and multi-tasking model to solve the oov problem in biomedical domain. the transfer learning architecture uses shared and task specific layers to achieve the multi-task transfer learning task. the shared layer consists of lexicon encoder and transformer encoder followed by embedding vectors. finally, we define a training loss function based on the bert model. the proposed multi-task tlbbc approach is compared with numerous prevailing methods. the proposed multi-task tlbbc approach realizes average accuracy as 97.30%, 97.20%, 96.80% and 97.50% for ncbi, bc5cdr, jnlpba, and s800 dataset, respectively. | [
"the named entity recognition",
"ner",
"a method",
"references",
"rigid designators",
"text",
"that",
"well-established semantic categories",
"person",
"place",
"organisation",
"many natural language e applications",
"summarization",
"text",
"question–answer models",
"machine translation",
"ner",
"their core",
"early ner systems",
"high performance",
"the expense",
"human engineers",
"features",
"rules",
"that",
"a certain domain",
"the biomedical data",
"the useful information",
"the appropriate diagnosis",
"these systems",
"biomedical domain",
"the traditional rule-based, dictionary based and machine learning based methods",
"computational complexity",
"deep learning",
"ner systems",
"the-art",
"the present work",
"a novel deep learning based approach",
"which",
"bidirectional long short term",
"bilstm",
"bert",
"crf",
"transfer learning",
"multi-tasking model",
"the oov problem",
"biomedical domain",
"the transfer learning architecture",
"shared and task specific layers",
"the multi-task transfer learning task",
"the shared layer",
"lexicon encoder",
"transformer encoder",
"vectors",
"we",
"a training loss function",
"the bert model",
"the proposed multi-task tlbbc approach",
"numerous prevailing methods",
"the proposed multi-task tlbbc approach",
"average accuracy",
"97.30%",
"97.20%",
"96.80%",
"97.50%",
"bc5cdr",
"jnlpba",
"s800 dataset",
"ner",
"as 97.30%",
"97.20%",
"96.80%",
"97.50%"
] |
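The emission-scoring backbone of such taggers fits in a few lines. The BiLSTM below is a deliberately stripped-down stand-in: the CRF transition layer, the BERT lexicon encoder, and the multi-task transfer losses the paper stacks on top are omitted, and the vocabulary size and three-label BIO tag set are invented.

```python
# BiLSTM emission scorer for token-level BIO tagging (CRF layer omitted).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab: int, tags: int, emb: int = 50, hid: int = 64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hid, tags)   # per-token emission scores

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(self.emb(tokens))
        return self.out(h)                    # (batch, seq_len, tags)

model = BiLSTMTagger(vocab=5000, tags=3)      # tags: O / B-Disease / I-Disease (assumed)
emissions = model(torch.randint(0, 5000, (2, 12)))
print(emissions.shape)                        # torch.Size([2, 12, 3])
```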
Is deep learning good enough for software defect prediction? | [
"Sushant Kumar Pandey",
"Arya Haldar",
"Anil Kumar Tripathi"
] | Due to high impact of internet technology and rapid change in software systems, it has been a tough challenge for us to detect software defects with high accuracy. Traditional software defect prediction research mainly concentrates on manually designing features (e.g., complexity metrics) and inputting them into machine learning classifiers to distinguish defective code. To gain high prediction accuracy, researchers have developed several deep learning or high computational models for software defect prediction. However, there are several critical conditions and theoretical problems in order to achieve better results. This article explores the investigation of SDP using two deep learning techniques, i.e., SqueezeNet and Bottleneck models. We employed seven different open-source datasets from NASA Repository to perform this comparative study. We use F-Measure as a performance evaluator and found that these methods statistically outperform eight state-of-the-art methods with mean F-Measure of 0.93 ± 0.014 and 0.90 ± 0.013, respectively. We found that these two methods are significantly more effective in terms of F-Measure over large- and moderate-size projects. But they are computationally expensive in terms of training time. As the size of projects is getting immense and sophisticated, such deep learning methods are worth applying. | 10.1007/s11334-023-00542-1 | is deep learning good enough for software defect prediction? | due to high impact of internet technology and rapid change in software systems, it has been a tough challenge for us to detect software defects with high accuracy. traditional software defect prediction research mainly concentrates on manually designing features (e.g., complexity metrics) and inputting them into machine learning classifiers to distinguish defective code. to gain high prediction accuracy, researchers have developed several deep learning or high computational models for software defect prediction. however, there are several critical conditions and theoretical problems in order to achieve better results. this article explores the investigation of sdp using two deep learning techniques, i.e., squeezenet and bottleneck models. we employed seven different open-source datasets from nasa repository to perform this comparative study. we use f-measure as a performance evaluator and found that these methods statistically outperform eight state-of-the-art methods with mean f-measure of 0.93 ± 0.014 and 0.90 ± 0.013, respectively. we found that these two methods are significantly more effective in terms of f-measure over large- and moderate-size projects. but they are computationally expensive in terms of training time. as the size of projects is getting immense and sophisticated, such deep learning methods are worth applying. | [
"high impact",
"internet technology",
"rapid change",
"software systems",
"it",
"a tough challenge",
"us",
"software defects",
"high accuracy",
"traditional software defect prediction research",
"features",
"e.g., complexity metrics",
"them",
"machine learning classifiers",
"defective code",
"high prediction accuracy",
"researchers",
"several deep learning",
"high computational models",
"software defect prediction",
"several critical conditions",
"theoretical problems",
"order",
"better results",
"this article",
"the investigation",
"sdp",
"two deep learning techniques",
"i.e., squeezenet and bottleneck models",
"we",
"seven different open-source datasets",
"nasa repository",
"this comparative study",
"we",
"f-measure",
"a performance evaluator",
"these methods",
"the-art",
"mean f-measure",
"0.93 ±",
"0.90 ±",
"we",
"these two methods",
"terms",
"f-measure",
"large- and moderate-size projects",
"they",
"terms",
"training time",
"the size",
"projects",
"such deep learning methods",
"two",
"seven",
"nasa",
"eight",
"0.93",
"0.014",
"0.90",
"0.013",
"two"
] |
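Because the comparison above rests on the F-measure, a quick worked example of the metric on invented defect-prediction labels may help:

```python
# F-measure = harmonic mean of precision and recall over defect predictions.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]   # 1 = module actually defective
y_pred = [1, 1, 0, 0, 0, 1, 0, 1, 0, 1]   # classifier output

p = precision_score(y_true, y_pred)        # TP / (TP + FP) = 4/5
r = recall_score(y_true, y_pred)           # TP / (TP + FN) = 4/5
print(f"precision={p:.2f} recall={r:.2f} F={2 * p * r / (p + r):.2f}")
print("sklearn f1_score agrees:", f1_score(y_true, y_pred))
```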
Forecasting oil price in times of crisis: a new evidence from machine learning versus deep learning models | [
"Haithem Awijen",
"Hachmi Ben Ameur",
"Zied Ftiti",
"Waël Louhichi"
] | This study investigates oil price forecasting during a time of crisis, from December 2007 to December 2021. As the oil market has experienced various shocks (exogenous versus endogenous), modelling and forecasting its prices dynamics become more complex based on conventional (econometric and structural) models. A new strand of literature has been attracting more attention during the last decades dealing with artificial intelligence methods. However, this literature is unanimous regarding the performance accuracy between machine learning and deep learning methods. We aim in this study to contribute to this literature by investigating the oil price forecasting based on these two approaches. Based on the stylized facts of oil prices dynamics, we select the support vector machine and long short-term memory approach, as two main models of Machine Learning and deep learning methods, respectively. Our findings support the superiority of the Deep Learning method compared to the Machine Learning approach. Interestingly, our results show that the Deep LSTM-prediction has a close pattern to the observed oil prices, demonstrating robust fitting accuracy at mid-to-long forecast horizons during crisis events. However, our results show that SVM machine learning has poor memory ability to establish a clearer understanding of time-dependent volatility and the dynamic co-movements between actual and predicted data. Moreover, our results show that the power of SVM to learn for long-term predictions is reduced, which potentially lead to distortions of forecasting performance. | 10.1007/s10479-023-05400-8 | forecasting oil price in times of crisis: a new evidence from machine learning versus deep learning models | this study investigates oil price forecasting during a time of crisis, from december 2007 to december 2021. as the oil market has experienced various shocks (exogenous versus endogenous), modelling and forecasting its prices dynamics become more complex based on conventional (econometric and structural) models. a new strand of literature has been attracting more attention during the last decades dealing with artificial intelligence methods. however, this literature is unanimous regarding the performance accuracy between machine learning and deep learning methods. we aim in this study to contribute to this literature by investigating the oil price forecasting based on these two approaches. based on the stylized facts of oil prices dynamics, we select the support vector machine and long short-term memory approach, as two main models of machine learning and deep learning methods, respectively. our findings support the superiority of the deep learning method compared to the machine learning approach. interestingly, our results show that the deep lstm-prediction has a close pattern to the observed oil prices, demonstrating robust fitting accuracy at mid-to-long forecast horizons during crisis events. however, our results show that svm machine learning has poor memory ability to establish a clearer understanding of time-dependent volatility and the dynamic co-movements between actual and predicted data. moreover, our results show that the power of svm to learn for long-term predictions is reduced, which potentially lead to distortions of forecasting performance. | [
"this study",
"oil price forecasting",
"a time",
"crisis",
"december",
"december",
"the oil market",
"various shocks",
"its prices dynamics",
"conventional (econometric and structural) models",
"a new strand",
"literature",
"more attention",
"the last decades",
"artificial intelligence methods",
"this literature",
"the performance accuracy",
"machine learning",
"deep learning methods",
"we",
"this study",
"this literature",
"the oil price forecasting",
"these two approaches",
"the stylized facts",
"we",
"the support vector machine",
"long short-term memory approach",
"two main models",
"machine learning",
"deep learning methods",
"our findings",
"the superiority",
"the deep learning method",
"the machine learning approach",
"our results",
"the deep lstm-prediction",
"a close pattern",
"the observed oil prices",
"robust fitting accuracy",
"mid",
"forecast horizons",
"crisis events",
"our results",
"svm machine learning",
"poor memory ability",
"a clearer understanding",
"time-dependent volatility",
"the dynamic co",
"-",
"movements",
"actual and predicted data",
"our results",
"the power",
"svm",
"long-term predictions",
"which",
"distortions",
"forecasting performance",
"december 2007 to december 2021",
"the last decades",
"two",
"two"
] |
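The LSTM side of such a comparison reduces to next-step prediction over sliding windows of past prices. The sketch below trains on a synthetic random walk rather than real oil prices; the window length, network width, and training schedule are illustrative assumptions, not the paper's configuration.

```python
# Sliding-window LSTM forecaster on a synthetic price series.
import torch
import torch.nn as nn

prices = torch.cumsum(torch.randn(500), dim=0)  # random walk standing in for oil prices
w = 30                                          # look-back window length
X = torch.stack([prices[i:i + w] for i in range(len(prices) - w)]).unsqueeze(-1)
y = prices[w:].unsqueeze(-1)                    # next-step targets

class LSTMForecaster(nn.Module):
    def __init__(self, hid: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(1, hid, batch_first=True)
        self.head = nn.Linear(hid, 1)

    def forward(self, x):
        h, _ = self.lstm(x)
        return self.head(h[:, -1])              # predict from the last hidden state

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                             # brief illustrative training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("final train MSE:", float(loss))
```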
A machine learning based deep convective trigger for climate models | [
"Siddharth Kumar",
"P Mukhopadhyay",
"C Balaji"
] | The present study focuses on addressing the issue of too frequent triggers of deep convection in climate models, which are primarily based on physics-based classical trigger functions such as convective available potential energy (CAPE) or cloud work function (CWF). To overcome this problem, the study proposes using machine learning (ML) based deep convective triggers as an alternative. The deep convective trigger is formulated as a binary classification problem, where the goal is to predict whether deep convection will occur or not. Two elementary classification algorithms, namely support vector machines and neural networks, are adopted in this study. Additionally, a novel method is proposed to rank the importance of input variables for the classification problem, which may aid in understanding the underlying mechanisms and factors influencing deep convection. The accuracy of the ML-based methods is compared with the widely used convective available potential energy (CAPE)-based and dynamic generation of CAPE (dCAPE) trigger function found in many convective parameterization schemes. Results demonstrate that the elementary machine learning-based algorithms can outperform the classical CAPE-based triggers, indicating the potential effectiveness of ML-based approaches in dealing with this issue. Furthermore, a method based on the Mahalanobis distance is presented for binary classification, which is easy to interpret and implement. The Mahalanobis distance-based approach shows accuracy comparable to other ML-based methods, suggesting its viability as an alternative method for deep convective triggers. By correcting for deep convective triggers using ML-based approaches, the study proposes a possible solution to improve the probability density of rain in the climate model. This improvement may help overcome the issue of excessive drizzle often observed in many climate models. | 10.1007/s00382-024-07332-w | a machine learning based deep convective trigger for climate models | the present study focuses on addressing the issue of too frequent triggers of deep convection in climate models, which are primarily based on physics-based classical trigger functions such as convective available potential energy (cape) or cloud work function (cwf). to overcome this problem, the study proposes using machine learning (ml) based deep convective triggers as an alternative. the deep convective trigger is formulated as a binary classification problem, where the goal is to predict whether deep convection will occur or not. two elementary classification algorithms, namely support vector machines and neural networks, are adopted in this study. additionally, a novel method is proposed to rank the importance of input variables for the classification problem, which may aid in understanding the underlying mechanisms and factors influencing deep convection. the accuracy of the ml-based methods is compared with the widely used convective available potential energy (cape)-based and dynamic generation of cape (dcape) trigger function found in many convective parameterization schemes. results demonstrate that the elementary machine learning-based algorithms can outperform the classical cape-based triggers, indicating the potential effectiveness of ml-based approaches in dealing with this issue. furthermore, a method based on the mahalanobis distance is presented for binary classification, which is easy to interpret and implement. the mahalanobis distance-based approach shows accuracy comparable to other ml-based methods, suggesting its viability as an alternative method for deep convective triggers. by correcting for deep convective triggers using ml-based approaches, the study proposes a possible solution to improve the probability density of rain in the climate model. this improvement may help overcome the issue of excessive drizzle often observed in many climate models. | [
"the present study",
"the issue",
"too frequent triggers",
"deep convection",
"climate models",
"which",
"physics-based classical trigger functions",
"convective available potential energy",
"cape",
"cloud work function",
"cwf",
"this problem",
"the study",
"machine learning",
"ml",
"deep convective triggers",
"an alternative",
"the deep convective trigger",
"a binary classification problem",
"the goal",
"deep convection",
"two elementary classification algorithms",
"vector machines",
"neural networks",
"this study",
"a novel method",
"the importance",
"input variables",
"the classification problem",
"which",
"the underlying mechanisms",
"factors",
"deep convection",
"the accuracy",
"the ml-based methods",
"the widely used convective available potential energy",
"cape)-based and dynamic generation",
"cape",
"dcape",
"trigger function",
"many convective parameterization schemes",
"results",
"the elementary machine learning-based algorithms",
"the classical cape-based triggers",
"the potential effectiveness",
"ml-based approaches",
"this issue",
"a method",
"the mahalanobis distance",
"binary classification",
"which",
"the mahalanobis distance-based approach",
"accuracy",
"other ml-based methods",
"its viability",
"an alternative method",
"deep convective triggers",
"deep convective triggers",
"ml-based approaches",
"the study",
"a possible solution",
"the probability density",
"rain",
"the climate model",
"this improvement",
"the issue",
"excessive drizzle",
"many climate models",
"cwf",
"two"
] |
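The Mahalanobis-distance trigger described above is simple enough to sketch end to end: fire deep convection when a column's feature vector lies closer, in Mahalanobis distance, to the convective training population than to the non-convective one. The two-dimensional synthetic features below are stand-ins for CAPE-like predictors.

```python
# Binary convective trigger via Mahalanobis distance to class statistics.
import numpy as np

rng = np.random.default_rng(0)
conv = rng.normal([2.0, 1.0], 0.5, size=(200, 2))    # "deep convection occurred" samples
noconv = rng.normal([0.0, 0.0], 0.5, size=(200, 2))  # "no deep convection" samples

def mahalanobis(x: np.ndarray, samples: np.ndarray) -> float:
    mu = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples.T))       # inverse class covariance
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

def trigger(x: np.ndarray) -> bool:
    # Fire when x is closer to the convective population than the non-convective one.
    return mahalanobis(x, conv) < mahalanobis(x, noconv)

print(trigger(np.array([1.8, 0.9])), trigger(np.array([0.1, -0.2])))  # True False
```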
Deep learning theory of distribution regression with CNNs | [
"Zhan Yu",
"Ding-Xuan Zhou"
] | We establish a deep learning theory for distribution regression with deep convolutional neural networks (DCNNs). Deep learning based on structured deep neural networks has been powerful in practical applications. Generalization analysis for regression with DCNNs has been carried out very recently. However, for the distribution regression problem in which the input variables are probability measures, there is no mathematical model or theoretical analysis of DCNN-based learning theory. One of the difficulties is that the classical neural network structure requires the input variable to be a Euclidean vector. When the input samples are probability distributions, the traditional neural network structure cannot be directly used. A well-defined DCNN framework for distribution regression is desirable. In this paper, we overcome the difficulty and establish a novel DCNN-based learning theory for a two-stage distribution regression model. Firstly, we realize an approximation theory for functionals defined on the set of Borel probability measures with the proposed DCNN framework. Then, we show that the hypothesis space is well-defined by rigorously proving its compactness. Furthermore, in the hypothesis space induced by the general DCNN framework with distribution inputs, by using a two-stage error decomposition technique, we derive a novel DCNN-based two-stage oracle inequality and optimal learning rates (up to a logarithmic factor) for the proposed algorithm for distribution regression. | 10.1007/s10444-023-10054-y | deep learning theory of distribution regression with cnns | we establish a deep learning theory for distribution regression with deep convolutional neural networks (dcnns). deep learning based on structured deep neural networks has been powerful in practical applications. generalization analysis for regression with dcnns has been carried out very recently. however, for the distribution regression problem in which the input variables are probability measures, there is no mathematical model or theoretical analysis of dcnn-based learning theory. one of the difficulties is that the classical neural network structure requires the input variable to be a euclidean vector. when the input samples are probability distributions, the traditional neural network structure cannot be directly used. a well-defined dcnn framework for distribution regression is desirable. in this paper, we overcome the difficulty and establish a novel dcnn-based learning theory for a two-stage distribution regression model. firstly, we realize an approximation theory for functionals defined on the set of borel probability measures with the proposed dcnn framework. then, we show that the hypothesis space is well-defined by rigorously proving its compactness. furthermore, in the hypothesis space induced by the general dcnn framework with distribution inputs, by using a two-stage error decomposition technique, we derive a novel dcnn-based two-stage oracle inequality and optimal learning rates (up to a logarithmic factor) for the proposed algorithm for distribution regression. | [
"we",
"a deep learning theory",
"distribution regression",
"deep convolutional neural networks",
"dcnns",
"deep learning",
"structured deep neural networks",
"practical applications",
"generalization analysis",
"regression",
"dcnns",
"the distribution regression problem",
"which",
"the input variables",
"probability measures",
"no mathematical model",
"theoretical analysis",
"dcnn-based learning theory",
"the difficulties",
"the classical neural network structure",
"the input variable",
"a euclidean vector",
"the input samples",
"probability distributions",
"the traditional neural network structure",
"a well-defined dcnn framework",
"distribution regression",
"this paper",
"we",
"the difficulty",
"a novel dcnn-based learning theory",
"a two-stage distribution regression model",
"we",
"an approximation theory",
"functionals",
"the set",
"borel probability measures",
"the proposed dcnn framework",
"we",
"the hypothesis space",
"its compactness",
"the hypothesis space",
"the general dcnn framework",
"distribution inputs",
"a two-stage error decomposition technique",
"we",
"a novel dcnn-based two-stage oracle inequality",
"optimal learning rates",
"a logarithmic factor",
"the proposed algorithm",
"distribution regression",
"one",
"two",
"firstly",
"two",
"two"
] |
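The obstacle named in the abstract — CNN inputs must be Euclidean vectors, while distribution regression consumes probability measures — is commonly resolved in two stages. The sketch below illustrates that generic construction, not the paper's framework: the first stage maps each sample set to an empirical kernel mean embedding on a fixed grid, which a 1-D CNN can then process; grid size, bandwidth, and architecture are assumptions.

```python
# Two-stage distribution regression: mean embedding, then a 1-D CNN.
import torch
import torch.nn as nn

def mean_embedding(samples: torch.Tensor, grid: torch.Tensor, bw: float = 0.2) -> torch.Tensor:
    # Stage one: average a Gaussian kernel over the sample set at fixed grid
    # points, turning samples of a probability measure into a Euclidean vector.
    diffs = samples[:, None] - grid[None, :]
    return torch.exp(-((diffs / bw) ** 2)).mean(dim=0)

grid = torch.linspace(-3, 3, 64)
dists = [torch.randn(100) * s for s in (0.5, 1.0, 1.5)]    # three input distributions
X = torch.stack([mean_embedding(d, grid) for d in dists])  # (3, 64), CNN-ready

cnn = nn.Sequential(                                       # stage two: DCNN regressor
    nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 1),
)
print(cnn(X.unsqueeze(1)).shape)                           # torch.Size([3, 1])
```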
Deep learning for quadratic hedging in incomplete jump market | [
"Nacira Agram",
"Bernt Øksendal",
"Jan Rems"
] | We propose a deep learning approach to study the minimal variance pricing and hedging problem in an incomplete jump diffusion market. It is based on a rigorous stochastic calculus derivation of the optimal hedging portfolio, optimal option price, and the corresponding equivalent martingale measure through the means of the Stackelberg game approach. A deep learning algorithm based on the combination of the feed-forward and LSTM neural networks is tested on three different market models, two of which are incomplete. In contrast, the complete market Black–Scholes model serves as a benchmark for the algorithm’s performance. The results that indicate the algorithm’s good performance are presented and discussed. In particular, we apply our results to the special incomplete market model studied by Merton and give a detailed comparison between our results based on the minimal variance principle and the results obtained by Merton based on a different pricing principle. Using deep learning, we find that the minimal variance principle leads to typically higher option prices than those deduced from the Merton principle. On the other hand, the minimal variance principle leads to lower losses than the Merton principle. | 10.1007/s42521-024-00112-5 | deep learning for quadratic hedging in incomplete jump market | we propose a deep learning approach to study the minimal variance pricing and hedging problem in an incomplete jump diffusion market. it is based on a rigorous stochastic calculus derivation of the optimal hedging portfolio, optimal option price, and the corresponding equivalent martingale measure through the means of the stackelberg game approach. a deep learning algorithm based on the combination of the feed-forward and lstm neural networks is tested on three different market models, two of which are incomplete. in contrast, the complete market black–scholes model serves as a benchmark for the algorithm’s performance. the results that indicate the algorithm’s good performance are presented and discussed. in particular, we apply our results to the special incomplete market model studied by merton and give a detailed comparison between our results based on the minimal variance principle and the results obtained by merton based on a different pricing principle. using deep learning, we find that the minimal variance principle leads to typically higher option prices than those deduced from the merton principle. on the other hand, the minimal variance principle leads to lower losses than the merton principle. | [
"we",
"a deep learning approach",
"the minimal variance pricing",
"hedging problem",
"an incomplete jump diffusion market",
"it",
"a rigorous stochastic calculus derivation",
"the optimal hedging portfolio",
"optimal option price",
"the corresponding equivalent martingale measure",
"the means",
"the stackelberg game approach",
"a deep learning algorithm",
"the combination",
"the feed-forward",
"lstm neural networks",
"three different market models",
"which",
"contrast",
"the complete market black–scholes model",
"a benchmark",
"the algorithm’s performance",
"the results",
"that",
"the algorithm’s good performance",
"we",
"our results",
"the special incomplete market model",
"merton",
"a detailed comparison",
"our results",
"the minimal variance principle",
"the results",
"merton",
"a different pricing principle",
"deep learning",
"we",
"the minimal variance principle",
"typically higher option prices",
"those",
"the merton principle",
"the other hand",
"the minimal variance principle",
"lower losses",
"the merton principle",
"three",
"two",
"merton",
"merton"
] |
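The quadratic-hedging objective being learned can be stated as a short optimization: choose an initial price and a hedging policy that minimize the expected squared replication error of the option payoff. The toy below replaces the paper's jump-diffusion with a Gaussian random walk and its feed-forward/LSTM policy with a single linear layer; it demonstrates only the loss, not the paper's algorithm.

```python
# Toy minimal-variance (quadratic) hedging of a call on a random-walk asset.
import torch
import torch.nn as nn

T, n_paths, K = 10, 2000, 0.0
dS = 0.1 * torch.randn(n_paths, T)                 # synthetic price increments
S = dS.cumsum(dim=1)                               # price paths started at 0

price = torch.zeros(1, requires_grad=True)         # learnable initial option price
policy = nn.Linear(1, 1)                           # delta_t = f(S_t), crude policy
opt = torch.optim.Adam([price, *policy.parameters()], lr=1e-2)

for _ in range(300):
    opt.zero_grad()
    S_prev = torch.cat([torch.zeros(n_paths, 1), S[:, :-1]], dim=1)
    gains = (policy(S_prev.unsqueeze(-1)).squeeze(-1) * dS).sum(dim=1)
    payoff = torch.clamp(S[:, -1] - K, min=0.0)    # call option payoff
    loss = ((payoff - price - gains) ** 2).mean()  # expected squared hedging error
    loss.backward()
    opt.step()
print("learned minimal-variance price:", float(price))
```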
Structure-based, deep-learning models for protein-ligand binding affinity prediction | [
"Debby D. Wang",
"Wenhui Wu",
"Ran Wang"
] | The launch of AlphaFold series has brought deep-learning techniques into the molecular structural science. As another crucial problem, structure-based prediction of protein-ligand binding affinity urgently calls for advanced computational techniques. Is deep learning ready to decode this problem? Here we review mainstream structure-based, deep-learning approaches for this problem, focusing on molecular representations, learning architectures and model interpretability. A model taxonomy has been generated. To compensate for the lack of valid comparisons among those models, we realized and evaluated representatives from a uniform basis, with the advantages and shortcomings discussed. This review will potentially benefit structure-based drug discovery and related areas.Graphical Abstract | 10.1186/s13321-023-00795-9 | structure-based, deep-learning models for protein-ligand binding affinity prediction | the launch of alphafold series has brought deep-learning techniques into the molecular structural science. as another crucial problem, structure-based prediction of protein-ligand binding affinity urgently calls for advanced computational techniques. is deep learning ready to decode this problem? here we review mainstream structure-based, deep-learning approaches for this problem, focusing on molecular representations, learning architectures and model interpretability. a model taxonomy has been generated. to compensate for the lack of valid comparisons among those models, we realized and evaluated representatives from a uniform basis, with the advantages and shortcomings discussed. this review will potentially benefit structure-based drug discovery and related areas.graphical abstract | [
"the launch",
"alphafold series",
"deep-learning techniques",
"the molecular structural science",
"another crucial problem",
"structure-based prediction",
"protein-ligand binding affinity",
"advanced computational techniques",
"this problem",
"we",
"mainstream structure-based, deep-learning approaches",
"this problem",
"molecular representations",
"architectures",
"model interpretability",
"a model taxonomy",
"the lack",
"valid comparisons",
"those models",
"we",
"evaluated representatives",
"a uniform basis",
"the advantages",
"shortcomings",
"this review",
"structure-based drug discovery",
"related areas.graphical abstract"
] |
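To make the "structure-based" input concrete, the toy below featurizes a protein-ligand complex by binned heavy-atom contact distances and regresses affinity with a small MLP. Everything here is a stand-in — random coordinates instead of parsed PDB structures, invented pKd labels, and a far cruder representation than the grid, graph, and surface models the review surveys.

```python
# Distance-shell contact features + MLP regression for binding affinity.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def contact_features(prot_xyz: np.ndarray, lig_xyz: np.ndarray,
                     bins: np.ndarray = np.arange(0.0, 10.0, 1.0)) -> np.ndarray:
    # Count protein-ligand atom pairs per distance shell, then normalize.
    d = np.linalg.norm(prot_xyz[:, None] - lig_xyz[None, :], axis=-1)
    hist, _ = np.histogram(d, bins=bins)
    return hist / max(hist.sum(), 1)

# 100 fake complexes: 50 protein atoms, 10 ligand atoms each (random coordinates).
X = np.stack([contact_features(rng.normal(size=(50, 3)), rng.normal(size=(10, 3)))
              for _ in range(100)])
y = rng.normal(size=100)                           # invented pKd labels, illustration only
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=1).fit(X, y)
print("sample prediction:", model.predict(X[:1]))
```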