title
stringlengths 31
206
| authors
sequencelengths 1
85
| abstract
stringlengths 428
3.21k
| doi
stringlengths 21
31
| cleaned_title
stringlengths 31
206
| cleaned_abstract
stringlengths 428
3.21k
| key_phrases
sequencelengths 19
150
|
---|---|---|---|---|---|---|
Deep learning in magnetic resonance enterography for Crohn’s disease assessment: a systematic review | [
"Ofir Brem",
"David Elisha",
"Eli Konen",
"Michal Amitai",
"Eyal Klang"
] | Crohn’s disease (CD) poses significant morbidity, underscoring the need for effective, non-invasive inflammatory assessment using magnetic resonance enterography (MRE). This literature review evaluates recent publications on the role of deep learning in improving MRE for CD assessment. We searched MEDLINE/PUBMED for studies that reported the use of deep learning algorithms for assessment of CD activity. The study was conducted according to the PRISMA guidelines. The risk of bias was evaluated using the QUADAS‐2 tool. Five eligible studies, encompassing 468 subjects, were identified. Our study suggests that diverse deep learning applications, including image quality enhancement, bowel segmentation for disease burden quantification, and 3D reconstruction for surgical planning are useful and promising for CD assessment. However, most of the studies are preliminary, retrospective studies, and have a high risk of bias in at least one category. Future research is needed to assess how deep learning can impact CD patient diagnostics, particularly when considering the increasing integration of such models into hospital systems. | 10.1007/s00261-024-04326-4 | deep learning in magnetic resonance enterography for crohn’s disease assessment: a systematic review | crohn’s disease (cd) poses significant morbidity, underscoring the need for effective, non-invasive inflammatory assessment using magnetic resonance enterography (mre). this literature review evaluates recent publications on the role of deep learning in improving mre for cd assessment. we searched medline/pubmed for studies that reported the use of deep learning algorithms for assessment of cd activity. the study was conducted according to the prisma guidelines. the risk of bias was evaluated using the quadas‐2 tool. five eligible studies, encompassing 468 subjects, were identified. our study suggests that diverse deep learning applications, including image quality enhancement, bowel segmentation for disease burden quantification, and 3d reconstruction for surgical planning are useful and promising for cd assessment. however, most of the studies are preliminary, retrospective studies, and have a high risk of bias in at least one category. future research is needed to assess how deep learning can impact cd patient diagnostics, particularly when considering the increasing integration of such models into hospital systems. | [
"crohn’s disease",
"cd",
"significant morbidity",
"the need",
"effective, non-invasive inflammatory assessment",
"magnetic resonance enterography",
"mre",
"this literature review",
"recent publications",
"the role",
"deep learning",
"mre",
"cd assessment",
"we",
"medline",
"studies",
"that",
"the use",
"deep learning algorithms",
"assessment",
"cd activity",
"the study",
"the prisma guidelines",
"the risk",
"bias",
"the quadas‐2 tool",
"five eligible studies",
"468 subjects",
"our study",
"diverse deep learning applications",
"image quality enhancement",
"bowel segmentation",
"disease burden quantification",
"3d reconstruction",
"surgical planning",
"cd assessment",
"the studies",
"preliminary, retrospective studies",
"a high risk",
"bias",
"at least one category",
"future research",
"how deep learning",
"cd patient diagnostics",
"the increasing integration",
"such models",
"hospital systems",
"crohn",
"five",
"468",
"3d",
"at least one"
] |
Is deep learning good enough for software defect prediction? | [
"Sushant Kumar Pandey",
"Arya Haldar",
"Anil Kumar Tripathi"
] | Due to high impact of internet technology and rapid change in software systems, it has been a tough challenge for us to detect software defects with high accuracy. Traditional software defect prediction research mainly concentrates on manually designing features (e.g., complexity metrics) and inputting them into machine learning classifiers to distinguish defective code. To gain high prediction accuracy, researchers have developed several deep learning or high computational models for software defect prediction. However, there are several critical conditions and theoretical problems in order to achieve better results. This article explores the investigation of SDP using two deep learning techniques, i.e., SqueezeNet and Bottleneck models. We employed seven different open-source datasets from NASA Repository to perform this comparative study. We use F-Measure as a performance evaluator and found that these methods statistically outperform eight state-of-the-art methods with mean F-Measure of 0.93 ± 0.014 and 0.90 ± 0.013, respectively. We found that these two methods are significantly more effective in terms of F-Measure over large- and moderate-size projects. But they are computationally expensive in terms of training time. As the size of projects is getting immense and sophisticated, such deep learning methods are worth applying. | 10.1007/s11334-023-00542-1 | is deep learning good enough for software defect prediction? | due to high impact of internet technology and rapid change in software systems, it has been a tough challenge for us to detect software defects with high accuracy. traditional software defect prediction research mainly concentrates on manually designing features (e.g., complexity metrics) and inputting them into machine learning classifiers to distinguish defective code. to gain high prediction accuracy, researchers have developed several deep learning or high computational models for software defect prediction. however, there are several critical conditions and theoretical problems in order to achieve better results. this article explores the investigation of sdp using two deep learning techniques, i.e., squeezenet and bottleneck models. we employed seven different open-source datasets from nasa repository to perform this comparative study. we use f-measure as a performance evaluator and found that these methods statistically outperform eight state-of-the-art methods with mean f-measure of 0.93 ± 0.014 and 0.90 ± 0.013, respectively. we found that these two methods are significantly more effective in terms of f-measure over large- and moderate-size projects. but they are computationally expensive in terms of training time. as the size of projects is getting immense and sophisticated, such deep learning methods are worth applying. | [
"high impact",
"internet technology",
"rapid change",
"software systems",
"it",
"a tough challenge",
"us",
"software defects",
"high accuracy",
"traditional software defect prediction research",
"features",
"e.g., complexity metrics",
"them",
"machine learning classifiers",
"defective code",
"high prediction accuracy",
"researchers",
"several deep learning",
"high computational models",
"software defect prediction",
"several critical conditions",
"theoretical problems",
"order",
"better results",
"this article",
"the investigation",
"sdp",
"two deep learning techniques",
"i.e., squeezenet and bottleneck models",
"we",
"seven different open-source datasets",
"nasa repository",
"this comparative study",
"we",
"f-measure",
"a performance evaluator",
"these methods",
"the-art",
"mean f-measure",
"0.93 ±",
"0.90 ±",
"we",
"these two methods",
"terms",
"f-measure",
"large- and moderate-size projects",
"they",
"terms",
"training time",
"the size",
"projects",
"such deep learning methods",
"two",
"seven",
"nasa",
"eight",
"0.93",
"0.014",
"0.90",
"0.013",
"two"
] |
Forecasting oil price in times of crisis: a new evidence from machine learning versus deep learning models | [
"Haithem Awijen",
"Hachmi Ben Ameur",
"Zied Ftiti",
"Waël Louhichi"
] | This study investigates oil price forecasting during a time of crisis, from December 2007 to December 2021. As the oil market has experienced various shocks (exogenous versus endogenous), modelling and forecasting its prices dynamics become more complex based on conventional (econometric and structural) models. A new strand of literature has been attracting more attention during the last decades dealing with artificial intelligence methods. However, this literature is unanimous regarding the performance accuracy between machine learning and deep learning methods. We aim in this study to contribute to this literature by investigating the oil price forecasting based on these two approaches. Based on the stylized facts of oil prices dynamics, we select the support vector machine and long short-term memory approach, as two main models of Machine Learning and deep learning methods, respectively. Our findings support the superiority of the Deep Learning method compared to the Machine Learning approach. Interestingly, our results show that the Deep LSTM-prediction has a close pattern to the observed oil prices, demonstrating robust fitting accuracy at mid-to-long forecast horizons during crisis events. However, our results show that SVM machine learning has poor memory ability to establish a clearer understanding of time-dependent volatility and the dynamic co-movements between actual and predicted data. Moreover, our results show that the power of SVM to learn for long-term predictions is reduced, which potentially lead to distortions of forecasting performance. | 10.1007/s10479-023-05400-8 | forecasting oil price in times of crisis: a new evidence from machine learning versus deep learning models | this study investigates oil price forecasting during a time of crisis, from december 2007 to december 2021. as the oil market has experienced various shocks (exogenous versus endogenous), modelling and forecasting its prices dynamics become more complex based on conventional (econometric and structural) models. a new strand of literature has been attracting more attention during the last decades dealing with artificial intelligence methods. however, this literature is unanimous regarding the performance accuracy between machine learning and deep learning methods. we aim in this study to contribute to this literature by investigating the oil price forecasting based on these two approaches. based on the stylized facts of oil prices dynamics, we select the support vector machine and long short-term memory approach, as two main models of machine learning and deep learning methods, respectively. our findings support the superiority of the deep learning method compared to the machine learning approach. interestingly, our results show that the deep lstm-prediction has a close pattern to the observed oil prices, demonstrating robust fitting accuracy at mid-to-long forecast horizons during crisis events. however, our results show that svm machine learning has poor memory ability to establish a clearer understanding of time-dependent volatility and the dynamic co-movements between actual and predicted data. moreover, our results show that the power of svm to learn for long-term predictions is reduced, which potentially lead to distortions of forecasting performance. | [
"this study",
"oil price forecasting",
"a time",
"crisis",
"december",
"december",
"the oil market",
"various shocks",
"its prices dynamics",
"conventional (econometric and structural) models",
"a new strand",
"literature",
"more attention",
"the last decades",
"artificial intelligence methods",
"this literature",
"the performance accuracy",
"machine learning",
"deep learning methods",
"we",
"this study",
"this literature",
"the oil price forecasting",
"these two approaches",
"the stylized facts",
"we",
"the support vector machine",
"long short-term memory approach",
"two main models",
"machine learning",
"deep learning methods",
"our findings",
"the superiority",
"the deep learning method",
"the machine learning approach",
"our results",
"the deep lstm-prediction",
"a close pattern",
"the observed oil prices",
"robust fitting accuracy",
"mid",
"forecast horizons",
"crisis events",
"our results",
"svm machine learning",
"poor memory ability",
"a clearer understanding",
"time-dependent volatility",
"the dynamic co",
"-",
"movements",
"actual and predicted data",
"our results",
"the power",
"svm",
"long-term predictions",
"which",
"distortions",
"forecasting performance",
"december 2007 to december 2021",
"the last decades",
"two",
"two"
] |
A machine learning based deep convective trigger for climate models | [
"Siddharth Kumar",
"P Mukhopadhyay",
"C Balaji"
] | The present study focuses on addressing the issue of too frequent triggers of deep convection in climate models, which are primarily based on physics-based classical trigger functions such as convective available potential energy (CAPE) or cloud work function (CWF). To overcome this problem, the study proposes using machine learning (ML) based deep convective triggers as an alternative. The deep convective trigger is formulated as a binary classification problem, where the goal is to predict whether deep convection will occur or not. Two elementary classification algorithms, namely support vector machines and neural networks, are adopted in this study. Additionally, a novel method is proposed to rank the importance of input variables for the classification problem, which may aid in understanding the underlying mechanisms and factors influencing deep convection. The accuracy of the ML-based methods is compared with the widely used convective available potential energy (CAPE)-based and dynamic generation of CAPE (dCAPE) trigger function found in many convective parameterization schemes. Results demonstrate that the elementary machine learning-based algorithms can outperform the classical CAPE-based triggers, indicating the potential effectiveness of ML-based approaches in dealing with this issue. Furthermore, a method based on the Mahalanobis distance is presented for binary classification, which is easy to interpret and implement. The Mahalanobis distance-based approach shows accuracy comparable to other ML-based methods, suggesting its viability as an alternative method for deep convective triggers. By correcting for deep convective triggers using ML-based approaches, the study proposes a possible solution to improve the probability density of rain in the climate model. This improvement may help overcome the issue of excessive drizzle often observed in many climate models. | 10.1007/s00382-024-07332-w | a machine learning based deep convective trigger for climate models | the present study focuses on addressing the issue of too frequent triggers of deep convection in climate models, which are primarily based on physics-based classical trigger functions such as convective available potential energy (cape) or cloud work function (cwf). to overcome this problem, the study proposes using machine learning (ml) based deep convective triggers as an alternative. the deep convective trigger is formulated as a binary classification problem, where the goal is to predict whether deep convection will occur or not. two elementary classification algorithms, namely support vector machines and neural networks, are adopted in this study. additionally, a novel method is proposed to rank the importance of input variables for the classification problem, which may aid in understanding the underlying mechanisms and factors influencing deep convection. the accuracy of the ml-based methods is compared with the widely used convective available potential energy (cape)-based and dynamic generation of cape (dcape) trigger function found in many convective parameterization schemes. results demonstrate that the elementary machine learning-based algorithms can outperform the classical cape-based triggers, indicating the potential effectiveness of ml-based approaches in dealing with this issue. furthermore, a method based on the mahalanobis distance is presented for binary classification, which is easy to interpret and implement. the mahalanobis distance-based approach shows accuracy comparable to other ml-based methods, suggesting its viability as an alternative method for deep convective triggers. by correcting for deep convective triggers using ml-based approaches, the study proposes a possible solution to improve the probability density of rain in the climate model. this improvement may help overcome the issue of excessive drizzle often observed in many climate models. | [
"the present study",
"the issue",
"too frequent triggers",
"deep convection",
"climate models",
"which",
"physics-based classical trigger functions",
"convective available potential energy",
"cape",
"cloud work function",
"cwf",
"this problem",
"the study",
"machine learning",
"ml",
"deep convective triggers",
"an alternative",
"the deep convective trigger",
"a binary classification problem",
"the goal",
"deep convection",
"two elementary classification algorithms",
"vector machines",
"neural networks",
"this study",
"a novel method",
"the importance",
"input variables",
"the classification problem",
"which",
"the underlying mechanisms",
"factors",
"deep convection",
"the accuracy",
"the ml-based methods",
"the widely used convective available potential energy",
"cape)-based and dynamic generation",
"cape",
"dcape",
"trigger function",
"many convective parameterization schemes",
"results",
"the elementary machine learning-based algorithms",
"the classical cape-based triggers",
"the potential effectiveness",
"ml-based approaches",
"this issue",
"a method",
"the mahalanobis distance",
"binary classification",
"which",
"the mahalanobis distance-based approach",
"accuracy",
"other ml-based methods",
"its viability",
"an alternative method",
"deep convective triggers",
"deep convective triggers",
"ml-based approaches",
"the study",
"a possible solution",
"the probability density",
"rain",
"the climate model",
"this improvement",
"the issue",
"excessive drizzle",
"many climate models",
"cwf",
"two"
] |
English–Vietnamese Machine Translation Using Deep Learning for Chatbot Applications | [
"Nguyen Minh Tuan",
"Phayung Meesad",
"Ha Huy Cuong Nguyen"
] | Recently, artificial intelligence-based machine translation has been much improved over traditional methods. A machine translator is very useful for translating text or speech from one language to another. Machine translators have replaced the word mechanism in one language for words in another with verbatim translations. However, a good translation should be employed as both a sentence and a word that has a completed meaning in accordance with the context of the relevant sentence. In this paper, we studied English–Vietnamese translation using deep learning methods including recurrent neural network, long short-term memory, gated recurrent units, attention, and transformer. The deep learning-based machine translators were compared based on the test accuracy of the result translation. It was found that the best deep learning-based machine translator model was the Attention mechanism, and the Transformer yielded the second rank. | 10.1007/s42979-023-02339-2 | english–vietnamese machine translation using deep learning for chatbot applications | recently, artificial intelligence-based machine translation has been much improved over traditional methods. a machine translator is very useful for translating text or speech from one language to another. machine translators have replaced the word mechanism in one language for words in another with verbatim translations. however, a good translation should be employed as both a sentence and a word that has a completed meaning in accordance with the context of the relevant sentence. in this paper, we studied english–vietnamese translation using deep learning methods including recurrent neural network, long short-term memory, gated recurrent units, attention, and transformer. the deep learning-based machine translators were compared based on the test accuracy of the result translation. it was found that the best deep learning-based machine translator model was the attention mechanism, and the transformer yielded the second rank. | [
"artificial intelligence-based machine translation",
"traditional methods",
"a machine translator",
"text",
"speech",
"one language",
"another",
"machine translators",
"the word mechanism",
"one language",
"words",
"another",
"verbatim translations",
"a good translation",
"both a sentence",
"a word",
"that",
"a completed meaning",
"accordance",
"the context",
"the relevant sentence",
"this paper",
"we",
"english–vietnamese translation",
"deep learning methods",
"recurrent neural network",
"long short-term memory",
"gated recurrent units",
"attention",
"transformer",
"the deep learning-based machine translators",
"the test accuracy",
"the result translation",
"it",
"the best deep learning-based machine translator model",
"the attention mechanism",
"the transformer",
"the second rank",
"one language",
"english",
"vietnamese",
"second"
] |
Role of machine learning and deep learning techniques in EEG-based BCI emotion recognition system: a review | [
"Priyadarsini Samal",
"Mohammad Farukh Hashmi"
] | Emotion is a subjective psychophysiological reaction coming from external stimuli which impacts every aspect of our daily lives. Due to the continuing development of non-invasive and portable sensor technologies, such as brain-computer interfaces (BCI), intellectuals from several fields have been interested in emotion recognition techniques. Human emotions can be recognised using a variety of behavioural cues, including gestures and body language, voice, and physiological markers. The first three, however, might be ineffective because people sometimes conceal their genuine emotions either intentionally or unknowingly. More precise and objective emotion recognition can be accomplished using physiological signals. Among other physiological signals, Electroencephalogram (EEG) is more responsive and sensitive to variation in affective states. Various EEG-based emotion recognition methods have recently been introduced. This study reviews EEG-based BCIs for emotion identification and gives an outline of the progress made in this field. A summary of the datasets and techniques utilised to evoke human emotions and various emotion models is also given. We discuss several EEG feature extractions, feature selection/reduction, machine learning, and deep learning algorithms in accordance with standard emotional identification process. We provide an overview of the human brain's EEG rhythms, which are closely related to emotional states. We also go over a number of EEG-based emotion identification research and compare numerous machine learning and deep learning techniques. In conclusion, this study highlights the applications, challenges and potential areas for future research in identification and classification of human emotional states. | 10.1007/s10462-023-10690-2 | role of machine learning and deep learning techniques in eeg-based bci emotion recognition system: a review | emotion is a subjective psychophysiological reaction coming from external stimuli which impacts every aspect of our daily lives. due to the continuing development of non-invasive and portable sensor technologies, such as brain-computer interfaces (bci), intellectuals from several fields have been interested in emotion recognition techniques. human emotions can be recognised using a variety of behavioural cues, including gestures and body language, voice, and physiological markers. the first three, however, might be ineffective because people sometimes conceal their genuine emotions either intentionally or unknowingly. more precise and objective emotion recognition can be accomplished using physiological signals. among other physiological signals, electroencephalogram (eeg) is more responsive and sensitive to variation in affective states. various eeg-based emotion recognition methods have recently been introduced. this study reviews eeg-based bcis for emotion identification and gives an outline of the progress made in this field. a summary of the datasets and techniques utilised to evoke human emotions and various emotion models is also given. we discuss several eeg feature extractions, feature selection/reduction, machine learning, and deep learning algorithms in accordance with standard emotional identification process. we provide an overview of the human brain's eeg rhythms, which are closely related to emotional states. we also go over a number of eeg-based emotion identification research and compare numerous machine learning and deep learning techniques. in conclusion, this study highlights the applications, challenges and potential areas for future research in identification and classification of human emotional states. | [
"emotion",
"a subjective psychophysiological reaction",
"external stimuli",
"which",
"every aspect",
"our daily lives",
"the continuing development",
"non-invasive and portable sensor technologies",
"brain-computer interfaces",
"bci",
"intellectuals",
"several fields",
"emotion recognition techniques",
"human emotions",
"a variety",
"behavioural cues",
"gestures",
"body language",
"voice",
"physiological markers",
"people",
"their genuine emotions",
"more precise and objective emotion recognition",
"physiological signals",
"other physiological signals",
"electroencephalogram",
"eeg",
"variation",
"affective states",
"various eeg-based emotion recognition methods",
"this study",
"eeg-based bcis",
"emotion identification",
"an outline",
"the progress",
"this field",
"a summary",
"the datasets",
"techniques",
"human emotions",
"various emotion models",
"we",
"several eeg feature extractions",
"feature selection/reduction",
"machine learning",
"deep learning algorithms",
"accordance",
"standard emotional identification process",
"we",
"an overview",
"the human brain's eeg rhythms",
"which",
"emotional states",
"we",
"a number",
"eeg-based emotion identification research",
"numerous machine learning",
"deep learning techniques",
"conclusion",
"this study",
"the applications",
"challenges",
"potential areas",
"future research",
"identification",
"classification",
"human emotional states",
"first",
"three"
] |
Enhanced Hybrid Intrusion Detection System with Attention Mechanism using Deep Learning | [
"Pundalik Chavan",
"H. Hanumanthappa",
"E. G. Satish",
"Sunil Manoli",
"S. Supreeth",
"S. Rohith",
"H. C. Ramaprasad"
] | The introduction of the Attention mechanism by the Internet of Things—or WSN-IoT—in the sector has greatly enhanced the intrusion detection mechanism capabilities, whereas the deep learning techniques, together with the attention mechanism, enhance the efficacy and efficiency of IDS within WSNs. The proposed “Enhanced Hybrid Intrusion Detection System with Attention Mechanism” (EHID-SCA) underlying insight of this is that the characteristics of WSN data validate this including Convolutional Neural Networks (CNNs) and other deep learning models into an effective and coherent architecture. The design of the deep hybrid network with attention incorporates Channel Attention and Spatial Attention as some of the main components. The model biased capacity is enlarged to choose and stress important spatial and channel information from the sensor input. The method is based on the consideration of these things and it cuts noise. Therefore, the proposed technique can be made to pave the way for enabling the Internet of Things intrusion detection systems to do the automatic extraction of useful information from sensor data through the utilization of deep learning along with the Attention mechanism. The attack might therefore be better situated for mitigation in the network design that allows analysis of the geographical and temporal context of events, which are then more properly termed as intrusion events. | 10.1007/s42979-024-02852-y | enhanced hybrid intrusion detection system with attention mechanism using deep learning | the introduction of the attention mechanism by the internet of things—or wsn-iot—in the sector has greatly enhanced the intrusion detection mechanism capabilities, whereas the deep learning techniques, together with the attention mechanism, enhance the efficacy and efficiency of ids within wsns. the proposed “enhanced hybrid intrusion detection system with attention mechanism” (ehid-sca) underlying insight of this is that the characteristics of wsn data validate this including convolutional neural networks (cnns) and other deep learning models into an effective and coherent architecture. the design of the deep hybrid network with attention incorporates channel attention and spatial attention as some of the main components. the model biased capacity is enlarged to choose and stress important spatial and channel information from the sensor input. the method is based on the consideration of these things and it cuts noise. therefore, the proposed technique can be made to pave the way for enabling the internet of things intrusion detection systems to do the automatic extraction of useful information from sensor data through the utilization of deep learning along with the attention mechanism. the attack might therefore be better situated for mitigation in the network design that allows analysis of the geographical and temporal context of events, which are then more properly termed as intrusion events. | [
"the introduction",
"the attention mechanism",
"the internet",
"things",
"wsn-iot",
"the sector",
"the intrusion detection mechanism capabilities",
"the deep learning techniques",
"the attention mechanism",
"the efficacy",
"efficiency",
"ids",
"wsns",
"the proposed “enhanced hybrid intrusion detection system",
"attention mechanism",
"ehid-sca) underlying insight",
"this",
"the characteristics",
"wsn data",
"this",
"convolutional neural networks",
"cnns",
"other deep learning models",
"an effective and coherent architecture",
"the design",
"the deep hybrid network",
"attention",
"channel attention",
"spatial attention",
"some",
"the main components",
"the model biased capacity",
"important spatial and channel information",
"the sensor input",
"the method",
"the consideration",
"these things",
"it",
"noise",
"the proposed technique",
"the way",
"the internet",
"things intrusion detection systems",
"the automatic extraction",
"useful information",
"sensor data",
"the utilization",
"deep learning",
"the attention mechanism",
"the attack",
"mitigation",
"the network design",
"that",
"analysis",
"the geographical and temporal context",
"events",
"which",
"intrusion events"
] |
Federal learning-based a dual-branch deep learning model for colon polyp segmentation | [
"Xuguang Cao",
"Kefeng Fan",
"Huilin Ma"
] | The incidence of colon cancer occupies the top three places in gastrointestinal tumors, and colon polyps are an important causative factor in the development of colon cancer. Early screening for colon polyps and colon polypectomy can reduce the chances of colon cancer. The current means of colon polyp examination is through colonoscopy, taking images of the gastrointestinal tract, and then manually marking them manually, which is time-consuming and labor-intensive for doctors. Therefore, relying on advanced deep learning technology to automatically identify colon polyps in the gastrointestinal tract of the patient and segmenting the polyps is an important direction of research nowadays. Due to the privacy of medical data and the non-interoperability of disease information, this paper proposes a dual-branch colon polyp segmentation network based on federated learning, which makes it possible to achieve a better training effect under the guarantee of data independence, and secondly, the dual-branch colon polyp segmentation network proposed in this paper adopts the two different structures of convolutional neural network (CNN) and Transformer to form a dual-branch structure, and through layer-by-layer fusion embedding, the advantages between different structures are realized. In this paper, we also propose the Aggregated Attention Module (AAM) to preserve the high-dimensional semantic information and to complement the missing information in the lower layers. Ultimately our approach achieves state of the art in Kvasir-SEG and CVC-ClinicDB datasets. | 10.1007/s11042-024-19197-6 | federal learning-based a dual-branch deep learning model for colon polyp segmentation | the incidence of colon cancer occupies the top three places in gastrointestinal tumors, and colon polyps are an important causative factor in the development of colon cancer. early screening for colon polyps and colon polypectomy can reduce the chances of colon cancer. the current means of colon polyp examination is through colonoscopy, taking images of the gastrointestinal tract, and then manually marking them manually, which is time-consuming and labor-intensive for doctors. therefore, relying on advanced deep learning technology to automatically identify colon polyps in the gastrointestinal tract of the patient and segmenting the polyps is an important direction of research nowadays. due to the privacy of medical data and the non-interoperability of disease information, this paper proposes a dual-branch colon polyp segmentation network based on federated learning, which makes it possible to achieve a better training effect under the guarantee of data independence, and secondly, the dual-branch colon polyp segmentation network proposed in this paper adopts the two different structures of convolutional neural network (cnn) and transformer to form a dual-branch structure, and through layer-by-layer fusion embedding, the advantages between different structures are realized. in this paper, we also propose the aggregated attention module (aam) to preserve the high-dimensional semantic information and to complement the missing information in the lower layers. ultimately our approach achieves state of the art in kvasir-seg and cvc-clinicdb datasets. | [
"the incidence",
"colon cancer",
"the top three places",
"gastrointestinal tumors",
"colon polyps",
"an important causative factor",
"the development",
"colon cancer",
"colon polyps",
"colon polypectomy",
"the chances",
"colon cancer",
"the current means",
"colon polyp examination",
"colonoscopy",
"images",
"the gastrointestinal tract",
"them",
"which",
"doctors",
"advanced deep learning technology",
"colon polyps",
"the gastrointestinal tract",
"the patient",
"the polyps",
"an important direction",
"research",
"the privacy",
"medical data",
"the non",
"-",
"interoperability",
"disease information",
"this paper",
"a dual-branch colon polyp segmentation network",
"federated learning",
"which",
"it",
"a better training effect",
"the guarantee",
"data independence",
"the dual-branch colon polyp segmentation network",
"this paper",
"the two different structures",
"convolutional neural network",
"cnn",
"a dual-branch structure",
"layer",
"the advantages",
"different structures",
"this paper",
"we",
"the aggregated attention module",
"aam",
"the high-dimensional semantic information",
"the missing information",
"the lower layers",
"our approach",
"state",
"the art",
"kvasir-seg and cvc-clinicdb datasets",
"three",
"secondly",
"two",
"cnn"
] |
Improving Access Trust in Healthcare Through Multimodal Deep Learning for Affective Computing | [
"I. Sakthidevi",
"G. Fathima"
] | In healthcare domain, access trust is of prime importance paramount to ensure effective delivery of medical services. It also fosters positive patient-provider relationships. With the advancement of technology, affective computing has emerged as a promising approach to enhance access trust. It enables systems to understand and respond to human emotions. The research work investigates the application of multimodal deep learning techniques in affective computing to improve access trust in healthcare environment. A novel algorithm, "Belief-Emo-Fusion," is proposed, aiming to enhance the understanding and interpretation of emotions in healthcare. The research conducts a comprehensive simulation analysis, comparing the performance of Belief-Emo-Fusion with existing algorithms using simulation metrics: modal accuracy, ınference time, and F1-score. The study emphasizes the importance of emotion recognition and understanding in healthcare settings. The work highlights the role of deep learning models in facilitating empathetic and emotionally intelligent technologies. By addressing the challenges associated with affective computing, the proposed approach contributes to the development of more effective and reliable healthcare systems. The findings offer valuable insights for researchers and practitioners seeking to leverage deep learning techniques for enhancing trust and communication in healthcare environments. | 10.1007/s44230-024-00080-4 | improving access trust in healthcare through multimodal deep learning for affective computing | in healthcare domain, access trust is of prime importance paramount to ensure effective delivery of medical services. it also fosters positive patient-provider relationships. with the advancement of technology, affective computing has emerged as a promising approach to enhance access trust. it enables systems to understand and respond to human emotions. the research work investigates the application of multimodal deep learning techniques in affective computing to improve access trust in healthcare environment. a novel algorithm, "belief-emo-fusion," is proposed, aiming to enhance the understanding and interpretation of emotions in healthcare. the research conducts a comprehensive simulation analysis, comparing the performance of belief-emo-fusion with existing algorithms using simulation metrics: modal accuracy, ınference time, and f1-score. the study emphasizes the importance of emotion recognition and understanding in healthcare settings. the work highlights the role of deep learning models in facilitating empathetic and emotionally intelligent technologies. by addressing the challenges associated with affective computing, the proposed approach contributes to the development of more effective and reliable healthcare systems. the findings offer valuable insights for researchers and practitioners seeking to leverage deep learning techniques for enhancing trust and communication in healthcare environments. | [
"healthcare domain",
"access trust",
"prime importance",
"effective delivery",
"medical services",
"it",
"positive patient-provider relationships",
"the advancement",
"technology",
"affective computing",
"a promising approach",
"access trust",
"it",
"systems",
"human emotions",
"the research work",
"the application",
"techniques",
"affective computing",
"access trust",
"healthcare environment",
"a novel algorithm",
"\"belief-emo-fusion",
"the understanding",
"interpretation",
"emotions",
"healthcare",
"the research",
"a comprehensive simulation analysis",
"the performance",
"belief-emo-fusion",
"existing algorithms",
"simulation metrics",
"modal accuracy",
"ınference time",
"f1-score",
"the study",
"the importance",
"emotion recognition",
"understanding",
"healthcare settings",
"the work",
"the role",
"deep learning models",
"empathetic and emotionally intelligent technologies",
"the challenges",
"affective computing",
"the proposed approach",
"the development",
"more effective and reliable healthcare systems",
"the findings",
"valuable insights",
"researchers",
"practitioners",
"deep learning techniques",
"trust",
"communication",
"healthcare environments"
] |
Grapevine fruits disease detection using different deep learning models | [
"Om G",
"Saketh Ram Billa",
"Vishal Malik",
"Eslavath Bharath",
"Sanjeev Sharma"
] | In India, grapes are one of the most important crops for business. Grapes and their byproducts are one of India’s leading exports. The leaves of grapes are susceptible to a variety of diseases. Large-scale production of grapes can be affected by these diseases. The purpose of this research paper is to classify and detect disease on leaves using deep learning before taking a big picture. It describes the advances in detecting leaf disease and shows how to improve results. A total of seven different types of pre-trained deep Convolutional Neural Networks (CNNs) were used for transfer learning: MobileNet, InceptionResNetV2, DenseNet121, InceptionV3, Xception, VGG16 and ResNet101V2. There are 4062 images of grapevine leaves in total, divided into four classes: Leaf Blight, Black Rot, Black Measles, and Healthy. The major challenge was to detect if the leaf was healthy or had been infected, followed by classifying the type of disease of the leaf. In order to learn and detect the disease, these images were trained to seven different transfer learning models. The image classification accuracy obtained is 99.672% by DenseNet121 and it is the highest accuracy obtained compared to any accuracy reported in the literature. It is followed by 99.500% of Xception and 99.345% of VGG16, 99.345% of InceptionV3. Therefore, the proposed research paper can be useful for detecting Grapevine disease at an early stage and preventing its spread. This process can increase the production rate and profit for farmers. | 10.1007/s11042-024-19036-8 | grapevine fruits disease detection using different deep learning models | in india, grapes are one of the most important crops for business. grapes and their byproducts are one of india’s leading exports. the leaves of grapes are susceptible to a variety of diseases. large-scale production of grapes can be affected by these diseases. the purpose of this research paper is to classify and detect disease on leaves using deep learning before taking a big picture. it describes the advances in detecting leaf disease and shows how to improve results. a total of seven different types of pre-trained deep convolutional neural networks (cnns) were used for transfer learning: mobilenet, inceptionresnetv2, densenet121, inceptionv3, xception, vgg16 and resnet101v2. there are 4062 images of grapevine leaves in total, divided into four classes: leaf blight, black rot, black measles, and healthy. the major challenge was to detect if the leaf was healthy or had been infected, followed by classifying the type of disease of the leaf. in order to learn and detect the disease, these images were trained to seven different transfer learning models. the image classification accuracy obtained is 99.672% by densenet121 and it is the highest accuracy obtained compared to any accuracy reported in the literature. it is followed by 99.500% of xception and 99.345% of vgg16, 99.345% of inceptionv3. therefore, the proposed research paper can be useful for detecting grapevine disease at an early stage and preventing its spread. this process can increase the production rate and profit for farmers. | [
"india",
"grapes",
"the most important crops",
"business",
"grapes",
"their byproducts",
"india’s leading exports",
"the leaves",
"grapes",
"a variety",
"diseases",
"large-scale production",
"grapes",
"these diseases",
"the purpose",
"this research paper",
"disease",
"leaves",
"deep learning",
"a big picture",
"it",
"the advances",
"leaf disease",
"results",
"a total",
"seven different types",
"pre-trained deep convolutional neural networks",
"cnns",
"transfer learning",
"mobilenet",
"inceptionresnetv2",
"densenet121",
"inceptionv3",
"xception",
"vgg16",
"resnet101v2",
"4062 images",
"grapevine leaves",
"four classes",
"leaf blight",
"black rot",
"black measles",
"the major challenge",
"the leaf",
"the type",
"disease",
"the leaf",
"order",
"the disease",
"these images",
"seven different transfer learning models",
"the image classification accuracy",
"99.672%",
"densenet121",
"it",
"the highest accuracy",
"any accuracy",
"the literature",
"it",
"99.500%",
"xception",
"99.345%",
"vgg16",
"99.345%",
"inceptionv3",
"the proposed research paper",
"grapevine disease",
"an early stage",
"its spread",
"this process",
"the production rate",
"profit",
"farmers",
"india",
"one",
"one",
"india",
"seven",
"inceptionresnetv2",
"inceptionv3",
"4062",
"four",
"seven",
"99.672%",
"99.500%",
"99.345%",
"99.345%",
"inceptionv3"
] |
Deep learning in pulmonary nodule detection and segmentation: a systematic review | [
"Chuan Gao",
"Linyu Wu",
"Wei Wu",
"Yichao Huang",
"Xinyue Wang",
"Zhichao Sun",
"Maosheng Xu",
"Chen Gao"
] | ObjectivesThe accurate detection and precise segmentation of lung nodules on computed tomography are key prerequisites for early diagnosis and appropriate treatment of lung cancer. This study was designed to compare detection and segmentation methods for pulmonary nodules using deep-learning techniques to fill methodological gaps and biases in the existing literature.MethodsThis study utilized a systematic review with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, searching PubMed, Embase, Web of Science Core Collection, and the Cochrane Library databases up to May 10, 2023. The Quality Assessment of Diagnostic Accuracy Studies 2 criteria was used to assess the risk of bias and was adjusted with the Checklist for Artificial Intelligence in Medical Imaging. The study analyzed and extracted model performance, data sources, and task-focus information.ResultsAfter screening, we included nine studies meeting our inclusion criteria. These studies were published between 2019 and 2023 and predominantly used public datasets, with the Lung Image Database Consortium Image Collection and Image Database Resource Initiative and Lung Nodule Analysis 2016 being the most common. The studies focused on detection, segmentation, and other tasks, primarily utilizing Convolutional Neural Networks for model development. Performance evaluation covered multiple metrics, including sensitivity and the Dice coefficient.ConclusionsThis study highlights the potential power of deep learning in lung nodule detection and segmentation. It underscores the importance of standardized data processing, code and data sharing, the value of external test datasets, and the need to balance model complexity and efficiency in future research.Clinical relevance statementDeep learning demonstrates significant promise in autonomously detecting and segmenting pulmonary nodules. Future research should address methodological shortcomings and variability to enhance its clinical utility.Key Points
Deep learning shows potential in the detection and segmentation of pulmonary nodules.
There are methodological gaps and biases present in the existing literature.
Factors such as external validation and transparency affect the clinical application. | 10.1007/s00330-024-10907-0 | deep learning in pulmonary nodule detection and segmentation: a systematic review | objectivesthe accurate detection and precise segmentation of lung nodules on computed tomography are key prerequisites for early diagnosis and appropriate treatment of lung cancer. this study was designed to compare detection and segmentation methods for pulmonary nodules using deep-learning techniques to fill methodological gaps and biases in the existing literature.methodsthis study utilized a systematic review with the preferred reporting items for systematic reviews and meta-analyses guidelines, searching pubmed, embase, web of science core collection, and the cochrane library databases up to may 10, 2023. the quality assessment of diagnostic accuracy studies 2 criteria was used to assess the risk of bias and was adjusted with the checklist for artificial intelligence in medical imaging. the study analyzed and extracted model performance, data sources, and task-focus information.resultsafter screening, we included nine studies meeting our inclusion criteria. these studies were published between 2019 and 2023 and predominantly used public datasets, with the lung image database consortium image collection and image database resource initiative and lung nodule analysis 2016 being the most common. the studies focused on detection, segmentation, and other tasks, primarily utilizing convolutional neural networks for model development. performance evaluation covered multiple metrics, including sensitivity and the dice coefficient.conclusionsthis study highlights the potential power of deep learning in lung nodule detection and segmentation. it underscores the importance of standardized data processing, code and data sharing, the value of external test datasets, and the need to balance model complexity and efficiency in future research.clinical relevance statementdeep learning demonstrates significant promise in autonomously detecting and segmenting pulmonary nodules. future research should address methodological shortcomings and variability to enhance its clinical utility.key points deep learning shows potential in the detection and segmentation of pulmonary nodules. there are methodological gaps and biases present in the existing literature. factors such as external validation and transparency affect the clinical application. | [
"objectivesthe accurate detection",
"precise segmentation",
"lung nodules",
"computed tomography",
"key prerequisites",
"early diagnosis",
"appropriate treatment",
"lung cancer",
"this study",
"detection and segmentation methods",
"pulmonary nodules",
"deep-learning techniques",
"methodological gaps",
"biases",
"the existing literature.methodsthis study",
"a systematic review",
"the preferred reporting items",
"systematic reviews",
"meta-analyses guidelines",
"embase, web",
"science core collection",
"the cochrane library",
"may",
"the quality assessment",
"diagnostic accuracy studies",
"2 criteria",
"the risk",
"bias",
"the checklist",
"artificial intelligence",
"medical imaging",
"the study",
"model performance",
"data sources",
"task-focus information.resultsafter screening",
"we",
"nine studies",
"our inclusion criteria",
"these studies",
"public datasets",
"the lung image database consortium image collection and image database resource initiative",
"lung",
"the studies",
"detection",
"segmentation",
"other tasks",
"convolutional neural networks",
"model development",
"performance evaluation",
"multiple metrics",
"sensitivity",
"the dice coefficient.conclusionsthis study",
"the potential power",
"deep learning",
"lung nodule detection",
"segmentation",
"it",
"the importance",
"standardized data processing",
"code",
"data",
"sharing",
"the value",
"external test datasets",
"the need",
"model complexity",
"efficiency",
"future research.clinical relevance statementdeep learning",
"significant promise",
"segmenting pulmonary nodules",
"future research",
"methodological shortcomings",
"variability",
"its clinical utility.key points",
"deep learning",
"potential",
"the detection",
"segmentation",
"pulmonary nodules",
"methodological gaps",
"biases",
"the existing literature",
"factors",
"external validation",
"transparency",
"the clinical application",
"2",
"nine",
"between 2019 and 2023",
"2016"
] |
Emotion Detection-Based Video Recommendation System Using Machine Learning and Deep Learning Framework | [
"Anuja Bokhare",
"Tripti Kothari"
] | Emotions play an important role in identification of the interest of user with respect to product, technology and other parameter. A human being has so many emotions which is used to express emotions such as anger, surprise, happiness, sadness and fear. In today's fast growing world of technology, there is necessity of expert system which helps to recognize the interest of user based on emotions. These systems would be helpful for various applications. Techniques such as machine learning or deep learning are the emerging ones’ for the same. These are the technologies that have been used in applications like in automatic cars, for music recommendation and movie recommendation. Human emotions cannot be expressed but it can be detected by the help of techniques that is machine learning and deep learning. Current study emphases on human face detection and emotion detection to recommend video which motivates the person for changing the mood. Current system attention is on detection of face and emotion, using Haar cascade and deep face technology. Based on the outcome, an appropriate video is proposed. | 10.1007/s42979-022-01619-7 | emotion detection-based video recommendation system using machine learning and deep learning framework | emotions play an important role in identification of the interest of user with respect to product, technology and other parameter. a human being has so many emotions which is used to express emotions such as anger, surprise, happiness, sadness and fear. in today's fast growing world of technology, there is necessity of expert system which helps to recognize the interest of user based on emotions. these systems would be helpful for various applications. techniques such as machine learning or deep learning are the emerging ones’ for the same. these are the technologies that have been used in applications like in automatic cars, for music recommendation and movie recommendation. human emotions cannot be expressed but it can be detected by the help of techniques that is machine learning and deep learning. current study emphases on human face detection and emotion detection to recommend video which motivates the person for changing the mood. current system attention is on detection of face and emotion, using haar cascade and deep face technology. based on the outcome, an appropriate video is proposed. | [
"emotions",
"an important role",
"identification",
"the interest",
"user",
"respect",
"product",
"technology",
"other parameter",
"a human being",
"so many emotions",
"which",
"emotions",
"anger",
"surprise",
"happiness",
"sadness",
"fear",
"today's fast growing world",
"technology",
"necessity",
"expert system",
"which",
"the interest",
"user",
"emotions",
"these systems",
"various applications",
"techniques",
"machine learning",
"deep learning",
"the emerging ones",
"these",
"the technologies",
"that",
"applications",
"automatic cars",
"music recommendation and movie recommendation",
"human emotions",
"it",
"the help",
"techniques",
"that",
"machine learning",
"deep learning",
"current study",
"human face detection",
"emotion detection",
"video",
"which",
"the person",
"the mood",
"current system attention",
"detection",
"face",
"emotion",
"haar cascade",
"deep face technology",
"the outcome",
"an appropriate video",
"today"
] |
Genome analysis through image processing with deep learning models | [
"Yao-zhong Zhang",
"Seiya Imoto"
] | Genomic sequences are traditionally represented as strings of characters: A (adenine), C (cytosine), G (guanine), and T (thymine). However, an alternative approach involves depicting sequence-related information through image representations, such as Chaos Game Representation (CGR) and read pileup images. With rapid advancements in deep learning (DL) methods within computer vision and natural language processing, there is growing interest in applying image-based DL methods to genomic sequence analysis. These methods involve encoding genomic information as images or integrating spatial information from images into the analytical process. In this review, we summarize three typical applications that use image processing with DL models for genome analysis. We examine the utilization and advantages of these image-based approaches. | 10.1038/s10038-024-01275-0 | genome analysis through image processing with deep learning models | genomic sequences are traditionally represented as strings of characters: a (adenine), c (cytosine), g (guanine), and t (thymine). however, an alternative approach involves depicting sequence-related information through image representations, such as chaos game representation (cgr) and read pileup images. with rapid advancements in deep learning (dl) methods within computer vision and natural language processing, there is growing interest in applying image-based dl methods to genomic sequence analysis. these methods involve encoding genomic information as images or integrating spatial information from images into the analytical process. in this review, we summarize three typical applications that use image processing with dl models for genome analysis. we examine the utilization and advantages of these image-based approaches. | [
"genomic sequences",
"strings",
"characters",
"a (adenine",
"c",
"cytosine",
"g",
"guanine",
"t",
"(thymine",
"an alternative approach",
"sequence-related information",
"image representations",
"chaos game representation",
"pileup images",
"rapid advancements",
"deep learning",
"computer vision",
"natural language processing",
"interest",
"image-based dl methods",
"genomic sequence analysis",
"these methods",
"genomic information",
"images",
"spatial information",
"images",
"the analytical process",
"this review",
"we",
"three typical applications",
"that",
"image processing",
"dl models",
"genome analysis",
"we",
"the utilization",
"advantages",
"these image-based approaches",
"three"
] |
Deep learning based System for automatic motorcycle license plates detection and recognition | [
"Abdolhossein Fathi",
"Babak Moradi",
"Iman Zarei",
"Afshin Shirbandi"
] | Nowadays, by increased the utilization of motorcycle the detection and recognition of its license plate play a very important role in intelligent transportation systems (ITS). ITS can be used for traffic control, violation monitoring, e-payment systems in the toll pay and parking. Several algorithms have been developed for this task and each of them has advantages and disadvantages under different circumstances and situations. By emerging deep learning based methods, they were employed to tackle the issue of automatic license plate detection and recognition. Since the deep learning models need a high volume of data for efficient training, and also each country has its license plate template, at first, it is crucial to collect proper dataset and then trains an efficient model on it. To this end, this research collected and introduced a new dataset, and then, designed a deep learning-based system for automatically detecting and identifying Iranian motorcycle license plates. At first, images that have different dimensions, angles, levels of lighting (daytime and nighttime images), were collected from various cities. Then two datasets for detection and identification are annotated and constructed from these images. Finally for implementing an efficient deep learning-based system, three networks YOLOv8, SSD, and Faster RCNN were investigated for detection and identification of license plates. The obtained results showed that the YOLOv8 network has the best result with 98.5% accuracy in the detection stage and 99% accuracy in the identification stage. The proposed YOLOv8 model was compared with deep learning-based methods and showed better performance on the collected dataset. The collected dataset and the source code of the investigated models are publicly available. | 10.1007/s11760-024-03514-5 | deep learning based system for automatic motorcycle license plates detection and recognition | nowadays, by increased the utilization of motorcycle the detection and recognition of its license plate play a very important role in intelligent transportation systems (its). its can be used for traffic control, violation monitoring, e-payment systems in the toll pay and parking. several algorithms have been developed for this task and each of them has advantages and disadvantages under different circumstances and situations. by emerging deep learning based methods, they were employed to tackle the issue of automatic license plate detection and recognition. since the deep learning models need a high volume of data for efficient training, and also each country has its license plate template, at first, it is crucial to collect proper dataset and then trains an efficient model on it. to this end, this research collected and introduced a new dataset, and then, designed a deep learning-based system for automatically detecting and identifying iranian motorcycle license plates. at first, images that have different dimensions, angles, levels of lighting (daytime and nighttime images), were collected from various cities. then two datasets for detection and identification are annotated and constructed from these images. finally for implementing an efficient deep learning-based system, three networks yolov8, ssd, and faster rcnn were investigated for detection and identification of license plates. the obtained results showed that the yolov8 network has the best result with 98.5% accuracy in the detection stage and 99% accuracy in the identification stage. the proposed yolov8 model was compared with deep learning-based methods and showed better performance on the collected dataset. the collected dataset and the source code of the investigated models are publicly available. | [
"the utilization",
"motorcycle",
"the detection",
"recognition",
"its license plate",
"a very important role",
"intelligent transportation systems",
"its",
"its",
"traffic control",
"violation monitoring",
"e-payment systems",
"the toll pay",
"parking",
"several algorithms",
"this task",
"each",
"them",
"advantages",
"disadvantages",
"different circumstances",
"situations",
"deep learning based methods",
"they",
"the issue",
"automatic license plate detection",
"recognition",
"the deep learning models",
"a high volume",
"data",
"efficient training",
"each country",
"its license plate template",
"it",
"proper dataset",
"an efficient model",
"it",
"this end",
"this research",
"a new dataset",
"a deep learning-based system",
"iranian motorcycle license plates",
"images",
"that",
"different dimensions",
"angles",
"levels",
"lighting",
"daytime and nighttime images",
"various cities",
"two datasets",
"detection",
"identification",
"these images",
"an efficient deep learning-based system",
"three networks",
"ssd",
"faster rcnn",
"detection",
"identification",
"license plates",
"the obtained results",
"the yolov8 network",
"the best result",
"98.5% accuracy",
"the detection stage",
"99% accuracy",
"the identification stage",
"the proposed yolov8 model",
"deep learning-based methods",
"better performance",
"the collected dataset",
"the collected dataset",
"the source code",
"the investigated models",
"first",
"iranian",
"first",
"daytime",
"nighttime",
"two",
"three",
"yolov8",
"rcnn",
"yolov8",
"98.5%",
"99%",
"yolov8 model"
] |
A Novel Approach Using Transfer Learning Architectural Models Based Deep Learning Techniques for Identification and Classification of Malignant Skin Cancer | [
"Balambigai Subramanian",
"Suresh Muthusamy",
"Kokilavani Thangaraj",
"Hitesh Panchal",
"Elavarasi Kasirajan",
"Abarna Marimuthu",
"Abinaya Ravi"
] | Melanoma, a form of skin cancer originating in melanocyte cells, poses a significant health risk, although it is less prevalent than other types of skin cancer. Its detection presents challenges, even under expert observation. To enhance the classification accuracy of skin lesions, a Deep Convolutional Neural Network, Visual Geometry Group model has been proposed. However, deep learning methods typically require substantial training time. To mitigate this, transfer learning techniques are employed, reducing training duration. Data sets sourced from the International Skin Imaging Collaboration are utilized to train the model within this proposed approach. Evaluation of classification performance involves metrics such as Accuracy, Positive Predictive Value, Negative Predictive Value, Specificity, and Sensitivity. The classifier’s performance on test data is depicted through a confusion matrix. The introduction of transfer learning techniques into the Deep Convolutional Neural Network has resulted in an improved classification accuracy of 85%, compared to the 81% achieved by a standard Convolutional Neural Network. | 10.1007/s11277-024-11006-5 | a novel approach using transfer learning architectural models based deep learning techniques for identification and classification of malignant skin cancer | melanoma, a form of skin cancer originating in melanocyte cells, poses a significant health risk, although it is less prevalent than other types of skin cancer. its detection presents challenges, even under expert observation. to enhance the classification accuracy of skin lesions, a deep convolutional neural network, visual geometry group model has been proposed. however, deep learning methods typically require substantial training time. to mitigate this, transfer learning techniques are employed, reducing training duration. data sets sourced from the international skin imaging collaboration are utilized to train the model within this proposed approach. evaluation of classification performance involves metrics such as accuracy, positive predictive value, negative predictive value, specificity, and sensitivity. the classifier’s performance on test data is depicted through a confusion matrix. the introduction of transfer learning techniques into the deep convolutional neural network has resulted in an improved classification accuracy of 85%, compared to the 81% achieved by a standard convolutional neural network. | [
"melanoma",
"a form",
"melanocyte cells",
"a significant health risk",
"it",
"other types",
"skin cancer",
"its detection",
"challenges",
"expert observation",
"the classification accuracy",
"skin lesions",
"a deep convolutional neural network",
"visual geometry group model",
"deep learning methods",
"substantial training time",
"this",
"transfer learning techniques",
"training duration",
"data sets",
"the international skin imaging collaboration",
"the model",
"this proposed approach",
"evaluation",
"classification performance",
"metrics",
"accuracy",
"positive predictive value",
"negative predictive value",
"specificity",
"sensitivity",
"the classifier’s performance",
"test data",
"a confusion matrix",
"the introduction",
"techniques",
"the deep convolutional neural network",
"an improved classification accuracy",
"85%",
"the 81%",
"a standard convolutional neural network",
"85%",
"the 81%"
] |
An Adaptive Learning Rate Deep Learning Optimizer Using Long and Short-Term Gradients Based on G–L Fractional-Order Derivative | [
"Shuang Chen",
"Changlun Zhang",
"Haibing Mu"
] | Deep learning model is a multi-layered network structure, and the network parameters that evaluate the final performance of the model must be trained by a deep learning optimizer. In comparison to the mainstream optimizers that utilize integer-order derivatives reflecting only local information, fractional-order derivatives optimizers, which can capture global information, are gradually gaining attention. However, relying solely on the long-term estimated gradients computed from fractional-order derivatives while disregarding the influence of recent gradients on the optimization process can sometimes lead to issues such as local optima and slower optimization speeds. In this paper, we design an adaptive learning rate optimizer called AdaGL based on the Grünwald–Letnikov (G–L) fractional-order derivative. It changes the direction and step size of parameter updating dynamically according to the long-term and short-term gradients information, addressing the problem of falling into local minima or saddle points. To be specific, by utilizing the global memory of fractional-order calculus, we replace the gradient of parameter update with G–L fractional-order approximated gradient, making better use of the long-term curvature information in the past. Furthermore, considering that the recent gradient information often impacts the optimization phase significantly, we propose a step size control coefficient to adjust the learning rate in real-time. To compare the performance of the proposed AdaGL with the current advanced optimizers, we conduct several different deep learning tasks, including image classification on CNNs, node classification and graph classification on GNNs, image generation on GANs, and language modeling on LSTM. Extensive experimental results demonstrate that AdaGL achieves stable and fast convergence, excellent accuracy, and good generalization performance. | 10.1007/s11063-024-11571-7 | an adaptive learning rate deep learning optimizer using long and short-term gradients based on g–l fractional-order derivative | deep learning model is a multi-layered network structure, and the network parameters that evaluate the final performance of the model must be trained by a deep learning optimizer. in comparison to the mainstream optimizers that utilize integer-order derivatives reflecting only local information, fractional-order derivatives optimizers, which can capture global information, are gradually gaining attention. however, relying solely on the long-term estimated gradients computed from fractional-order derivatives while disregarding the influence of recent gradients on the optimization process can sometimes lead to issues such as local optima and slower optimization speeds. in this paper, we design an adaptive learning rate optimizer called adagl based on the grünwald–letnikov (g–l) fractional-order derivative. it changes the direction and step size of parameter updating dynamically according to the long-term and short-term gradients information, addressing the problem of falling into local minima or saddle points. to be specific, by utilizing the global memory of fractional-order calculus, we replace the gradient of parameter update with g–l fractional-order approximated gradient, making better use of the long-term curvature information in the past. furthermore, considering that the recent gradient information often impacts the optimization phase significantly, we propose a step size control coefficient to adjust the learning rate in real-time. to compare the performance of the proposed adagl with the current advanced optimizers, we conduct several different deep learning tasks, including image classification on cnns, node classification and graph classification on gnns, image generation on gans, and language modeling on lstm. extensive experimental results demonstrate that adagl achieves stable and fast convergence, excellent accuracy, and good generalization performance. | [
"deep learning model",
"a multi-layered network structure",
"the network parameters",
"that",
"the final performance",
"the model",
"a deep learning optimizer",
"comparison",
"the mainstream optimizers",
"that",
"integer-order derivatives",
"only local information, fractional-order derivatives optimizers",
"which",
"global information",
"attention",
"the long-term estimated gradients",
"fractional-order derivatives",
"the influence",
"recent gradients",
"the optimization process",
"issues",
"local optima",
"slower optimization speeds",
"this paper",
"we",
"an adaptive learning rate optimizer",
"adagl",
"the grünwald",
"–letnikov (g–l) fractional-order derivative",
"it",
"the direction",
"step size",
"parameter",
"the long-term and short-term gradients information",
"the problem",
"local minima or saddle points",
"the global memory",
"fractional-order calculus",
"we",
"the gradient",
"parameter update",
"g",
"fractional-order",
"gradient",
"better use",
"the long-term curvature information",
"the past",
"the recent gradient information",
"the optimization phase",
"we",
"a step size control",
"coefficient",
"the learning rate",
"real-time",
"the performance",
"the proposed adagl",
"the current advanced optimizers",
"we",
"several different deep learning tasks",
"image classification",
"cnns",
"node classification",
"graph classification",
"gnns",
"image generation",
"gans",
"language modeling",
"lstm",
"extensive experimental results",
"adagl",
"stable and fast convergence",
"excellent accuracy",
"good generalization performance",
"adagl",
"adagl"
] |
Deep learning-based point cloud upsampling: a review of recent trends | [
"Soonjo Kwon",
"Ji-Hyeon Hur",
"Hyungki Kim"
] | Point clouds are acquired primarily using 3D scanners and are used for product inspection and reverse engineering. The quality of the point cloud varies depending on the scanning environment and scanner specifications. The quality of the point cloud has a significant impact on the accuracy of automatic or manual modeling. In response, various point cloud post-processing technologies are being developed. Point cloud upsampling is a technique to improve the resolution of point clouds, and the purpose of upsampling is to generate additional points to express the target object more accurately and in higher detail. This technology is important in areas where high-resolution 3D representation is required, and approaches based on deep learning have been recently gaining attention. Deep learning-based point cloud upsampling research can be classified as surface consolidation or edge consolidation research depending on the target regions to be consolidated, and as supervised or self-supervised learning depending on the type of learning approaches. This study examines the latest research trends in deep learning-based point cloud sampling, analyzes the issues and limitations of each research category, and proposes future research directions.Graphical abstract | 10.1007/s42791-023-00058-6 | deep learning-based point cloud upsampling: a review of recent trends | point clouds are acquired primarily using 3d scanners and are used for product inspection and reverse engineering. the quality of the point cloud varies depending on the scanning environment and scanner specifications. the quality of the point cloud has a significant impact on the accuracy of automatic or manual modeling. in response, various point cloud post-processing technologies are being developed. point cloud upsampling is a technique to improve the resolution of point clouds, and the purpose of upsampling is to generate additional points to express the target object more accurately and in higher detail. this technology is important in areas where high-resolution 3d representation is required, and approaches based on deep learning have been recently gaining attention. deep learning-based point cloud upsampling research can be classified as surface consolidation or edge consolidation research depending on the target regions to be consolidated, and as supervised or self-supervised learning depending on the type of learning approaches. this study examines the latest research trends in deep learning-based point cloud sampling, analyzes the issues and limitations of each research category, and proposes future research directions.graphical abstract | [
"point clouds",
"3d scanners",
"product inspection",
"engineering",
"the quality",
"the point cloud",
"the scanning environment",
"scanner specifications",
"the quality",
"the point cloud",
"a significant impact",
"the accuracy",
"automatic or manual modeling",
"response",
"various point cloud post-processing technologies",
"point cloud upsampling",
"a technique",
"the resolution",
"point clouds",
"the purpose",
"upsampling",
"additional points",
"the target object",
"higher detail",
"this technology",
"areas",
"high-resolution 3d representation",
"approaches",
"deep learning",
"attention",
"deep learning-based point cloud upsampling research",
"surface consolidation",
"edge consolidation research",
"the target regions",
"supervised or self-supervised learning",
"the type",
"approaches",
"this study",
"the latest research trends",
"deep learning-based point cloud sampling",
"the issues",
"limitations",
"each research category",
"future research",
"directions.graphical abstract",
"3d",
"3d"
] |
Determination of scintillation pixel location through deep learning using a two-layer DOI detector | [
"Byungdu Jo",
"Seung-Jae Lee"
] | Small gantries and long, thin scintillation pixels are used in preclinical positron emission tomography, resulting in parallax errors outside the system’s field of view. To solve this problem, a detector for measuring the depth of interaction (DOI) was developed. In addition, conduct of research on methods for DOI measurement through deep learning is underway. In this study, we designed a detector for measurement of DOI, consisting of two layers of scintillation pixel arrays and developed a method for specifying 3-dimensional (3D) position through deep learning. DETECT2000 simulation was performed to assess the 3D-positioning accuracy of the designed detector. Data acquired through DETECT2000 simulation wereused for learning a deep learning model, and assessment of location specification accuracy was performed using data generated at a new location and the deep learning model. According to the result, the 3D-position measurement accuracy was calculated as 94.48% on average. | 10.1007/s40042-024-01134-3 | determination of scintillation pixel location through deep learning using a two-layer doi detector | small gantries and long, thin scintillation pixels are used in preclinical positron emission tomography, resulting in parallax errors outside the system’s field of view. to solve this problem, a detector for measuring the depth of interaction (doi) was developed. in addition, conduct of research on methods for doi measurement through deep learning is underway. in this study, we designed a detector for measurement of doi, consisting of two layers of scintillation pixel arrays and developed a method for specifying 3-dimensional (3d) position through deep learning. detect2000 simulation was performed to assess the 3d-positioning accuracy of the designed detector. data acquired through detect2000 simulation wereused for learning a deep learning model, and assessment of location specification accuracy was performed using data generated at a new location and the deep learning model. according to the result, the 3d-position measurement accuracy was calculated as 94.48% on average. | [
"small gantries",
"long, thin scintillation pixels",
"preclinical positron emission tomography",
"parallax errors",
"the system’s field",
"view",
"this problem",
"a detector",
"the depth",
"interaction",
"doi",
"addition",
"conduct",
"research",
"methods",
"doi measurement",
"deep learning",
"this study",
"we",
"a detector",
"measurement",
"doi",
"two layers",
"scintillation pixel arrays",
"a method",
"3-dimensional (3d) position",
"deep learning",
"detect2000 simulation",
"the 3d-positioning accuracy",
"the designed detector",
"data",
"detect2000 simulation",
"a deep learning model",
"assessment",
"location specification accuracy",
"data",
"a new location",
"the deep learning model",
"the result",
"the 3d-position measurement accuracy",
"94.48%",
"two",
"3",
"3d",
"3d",
"detect2000",
"3d",
"94.48%"
] |
Sentiment analysis using a deep ensemble learning model | [
"Muhammet Sinan Başarslan",
"Fatih Kayaalp"
] | The coronavirus pandemic has kept people away from social life and this has led to an increase in the use of social media over the past two years. Thanks to social media, people can now instantly share their thoughts on various topics such as their favourite movies, restaurants, hotels, etc. This has created a huge amount of data and many researchers from different sciences have focused on analysing this data. Natural Language Processing (NLP) is one of these areas of computer science that uses artificial technologies. Sentiment analysis is also one of the tasks of NLP, which is based on extracting emotions from huge post data. In this study, sentiment analysis was performed on two datasets of tweets about coronavirus and TripAdvisor hotel reviews. A frequency-based word representation method (Term Frequency-Inverse Document Frequency (TF-IDF)) and a prediction-based Word2Vec word embedding method were used to vectorise the datasets. Sentiment analysis models were then built using single machine learning methods (Decision Trees-DT, K-Nearest Neighbour-KNN, Naive Bayes-NB and Support Vector Machine-SVM), single deep learning methods (Long Short Term Memory-LSTM, Recurrent Neural Network-RNN) and heterogeneous ensemble learning methods (Stacking and Majority Voting) based on these single machine learning and deep learning methods. Accuracy was used as a performance measure. The heterogeneous model with stacking (LSTM-RNN) has outperformed the other models with accuracy values of 0.864 on the coronavirus dataset and 0.898 on the Trip Advisor dataset and they have been evaluated as promising results when compared to the literature. It has been observed that the use of single methods as an ensemble gives better results, which is consistent with the literature, which is a step forward in the detection of sentiments through posts. Investigating the performance of heterogeneous ensemble learning models based on different algorithms in sentiment analysis tasks is planned as future work. | 10.1007/s11042-023-17278-6 | sentiment analysis using a deep ensemble learning model | the coronavirus pandemic has kept people away from social life and this has led to an increase in the use of social media over the past two years. thanks to social media, people can now instantly share their thoughts on various topics such as their favourite movies, restaurants, hotels, etc. this has created a huge amount of data and many researchers from different sciences have focused on analysing this data. natural language processing (nlp) is one of these areas of computer science that uses artificial technologies. sentiment analysis is also one of the tasks of nlp, which is based on extracting emotions from huge post data. in this study, sentiment analysis was performed on two datasets of tweets about coronavirus and tripadvisor hotel reviews. a frequency-based word representation method (term frequency-inverse document frequency (tf-idf)) and a prediction-based word2vec word embedding method were used to vectorise the datasets. sentiment analysis models were then built using single machine learning methods (decision trees-dt, k-nearest neighbour-knn, naive bayes-nb and support vector machine-svm), single deep learning methods (long short term memory-lstm, recurrent neural network-rnn) and heterogeneous ensemble learning methods (stacking and majority voting) based on these single machine learning and deep learning methods. accuracy was used as a performance measure. the heterogeneous model with stacking (lstm-rnn) has outperformed the other models with accuracy values of 0.864 on the coronavirus dataset and 0.898 on the trip advisor dataset and they have been evaluated as promising results when compared to the literature. it has been observed that the use of single methods as an ensemble gives better results, which is consistent with the literature, which is a step forward in the detection of sentiments through posts. investigating the performance of heterogeneous ensemble learning models based on different algorithms in sentiment analysis tasks is planned as future work. | [
"people",
"social life",
"this",
"an increase",
"the use",
"social media",
"the past two years",
"social media",
"people",
"their thoughts",
"various topics",
"their favourite movies",
"restaurants",
"hotels",
"this",
"a huge amount",
"data",
"many researchers",
"different sciences",
"this data",
"natural language processing",
"nlp",
"these areas",
"computer science",
"that",
"artificial technologies",
"sentiment analysis",
"the tasks",
"nlp",
"which",
"emotions",
"huge post data",
"this study",
"sentiment analysis",
"two datasets",
"tweets",
"coronavirus and tripadvisor hotel reviews",
"a frequency-based word representation method",
"term frequency-inverse document frequency",
"tf-idf",
"a prediction-based word2vec word",
"embedding method",
"the datasets",
"sentiment analysis models",
"single machine learning methods",
"decision trees",
"dt",
"k-nearest neighbour-knn, naive bayes",
"vector machine-svm",
"single deep learning methods",
"long short term memory-lstm",
"recurrent neural network-rnn",
"heterogeneous ensemble learning methods",
"stacking and majority voting",
"these single machine learning",
"deep learning methods",
"accuracy",
"a performance measure",
"the heterogeneous model",
"lstm-rnn",
"the other models",
"accuracy values",
"the coronavirus dataset",
"the trip advisor dataset",
"they",
"results",
"the literature",
"it",
"the use",
"single methods",
"an ensemble",
"better results",
"which",
"the literature",
"which",
"a step",
"the detection",
"sentiments",
"posts",
"the performance",
"heterogeneous ensemble learning models",
"different algorithms",
"sentiment analysis tasks",
"future work",
"the past two years",
"two",
"accuracy",
"0.864",
"0.898"
] |
Differential testing for machine learning: an analysis for classification algorithms beyond deep learning | [
"Steffen Herbold",
"Steffen Tunkel"
] | Differential testing is a useful approach that uses different implementations of the same algorithms and compares the results for software testing. In recent years, this approach was successfully used for test campaigns of deep learning frameworks. There is little knowledge about the application of differential testing beyond deep learning. Within this article, we want to close this gap for classification algorithms. We conduct a case study using Scikit-learn, Weka, Spark MLlib, and Caret in which we identify the potential of differential testing by considering which algorithms are available in multiple frameworks, the feasibility by identifying pairs of algorithms that should exhibit the same behavior, and the effectiveness by executing tests for the identified pairs and analyzing the deviations. While we found a large potential for popular algorithms, the feasibility seems limited because, often, it is not possible to determine configurations that are the same in other frameworks. The execution of the feasible tests revealed that there is a large number of deviations for the scores and classes. Only a lenient approach based on statistical significance of classes does not lead to a huge amount of test failures. The potential of differential testing beyond deep learning seems limited for research into the quality of machine learning libraries. Practitioners may still use the approach if they have deep knowledge about implementations, especially if a coarse oracle that only considers significant differences of classes is sufficient. | 10.1007/s10664-022-10273-9 | differential testing for machine learning: an analysis for classification algorithms beyond deep learning | differential testing is a useful approach that uses different implementations of the same algorithms and compares the results for software testing. in recent years, this approach was successfully used for test campaigns of deep learning frameworks. there is little knowledge about the application of differential testing beyond deep learning. within this article, we want to close this gap for classification algorithms. we conduct a case study using scikit-learn, weka, spark mllib, and caret in which we identify the potential of differential testing by considering which algorithms are available in multiple frameworks, the feasibility by identifying pairs of algorithms that should exhibit the same behavior, and the effectiveness by executing tests for the identified pairs and analyzing the deviations. while we found a large potential for popular algorithms, the feasibility seems limited because, often, it is not possible to determine configurations that are the same in other frameworks. the execution of the feasible tests revealed that there is a large number of deviations for the scores and classes. only a lenient approach based on statistical significance of classes does not lead to a huge amount of test failures. the potential of differential testing beyond deep learning seems limited for research into the quality of machine learning libraries. practitioners may still use the approach if they have deep knowledge about implementations, especially if a coarse oracle that only considers significant differences of classes is sufficient. | [
"differential testing",
"a useful approach",
"that",
"different implementations",
"the same algorithms",
"the results",
"software testing",
"recent years",
"this approach",
"test campaigns",
"deep learning frameworks",
"little knowledge",
"the application",
"differential testing",
"deep learning",
"this article",
"we",
"this gap",
"classification algorithms",
"we",
"a case study",
"scikit-learn",
"weka",
"spark mllib",
"which",
"we",
"the potential",
"differential testing",
"which algorithms",
"multiple frameworks",
"the feasibility",
"pairs",
"algorithms",
"that",
"the same behavior",
"the effectiveness",
"tests",
"the identified pairs",
"the deviations",
"we",
"a large potential",
"popular algorithms",
"the feasibility",
"it",
"configurations",
"that",
"other frameworks",
"the execution",
"the feasible tests",
"a large number",
"deviations",
"the scores",
"classes",
"only a lenient approach",
"statistical significance",
"classes",
"a huge amount",
"test failures",
"the potential",
"differential testing",
"deep learning",
"research",
"the quality",
"machine learning libraries",
"practitioners",
"the approach",
"they",
"deep knowledge",
"implementations",
"that",
"significant differences",
"classes",
"recent years",
"weka",
"spark mllib"
] |
Deep encoder–decoder-based shared learning for multi-criteria recommendation systems | [
"Salam Fraihat",
"Bushra Abu Tahon",
"Bushra Alhijawi",
"Arafat Awajan"
] | A recommendation system (RS) can help overcome information overload issues by offering personalized predictions for users. Typically, RS considers the overall ratings of users on items to generate recommendations for them. However, users may consider several aspects when evaluating items. Hence, a multi-criteria RS considers n-aspects of items to generate more accurate recommendations than a single-criteria RS. This research paper proposes two deep encoder–decoder models based on shared learning for a multi-criteria RS, multi-modal deep encoder–decoder-based shared learning (MMEDSL) and multi-criteria deep encoder–decoder-based shared learning (MCEDSL). MMEDSL employs the shared learning technique by concentrating on the multi-modality concept in deep learning, while MCEDSL focuses on the training process to apply the shared learning technique. The shared learning captures useful shared information during the learning process since the multi-criteria may have hidden inter-relationships. A set of experiments were conducted to compare the proposed models with recent baseline approaches. The Yahoo! Movies multi-criteria dataset was utilized. The results demonstrate that the proposed models outperform other algorithms. In addition, the results show that integrating the shared learning technique with the RS produces precise recommendation predictions. | 10.1007/s00521-023-09007-9 | deep encoder–decoder-based shared learning for multi-criteria recommendation systems | a recommendation system (rs) can help overcome information overload issues by offering personalized predictions for users. typically, rs considers the overall ratings of users on items to generate recommendations for them. however, users may consider several aspects when evaluating items. hence, a multi-criteria rs considers n-aspects of items to generate more accurate recommendations than a single-criteria rs. this research paper proposes two deep encoder–decoder models based on shared learning for a multi-criteria rs, multi-modal deep encoder–decoder-based shared learning (mmedsl) and multi-criteria deep encoder–decoder-based shared learning (mcedsl). mmedsl employs the shared learning technique by concentrating on the multi-modality concept in deep learning, while mcedsl focuses on the training process to apply the shared learning technique. the shared learning captures useful shared information during the learning process since the multi-criteria may have hidden inter-relationships. a set of experiments were conducted to compare the proposed models with recent baseline approaches. the yahoo! movies multi-criteria dataset was utilized. the results demonstrate that the proposed models outperform other algorithms. in addition, the results show that integrating the shared learning technique with the rs produces precise recommendation predictions. | [
"a recommendation system",
"information overload issues",
"personalized predictions",
"users",
"rs",
"the overall ratings",
"users",
"items",
"recommendations",
"them",
"users",
"several aspects",
"items",
"a multi-criteria rs",
"n",
"-aspects",
"items",
"more accurate recommendations",
"a single-criteria rs",
"this research paper",
"two deep encoder",
"decoder models",
"shared learning",
"a multi-criteria rs, multi-modal deep encoder",
"decoder-based shared learning",
"mmedsl",
"multi-criteria deep encoder",
"decoder-based shared learning",
"mcedsl",
"the shared learning technique",
"the multi-modality concept",
"deep learning",
"the training process",
"the shared learning technique",
"the shared learning captures",
"useful shared information",
"the learning process",
"-",
"criteria",
"inter",
"-",
"relationships",
"a set",
"experiments",
"the proposed models",
"recent baseline approaches",
"the yahoo",
"movies multi-criteria dataset",
"the results",
"the proposed models",
"other algorithms",
"addition",
"the results",
"the shared learning technique",
"the rs",
"precise recommendation predictions",
"two",
"yahoo"
] |
Federated Deep Learning for Solving an Image Classification Problem on a Desktop Grid System | [
"I. I. Kurochkin",
"A. I. Prun",
"A. A. Balaev"
] | AbstractThe paper considers the adaptation of federated deep learning on a desktop grid system using the example of an image classification problem. Restrictions are imposed on data transfer between the nodes of the desktop grid only for a part of the dataset. The implementation of federated deep learning on a desktop grid system based on the BOINC platform is considered. Methods for generating local datasets for desktop grid nodes are discussed. The results of numerical experiments are presented. | 10.1134/S1063779624030560 | federated deep learning for solving an image classification problem on a desktop grid system | abstractthe paper considers the adaptation of federated deep learning on a desktop grid system using the example of an image classification problem. restrictions are imposed on data transfer between the nodes of the desktop grid only for a part of the dataset. the implementation of federated deep learning on a desktop grid system based on the boinc platform is considered. methods for generating local datasets for desktop grid nodes are discussed. the results of numerical experiments are presented. | [
"abstractthe paper",
"the adaptation",
"federated deep learning",
"a desktop grid system",
"the example",
"an image classification problem",
"restrictions",
"data transfer",
"the nodes",
"the desktop grid",
"a part",
"the dataset",
"the implementation",
"federated deep learning",
"a desktop grid system",
"the boinc platform",
"methods",
"local datasets",
"desktop grid nodes",
"the results",
"numerical experiments"
] |
Machine and deep learning techniques for the prediction of diabetics: a review | [
"Sandip Kumar Singh Modak",
"Vijay Kumar Jha"
] | Diabetes has become one of the significant reasons for public sickness and death in worldwide. By 2019, diabetes had affected more than 463 million people worldwide. According to the International Diabetes Federation report, this figure is expected to rise to more than 700 million in 2040, so early screening and diagnosis of diabetes patients have great significance in detecting and treating diabetes on time. Diabetes is a multi factorial metabolic disease, its diagnostic criteria are difficult to cover all the ethology, damage degree, pathogenesis and other factors, so there is a situation for uncertainty and imprecision under various aspects of the medical diagnosis process. With the development of Data mining, researchers find that machine learning and deep learning, playing an important role in diabetes prediction research. This paper is an in-depth study on the application of machine learning and deep learning techniques in the prediction of diabetics. In addition, this paper also discusses the different methodology used in machine and deep learning for prediction of diabetics since last two decades and examines the methods used, to explore their successes and failure. This review would help researchers and practitioners understand the current state-of-the-art methods and identify gaps in the literature. | 10.1007/s11042-024-19766-9 | machine and deep learning techniques for the prediction of diabetics: a review | diabetes has become one of the significant reasons for public sickness and death in worldwide. by 2019, diabetes had affected more than 463 million people worldwide. according to the international diabetes federation report, this figure is expected to rise to more than 700 million in 2040, so early screening and diagnosis of diabetes patients have great significance in detecting and treating diabetes on time. diabetes is a multi factorial metabolic disease, its diagnostic criteria are difficult to cover all the ethology, damage degree, pathogenesis and other factors, so there is a situation for uncertainty and imprecision under various aspects of the medical diagnosis process. with the development of data mining, researchers find that machine learning and deep learning, playing an important role in diabetes prediction research. this paper is an in-depth study on the application of machine learning and deep learning techniques in the prediction of diabetics. in addition, this paper also discusses the different methodology used in machine and deep learning for prediction of diabetics since last two decades and examines the methods used, to explore their successes and failure. this review would help researchers and practitioners understand the current state-of-the-art methods and identify gaps in the literature. | [
"diabetes",
"the significant reasons",
"public sickness",
"death",
"diabetes",
"more than 463 million people",
"the international diabetes federation report",
"this figure",
"early screening",
"diagnosis",
"diabetes patients",
"great significance",
"diabetes",
"time",
"diabetes",
"a multi factorial metabolic disease",
"its diagnostic criteria",
"all the ethology",
"damage degree",
"pathogenesis",
"other factors",
"a situation",
"uncertainty",
"imprecision",
"various aspects",
"the medical diagnosis process",
"the development",
"data mining",
"researchers",
"an important role",
"diabetes prediction research",
"this paper",
"-depth",
"the application",
"machine learning",
"deep learning techniques",
"the prediction",
"diabetics",
"addition",
"this paper",
"the different methodology",
"machine",
"deep learning",
"prediction",
"diabetics",
"last two decades",
"the methods",
"their successes",
"failure",
"this review",
"researchers",
"practitioners",
"the-art",
"gaps",
"the literature",
"2019",
"more than 463 million",
"more than 700 million",
"2040",
"last two decades"
] |
Deep learning-based PET image denoising and reconstruction: a review | [
"Fumio Hashimoto",
"Yuya Onishi",
"Kibo Ote",
"Hideaki Tashima",
"Andrew J. Reader",
"Taiga Yamaya"
] | This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology. | 10.1007/s12194-024-00780-3 | deep learning-based pet image denoising and reconstruction: a review | this review focuses on positron emission tomography (pet) imaging algorithms and traces the evolution of pet image reconstruction methods. first, we provide an overview of conventional pet image reconstruction methods from filtered backprojection through to recent iterative pet image reconstruction algorithms, and then review deep learning methods for pet data up to the latest innovations within three main categories. the first category involves post-processing methods for pet image denoising. the second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. the third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. we discuss future perspectives on pet imaging and deep learning technology. | [
"this review",
"positron emission tomography",
"pet",
"the evolution",
"pet image reconstruction methods",
"we",
"an overview",
"conventional pet image reconstruction methods",
"filtered backprojection",
"recent iterative pet image reconstruction algorithms",
"deep learning methods",
"pet data",
"the latest innovations",
"three main categories",
"the first category",
"post-processing methods",
"pet image denoising",
"the second category",
"direct image reconstruction methods",
"that",
"mappings",
"sinograms",
"the reconstructed images",
"end",
"the third category",
"iterative reconstruction methods",
"that",
"conventional iterative image reconstruction",
"neural-network enhancement",
"we",
"future perspectives",
"pet imaging",
"deep learning technology",
"first",
"three",
"first",
"second",
"third"
] |
Deep learning algorithms for hedging with frictions | [
"Xiaofei Shi",
"Daran Xu",
"Zhanhao Zhang"
] | This work studies the deep learning-based numerical algorithms for optimal hedging problems in markets with general convex transaction costs. Our main focus is on how these algorithms scale with the length of the trading time horizon. Based on the comparison results of the FBSDE solver by Han, Jentzen, and E (2018) and the Deep Hedging algorithm by Buehler, Gonon, Teichmann, and Wood (2019), we propose a Stable-Transfer Hedging (ST-Hedging) algorithm, to aggregate the convenience of the leading-order approximation formulas and the accuracy of the deep learning-based algorithms. Our ST-Hedging algorithm achieves the same state-of-the-art performance in short and moderately long time horizon as FBSDE solver and Deep Hedging, and generalize well to long time horizon when previous algorithms become suboptimal. With the transfer learning technique, ST-Hedging drastically reduce the training time, and shows great scalability to high-dimensional settings. This opens up new possibilities in model-based deep learning algorithms in economics, finance, and operational research, which takes advantage of the domain expert knowledge and the accuracy of the learning-based methods. | 10.1007/s42521-023-00075-z | deep learning algorithms for hedging with frictions | this work studies the deep learning-based numerical algorithms for optimal hedging problems in markets with general convex transaction costs. our main focus is on how these algorithms scale with the length of the trading time horizon. based on the comparison results of the fbsde solver by han, jentzen, and e (2018) and the deep hedging algorithm by buehler, gonon, teichmann, and wood (2019), we propose a stable-transfer hedging (st-hedging) algorithm, to aggregate the convenience of the leading-order approximation formulas and the accuracy of the deep learning-based algorithms. our st-hedging algorithm achieves the same state-of-the-art performance in short and moderately long time horizon as fbsde solver and deep hedging, and generalize well to long time horizon when previous algorithms become suboptimal. with the transfer learning technique, st-hedging drastically reduce the training time, and shows great scalability to high-dimensional settings. this opens up new possibilities in model-based deep learning algorithms in economics, finance, and operational research, which takes advantage of the domain expert knowledge and the accuracy of the learning-based methods. | [
"this work",
"the deep learning-based numerical algorithms",
"optimal hedging problems",
"markets",
"general convex transaction costs",
"our main focus",
"these algorithms",
"the length",
"the trading time horizon",
"the comparison results",
"the fbsde",
"han",
"jentzen",
"e",
"buehler",
"gonon",
"teichmann",
"wood",
"we",
"a stable-transfer hedging",
"st-hedging",
"the convenience",
"the leading-order approximation formulas",
"the accuracy",
"the deep learning-based algorithms",
"our st-hedging algorithm",
"the-art",
"short and moderately long time horizon",
"fbsde",
"long time horizon",
"previous algorithms",
"the transfer learning technique",
"st-hedging",
"the training time",
"great scalability",
"high-dimensional settings",
"this",
"new possibilities",
"model-based deep learning algorithms",
"economics",
"finance",
"operational research",
"which",
"advantage",
"the domain expert knowledge",
"the accuracy",
"the learning-based methods",
"han",
"2018",
"teichmann",
"2019"
] |
Enhancing trash classification in smart cities using federated deep learning | [
"Haroon Ahmed Khan",
"Syed Saud Naqvi",
"Abeer A. K. Alharbi",
"Salihah Alotaibi",
"Mohammed Alkhathami"
] | Efficient Waste management plays a crucial role to ensure clean and green environment in the smart cities. This study investigates the critical role of efficient trash classification in achieving sustainable solid waste management within smart city environments. We conduct a comparative analysis of various trash classification methods utilizing deep learning models built on convolutional neural networks (CNNs). Leveraging the PyTorch open-source framework and the TrashBox dataset, we perform experiments involving ten unique deep neural network models. Our approach aims to maximize training accuracy. Through extensive experimentation, we observe the consistent superiority of the ResNext-101 model compared to others, achieving exceptional training, validation, and test accuracies. These findings illuminate the potential of CNN-based techniques in significantly advancing trash classification for optimized solid waste management within smart city initiatives. Lastly, this study presents a distributed framework based on federated learning that can be used to optimize the performance of a combination of CNN models for trash detection. | 10.1038/s41598-024-62003-4 | enhancing trash classification in smart cities using federated deep learning | efficient waste management plays a crucial role to ensure clean and green environment in the smart cities. this study investigates the critical role of efficient trash classification in achieving sustainable solid waste management within smart city environments. we conduct a comparative analysis of various trash classification methods utilizing deep learning models built on convolutional neural networks (cnns). leveraging the pytorch open-source framework and the trashbox dataset, we perform experiments involving ten unique deep neural network models. our approach aims to maximize training accuracy. through extensive experimentation, we observe the consistent superiority of the resnext-101 model compared to others, achieving exceptional training, validation, and test accuracies. these findings illuminate the potential of cnn-based techniques in significantly advancing trash classification for optimized solid waste management within smart city initiatives. lastly, this study presents a distributed framework based on federated learning that can be used to optimize the performance of a combination of cnn models for trash detection. | [
"efficient waste management",
"a crucial role",
"clean and green environment",
"the smart cities",
"this study",
"the critical role",
"efficient trash classification",
"sustainable solid waste management",
"smart city environments",
"we",
"a comparative analysis",
"various trash classification methods",
"deep learning models",
"convolutional neural networks",
"cnns",
"the pytorch open-source framework",
"the trashbox dataset",
"we",
"experiments",
"ten unique deep neural network models",
"our approach",
"training accuracy",
"extensive experimentation",
"we",
"the consistent superiority",
"the resnext-101 model",
"others",
"exceptional training",
"validation",
"test accuracies",
"these findings",
"the potential",
"cnn-based techniques",
"significantly advancing trash classification",
"optimized solid waste management",
"smart city initiatives",
"this study",
"a distributed framework",
"federated learning",
"that",
"the performance",
"a combination",
"cnn models",
"trash detection",
"smart city",
"cnn",
"smart city",
"cnn"
] |
Interpreting and generalizing deep learning in physics-based problems with functional linear models | [
"Amirhossein Arzani",
"Lingxiao Yuan",
"Pania Newell",
"Bei Wang"
] | Although deep learning has achieved remarkable success in various scientific machine learning applications, its opaque nature poses concerns regarding interpretability and generalization capabilities beyond the training data. Interpretability is crucial and often desired in modeling physical systems. Moreover, acquiring extensive datasets that encompass the entire range of input features is challenging in many physics-based learning tasks, leading to increased errors when encountering out-of-distribution (OOD) data. In this work, motivated by the field of functional data analysis (FDA), we propose generalized functional linear models as an interpretable surrogate for a trained deep learning model. We demonstrate that our model could be trained either based on a trained neural network (post-hoc interpretation) or directly from training data (interpretable operator learning). A library of generalized functional linear models with different kernel functions is considered and sparse regression is used to discover an interpretable surrogate model that could be analytically presented. We present test cases in solid mechanics, fluid mechanics, and transport. Our results demonstrate that our model can achieve comparable accuracy to deep learning and can improve OOD generalization while providing more transparency and interpretability. Our study underscores the significance of interpretable representation in scientific machine learning and showcases the potential of functional linear models as a tool for interpreting and generalizing deep learning. | 10.1007/s00366-024-01987-z | interpreting and generalizing deep learning in physics-based problems with functional linear models | although deep learning has achieved remarkable success in various scientific machine learning applications, its opaque nature poses concerns regarding interpretability and generalization capabilities beyond the training data. interpretability is crucial and often desired in modeling physical systems. moreover, acquiring extensive datasets that encompass the entire range of input features is challenging in many physics-based learning tasks, leading to increased errors when encountering out-of-distribution (ood) data. in this work, motivated by the field of functional data analysis (fda), we propose generalized functional linear models as an interpretable surrogate for a trained deep learning model. we demonstrate that our model could be trained either based on a trained neural network (post-hoc interpretation) or directly from training data (interpretable operator learning). a library of generalized functional linear models with different kernel functions is considered and sparse regression is used to discover an interpretable surrogate model that could be analytically presented. we present test cases in solid mechanics, fluid mechanics, and transport. our results demonstrate that our model can achieve comparable accuracy to deep learning and can improve ood generalization while providing more transparency and interpretability. our study underscores the significance of interpretable representation in scientific machine learning and showcases the potential of functional linear models as a tool for interpreting and generalizing deep learning. | [
"deep learning",
"remarkable success",
"various scientific machine learning applications",
"its opaque nature",
"concerns",
"interpretability and generalization capabilities",
"the training data",
"interpretability",
"physical systems",
"extensive datasets",
"that",
"the entire range",
"input features",
"many physics-based learning tasks",
"increased errors",
"distribution",
"this work",
"the field",
"functional data analysis",
"fda",
"we",
"generalized functional linear models",
"an interpretable surrogate",
"a trained deep learning model",
"we",
"our model",
"a trained neural network",
"post-hoc interpretation",
"training data",
"interpretable operator learning",
"a library",
"generalized functional linear models",
"different kernel functions",
"sparse regression",
"an interpretable surrogate model",
"that",
"we",
"test cases",
"solid mechanics",
"fluid mechanics",
"transport",
"our results",
"our model",
"comparable accuracy",
"deep learning",
"ood generalization",
"more transparency",
"interpretability",
"our study",
"the significance",
"interpretable representation",
"scientific machine learning",
"the potential",
"functional linear models",
"a tool",
"interpreting",
"deep learning",
"fda",
"linear",
"linear"
] |
Can supervised deep learning architecture outperform autoencoders in building propensity score models for matching? | [
"Mohammad Ehsanul Karim"
] | PurposePropensity score matching is vital in epidemiological studies using observational data, yet its estimates relies on correct model-specification. This study assesses supervised deep learning models and unsupervised autoencoders for propensity score estimation, comparing them with traditional methods for bias and variance accuracy in treatment effect estimations.MethodsUtilizing a plasmode simulation based on the Right Heart Catheterization dataset, under a variety of settings, we evaluated (1) a supervised deep learning architecture and (2) an unsupervised autoencoder, alongside two traditional methods: logistic regression and a spline-based method in estimating propensity scores for matching. Performance metrics included bias, standard errors, and coverage probability. The analysis was also extended to real-world data, with estimates compared to those obtained via a double robust approach.ResultsThe analysis revealed that supervised deep learning models outperformed unsupervised autoencoders in variance estimation while maintaining comparable levels of bias. These results were supported by analyses of real-world data, where the supervised model’s estimates closely matched those derived from conventional methods. Additionally, deep learning models performed well compared to traditional methods in settings where exposure was rare.ConclusionSupervised deep learning models hold promise in refining propensity score estimations in epidemiological research, offering nuanced confounder adjustment, especially in complex datasets. We endorse integrating supervised deep learning into epidemiological research and share reproducible codes for widespread use and methodological transparency. | 10.1186/s12874-024-02284-5 | can supervised deep learning architecture outperform autoencoders in building propensity score models for matching? | purposepropensity score matching is vital in epidemiological studies using observational data, yet its estimates relies on correct model-specification. this study assesses supervised deep learning models and unsupervised autoencoders for propensity score estimation, comparing them with traditional methods for bias and variance accuracy in treatment effect estimations.methodsutilizing a plasmode simulation based on the right heart catheterization dataset, under a variety of settings, we evaluated (1) a supervised deep learning architecture and (2) an unsupervised autoencoder, alongside two traditional methods: logistic regression and a spline-based method in estimating propensity scores for matching. performance metrics included bias, standard errors, and coverage probability. the analysis was also extended to real-world data, with estimates compared to those obtained via a double robust approach.resultsthe analysis revealed that supervised deep learning models outperformed unsupervised autoencoders in variance estimation while maintaining comparable levels of bias. these results were supported by analyses of real-world data, where the supervised model’s estimates closely matched those derived from conventional methods. additionally, deep learning models performed well compared to traditional methods in settings where exposure was rare.conclusionsupervised deep learning models hold promise in refining propensity score estimations in epidemiological research, offering nuanced confounder adjustment, especially in complex datasets. we endorse integrating supervised deep learning into epidemiological research and share reproducible codes for widespread use and methodological transparency. | [
"purposepropensity score matching",
"epidemiological studies",
"observational data",
"its estimates",
"correct model-specification",
"this study",
"deep learning models",
"autoencoders",
"propensity score estimation",
"them",
"traditional methods",
"bias",
"variance accuracy",
"treatment effect",
"a plasmode simulation",
"the right heart catheterization dataset",
"a variety",
"settings",
"we",
"1) a supervised deep learning architecture",
"(2) an unsupervised autoencoder",
"two traditional methods",
"logistic regression",
"a spline-based method",
"propensity scores",
"performance metrics",
"bias",
"standard errors",
"coverage probability",
"the analysis",
"real-world data",
"estimates",
"those",
"a double robust",
"approach.resultsthe analysis",
"supervised deep learning models",
"unsupervised autoencoders",
"variance estimation",
"comparable levels",
"bias",
"these results",
"analyses",
"real-world data",
"the supervised model’s estimates",
"those",
"conventional methods",
"deep learning models",
"traditional methods",
"settings",
"exposure",
"deep learning models",
"promise",
"propensity score estimations",
"epidemiological research",
"nuanced confounder adjustment",
"complex datasets",
"we",
"supervised deep learning",
"epidemiological research",
"reproducible codes",
"widespread use",
"methodological transparency",
"1",
"2",
"two"
] |
Estimating infant age from skull X-ray images using deep learning | [
"Heui Seung Lee",
"Jaewoong Kang",
"So Eui Kim",
"Ji Hee Kim",
"Bum-Joo Cho"
] | This study constructed deep learning models using plain skull radiograph images to predict the accurate postnatal age of infants under 12 months. Utilizing the results of the trained deep learning models, it aimed to evaluate the feasibility of employing major changes visible in skull X-ray images for assessing postnatal cranial development through gradient-weighted class activation mapping. We developed DenseNet-121 and EfficientNet-v2-M convolutional neural network models to analyze 4933 skull X-ray images collected from 1343 infants. Notably, allowing for a ± 1 month error margin, DenseNet-121 reached a maximum corrected accuracy of 79.4% for anteroposterior (AP) views (average: 78.0 ± 1.5%) and 84.2% for lateral views (average: 81.1 ± 2.9%). EfficientNet-v2-M reached a maximum corrected accuracy 79.1% for AP views (average: 77.0 ± 2.3%) and 87.3% for lateral views (average: 85.1 ± 2.5%). Saliency maps identified critical discriminative areas in skull radiographs, including the coronal, sagittal, and metopic sutures in AP skull X-ray images, and the lambdoid suture and cortical bone density in lateral images, marking them as indicators for evaluating cranial development. These findings highlight the precision of deep learning in estimating infant age through non-invasive methods, offering the progress for clinical diagnostics and developmental assessment tools. | 10.1038/s41598-024-64489-4 | estimating infant age from skull x-ray images using deep learning | this study constructed deep learning models using plain skull radiograph images to predict the accurate postnatal age of infants under 12 months. utilizing the results of the trained deep learning models, it aimed to evaluate the feasibility of employing major changes visible in skull x-ray images for assessing postnatal cranial development through gradient-weighted class activation mapping. we developed densenet-121 and efficientnet-v2-m convolutional neural network models to analyze 4933 skull x-ray images collected from 1343 infants. notably, allowing for a ± 1 month error margin, densenet-121 reached a maximum corrected accuracy of 79.4% for anteroposterior (ap) views (average: 78.0 ± 1.5%) and 84.2% for lateral views (average: 81.1 ± 2.9%). efficientnet-v2-m reached a maximum corrected accuracy 79.1% for ap views (average: 77.0 ± 2.3%) and 87.3% for lateral views (average: 85.1 ± 2.5%). saliency maps identified critical discriminative areas in skull radiographs, including the coronal, sagittal, and metopic sutures in ap skull x-ray images, and the lambdoid suture and cortical bone density in lateral images, marking them as indicators for evaluating cranial development. these findings highlight the precision of deep learning in estimating infant age through non-invasive methods, offering the progress for clinical diagnostics and developmental assessment tools. | [
"this study",
"deep learning models",
"plain skull radiograph images",
"the accurate postnatal age",
"infants",
"12 months",
"the results",
"the trained deep learning models",
"it",
"the feasibility",
"major changes",
"skull x-ray images",
"postnatal cranial development",
"gradient-weighted class activation mapping",
"we",
"densenet-121 and efficientnet-v2-m",
"convolutional neural network models",
"4933 skull x-ray images",
"1343 infants",
"a ± 1 month error margin",
"densenet-121",
"a maximum corrected accuracy",
"79.4%",
"anteroposterior (ap) views",
"1.5%",
"lateral views",
"81.1 ±",
"2.9%",
"efficientnet-v2",
"m",
"a maximum corrected accuracy",
"ap views",
"2.3%",
"lateral views",
"2.5%",
"saliency maps",
"critical discriminative areas",
"the coronal, sagittal, and metopic sutures",
"ap skull x-ray images",
"the lambdoid suture",
"cortical bone density",
"lateral images",
"them",
"indicators",
"cranial development",
"these findings",
"the precision",
"deep learning",
"infant age",
"non-invasive methods",
"the progress",
"clinical diagnostics",
"developmental assessment tools",
"under 12 months",
"4933",
"1343",
"1 month",
"79.4%",
"78.0 ±",
"1.5%",
"84.2%",
"81.1",
"2.9%",
"79.1%",
"77.0",
"2.3%",
"87.3%",
"85.1",
"2.5%"
] |
Deep learning based damage detection of concrete structures | [
"Maheswara Rao Bandi",
"Laxmi Narayana Pasupuleti",
"Tanmay Das",
"Shyamal Guchhait"
] | Damage detection of any civil engineering structure is a part of the interest in the field of engineering from which one can estimate the stability and lifetime of the structure. With the advancement of technology in the field of infrastructure, damage assessment of any structure with the help of convolutional neural networks (CNNs) and deep learning is gaining importance because of the ease with which it can detect the damage. By using these computer-aided techniques, we can reduce manpower and detect damage in inaccessible places that we cannot see directly. In the current study, we used ResNet-50, which is a part of a deep convolutional neural network with 50 layers deep in it. ResNet-50 is a subclass of convolutional neural networks most popularly used for image classification. Furthermore, we used the image data set collected from the Utah State University, Logan, Utah, USA. This data set contains nearly 56,000 annotated images of both cracked and non-cracked bridge deck images, wall images, and pavement images. These images contain a variety of obstructions, such as shadows and surface roughness in some cases. The present study aimed to train and test the images and compare the accuracy of the results among themselves with an increase in training data. We devised an algorithm to measure cracks as soon as we detected them. The results show that the Resnet-50 architecture was in good agreement with the developed algorithm. | 10.1007/s42107-024-01106-9 | deep learning based damage detection of concrete structures | damage detection of any civil engineering structure is a part of the interest in the field of engineering from which one can estimate the stability and lifetime of the structure. with the advancement of technology in the field of infrastructure, damage assessment of any structure with the help of convolutional neural networks (cnns) and deep learning is gaining importance because of the ease with which it can detect the damage. by using these computer-aided techniques, we can reduce manpower and detect damage in inaccessible places that we cannot see directly. in the current study, we used resnet-50, which is a part of a deep convolutional neural network with 50 layers deep in it. resnet-50 is a subclass of convolutional neural networks most popularly used for image classification. furthermore, we used the image data set collected from the utah state university, logan, utah, usa. this data set contains nearly 56,000 annotated images of both cracked and non-cracked bridge deck images, wall images, and pavement images. these images contain a variety of obstructions, such as shadows and surface roughness in some cases. the present study aimed to train and test the images and compare the accuracy of the results among themselves with an increase in training data. we devised an algorithm to measure cracks as soon as we detected them. the results show that the resnet-50 architecture was in good agreement with the developed algorithm. | [
"damage detection",
"any civil engineering structure",
"a part",
"the interest",
"the field",
"engineering",
"which",
"one",
"the stability",
"lifetime",
"the structure",
"the advancement",
"technology",
"the field",
"infrastructure",
"damage assessment",
"any structure",
"the help",
"convolutional neural networks",
"cnns",
"deep learning",
"importance",
"the ease",
"which",
"it",
"the damage",
"these computer-aided techniques",
"we",
"manpower",
"damage",
"inaccessible places",
"that",
"we",
"the current study",
"we",
"resnet-50",
"which",
"a part",
"a deep convolutional neural network",
"50 layers",
"it",
"resnet-50",
"a subclass",
"convolutional neural networks",
"image classification",
"we",
"the utah state university",
"usa",
"this data",
"nearly 56,000 annotated images",
"both cracked and non-cracked bridge deck images",
"wall images",
"pavement images",
"these images",
"a variety",
"obstructions",
"shadows",
"surface roughness",
"some cases",
"the present study",
"the images",
"the accuracy",
"the results",
"themselves",
"an increase",
"training data",
"we",
"an algorithm",
"cracks",
"we",
"them",
"the results",
"the resnet-50 architecture",
"good agreement",
"the developed algorithm",
"resnet-50",
"50",
"resnet-50",
"the utah state university",
"utah",
"usa",
"nearly 56,000",
"resnet-50"
] |
A Comparative Analysis of Deep Learning-Based Approaches for Classifying Dental Implants Decision Support System | [
"Mohammed A. H. Lubbad",
"Ikbal Leblebicioglu Kurtulus",
"Dervis Karaboga",
"Kerem Kilic",
"Alper Basturk",
"Bahriye Akay",
"Ozkan Ufuk Nalbantoglu",
"Ozden Melis Durmaz Yilmaz",
"Mustafa Ayata",
"Serkan Yilmaz",
"Ishak Pacal"
] | This study aims to provide an effective solution for the autonomous identification of dental implant brands through a deep learning-based computer diagnostic system. It also seeks to ascertain the system’s potential in clinical practices and to offer a strategic framework for improving diagnosis and treatment processes in implantology. This study employed a total of 28 different deep learning models, including 18 convolutional neural network (CNN) models (VGG, ResNet, DenseNet, EfficientNet, RegNet, ConvNeXt) and 10 vision transformer models (Swin and Vision Transformer). The dataset comprises 1258 panoramic radiographs from patients who received implant treatments at Erciyes University Faculty of Dentistry between 2012 and 2023. It is utilized for the training and evaluation process of deep learning models and consists of prototypes from six different implant systems provided by six manufacturers. The deep learning-based dental implant system provided high classification accuracy for different dental implant brands using deep learning models. Furthermore, among all the architectures evaluated, the small model of the ConvNeXt architecture achieved an impressive accuracy rate of 94.2%, demonstrating a high level of classification success.This study emphasizes the effectiveness of deep learning-based systems in achieving high classification accuracy in dental implant types. These findings pave the way for integrating advanced deep learning tools into clinical practice, promising significant improvements in patient care and treatment outcomes. | 10.1007/s10278-024-01086-x | a comparative analysis of deep learning-based approaches for classifying dental implants decision support system | this study aims to provide an effective solution for the autonomous identification of dental implant brands through a deep learning-based computer diagnostic system. it also seeks to ascertain the system’s potential in clinical practices and to offer a strategic framework for improving diagnosis and treatment processes in implantology. this study employed a total of 28 different deep learning models, including 18 convolutional neural network (cnn) models (vgg, resnet, densenet, efficientnet, regnet, convnext) and 10 vision transformer models (swin and vision transformer). the dataset comprises 1258 panoramic radiographs from patients who received implant treatments at erciyes university faculty of dentistry between 2012 and 2023. it is utilized for the training and evaluation process of deep learning models and consists of prototypes from six different implant systems provided by six manufacturers. the deep learning-based dental implant system provided high classification accuracy for different dental implant brands using deep learning models. furthermore, among all the architectures evaluated, the small model of the convnext architecture achieved an impressive accuracy rate of 94.2%, demonstrating a high level of classification success.this study emphasizes the effectiveness of deep learning-based systems in achieving high classification accuracy in dental implant types. these findings pave the way for integrating advanced deep learning tools into clinical practice, promising significant improvements in patient care and treatment outcomes. | [
"this study",
"an effective solution",
"the autonomous identification",
"dental implant brands",
"a deep learning-based computer diagnostic system",
"it",
"the system’s potential",
"clinical practices",
"a strategic framework",
"diagnosis and treatment processes",
"implantology",
"this study",
"a total",
"28 different deep learning models",
"18 convolutional neural network (cnn) models",
"vgg",
"resnet",
"densenet",
"efficientnet",
"regnet",
"convnext",
"10 vision transformer models",
"swin and vision transformer",
"the dataset",
"patients",
"who",
"implant treatments",
"erciyes university faculty",
"dentistry",
"it",
"the training and evaluation process",
"deep learning models",
"prototypes",
"six different implant systems",
"six manufacturers",
"the deep learning-based dental implant system",
"high classification accuracy",
"different dental implant brands",
"deep learning models",
"all the architectures",
"the small model",
"the convnext architecture",
"an impressive accuracy rate",
"94.2%",
"a high level",
"classification success.this study",
"the effectiveness",
"deep learning-based systems",
"high classification accuracy",
"dental implant types",
"these findings",
"the way",
"advanced deep learning tools",
"clinical practice",
"significant improvements",
"patient care and treatment outcomes",
"28",
"18",
"cnn",
"10",
"1258",
"erciyes university faculty of dentistry",
"between 2012 and 2023",
"six",
"six",
"94.2%"
] |
A radiomics-boosted deep-learning for risk assessment of synchronous peritoneal metastasis in colorectal cancer | [
"Ding Zhang",
"BingShu Zheng",
"LiuWei Xu",
"YiCong Wu",
"Chen Shen",
"ShanLei Bao",
"ZhongHua Tan",
"ChunFeng Sun"
] | ObjectivesSynchronous colorectal cancer peritoneal metastasis (CRPM) has a poor prognosis. This study aimed to create a radiomics-boosted deep learning model by PET/CT image for risk assessment of synchronous CRPM.MethodsA total of 220 colorectal cancer (CRC) cases were enrolled in this study. We mapped the feature maps (Radiomic feature maps (RFMs)) of radiomic features across CT and PET image patches by a 2D sliding kernel. Based on ResNet50, a radiomics-boosted deep learning model was trained using PET/CT image patches and RFMs. Besides that, we explored whether the peritumoral region contributes to the assessment of CRPM. In this study, the performance of each model was evaluated by the area under the curves (AUC).ResultsThe AUCs of the radiomics-boosted deep learning model in the training, internal, external, and all validation datasets were 0.926 (95% confidence interval (CI): 0.874–0.978), 0.897 (95% CI: 0.801–0.994), 0.885 (95% CI: 0.795–0.975), and 0.889 (95% CI: 0.823–0.954), respectively. This model exhibited consistency in the calibration curve, the Delong test and IDI identified it as the most predictive model.ConclusionsThe radiomics-boosted deep learning model showed superior estimated performance in preoperative prediction of synchronous CRPM from pre-treatment PET/CT, offering potential assistance in the development of more personalized treatment methods and follow-up plans.Critical relevance statementThe onset of synchronous colorectal CRPM is insidious, and using a radiomics-boosted deep learning model to assess the risk of CRPM before treatment can help make personalized clinical treatment decisions or choose more sensitive follow-up plans.Key Points
Prognosis for patients with CRPM is bleak, and early detection poses challenges.
The synergy between radiomics and deep learning proves advantageous in evaluating CRPM.
The radiomics-boosted deep-learning model proves valuable in tailoring treatment approaches for CRC patients.
Graphical Abstract | 10.1186/s13244-024-01733-5 | a radiomics-boosted deep-learning for risk assessment of synchronous peritoneal metastasis in colorectal cancer | objectivessynchronous colorectal cancer peritoneal metastasis (crpm) has a poor prognosis. this study aimed to create a radiomics-boosted deep learning model by pet/ct image for risk assessment of synchronous crpm.methodsa total of 220 colorectal cancer (crc) cases were enrolled in this study. we mapped the feature maps (radiomic feature maps (rfms)) of radiomic features across ct and pet image patches by a 2d sliding kernel. based on resnet50, a radiomics-boosted deep learning model was trained using pet/ct image patches and rfms. besides that, we explored whether the peritumoral region contributes to the assessment of crpm. in this study, the performance of each model was evaluated by the area under the curves (auc).resultsthe aucs of the radiomics-boosted deep learning model in the training, internal, external, and all validation datasets were 0.926 (95% confidence interval (ci): 0.874–0.978), 0.897 (95% ci: 0.801–0.994), 0.885 (95% ci: 0.795–0.975), and 0.889 (95% ci: 0.823–0.954), respectively. this model exhibited consistency in the calibration curve, the delong test and idi identified it as the most predictive model.conclusionsthe radiomics-boosted deep learning model showed superior estimated performance in preoperative prediction of synchronous crpm from pre-treatment pet/ct, offering potential assistance in the development of more personalized treatment methods and follow-up plans.critical relevance statementthe onset of synchronous colorectal crpm is insidious, and using a radiomics-boosted deep learning model to assess the risk of crpm before treatment can help make personalized clinical treatment decisions or choose more sensitive follow-up plans.key points prognosis for patients with crpm is bleak, and early detection poses challenges. the synergy between radiomics and deep learning proves advantageous in evaluating crpm. the radiomics-boosted deep-learning model proves valuable in tailoring treatment approaches for crc patients. graphical abstract | [
"objectivessynchronous colorectal cancer peritoneal metastasis",
"crpm",
"a poor prognosis",
"this study",
"a radiomics-boosted deep learning model",
"pet/ct image",
"risk assessment",
"synchronous crpm.methodsa total",
"220 colorectal cancer (crc) cases",
"this study",
"we",
"the feature maps",
"radiomic feature maps",
"rfms",
"radiomic features",
"ct",
"pet image",
"a 2d sliding kernel",
"resnet50",
"a radiomics-boosted deep learning model",
"pet/ct image patches",
"rfms",
"that",
"we",
"the peritumoral region",
"the assessment",
"crpm",
"this study",
"the performance",
"each model",
"the area",
"the curves",
"auc).resultsthe aucs",
"the radiomics",
"the training",
"all validation datasets",
"95% confidence interval",
"ci",
"0.874–0.978",
"95% ci",
"95% ci",
"0.889 (95% ci",
"this model",
"consistency",
"the calibration curve",
"the delong test",
"idi",
"it",
"the most predictive model.conclusionsthe radiomics-boosted deep learning model",
"superior estimated performance",
"preoperative prediction",
"synchronous crpm",
"treatment pet",
"ct",
"potential assistance",
"the development",
"more personalized treatment methods",
"follow-up plans.critical relevance",
"statementthe onset",
"synchronous colorectal crpm",
"a radiomics-boosted deep learning model",
"the risk",
"crpm",
"treatment",
"personalized clinical treatment decisions",
"prognosis",
"patients",
"crpm",
"early detection poses challenges",
"the synergy",
"radiomics",
"deep learning",
"crpm",
"the radiomics",
"treatment approaches",
"crc patients",
"graphical abstract",
"crpm.methodsa",
"220",
"2d",
"resnet50",
"0.926",
"95%",
"0.897",
"95%",
"0.885",
"95%",
"0.795–0.975",
"0.889",
"95%"
] |
Transfer learning-based quantized deep learning models for nail melanoma classification | [
"Mujahid Hussain",
"Makhmoor Fiza",
"Aiman Khalil",
"Asad Ali Siyal",
"Fayaz Ali Dharejo",
"Waheeduddin Hyder",
"Antonella Guzzo",
"Moez Krichen",
"Giancarlo Fortino"
] | Skin cancer, particularly melanoma, has remained a severe issue for many years due to its increasing incidences. The rising mortality rate associated with melanoma demands immediate attention at early stages to facilitate timely diagnosis and effective treatment. Due to the similar visual appearance of malignant tumors and normal cells, the detection and classification of melanoma are considered to be one of the most challenging tasks. Detecting melanoma accurately and promptly is essential to diagnosis and treatment, which can contribute significantly to patient survival. A new dataset, Nailmelonma, is presented in this study in order to train and evaluate various deep learning models applying transfer learning for an indigenous nail melanoma localization dataset. Using the dermoscopic image datasets, seven CNN-based DL architectures (viz., VGG19, ResNet101, ResNet152V2, Xception, InceptionV3, MobileNet, and MobileNetv2) have been trained and tested for the classification of skin lesions for melanoma detection. The trained models have been validated, and key performance parameters (i.e., accuracy, recall, specificity, precision, and F1-score) are systematically evaluated to test the performance of each transfer learning model. The results indicated that the proposed workflow could realize and achieve more than 95% accuracy. In addition, we show how the quantization of such models can enable them for memory-constrained mobile/edge devices. To facilitate an accurate, timely, and faster diagnosis of nail melanoma and to evaluate the early detection of other types of skin cancer, the proposed workflow can be readily applied and robust to the early detection of nail melanoma. | 10.1007/s00521-023-08925-y | transfer learning-based quantized deep learning models for nail melanoma classification | skin cancer, particularly melanoma, has remained a severe issue for many years due to its increasing incidences. the rising mortality rate associated with melanoma demands immediate attention at early stages to facilitate timely diagnosis and effective treatment. due to the similar visual appearance of malignant tumors and normal cells, the detection and classification of melanoma are considered to be one of the most challenging tasks. detecting melanoma accurately and promptly is essential to diagnosis and treatment, which can contribute significantly to patient survival. a new dataset, nailmelonma, is presented in this study in order to train and evaluate various deep learning models applying transfer learning for an indigenous nail melanoma localization dataset. using the dermoscopic image datasets, seven cnn-based dl architectures (viz., vgg19, resnet101, resnet152v2, xception, inceptionv3, mobilenet, and mobilenetv2) have been trained and tested for the classification of skin lesions for melanoma detection. the trained models have been validated, and key performance parameters (i.e., accuracy, recall, specificity, precision, and f1-score) are systematically evaluated to test the performance of each transfer learning model. the results indicated that the proposed workflow could realize and achieve more than 95% accuracy. in addition, we show how the quantization of such models can enable them for memory-constrained mobile/edge devices. to facilitate an accurate, timely, and faster diagnosis of nail melanoma and to evaluate the early detection of other types of skin cancer, the proposed workflow can be readily applied and robust to the early detection of nail melanoma. | [
"skin cancer",
"particularly melanoma",
"a severe issue",
"many years",
"its increasing incidences",
"the rising mortality rate",
"melanoma",
"immediate attention",
"early stages",
"timely diagnosis and effective treatment",
"the similar visual appearance",
"malignant tumors",
"normal cells",
"the detection",
"classification",
"melanoma",
"the most challenging tasks",
"melanoma",
"diagnosis",
"treatment",
"which",
"patient survival",
"a new dataset",
"nailmelonma",
"this study",
"order",
"various deep learning models",
"an indigenous nail melanoma localization dataset",
"the dermoscopic image datasets",
"seven cnn-based dl architectures",
"viz",
".",
"vgg19",
"xception",
"inceptionv3",
"mobilenet",
"mobilenetv2",
"the classification",
"skin lesions",
"melanoma detection",
"the trained models",
"key performance parameters",
"i.e., accuracy",
"recall",
"specificity",
"precision",
"f1-score",
"the performance",
"each transfer learning model",
"the results",
"the proposed workflow",
"more than 95% accuracy",
"addition",
"we",
"the quantization",
"such models",
"them",
"memory-constrained mobile/edge devices",
"an accurate, timely, and faster diagnosis",
"nail melanoma",
"the early detection",
"other types",
"skin cancer",
"the proposed workflow",
"the early detection",
"nail melanoma",
"seven",
"cnn",
"inceptionv3",
"mobilenetv2",
"more than 95%"
] |
RETRACTED ARTICLE: Multimedia Lu Xun literature online learning based on deep learning | [
"Wang Hongsheng"
] | As a great Chinese thinker and writer in the twentieth century, Lu Xun and his literary works are widely known. However, as a successful cultural communication activist, editor and publisher, we still need to conduct in-depth research on Lu Xun in many aspects. Therefore, based on deep learning, this paper constructs an online multimedia learning system of Lu Xun literature. This system takes the relationship between classical Lu Xun literature and modern multimedia technology as the research object, and compare the calculation effect of other different types of algorithms and this dhraa algorithm. Through the availability of data, the dhraa algorithm is significantly better than other algorithms in the recommendation accuracy, thus proving its effectiveness. This system is managed by two servers and one system. The two servers are database server and web server, respectively. After testing, the system has good bearing capacity, can make up for the limited processing capacity of the server, and ensure the system has high performance. Its performance characteristics also show that the system achieved the expected performance. This paper systematically combines Lu Xun’s literature with modern multimedia, which can provide online learning services for Lu Xun’s literature lovers, thus helping scholars to expand Lu Xun’s research field and academic vision. This paper designs an effective online learning system of Lu Xun’s literature by combining deep learning, multimedia technology and Lu Xun’s literature. | 10.1007/s00500-023-08118-8 | retracted article: multimedia lu xun literature online learning based on deep learning | as a great chinese thinker and writer in the twentieth century, lu xun and his literary works are widely known. however, as a successful cultural communication activist, editor and publisher, we still need to conduct in-depth research on lu xun in many aspects. therefore, based on deep learning, this paper constructs an online multimedia learning system of lu xun literature. this system takes the relationship between classical lu xun literature and modern multimedia technology as the research object, and compare the calculation effect of other different types of algorithms and this dhraa algorithm. through the availability of data, the dhraa algorithm is significantly better than other algorithms in the recommendation accuracy, thus proving its effectiveness. this system is managed by two servers and one system. the two servers are database server and web server, respectively. after testing, the system has good bearing capacity, can make up for the limited processing capacity of the server, and ensure the system has high performance. its performance characteristics also show that the system achieved the expected performance. this paper systematically combines lu xun’s literature with modern multimedia, which can provide online learning services for lu xun’s literature lovers, thus helping scholars to expand lu xun’s research field and academic vision. this paper designs an effective online learning system of lu xun’s literature by combining deep learning, multimedia technology and lu xun’s literature. | [
"a great chinese thinker",
"writer",
"the twentieth century",
"lu xun",
"his literary works",
"a successful cultural communication activist",
"editor",
"publisher",
"we",
"depth",
"lu xun",
"many aspects",
"deep learning",
"this paper",
"an online multimedia learning system",
"lu xun literature",
"this system",
"the relationship",
"classical lu xun literature",
"modern multimedia technology",
"the research object",
"the calculation effect",
"other different types",
"algorithms",
"this dhraa algorithm",
"the availability",
"data",
"the dhraa algorithm",
"other algorithms",
"the recommendation accuracy",
"its effectiveness",
"this system",
"two servers",
"one system",
"the two servers",
"database server and web server",
"testing",
"the system",
"good bearing capacity",
"the limited processing capacity",
"the server",
"the system",
"high performance",
"its performance characteristics",
"the system",
"the expected performance",
"this paper",
"lu xun’s literature",
"modern multimedia",
"which",
"online learning services",
"lu xun’s literature lovers",
"scholars",
"lu xun’s research field",
"academic vision",
"this paper",
"an effective online learning system",
"lu xun’s literature",
"deep learning",
"multimedia technology",
"lu xun’s literature",
"chinese",
"the twentieth century",
"lu xun",
"lu xun",
"lu xun literature",
"two",
"one",
"two",
"xun",
"xun’s",
"xun",
"xun"
] |
Deep learning approaches for lyme disease detection: leveraging progressive resizing and self-supervised learning models | [
"Daryl Jacob Jerrish",
"Om Nankar",
"Shilpa Gite",
"Shruti Patil",
"Ketan Kotecha",
"Ganeshsree Selvachandran",
"Ajith Abraham"
] | Lyme disease diagnosis poses a significant challenge, with blood tests exhibiting an alarming inaccuracy rate of nearly 60% in detecting early-stage infections. As a result, there is an urgent need for improved diagnostic methods that can offer more accurate detection outcomes. To address this pressing issue, our study focuses on harnessing the potential of deep learning approaches, specifically by employing model pipelining through progressive resizing and multiple self-supervised learning models. In this paper, we present a comprehensive exploration of self-supervised learning models, including SimCLR, SwAV, MoCo, and BYOL, tailored to the context of Lyme disease detection using medical imaging. The effectiveness and performance of these models are evaluated using standard metrics such as F1 score, precision, recall, and accuracy. Furthermore, we emphasize the significance of progressive resizing and its implications when dealing with convolutional neural networks (CNNs) for medical image analysis. By leveraging deep learning approaches, progressive resizing, and self-supervised learning models, the challenges associated with Lyme disease detection are effectively addressed in this study. The application of our novel methodology and the execution of a comprehensive evaluation framework contribute invaluable insights, fostering the development of more efficient and accurate diagnostic methods for Lyme disease. It is firmly believed that our research will serve as a catalyst, inspiring interdisciplinary collaborations that accelerate progress at the convergence of medicine, computing, and technology, ultimately benefiting public health. | 10.1007/s11042-023-16306-9 | deep learning approaches for lyme disease detection: leveraging progressive resizing and self-supervised learning models | lyme disease diagnosis poses a significant challenge, with blood tests exhibiting an alarming inaccuracy rate of nearly 60% in detecting early-stage infections. as a result, there is an urgent need for improved diagnostic methods that can offer more accurate detection outcomes. to address this pressing issue, our study focuses on harnessing the potential of deep learning approaches, specifically by employing model pipelining through progressive resizing and multiple self-supervised learning models. in this paper, we present a comprehensive exploration of self-supervised learning models, including simclr, swav, moco, and byol, tailored to the context of lyme disease detection using medical imaging. the effectiveness and performance of these models are evaluated using standard metrics such as f1 score, precision, recall, and accuracy. furthermore, we emphasize the significance of progressive resizing and its implications when dealing with convolutional neural networks (cnns) for medical image analysis. by leveraging deep learning approaches, progressive resizing, and self-supervised learning models, the challenges associated with lyme disease detection are effectively addressed in this study. the application of our novel methodology and the execution of a comprehensive evaluation framework contribute invaluable insights, fostering the development of more efficient and accurate diagnostic methods for lyme disease. it is firmly believed that our research will serve as a catalyst, inspiring interdisciplinary collaborations that accelerate progress at the convergence of medicine, computing, and technology, ultimately benefiting public health. | [
"lyme disease diagnosis",
"a significant challenge",
"blood tests",
"an alarming inaccuracy rate",
"nearly 60%",
"early-stage infections",
"a result",
"an urgent need",
"improved diagnostic methods",
"that",
"more accurate detection outcomes",
"this pressing issue",
"our study",
"the potential",
"deep learning approaches",
"model",
"progressive resizing",
"multiple self-supervised learning models",
"this paper",
"we",
"a comprehensive exploration",
"self-supervised learning models",
"simclr",
"swav",
"moco",
"byol",
"the context",
"lyme disease detection",
"medical imaging",
"the effectiveness",
"performance",
"these models",
"standard metrics",
"f1 score",
"precision",
"recall",
"accuracy",
"we",
"the significance",
"progressive resizing",
"its implications",
"convolutional neural networks",
"cnns",
"medical image analysis",
"deep learning approaches",
"progressive resizing",
"self-supervised learning models",
"the challenges",
"lyme disease detection",
"this study",
"the application",
"our novel methodology",
"the execution",
"a comprehensive evaluation framework",
"invaluable insights",
"the development",
"more efficient and accurate diagnostic methods",
"lyme disease",
"it",
"our research",
"a catalyst",
"interdisciplinary collaborations",
"that",
"progress",
"the convergence",
"medicine",
"computing",
"technology",
"public health",
"nearly 60%"
] |
Enhancing trash classification in smart cities using federated deep learning | [
"Haroon Ahmed Khan",
"Syed Saud Naqvi",
"Abeer A. K. Alharbi",
"Salihah Alotaibi",
"Mohammed Alkhathami"
] | Efficient Waste management plays a crucial role to ensure clean and green environment in the smart cities. This study investigates the critical role of efficient trash classification in achieving sustainable solid waste management within smart city environments. We conduct a comparative analysis of various trash classification methods utilizing deep learning models built on convolutional neural networks (CNNs). Leveraging the PyTorch open-source framework and the TrashBox dataset, we perform experiments involving ten unique deep neural network models. Our approach aims to maximize training accuracy. Through extensive experimentation, we observe the consistent superiority of the ResNext-101 model compared to others, achieving exceptional training, validation, and test accuracies. These findings illuminate the potential of CNN-based techniques in significantly advancing trash classification for optimized solid waste management within smart city initiatives. Lastly, this study presents a distributed framework based on federated learning that can be used to optimize the performance of a combination of CNN models for trash detection. | 10.1038/s41598-024-62003-4 | enhancing trash classification in smart cities using federated deep learning | efficient waste management plays a crucial role to ensure clean and green environment in the smart cities. this study investigates the critical role of efficient trash classification in achieving sustainable solid waste management within smart city environments. we conduct a comparative analysis of various trash classification methods utilizing deep learning models built on convolutional neural networks (cnns). leveraging the pytorch open-source framework and the trashbox dataset, we perform experiments involving ten unique deep neural network models. our approach aims to maximize training accuracy. through extensive experimentation, we observe the consistent superiority of the resnext-101 model compared to others, achieving exceptional training, validation, and test accuracies. these findings illuminate the potential of cnn-based techniques in significantly advancing trash classification for optimized solid waste management within smart city initiatives. lastly, this study presents a distributed framework based on federated learning that can be used to optimize the performance of a combination of cnn models for trash detection. | [
"efficient waste management",
"a crucial role",
"clean and green environment",
"the smart cities",
"this study",
"the critical role",
"efficient trash classification",
"sustainable solid waste management",
"smart city environments",
"we",
"a comparative analysis",
"various trash classification methods",
"deep learning models",
"convolutional neural networks",
"cnns",
"the pytorch open-source framework",
"the trashbox dataset",
"we",
"experiments",
"ten unique deep neural network models",
"our approach",
"training accuracy",
"extensive experimentation",
"we",
"the consistent superiority",
"the resnext-101 model",
"others",
"exceptional training",
"validation",
"test accuracies",
"these findings",
"the potential",
"cnn-based techniques",
"significantly advancing trash classification",
"optimized solid waste management",
"smart city initiatives",
"this study",
"a distributed framework",
"federated learning",
"that",
"the performance",
"a combination",
"cnn models",
"trash detection",
"smart city",
"cnn",
"smart city",
"cnn"
] |
Can supervised deep learning architecture outperform autoencoders in building propensity score models for matching? | [
"Mohammad Ehsanul Karim"
] | PurposePropensity score matching is vital in epidemiological studies using observational data, yet its estimates relies on correct model-specification. This study assesses supervised deep learning models and unsupervised autoencoders for propensity score estimation, comparing them with traditional methods for bias and variance accuracy in treatment effect estimations.MethodsUtilizing a plasmode simulation based on the Right Heart Catheterization dataset, under a variety of settings, we evaluated (1) a supervised deep learning architecture and (2) an unsupervised autoencoder, alongside two traditional methods: logistic regression and a spline-based method in estimating propensity scores for matching. Performance metrics included bias, standard errors, and coverage probability. The analysis was also extended to real-world data, with estimates compared to those obtained via a double robust approach.ResultsThe analysis revealed that supervised deep learning models outperformed unsupervised autoencoders in variance estimation while maintaining comparable levels of bias. These results were supported by analyses of real-world data, where the supervised model’s estimates closely matched those derived from conventional methods. Additionally, deep learning models performed well compared to traditional methods in settings where exposure was rare.ConclusionSupervised deep learning models hold promise in refining propensity score estimations in epidemiological research, offering nuanced confounder adjustment, especially in complex datasets. We endorse integrating supervised deep learning into epidemiological research and share reproducible codes for widespread use and methodological transparency. | 10.1186/s12874-024-02284-5 | can supervised deep learning architecture outperform autoencoders in building propensity score models for matching? | purposepropensity score matching is vital in epidemiological studies using observational data, yet its estimates relies on correct model-specification. this study assesses supervised deep learning models and unsupervised autoencoders for propensity score estimation, comparing them with traditional methods for bias and variance accuracy in treatment effect estimations.methodsutilizing a plasmode simulation based on the right heart catheterization dataset, under a variety of settings, we evaluated (1) a supervised deep learning architecture and (2) an unsupervised autoencoder, alongside two traditional methods: logistic regression and a spline-based method in estimating propensity scores for matching. performance metrics included bias, standard errors, and coverage probability. the analysis was also extended to real-world data, with estimates compared to those obtained via a double robust approach.resultsthe analysis revealed that supervised deep learning models outperformed unsupervised autoencoders in variance estimation while maintaining comparable levels of bias. these results were supported by analyses of real-world data, where the supervised model’s estimates closely matched those derived from conventional methods. additionally, deep learning models performed well compared to traditional methods in settings where exposure was rare.conclusionsupervised deep learning models hold promise in refining propensity score estimations in epidemiological research, offering nuanced confounder adjustment, especially in complex datasets. we endorse integrating supervised deep learning into epidemiological research and share reproducible codes for widespread use and methodological transparency. | [
"purposepropensity score matching",
"epidemiological studies",
"observational data",
"its estimates",
"correct model-specification",
"this study",
"deep learning models",
"autoencoders",
"propensity score estimation",
"them",
"traditional methods",
"bias",
"variance accuracy",
"treatment effect",
"a plasmode simulation",
"the right heart catheterization dataset",
"a variety",
"settings",
"we",
"1) a supervised deep learning architecture",
"(2) an unsupervised autoencoder",
"two traditional methods",
"logistic regression",
"a spline-based method",
"propensity scores",
"performance metrics",
"bias",
"standard errors",
"coverage probability",
"the analysis",
"real-world data",
"estimates",
"those",
"a double robust",
"approach.resultsthe analysis",
"supervised deep learning models",
"unsupervised autoencoders",
"variance estimation",
"comparable levels",
"bias",
"these results",
"analyses",
"real-world data",
"the supervised model’s estimates",
"those",
"conventional methods",
"deep learning models",
"traditional methods",
"settings",
"exposure",
"deep learning models",
"promise",
"propensity score estimations",
"epidemiological research",
"nuanced confounder adjustment",
"complex datasets",
"we",
"supervised deep learning",
"epidemiological research",
"reproducible codes",
"widespread use",
"methodological transparency",
"1",
"2",
"two"
] |
A systematic review of deep learning based image segmentation to detect polyp | [
"Mayuri Gupta",
"Ashish Mishra"
] | Among the world’s most common cancers, colorectal cancer is the third most severe form of cancer. Early polyp detection reduces the risk of colorectal cancer, vital for effective treatment. Artificial intelligence methods such as deep learning have emerged as leading techniques for polyp image segmentation that have gained success in advancing medical image diagnosis. This study aims to provide a review of the most recent research studies that have used deep learning methods and models for polyp segmentation. A comprehensive review of deep learning-based image segmentation models is provided based on existing research studies that are essential for polyp segmentation. Convolutional neural networks, encoder–decoder models, recurrent neural networks, attention-based models, and generative models were the most popular deep learning models which play an essential role in detecting and diagnosing polyp at an early stage. Additionally, this study also aims to provide a detailed classification of prominently used polyp image and video datasets. The evaluation metrics for assessing the effectiveness of different methods, models, and techniques are identified and discussed. A statistical analysis of deep learning models based on polyp datasets and performance metrics is presented, with a discussion of future research trends and limitations. | 10.1007/s10462-023-10621-1 | a systematic review of deep learning based image segmentation to detect polyp | among the world’s most common cancers, colorectal cancer is the third most severe form of cancer. early polyp detection reduces the risk of colorectal cancer, vital for effective treatment. artificial intelligence methods such as deep learning have emerged as leading techniques for polyp image segmentation that have gained success in advancing medical image diagnosis. this study aims to provide a review of the most recent research studies that have used deep learning methods and models for polyp segmentation. a comprehensive review of deep learning-based image segmentation models is provided based on existing research studies that are essential for polyp segmentation. convolutional neural networks, encoder–decoder models, recurrent neural networks, attention-based models, and generative models were the most popular deep learning models which play an essential role in detecting and diagnosing polyp at an early stage. additionally, this study also aims to provide a detailed classification of prominently used polyp image and video datasets. the evaluation metrics for assessing the effectiveness of different methods, models, and techniques are identified and discussed. a statistical analysis of deep learning models based on polyp datasets and performance metrics is presented, with a discussion of future research trends and limitations. | [
"the world’s most common cancers",
"colorectal cancer",
"the third most severe form",
"cancer",
"early polyp detection",
"the risk",
"colorectal cancer",
"effective treatment",
"artificial intelligence methods",
"deep learning",
"leading techniques",
"polyp image segmentation",
"that",
"success",
"medical image diagnosis",
"this study",
"a review",
"the most recent research studies",
"that",
"deep learning methods",
"models",
"polyp segmentation",
"a comprehensive review",
"deep learning-based image segmentation models",
"existing research studies",
"that",
"polyp segmentation",
"convolutional neural networks",
"encoder",
"decoder models",
"recurrent neural networks",
"attention-based models",
"generative models",
"the most popular deep learning models",
"which",
"an essential role",
"polyp",
"an early stage",
"this study",
"a detailed classification",
"prominently used polyp image",
"video datasets",
"the evaluation metrics",
"the effectiveness",
"different methods",
"models",
"techniques",
"a statistical analysis",
"deep learning models",
"polyp datasets",
"performance metrics",
"a discussion",
"future research trends",
"limitations",
"third"
] |
Deep learning based damage detection of concrete structures | [
"Maheswara Rao Bandi",
"Laxmi Narayana Pasupuleti",
"Tanmay Das",
"Shyamal Guchhait"
] | Damage detection of any civil engineering structure is a part of the interest in the field of engineering from which one can estimate the stability and lifetime of the structure. With the advancement of technology in the field of infrastructure, damage assessment of any structure with the help of convolutional neural networks (CNNs) and deep learning is gaining importance because of the ease with which it can detect the damage. By using these computer-aided techniques, we can reduce manpower and detect damage in inaccessible places that we cannot see directly. In the current study, we used ResNet-50, which is a part of a deep convolutional neural network with 50 layers deep in it. ResNet-50 is a subclass of convolutional neural networks most popularly used for image classification. Furthermore, we used the image data set collected from the Utah State University, Logan, Utah, USA. This data set contains nearly 56,000 annotated images of both cracked and non-cracked bridge deck images, wall images, and pavement images. These images contain a variety of obstructions, such as shadows and surface roughness in some cases. The present study aimed to train and test the images and compare the accuracy of the results among themselves with an increase in training data. We devised an algorithm to measure cracks as soon as we detected them. The results show that the Resnet-50 architecture was in good agreement with the developed algorithm. | 10.1007/s42107-024-01106-9 | deep learning based damage detection of concrete structures | damage detection of any civil engineering structure is a part of the interest in the field of engineering from which one can estimate the stability and lifetime of the structure. with the advancement of technology in the field of infrastructure, damage assessment of any structure with the help of convolutional neural networks (cnns) and deep learning is gaining importance because of the ease with which it can detect the damage. by using these computer-aided techniques, we can reduce manpower and detect damage in inaccessible places that we cannot see directly. in the current study, we used resnet-50, which is a part of a deep convolutional neural network with 50 layers deep in it. resnet-50 is a subclass of convolutional neural networks most popularly used for image classification. furthermore, we used the image data set collected from the utah state university, logan, utah, usa. this data set contains nearly 56,000 annotated images of both cracked and non-cracked bridge deck images, wall images, and pavement images. these images contain a variety of obstructions, such as shadows and surface roughness in some cases. the present study aimed to train and test the images and compare the accuracy of the results among themselves with an increase in training data. we devised an algorithm to measure cracks as soon as we detected them. the results show that the resnet-50 architecture was in good agreement with the developed algorithm. | [
"damage detection",
"any civil engineering structure",
"a part",
"the interest",
"the field",
"engineering",
"which",
"one",
"the stability",
"lifetime",
"the structure",
"the advancement",
"technology",
"the field",
"infrastructure",
"damage assessment",
"any structure",
"the help",
"convolutional neural networks",
"cnns",
"deep learning",
"importance",
"the ease",
"which",
"it",
"the damage",
"these computer-aided techniques",
"we",
"manpower",
"damage",
"inaccessible places",
"that",
"we",
"the current study",
"we",
"resnet-50",
"which",
"a part",
"a deep convolutional neural network",
"50 layers",
"it",
"resnet-50",
"a subclass",
"convolutional neural networks",
"image classification",
"we",
"the utah state university",
"usa",
"this data",
"nearly 56,000 annotated images",
"both cracked and non-cracked bridge deck images",
"wall images",
"pavement images",
"these images",
"a variety",
"obstructions",
"shadows",
"surface roughness",
"some cases",
"the present study",
"the images",
"the accuracy",
"the results",
"themselves",
"an increase",
"training data",
"we",
"an algorithm",
"cracks",
"we",
"them",
"the results",
"the resnet-50 architecture",
"good agreement",
"the developed algorithm",
"resnet-50",
"50",
"resnet-50",
"the utah state university",
"utah",
"usa",
"nearly 56,000",
"resnet-50"
] |
A Comparative Analysis of Deep Learning-Based Approaches for Classifying Dental Implants Decision Support System | [
"Mohammed A. H. Lubbad",
"Ikbal Leblebicioglu Kurtulus",
"Dervis Karaboga",
"Kerem Kilic",
"Alper Basturk",
"Bahriye Akay",
"Ozkan Ufuk Nalbantoglu",
"Ozden Melis Durmaz Yilmaz",
"Mustafa Ayata",
"Serkan Yilmaz",
"Ishak Pacal"
] | This study aims to provide an effective solution for the autonomous identification of dental implant brands through a deep learning-based computer diagnostic system. It also seeks to ascertain the system’s potential in clinical practices and to offer a strategic framework for improving diagnosis and treatment processes in implantology. This study employed a total of 28 different deep learning models, including 18 convolutional neural network (CNN) models (VGG, ResNet, DenseNet, EfficientNet, RegNet, ConvNeXt) and 10 vision transformer models (Swin and Vision Transformer). The dataset comprises 1258 panoramic radiographs from patients who received implant treatments at Erciyes University Faculty of Dentistry between 2012 and 2023. It is utilized for the training and evaluation process of deep learning models and consists of prototypes from six different implant systems provided by six manufacturers. The deep learning-based dental implant system provided high classification accuracy for different dental implant brands using deep learning models. Furthermore, among all the architectures evaluated, the small model of the ConvNeXt architecture achieved an impressive accuracy rate of 94.2%, demonstrating a high level of classification success.This study emphasizes the effectiveness of deep learning-based systems in achieving high classification accuracy in dental implant types. These findings pave the way for integrating advanced deep learning tools into clinical practice, promising significant improvements in patient care and treatment outcomes. | 10.1007/s10278-024-01086-x | a comparative analysis of deep learning-based approaches for classifying dental implants decision support system | this study aims to provide an effective solution for the autonomous identification of dental implant brands through a deep learning-based computer diagnostic system. it also seeks to ascertain the system’s potential in clinical practices and to offer a strategic framework for improving diagnosis and treatment processes in implantology. this study employed a total of 28 different deep learning models, including 18 convolutional neural network (cnn) models (vgg, resnet, densenet, efficientnet, regnet, convnext) and 10 vision transformer models (swin and vision transformer). the dataset comprises 1258 panoramic radiographs from patients who received implant treatments at erciyes university faculty of dentistry between 2012 and 2023. it is utilized for the training and evaluation process of deep learning models and consists of prototypes from six different implant systems provided by six manufacturers. the deep learning-based dental implant system provided high classification accuracy for different dental implant brands using deep learning models. furthermore, among all the architectures evaluated, the small model of the convnext architecture achieved an impressive accuracy rate of 94.2%, demonstrating a high level of classification success.this study emphasizes the effectiveness of deep learning-based systems in achieving high classification accuracy in dental implant types. these findings pave the way for integrating advanced deep learning tools into clinical practice, promising significant improvements in patient care and treatment outcomes. | [
"this study",
"an effective solution",
"the autonomous identification",
"dental implant brands",
"a deep learning-based computer diagnostic system",
"it",
"the system’s potential",
"clinical practices",
"a strategic framework",
"diagnosis and treatment processes",
"implantology",
"this study",
"a total",
"28 different deep learning models",
"18 convolutional neural network (cnn) models",
"vgg",
"resnet",
"densenet",
"efficientnet",
"regnet",
"convnext",
"10 vision transformer models",
"swin and vision transformer",
"the dataset",
"patients",
"who",
"implant treatments",
"erciyes university faculty",
"dentistry",
"it",
"the training and evaluation process",
"deep learning models",
"prototypes",
"six different implant systems",
"six manufacturers",
"the deep learning-based dental implant system",
"high classification accuracy",
"different dental implant brands",
"deep learning models",
"all the architectures",
"the small model",
"the convnext architecture",
"an impressive accuracy rate",
"94.2%",
"a high level",
"classification success.this study",
"the effectiveness",
"deep learning-based systems",
"high classification accuracy",
"dental implant types",
"these findings",
"the way",
"advanced deep learning tools",
"clinical practice",
"significant improvements",
"patient care and treatment outcomes",
"28",
"18",
"cnn",
"10",
"1258",
"erciyes university faculty of dentistry",
"between 2012 and 2023",
"six",
"six",
"94.2%"
] |
A radiomics-boosted deep-learning for risk assessment of synchronous peritoneal metastasis in colorectal cancer | [
"Ding Zhang",
"BingShu Zheng",
"LiuWei Xu",
"YiCong Wu",
"Chen Shen",
"ShanLei Bao",
"ZhongHua Tan",
"ChunFeng Sun"
] | ObjectivesSynchronous colorectal cancer peritoneal metastasis (CRPM) has a poor prognosis. This study aimed to create a radiomics-boosted deep learning model by PET/CT image for risk assessment of synchronous CRPM.MethodsA total of 220 colorectal cancer (CRC) cases were enrolled in this study. We mapped the feature maps (Radiomic feature maps (RFMs)) of radiomic features across CT and PET image patches by a 2D sliding kernel. Based on ResNet50, a radiomics-boosted deep learning model was trained using PET/CT image patches and RFMs. Besides that, we explored whether the peritumoral region contributes to the assessment of CRPM. In this study, the performance of each model was evaluated by the area under the curves (AUC).ResultsThe AUCs of the radiomics-boosted deep learning model in the training, internal, external, and all validation datasets were 0.926 (95% confidence interval (CI): 0.874–0.978), 0.897 (95% CI: 0.801–0.994), 0.885 (95% CI: 0.795–0.975), and 0.889 (95% CI: 0.823–0.954), respectively. This model exhibited consistency in the calibration curve, the Delong test and IDI identified it as the most predictive model.ConclusionsThe radiomics-boosted deep learning model showed superior estimated performance in preoperative prediction of synchronous CRPM from pre-treatment PET/CT, offering potential assistance in the development of more personalized treatment methods and follow-up plans.Critical relevance statementThe onset of synchronous colorectal CRPM is insidious, and using a radiomics-boosted deep learning model to assess the risk of CRPM before treatment can help make personalized clinical treatment decisions or choose more sensitive follow-up plans.Key Points
Prognosis for patients with CRPM is bleak, and early detection poses challenges.
The synergy between radiomics and deep learning proves advantageous in evaluating CRPM.
The radiomics-boosted deep-learning model proves valuable in tailoring treatment approaches for CRC patients.
Graphical Abstract | 10.1186/s13244-024-01733-5 | a radiomics-boosted deep-learning for risk assessment of synchronous peritoneal metastasis in colorectal cancer | objectivessynchronous colorectal cancer peritoneal metastasis (crpm) has a poor prognosis. this study aimed to create a radiomics-boosted deep learning model by pet/ct image for risk assessment of synchronous crpm.methodsa total of 220 colorectal cancer (crc) cases were enrolled in this study. we mapped the feature maps (radiomic feature maps (rfms)) of radiomic features across ct and pet image patches by a 2d sliding kernel. based on resnet50, a radiomics-boosted deep learning model was trained using pet/ct image patches and rfms. besides that, we explored whether the peritumoral region contributes to the assessment of crpm. in this study, the performance of each model was evaluated by the area under the curves (auc).resultsthe aucs of the radiomics-boosted deep learning model in the training, internal, external, and all validation datasets were 0.926 (95% confidence interval (ci): 0.874–0.978), 0.897 (95% ci: 0.801–0.994), 0.885 (95% ci: 0.795–0.975), and 0.889 (95% ci: 0.823–0.954), respectively. this model exhibited consistency in the calibration curve, the delong test and idi identified it as the most predictive model.conclusionsthe radiomics-boosted deep learning model showed superior estimated performance in preoperative prediction of synchronous crpm from pre-treatment pet/ct, offering potential assistance in the development of more personalized treatment methods and follow-up plans.critical relevance statementthe onset of synchronous colorectal crpm is insidious, and using a radiomics-boosted deep learning model to assess the risk of crpm before treatment can help make personalized clinical treatment decisions or choose more sensitive follow-up plans.key points prognosis for patients with crpm is bleak, and early detection poses challenges. the synergy between radiomics and deep learning proves advantageous in evaluating crpm. the radiomics-boosted deep-learning model proves valuable in tailoring treatment approaches for crc patients. graphical abstract | [
"objectivessynchronous colorectal cancer peritoneal metastasis",
"crpm",
"a poor prognosis",
"this study",
"a radiomics-boosted deep learning model",
"pet/ct image",
"risk assessment",
"synchronous crpm.methodsa total",
"220 colorectal cancer (crc) cases",
"this study",
"we",
"the feature maps",
"radiomic feature maps",
"rfms",
"radiomic features",
"ct",
"pet image",
"a 2d sliding kernel",
"resnet50",
"a radiomics-boosted deep learning model",
"pet/ct image patches",
"rfms",
"that",
"we",
"the peritumoral region",
"the assessment",
"crpm",
"this study",
"the performance",
"each model",
"the area",
"the curves",
"auc).resultsthe aucs",
"the radiomics",
"the training",
"all validation datasets",
"95% confidence interval",
"ci",
"0.874–0.978",
"95% ci",
"95% ci",
"0.889 (95% ci",
"this model",
"consistency",
"the calibration curve",
"the delong test",
"idi",
"it",
"the most predictive model.conclusionsthe radiomics-boosted deep learning model",
"superior estimated performance",
"preoperative prediction",
"synchronous crpm",
"treatment pet",
"ct",
"potential assistance",
"the development",
"more personalized treatment methods",
"follow-up plans.critical relevance",
"statementthe onset",
"synchronous colorectal crpm",
"a radiomics-boosted deep learning model",
"the risk",
"crpm",
"treatment",
"personalized clinical treatment decisions",
"prognosis",
"patients",
"crpm",
"early detection poses challenges",
"the synergy",
"radiomics",
"deep learning",
"crpm",
"the radiomics",
"treatment approaches",
"crc patients",
"graphical abstract",
"crpm.methodsa",
"220",
"2d",
"resnet50",
"0.926",
"95%",
"0.897",
"95%",
"0.885",
"95%",
"0.795–0.975",
"0.889",
"95%"
] |
Transfer learning-based quantized deep learning models for nail melanoma classification | [
"Mujahid Hussain",
"Makhmoor Fiza",
"Aiman Khalil",
"Asad Ali Siyal",
"Fayaz Ali Dharejo",
"Waheeduddin Hyder",
"Antonella Guzzo",
"Moez Krichen",
"Giancarlo Fortino"
] | Skin cancer, particularly melanoma, has remained a severe issue for many years due to its increasing incidences. The rising mortality rate associated with melanoma demands immediate attention at early stages to facilitate timely diagnosis and effective treatment. Due to the similar visual appearance of malignant tumors and normal cells, the detection and classification of melanoma are considered to be one of the most challenging tasks. Detecting melanoma accurately and promptly is essential to diagnosis and treatment, which can contribute significantly to patient survival. A new dataset, Nailmelonma, is presented in this study in order to train and evaluate various deep learning models applying transfer learning for an indigenous nail melanoma localization dataset. Using the dermoscopic image datasets, seven CNN-based DL architectures (viz., VGG19, ResNet101, ResNet152V2, Xception, InceptionV3, MobileNet, and MobileNetv2) have been trained and tested for the classification of skin lesions for melanoma detection. The trained models have been validated, and key performance parameters (i.e., accuracy, recall, specificity, precision, and F1-score) are systematically evaluated to test the performance of each transfer learning model. The results indicated that the proposed workflow could realize and achieve more than 95% accuracy. In addition, we show how the quantization of such models can enable them for memory-constrained mobile/edge devices. To facilitate an accurate, timely, and faster diagnosis of nail melanoma and to evaluate the early detection of other types of skin cancer, the proposed workflow can be readily applied and robust to the early detection of nail melanoma. | 10.1007/s00521-023-08925-y | transfer learning-based quantized deep learning models for nail melanoma classification | skin cancer, particularly melanoma, has remained a severe issue for many years due to its increasing incidences. the rising mortality rate associated with melanoma demands immediate attention at early stages to facilitate timely diagnosis and effective treatment. due to the similar visual appearance of malignant tumors and normal cells, the detection and classification of melanoma are considered to be one of the most challenging tasks. detecting melanoma accurately and promptly is essential to diagnosis and treatment, which can contribute significantly to patient survival. a new dataset, nailmelonma, is presented in this study in order to train and evaluate various deep learning models applying transfer learning for an indigenous nail melanoma localization dataset. using the dermoscopic image datasets, seven cnn-based dl architectures (viz., vgg19, resnet101, resnet152v2, xception, inceptionv3, mobilenet, and mobilenetv2) have been trained and tested for the classification of skin lesions for melanoma detection. the trained models have been validated, and key performance parameters (i.e., accuracy, recall, specificity, precision, and f1-score) are systematically evaluated to test the performance of each transfer learning model. the results indicated that the proposed workflow could realize and achieve more than 95% accuracy. in addition, we show how the quantization of such models can enable them for memory-constrained mobile/edge devices. to facilitate an accurate, timely, and faster diagnosis of nail melanoma and to evaluate the early detection of other types of skin cancer, the proposed workflow can be readily applied and robust to the early detection of nail melanoma. | [
"skin cancer",
"particularly melanoma",
"a severe issue",
"many years",
"its increasing incidences",
"the rising mortality rate",
"melanoma",
"immediate attention",
"early stages",
"timely diagnosis and effective treatment",
"the similar visual appearance",
"malignant tumors",
"normal cells",
"the detection",
"classification",
"melanoma",
"the most challenging tasks",
"melanoma",
"diagnosis",
"treatment",
"which",
"patient survival",
"a new dataset",
"nailmelonma",
"this study",
"order",
"various deep learning models",
"an indigenous nail melanoma localization dataset",
"the dermoscopic image datasets",
"seven cnn-based dl architectures",
"viz",
".",
"vgg19",
"xception",
"inceptionv3",
"mobilenet",
"mobilenetv2",
"the classification",
"skin lesions",
"melanoma detection",
"the trained models",
"key performance parameters",
"i.e., accuracy",
"recall",
"specificity",
"precision",
"f1-score",
"the performance",
"each transfer learning model",
"the results",
"the proposed workflow",
"more than 95% accuracy",
"addition",
"we",
"the quantization",
"such models",
"them",
"memory-constrained mobile/edge devices",
"an accurate, timely, and faster diagnosis",
"nail melanoma",
"the early detection",
"other types",
"skin cancer",
"the proposed workflow",
"the early detection",
"nail melanoma",
"seven",
"cnn",
"inceptionv3",
"mobilenetv2",
"more than 95%"
] |
RETRACTED ARTICLE: Multimedia Lu Xun literature online learning based on deep learning | [
"Wang Hongsheng"
] | As a great Chinese thinker and writer in the twentieth century, Lu Xun and his literary works are widely known. However, as a successful cultural communication activist, editor and publisher, we still need to conduct in-depth research on Lu Xun in many aspects. Therefore, based on deep learning, this paper constructs an online multimedia learning system of Lu Xun literature. This system takes the relationship between classical Lu Xun literature and modern multimedia technology as the research object, and compare the calculation effect of other different types of algorithms and this dhraa algorithm. Through the availability of data, the dhraa algorithm is significantly better than other algorithms in the recommendation accuracy, thus proving its effectiveness. This system is managed by two servers and one system. The two servers are database server and web server, respectively. After testing, the system has good bearing capacity, can make up for the limited processing capacity of the server, and ensure the system has high performance. Its performance characteristics also show that the system achieved the expected performance. This paper systematically combines Lu Xun’s literature with modern multimedia, which can provide online learning services for Lu Xun’s literature lovers, thus helping scholars to expand Lu Xun’s research field and academic vision. This paper designs an effective online learning system of Lu Xun’s literature by combining deep learning, multimedia technology and Lu Xun’s literature. | 10.1007/s00500-023-08118-8 | retracted article: multimedia lu xun literature online learning based on deep learning | as a great chinese thinker and writer in the twentieth century, lu xun and his literary works are widely known. however, as a successful cultural communication activist, editor and publisher, we still need to conduct in-depth research on lu xun in many aspects. therefore, based on deep learning, this paper constructs an online multimedia learning system of lu xun literature. this system takes the relationship between classical lu xun literature and modern multimedia technology as the research object, and compare the calculation effect of other different types of algorithms and this dhraa algorithm. through the availability of data, the dhraa algorithm is significantly better than other algorithms in the recommendation accuracy, thus proving its effectiveness. this system is managed by two servers and one system. the two servers are database server and web server, respectively. after testing, the system has good bearing capacity, can make up for the limited processing capacity of the server, and ensure the system has high performance. its performance characteristics also show that the system achieved the expected performance. this paper systematically combines lu xun’s literature with modern multimedia, which can provide online learning services for lu xun’s literature lovers, thus helping scholars to expand lu xun’s research field and academic vision. this paper designs an effective online learning system of lu xun’s literature by combining deep learning, multimedia technology and lu xun’s literature. | [
"a great chinese thinker",
"writer",
"the twentieth century",
"lu xun",
"his literary works",
"a successful cultural communication activist",
"editor",
"publisher",
"we",
"depth",
"lu xun",
"many aspects",
"deep learning",
"this paper",
"an online multimedia learning system",
"lu xun literature",
"this system",
"the relationship",
"classical lu xun literature",
"modern multimedia technology",
"the research object",
"the calculation effect",
"other different types",
"algorithms",
"this dhraa algorithm",
"the availability",
"data",
"the dhraa algorithm",
"other algorithms",
"the recommendation accuracy",
"its effectiveness",
"this system",
"two servers",
"one system",
"the two servers",
"database server and web server",
"testing",
"the system",
"good bearing capacity",
"the limited processing capacity",
"the server",
"the system",
"high performance",
"its performance characteristics",
"the system",
"the expected performance",
"this paper",
"lu xun’s literature",
"modern multimedia",
"which",
"online learning services",
"lu xun’s literature lovers",
"scholars",
"lu xun’s research field",
"academic vision",
"this paper",
"an effective online learning system",
"lu xun’s literature",
"deep learning",
"multimedia technology",
"lu xun’s literature",
"chinese",
"the twentieth century",
"lu xun",
"lu xun",
"lu xun literature",
"two",
"one",
"two",
"xun",
"xun’s",
"xun",
"xun"
] |
Deep learning approaches for lyme disease detection: leveraging progressive resizing and self-supervised learning models | [
"Daryl Jacob Jerrish",
"Om Nankar",
"Shilpa Gite",
"Shruti Patil",
"Ketan Kotecha",
"Ganeshsree Selvachandran",
"Ajith Abraham"
] | Lyme disease diagnosis poses a significant challenge, with blood tests exhibiting an alarming inaccuracy rate of nearly 60% in detecting early-stage infections. As a result, there is an urgent need for improved diagnostic methods that can offer more accurate detection outcomes. To address this pressing issue, our study focuses on harnessing the potential of deep learning approaches, specifically by employing model pipelining through progressive resizing and multiple self-supervised learning models. In this paper, we present a comprehensive exploration of self-supervised learning models, including SimCLR, SwAV, MoCo, and BYOL, tailored to the context of Lyme disease detection using medical imaging. The effectiveness and performance of these models are evaluated using standard metrics such as F1 score, precision, recall, and accuracy. Furthermore, we emphasize the significance of progressive resizing and its implications when dealing with convolutional neural networks (CNNs) for medical image analysis. By leveraging deep learning approaches, progressive resizing, and self-supervised learning models, the challenges associated with Lyme disease detection are effectively addressed in this study. The application of our novel methodology and the execution of a comprehensive evaluation framework contribute invaluable insights, fostering the development of more efficient and accurate diagnostic methods for Lyme disease. It is firmly believed that our research will serve as a catalyst, inspiring interdisciplinary collaborations that accelerate progress at the convergence of medicine, computing, and technology, ultimately benefiting public health. | 10.1007/s11042-023-16306-9 | deep learning approaches for lyme disease detection: leveraging progressive resizing and self-supervised learning models | lyme disease diagnosis poses a significant challenge, with blood tests exhibiting an alarming inaccuracy rate of nearly 60% in detecting early-stage infections. as a result, there is an urgent need for improved diagnostic methods that can offer more accurate detection outcomes. to address this pressing issue, our study focuses on harnessing the potential of deep learning approaches, specifically by employing model pipelining through progressive resizing and multiple self-supervised learning models. in this paper, we present a comprehensive exploration of self-supervised learning models, including simclr, swav, moco, and byol, tailored to the context of lyme disease detection using medical imaging. the effectiveness and performance of these models are evaluated using standard metrics such as f1 score, precision, recall, and accuracy. furthermore, we emphasize the significance of progressive resizing and its implications when dealing with convolutional neural networks (cnns) for medical image analysis. by leveraging deep learning approaches, progressive resizing, and self-supervised learning models, the challenges associated with lyme disease detection are effectively addressed in this study. the application of our novel methodology and the execution of a comprehensive evaluation framework contribute invaluable insights, fostering the development of more efficient and accurate diagnostic methods for lyme disease. it is firmly believed that our research will serve as a catalyst, inspiring interdisciplinary collaborations that accelerate progress at the convergence of medicine, computing, and technology, ultimately benefiting public health. | [
"lyme disease diagnosis",
"a significant challenge",
"blood tests",
"an alarming inaccuracy rate",
"nearly 60%",
"early-stage infections",
"a result",
"an urgent need",
"improved diagnostic methods",
"that",
"more accurate detection outcomes",
"this pressing issue",
"our study",
"the potential",
"deep learning approaches",
"model",
"progressive resizing",
"multiple self-supervised learning models",
"this paper",
"we",
"a comprehensive exploration",
"self-supervised learning models",
"simclr",
"swav",
"moco",
"byol",
"the context",
"lyme disease detection",
"medical imaging",
"the effectiveness",
"performance",
"these models",
"standard metrics",
"f1 score",
"precision",
"recall",
"accuracy",
"we",
"the significance",
"progressive resizing",
"its implications",
"convolutional neural networks",
"cnns",
"medical image analysis",
"deep learning approaches",
"progressive resizing",
"self-supervised learning models",
"the challenges",
"lyme disease detection",
"this study",
"the application",
"our novel methodology",
"the execution",
"a comprehensive evaluation framework",
"invaluable insights",
"the development",
"more efficient and accurate diagnostic methods",
"lyme disease",
"it",
"our research",
"a catalyst",
"interdisciplinary collaborations",
"that",
"progress",
"the convergence",
"medicine",
"computing",
"technology",
"public health",
"nearly 60%"
] |
Deep Learning Models for Skin Cancer Classification Across Diverse Color Spaces: Comprehensive Analysis | [
"Anisha Paul",
"Asfak Ali",
"Sheli Sinha Chaudhuri"
] | Color space plays an important role in various aspects of imaging tasks. However, in deep learning-based computer vision, the RGB color model is predominantly employed. This research analyzes the impact of deep convolutional neural networks on cancer classification across different color spaces. The five most popular deep learning models undergo training and testing in eleven color spaces, revealing that YUV, LAB, and YIQ consistently outperform other color models in most cases. RGB images are frequently converted to alternative color spaces for enhanced representation in specific applications, like object detection and segmentation. This transformation induces alterations in the features of the color image due to variations in pixel intensity information across different color models. In this research, the aforementioned principle is applied to the classification of skin cancer using deep learning networks on images of skin lesions. The results exhibit diverse responses, with some networks achieving higher accuracy in alternative color spaces compared to RGB, while others do not. This study provides insights into the classification performance across RGB, HED, HSV, LAB, RGBCIE, XYZ, YCbCr, YDbDr, YIQ, YPbPr, and YUV color spaces. The research aims to illustrate how deep learning facilitates the analysis of skin cancer images in different color spaces. | 10.1007/s11831-024-10160-0 | deep learning models for skin cancer classification across diverse color spaces: comprehensive analysis | color space plays an important role in various aspects of imaging tasks. however, in deep learning-based computer vision, the rgb color model is predominantly employed. this research analyzes the impact of deep convolutional neural networks on cancer classification across different color spaces. the five most popular deep learning models undergo training and testing in eleven color spaces, revealing that yuv, lab, and yiq consistently outperform other color models in most cases. rgb images are frequently converted to alternative color spaces for enhanced representation in specific applications, like object detection and segmentation. this transformation induces alterations in the features of the color image due to variations in pixel intensity information across different color models. in this research, the aforementioned principle is applied to the classification of skin cancer using deep learning networks on images of skin lesions. the results exhibit diverse responses, with some networks achieving higher accuracy in alternative color spaces compared to rgb, while others do not. this study provides insights into the classification performance across rgb, hed, hsv, lab, rgbcie, xyz, ycbcr, ydbdr, yiq, ypbpr, and yuv color spaces. the research aims to illustrate how deep learning facilitates the analysis of skin cancer images in different color spaces. | [
"color space",
"an important role",
"various aspects",
"imaging tasks",
"deep learning-based computer vision",
"the rgb color model",
"this research",
"the impact",
"deep convolutional neural networks",
"cancer classification",
"different color spaces",
"the five most popular deep learning models",
"training",
"testing",
"eleven color spaces",
"yuv",
"lab",
"yiq",
"other color models",
"most cases",
"rgb images",
"alternative color spaces",
"enhanced representation",
"specific applications",
"object detection",
"segmentation",
"this transformation",
"alterations",
"the features",
"the color image",
"variations",
"pixel intensity information",
"different color models",
"this research",
"the aforementioned principle",
"the classification",
"skin cancer",
"deep learning networks",
"images",
"skin lesions",
"the results",
"diverse responses",
"some networks",
"higher accuracy",
"alternative color spaces",
"rgb",
"others",
"this study",
"insights",
"the classification performance",
"rgb",
"the research",
"how deep learning",
"the analysis",
"skin cancer images",
"different color spaces",
"rgb",
"five",
"eleven",
"yuv",
"rgb",
"rgb",
"rgb"
] |
Scaffolding cooperation in human groups with deep reinforcement learning | [
"Kevin R. McKee",
"Andrea Tacchetti",
"Michiel A. Bakker",
"Jan Balaguer",
"Lucy Campbell-Gillingham",
"Richard Everett",
"Matthew Botvinick"
] | Effective approaches to encouraging group cooperation are still an open challenge. Here we apply recent advances in deep learning to structure networks of human participants playing a group cooperation game. We leverage deep reinforcement learning and simulation methods to train a ‘social planner’ capable of making recommendations to create or break connections between group members. The strategy that it develops succeeds at encouraging pro-sociality in networks of human participants (N = 208 participants in 13 groups) playing for real monetary stakes. Under the social planner, groups finished the game with an average cooperation rate of 77.7%, compared with 42.8% in static networks (N = 176 in 11 groups). In contrast to prior strategies that separate defectors from cooperators (tested here with N = 384 in 24 groups), the social planner learns to take a conciliatory approach to defectors, encouraging them to act pro-socially by moving them to small highly cooperative neighbourhoods. | 10.1038/s41562-023-01686-7 | scaffolding cooperation in human groups with deep reinforcement learning | effective approaches to encouraging group cooperation are still an open challenge. here we apply recent advances in deep learning to structure networks of human participants playing a group cooperation game. we leverage deep reinforcement learning and simulation methods to train a ‘social planner’ capable of making recommendations to create or break connections between group members. the strategy that it develops succeeds at encouraging pro-sociality in networks of human participants (n = 208 participants in 13 groups) playing for real monetary stakes. under the social planner, groups finished the game with an average cooperation rate of 77.7%, compared with 42.8% in static networks (n = 176 in 11 groups). in contrast to prior strategies that separate defectors from cooperators (tested here with n = 384 in 24 groups), the social planner learns to take a conciliatory approach to defectors, encouraging them to act pro-socially by moving them to small highly cooperative neighbourhoods. | [
"effective approaches",
"group cooperation",
"an open challenge",
"we",
"recent advances",
"deep learning",
"networks",
"human participants",
"a group cooperation game",
"we",
"deep reinforcement learning and simulation methods",
"a ‘social planner",
"recommendations",
"connections",
"group members",
"the strategy",
"that",
"it",
"networks",
"human participants",
"= 208 participants",
"13 groups",
"real monetary stakes",
"the social planner",
"groups",
"the game",
"an average cooperation rate",
"77.7%",
"42.8%",
"static networks",
"11 groups",
"contrast",
"prior strategies",
"that",
"separate defectors",
"cooperators",
"n",
"24 groups",
"the social planner",
"a conciliatory approach",
"defectors",
"them",
"them",
"small highly cooperative neighbourhoods",
"208",
"13",
"77.7%",
"42.8%",
"176",
"11",
"384",
"24"
] |
Deep-learning-enabled brain hemodynamic mapping using resting-state fMRI | [
"Xirui Hou",
"Pengfei Guo",
"Puyang Wang",
"Peiying Liu",
"Doris D. M. Lin",
"Hongli Fan",
"Yang Li",
"Zhiliang Wei",
"Zixuan Lin",
"Dengrong Jiang",
"Jin Jin",
"Catherine Kelly",
"Jay J. Pillai",
"Judy Huang",
"Marco C. Pinho",
"Binu P. Thomas",
"Babu G. Welch",
"Denise C. Park",
"Vishal M. Patel",
"Argye E. Hillis",
"Hanzhang Lu"
] | Cerebrovascular disease is a leading cause of death globally. Prevention and early intervention are known to be the most effective forms of its management. Non-invasive imaging methods hold great promises for early stratification, but at present lack the sensitivity for personalized prognosis. Resting-state functional magnetic resonance imaging (rs-fMRI), a powerful tool previously used for mapping neural activity, is available in most hospitals. Here we show that rs-fMRI can be used to map cerebral hemodynamic function and delineate impairment. By exploiting time variations in breathing pattern during rs-fMRI, deep learning enables reproducible mapping of cerebrovascular reactivity (CVR) and bolus arrival time (BAT) of the human brain using resting-state CO2 fluctuations as a natural “contrast media”. The deep-learning network is trained with CVR and BAT maps obtained with a reference method of CO2-inhalation MRI, which includes data from young and older healthy subjects and patients with Moyamoya disease and brain tumors. We demonstrate the performance of deep-learning cerebrovascular mapping in the detection of vascular abnormalities, evaluation of revascularization effects, and vascular alterations in normal aging. In addition, cerebrovascular maps obtained with the proposed method exhibit excellent reproducibility in both healthy volunteers and stroke patients. Deep-learning resting-state vascular imaging has the potential to become a useful tool in clinical cerebrovascular imaging. | 10.1038/s41746-023-00859-y | deep-learning-enabled brain hemodynamic mapping using resting-state fmri | cerebrovascular disease is a leading cause of death globally. prevention and early intervention are known to be the most effective forms of its management. non-invasive imaging methods hold great promises for early stratification, but at present lack the sensitivity for personalized prognosis. resting-state functional magnetic resonance imaging (rs-fmri), a powerful tool previously used for mapping neural activity, is available in most hospitals. here we show that rs-fmri can be used to map cerebral hemodynamic function and delineate impairment. by exploiting time variations in breathing pattern during rs-fmri, deep learning enables reproducible mapping of cerebrovascular reactivity (cvr) and bolus arrival time (bat) of the human brain using resting-state co2 fluctuations as a natural “contrast media”. the deep-learning network is trained with cvr and bat maps obtained with a reference method of co2-inhalation mri, which includes data from young and older healthy subjects and patients with moyamoya disease and brain tumors. we demonstrate the performance of deep-learning cerebrovascular mapping in the detection of vascular abnormalities, evaluation of revascularization effects, and vascular alterations in normal aging. in addition, cerebrovascular maps obtained with the proposed method exhibit excellent reproducibility in both healthy volunteers and stroke patients. deep-learning resting-state vascular imaging has the potential to become a useful tool in clinical cerebrovascular imaging. | [
"cerebrovascular disease",
"a leading cause",
"death",
"prevention",
"early intervention",
"the most effective forms",
"its management",
"non-invasive imaging methods",
"great promises",
"early stratification",
"present lack",
"personalized prognosis",
"resting-state functional magnetic resonance imaging",
"rs-fmri",
"a powerful tool",
"neural activity",
"most hospitals",
"we",
"rs",
"fmri",
"cerebral hemodynamic function",
"delineate impairment",
"time variations",
"breathing pattern",
"rs-fmri",
"deep learning",
"reproducible mapping",
"cerebrovascular reactivity",
"cvr",
"bolus arrival time",
"bat",
"the human brain",
"resting-state co2 fluctuations",
"a natural “contrast media",
"the deep-learning network",
"cvr",
"bat maps",
"a reference method",
"co2-inhalation mri",
"which",
"data",
"young and older healthy subjects",
"patients",
"moyamoya disease",
"brain tumors",
"we",
"the performance",
"deep-learning cerebrovascular mapping",
"the detection",
"vascular abnormalities",
"evaluation",
"revascularization effects",
"vascular alterations",
"normal aging",
"addition",
"cerebrovascular maps",
"the proposed method",
"excellent reproducibility",
"both healthy volunteers",
"stroke patients",
"deep-learning resting-state vascular imaging",
"the potential",
"a useful tool",
"clinical cerebrovascular imaging"
] |
An Alternative to Cognitivism: Computational Phenomenology for Deep Learning | [
"Pierre Beckmann",
"Guillaume Köstner",
"Inês Hipólito"
] | We propose a non-representationalist framework for deep learning relying on a novel method computational phenomenology, a dialogue between the first-person perspective (relying on phenomenology) and the mechanisms of computational models. We thereby propose an alternative to the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities. This interpretation mainly relies on neuro-representationalism, a position that combines a strong ontological commitment towards scientific theoretical entities and the idea that the brain operates on symbolic representations of these entities. We proceed as follows: after offering a review of cognitivism and neuro-representationalism in the field of deep learning, we first elaborate a phenomenological critique of these positions; we then sketch out computational phenomenology and distinguish it from existing alternatives; finally we apply this new method to deep learning models trained on specific tasks, in order to formulate a conceptual framework of deep-learning, that allows one to think of artificial neural networks’ mechanisms in terms of lived experience. | 10.1007/s11023-023-09638-w | an alternative to cognitivism: computational phenomenology for deep learning | we propose a non-representationalist framework for deep learning relying on a novel method computational phenomenology, a dialogue between the first-person perspective (relying on phenomenology) and the mechanisms of computational models. we thereby propose an alternative to the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities. this interpretation mainly relies on neuro-representationalism, a position that combines a strong ontological commitment towards scientific theoretical entities and the idea that the brain operates on symbolic representations of these entities. we proceed as follows: after offering a review of cognitivism and neuro-representationalism in the field of deep learning, we first elaborate a phenomenological critique of these positions; we then sketch out computational phenomenology and distinguish it from existing alternatives; finally we apply this new method to deep learning models trained on specific tasks, in order to formulate a conceptual framework of deep-learning, that allows one to think of artificial neural networks’ mechanisms in terms of lived experience. | [
"we",
"a non-representationalist framework",
"deep learning",
"a novel method",
"computational phenomenology",
"a dialogue",
"the first-person perspective",
"phenomenology",
"the mechanisms",
"computational models",
"we",
"an alternative",
"the modern cognitivist interpretation",
"deep learning",
"which",
"artificial neural networks encode representations",
"external entities",
"this interpretation",
"neuro-representationalism",
"a position",
"that",
"a strong ontological commitment",
"scientific theoretical entities",
"the idea",
"the brain",
"symbolic representations",
"these entities",
"we",
"a review",
"cognitivism",
"neuro-representationalism",
"the field",
"deep learning",
"we",
"a phenomenological critique",
"these positions",
"we",
"computational phenomenology",
"it",
"existing alternatives",
"we",
"this new method",
"deep learning models",
"specific tasks",
"order",
"a conceptual framework",
"deep-learning",
"that",
"artificial neural networks’ mechanisms",
"terms",
"lived experience",
"first",
"first"
] |
Slideflow: deep learning for digital histopathology with real-time whole-slide visualization | [
"James M. Dolezal",
"Sara Kochanny",
"Emma Dyer",
"Siddhi Ramesh",
"Andrew Srisuwananukorn",
"Matteo Sacco",
"Frederick M. Howard",
"Anran Li",
"Prajval Mohan",
"Alexander T. Pearson"
] | Deep learning methods have emerged as powerful tools for analyzing histopathological images, but current methods are often specialized for specific domains and software environments, and few open-source options exist for deploying models in an interactive interface. Experimenting with different deep learning approaches typically requires switching software libraries and reprocessing data, reducing the feasibility and practicality of experimenting with new architectures. We developed a flexible deep learning library for histopathology called Slideflow, a package which supports a broad array of deep learning methods for digital pathology and includes a fast whole-slide interface for deploying trained models. Slideflow includes unique tools for whole-slide image data processing, efficient stain normalization and augmentation, weakly-supervised whole-slide classification, uncertainty quantification, feature generation, feature space analysis, and explainability. Whole-slide image processing is highly optimized, enabling whole-slide tile extraction at 40x magnification in 2.5 s per slide. The framework-agnostic data processing pipeline enables rapid experimentation with new methods built with either Tensorflow or PyTorch, and the graphical user interface supports real-time visualization of slides, predictions, heatmaps, and feature space characteristics on a variety of hardware devices, including ARM-based devices such as the Raspberry Pi. | 10.1186/s12859-024-05758-x | slideflow: deep learning for digital histopathology with real-time whole-slide visualization | deep learning methods have emerged as powerful tools for analyzing histopathological images, but current methods are often specialized for specific domains and software environments, and few open-source options exist for deploying models in an interactive interface. experimenting with different deep learning approaches typically requires switching software libraries and reprocessing data, reducing the feasibility and practicality of experimenting with new architectures. we developed a flexible deep learning library for histopathology called slideflow, a package which supports a broad array of deep learning methods for digital pathology and includes a fast whole-slide interface for deploying trained models. slideflow includes unique tools for whole-slide image data processing, efficient stain normalization and augmentation, weakly-supervised whole-slide classification, uncertainty quantification, feature generation, feature space analysis, and explainability. whole-slide image processing is highly optimized, enabling whole-slide tile extraction at 40x magnification in 2.5 s per slide. the framework-agnostic data processing pipeline enables rapid experimentation with new methods built with either tensorflow or pytorch, and the graphical user interface supports real-time visualization of slides, predictions, heatmaps, and feature space characteristics on a variety of hardware devices, including arm-based devices such as the raspberry pi. | [
"deep learning methods",
"powerful tools",
"histopathological images",
"current methods",
"specific domains",
"software environments",
"few open-source options",
"models",
"an interactive interface",
"different deep learning approaches",
"software libraries",
"reprocessing data",
"the feasibility",
"practicality",
"new architectures",
"we",
"a flexible deep learning library",
"histopathology",
"a package",
"which",
"a broad array",
"deep learning methods",
"digital pathology",
"a fast whole-slide interface",
"trained models",
"slideflow",
"unique tools",
"whole-slide image data processing",
"efficient stain normalization",
"augmentation",
"weakly-supervised whole-slide classification",
"uncertainty quantification",
"feature generation",
"feature space analysis",
"explainability",
"whole-slide image processing",
"whole-slide tile extraction",
"40x magnification",
"2.5 s",
"slide",
"the framework-agnostic data processing pipeline",
"rapid experimentation",
"new methods",
"either tensorflow",
"pytorch",
"the graphical user interface",
"real-time visualization",
"slides",
"predictions",
"heatmaps",
"feature space characteristics",
"a variety",
"hardware devices",
"arm-based devices",
"the raspberry pi",
"40x",
"2.5"
] |
Authorship attribution in twitter: a comparative study of machine learning and deep learning approaches | [
"Rebeh Imane Ammar Aouchiche",
"Fatima Boumahdi",
"Mohamed Abdelkarim Remmide",
"Amina Madani"
] | As social media platforms gain popularity and influence, content integrity and user accountability issues become more critical. Authorship attribution (AA) is a powerful tool for tackling such issues by accurately determining the real author of online posts. This study proposes an AA approach using machine and deep learning algorithms to accurately predict the author of unknown posts on social media platforms. It introduces Temporal Convolutional Networks (TCN) for short texts, investigates the effectiveness of combining Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN), and explores the use of an Autoencoder combined with Adaboost classifier. This approach was tested on a Twitter dataset, achieving 52.77% accuracy in AA through multiple experiments across various scenarios. | 10.1007/s41870-024-01788-z | authorship attribution in twitter: a comparative study of machine learning and deep learning approaches | as social media platforms gain popularity and influence, content integrity and user accountability issues become more critical. authorship attribution (aa) is a powerful tool for tackling such issues by accurately determining the real author of online posts. this study proposes an aa approach using machine and deep learning algorithms to accurately predict the author of unknown posts on social media platforms. it introduces temporal convolutional networks (tcn) for short texts, investigates the effectiveness of combining long short-term memory (lstm) and convolutional neural networks (cnn), and explores the use of an autoencoder combined with adaboost classifier. this approach was tested on a twitter dataset, achieving 52.77% accuracy in aa through multiple experiments across various scenarios. | [
"social media platforms",
"popularity",
"influence",
"content integrity",
"user accountability issues",
"authorship attribution",
"(aa",
"a powerful tool",
"such issues",
"the real author",
"online posts",
"this study",
"an aa approach",
"machine",
"algorithms",
"the author",
"unknown posts",
"social media platforms",
"it",
"temporal convolutional networks",
"tcn",
"short texts",
"the effectiveness",
"long short-term memory",
"lstm",
"convolutional neural networks",
"cnn",
"the use",
"an autoencoder",
"adaboost classifier",
"this approach",
"a twitter dataset",
"52.77% accuracy",
"aa",
"multiple experiments",
"various scenarios",
"cnn",
"52.77%"
] |
Ultrasound-based deep learning radiomics nomogram for differentiating mass mastitis from invasive breast cancer | [
"Linyong Wu",
"Songhua Li",
"Chaojun Wu",
"Shaofeng Wu",
"Yan Lin",
"Dayou Wei"
] | BackgroundThe purpose of this study is to develop and validate the potential value of the deep learning radiomics nomogram (DLRN) based on ultrasound to differentiate mass mastitis (MM) and invasive breast cancer (IBC).Methods50 cases of MM and 180 cases of IBC with ultrasound Breast Imaging Reporting and Data System 4 category were recruited (training cohort, n = 161, validation cohort, n = 69). Based on PyRadiomics and ResNet50 extractors, radiomics and deep learning features were extracted, respectively. Based on supervised machine learning methods such as logistic regression, random forest, and support vector machine, as well as unsupervised machine learning methods using K-means clustering analysis, the differences in features between MM and IBC were analyzed to develop DLRN. The performance of DLRN had been evaluated by receiver operating characteristic curve, calibration, and clinical practicality.ResultsSupervised machine learning results showed that compared with radiomics models, especially random forest models, deep learning models were better at recognizing MM and IBC. The area under the curve (AUC) of the validation cohort was 0.84, the accuracy was 0.83, the sensitivity was 0.73, and the specificity was 0.83. Compared to radiomics or deep learning models, DLRN even further improved discrimination ability (AUC of 0.90 and 0.90, accuracy of 0.83 and 0.88 for training and validation cohorts), which had better clinical benefits and good calibratability. In addition, the information heterogeneity of deep learning features in MM and IBC was validated again through unsupervised machine learning clustering analysis, indicating that MM had a unique features phenotype.ConclusionThe DLRN developed based on radiomics and deep learning features of ultrasound images has potential clinical value in effectively distinguishing between MM and IBC. DLRN breaks through visual limitations and quantifies more image information related to MM based on computers, further utilizing machine learning to effectively utilize this information for clinical decision-making. As DLRN becomes an autonomous screening system, it will improve the recognition rate of MM in grassroots hospitals and reduce the possibility of incorrect treatment and overtreatment. | 10.1186/s12880-024-01353-x | ultrasound-based deep learning radiomics nomogram for differentiating mass mastitis from invasive breast cancer | backgroundthe purpose of this study is to develop and validate the potential value of the deep learning radiomics nomogram (dlrn) based on ultrasound to differentiate mass mastitis (mm) and invasive breast cancer (ibc).methods50 cases of mm and 180 cases of ibc with ultrasound breast imaging reporting and data system 4 category were recruited (training cohort, n = 161, validation cohort, n = 69). based on pyradiomics and resnet50 extractors, radiomics and deep learning features were extracted, respectively. based on supervised machine learning methods such as logistic regression, random forest, and support vector machine, as well as unsupervised machine learning methods using k-means clustering analysis, the differences in features between mm and ibc were analyzed to develop dlrn. the performance of dlrn had been evaluated by receiver operating characteristic curve, calibration, and clinical practicality.resultssupervised machine learning results showed that compared with radiomics models, especially random forest models, deep learning models were better at recognizing mm and ibc. the area under the curve (auc) of the validation cohort was 0.84, the accuracy was 0.83, the sensitivity was 0.73, and the specificity was 0.83. compared to radiomics or deep learning models, dlrn even further improved discrimination ability (auc of 0.90 and 0.90, accuracy of 0.83 and 0.88 for training and validation cohorts), which had better clinical benefits and good calibratability. in addition, the information heterogeneity of deep learning features in mm and ibc was validated again through unsupervised machine learning clustering analysis, indicating that mm had a unique features phenotype.conclusionthe dlrn developed based on radiomics and deep learning features of ultrasound images has potential clinical value in effectively distinguishing between mm and ibc. dlrn breaks through visual limitations and quantifies more image information related to mm based on computers, further utilizing machine learning to effectively utilize this information for clinical decision-making. as dlrn becomes an autonomous screening system, it will improve the recognition rate of mm in grassroots hospitals and reduce the possibility of incorrect treatment and overtreatment. | [
"backgroundthe purpose",
"this study",
"the potential value",
"the deep learning radiomics nomogram",
"dlrn",
"ultrasound",
"mm",
"mm",
"ibc",
"ultrasound breast imaging reporting",
"data system",
"pyradiomics",
"resnet50 extractors",
"radiomics",
"deep learning features",
"supervised machine learning methods",
"logistic regression",
"random forest",
"vector machine",
"unsupervised machine learning methods",
"k",
"clustering analysis",
"the differences",
"features",
"mm",
"ibc",
"dlrn",
"the performance",
"dlrn",
"receiver operating characteristic curve",
"calibration",
"clinical practicality.resultssupervised machine learning results",
"radiomics models",
"especially random forest models",
"deep learning models",
"mm",
"ibc",
"the area",
"the curve",
"auc",
"the validation cohort",
"the accuracy",
"the sensitivity",
"the specificity",
"radiomics",
"deep learning models",
"even further improved discrimination ability",
"auc",
"accuracy",
"training and validation cohorts",
"which",
"better clinical benefits",
"good calibratability",
"addition",
"the information heterogeneity",
"deep learning features",
"mm",
"ibc",
"unsupervised machine",
"clustering analysis",
"mm",
"a unique features",
"phenotype.conclusionthe dlrn",
"radiomics",
"deep learning features",
"ultrasound images",
"potential clinical value",
"mm",
"ibc",
"dlrn",
"visual limitations",
"quantifies",
"more image information",
"mm",
"computers",
"this information",
"clinical decision-making",
"dlrn",
"an autonomous screening system",
"it",
"the recognition rate",
"mm",
"grassroots hospitals",
"the possibility",
"incorrect treatment",
"overtreatment",
"mm",
"mm",
"180",
"4",
"161",
"69",
"resnet50",
"mm",
"0.84",
"0.83",
"0.73",
"0.83",
"0.90",
"0.90",
"0.83",
"0.88",
"mm",
"mm",
"dlrn",
"mm",
"mm",
"dlrn"
] |
Two-layer Ensemble of Deep Learning Models for Medical Image Segmentation | [
"Truong Dang",
"Tien Thanh Nguyen",
"John McCall",
"Eyad Elyan",
"Carlos Francisco Moreno-García"
] | One of the most important areas in medical image analysis is segmentation, in which raw image data is partitioned into structured and meaningful regions to gain further insights. By using Deep Neural Networks (DNN), AI-based automated segmentation algorithms can potentially assist physicians with more effective imaging-based diagnoses. However, since it is difficult to acquire high-quality ground truths for medical images and DNN hyperparameters require significant manual tuning, the results by DNN-based medical models might be limited. A potential solution is to combine multiple DNN models using ensemble learning. We propose a two-layer ensemble of deep learning models in which the prediction of each training image pixel made by each model in the first layer is used as the augmented data of the training image for the second layer of the ensemble. The prediction of the second layer is then combined by using a weight-based scheme which is found by solving linear regression problems. To the best of our knowledge, our paper is the first work which proposes a two-layer ensemble of deep learning models with an augmented data technique in medical image segmentation. Experiments conducted on five different medical image datasets for diverse segmentation tasks show that proposed method achieves better results in terms of several performance metrics compared to some well-known benchmark algorithms. Our proposed two-layer ensemble of deep learning models for segmentation of medical images shows effectiveness compared to several benchmark algorithms. The research can be expanded in several directions like image classification. | 10.1007/s12559-024-10257-5 | two-layer ensemble of deep learning models for medical image segmentation | one of the most important areas in medical image analysis is segmentation, in which raw image data is partitioned into structured and meaningful regions to gain further insights. by using deep neural networks (dnn), ai-based automated segmentation algorithms can potentially assist physicians with more effective imaging-based diagnoses. however, since it is difficult to acquire high-quality ground truths for medical images and dnn hyperparameters require significant manual tuning, the results by dnn-based medical models might be limited. a potential solution is to combine multiple dnn models using ensemble learning. we propose a two-layer ensemble of deep learning models in which the prediction of each training image pixel made by each model in the first layer is used as the augmented data of the training image for the second layer of the ensemble. the prediction of the second layer is then combined by using a weight-based scheme which is found by solving linear regression problems. to the best of our knowledge, our paper is the first work which proposes a two-layer ensemble of deep learning models with an augmented data technique in medical image segmentation. experiments conducted on five different medical image datasets for diverse segmentation tasks show that proposed method achieves better results in terms of several performance metrics compared to some well-known benchmark algorithms. our proposed two-layer ensemble of deep learning models for segmentation of medical images shows effectiveness compared to several benchmark algorithms. the research can be expanded in several directions like image classification. | [
"the most important areas",
"medical image analysis",
"segmentation",
"which",
"raw image data",
"structured and meaningful regions",
"further insights",
"deep neural networks",
"dnn",
"ai-based automated segmentation algorithms",
"physicians",
"more effective imaging-based diagnoses",
"it",
"high-quality ground truths",
"medical images",
"dnn hyperparameters",
"significant manual tuning",
"the results",
"dnn-based medical models",
"a potential solution",
"multiple dnn models",
"ensemble learning",
"we",
"a two-layer ensemble",
"deep learning models",
"which",
"the prediction",
"each training image pixel",
"each model",
"the first layer",
"the augmented data",
"the training image",
"the second layer",
"the ensemble",
"the prediction",
"the second layer",
"a weight-based scheme",
"which",
"linear regression problems",
"our knowledge",
"our paper",
"the first work",
"which",
"a two-layer ensemble",
"deep learning models",
"an augmented data technique",
"medical image segmentation",
"experiments",
"five different medical image datasets",
"diverse segmentation tasks",
"proposed method",
"better results",
"terms",
"several performance metrics",
"some well-known benchmark algorithms",
"our proposed two-layer ensemble",
"deep learning models",
"segmentation",
"medical images",
"effectiveness",
"several benchmark algorithms",
"the research",
"several directions",
"image classification",
"one",
"two",
"first",
"second",
"second",
"first",
"two",
"five",
"two"
] |
A Diffusion Equation for Improving the Robustness of Deep Learning Speckle Removal Model | [
"Li Cheng",
"Yuming Xing",
"Yao Li",
"Zhichang Guo"
] | Speckle removal aims to smooth noise while preserving image boundaries and texture information. In recent years, speckle removal models based on deep learning methods have attracted a lot of attention. However, it was found that these models are less robust to adversarial attacks. The adversarial attack makes the image recovery of deep learning methods significantly less effective when the speckle noise distribution is almost unchanged. In purpose of addressing the above problem, we propose a diffusion equation-based speckle removal model that can improve the robustness of deep learning algorithms in this paper. The model utilizes a deep learning image prior and an image grayscale detection operator together to construct the coefficient function of the diffusion equation. Among them, there is a high possibility that the deep learning image prior is inaccurate or even incorrect, but it will not affect the performance and the properties of the proposed diffusion equation model for noise removal. Moreover, we analyze the robustness of the proposed diffusion equation model in terms of theoretical and numerical properties. Experiments show that our proposed diffusion equation speckle removal model is not affected by adversarial attacks in any way and has stronger robustness. | 10.1007/s10851-024-01199-6 | a diffusion equation for improving the robustness of deep learning speckle removal model | speckle removal aims to smooth noise while preserving image boundaries and texture information. in recent years, speckle removal models based on deep learning methods have attracted a lot of attention. however, it was found that these models are less robust to adversarial attacks. the adversarial attack makes the image recovery of deep learning methods significantly less effective when the speckle noise distribution is almost unchanged. in purpose of addressing the above problem, we propose a diffusion equation-based speckle removal model that can improve the robustness of deep learning algorithms in this paper. the model utilizes a deep learning image prior and an image grayscale detection operator together to construct the coefficient function of the diffusion equation. among them, there is a high possibility that the deep learning image prior is inaccurate or even incorrect, but it will not affect the performance and the properties of the proposed diffusion equation model for noise removal. moreover, we analyze the robustness of the proposed diffusion equation model in terms of theoretical and numerical properties. experiments show that our proposed diffusion equation speckle removal model is not affected by adversarial attacks in any way and has stronger robustness. | [
"speckle removal",
"noise",
"image boundaries and texture information",
"recent years",
"speckle removal models",
"deep learning methods",
"a lot",
"attention",
"it",
"these models",
"adversarial attacks",
"the adversarial attack",
"the image recovery",
"deep learning methods",
"the speckle noise distribution",
"purpose",
"the above problem",
"we",
"a diffusion equation-based speckle removal model",
"that",
"the robustness",
"deep learning algorithms",
"this paper",
"the model",
"a deep learning image",
"an image grayscale detection operator",
"the coefficient function",
"the diffusion equation",
"them",
"a high possibility",
"the deep learning image",
"it",
"the performance",
"the properties",
"the proposed diffusion equation model",
"noise removal",
"we",
"the robustness",
"the proposed diffusion equation model",
"terms",
"theoretical and numerical properties",
"experiments",
"our proposed diffusion equation speckle removal model",
"adversarial attacks",
"any way",
"stronger robustness",
"recent years"
] |
Application of deep learning technique in next generation sequence experiments | [
"Su Özgür",
"Mehmet Orman"
] | In recent years, the widespread utilization of biological data processing technology has been driven by its cost-effectiveness. Consequently, next-generation sequencing (NGS) has become an integral component of biological research. NGS technologies enable the sequencing of billions of nucleotides in the entire genome, transcriptome, or specific target regions. This sequencing generates vast data matrices. Consequently, there is a growing demand for deep learning (DL) approaches, which employ multilayer artificial neural networks and systems capable of extracting meaningful information from these extensive data structures. In this study, the aim was to obtain optimized parameters and assess the prediction performance of deep learning and machine learning (ML) algorithms for binary classification in real and simulated whole genome data using a cloud-based system. The ART-simulated data and paired-end NGS (whole genome) data of Ch22, which includes ethnicity information, were evaluated using XGBoost, LightGBM, and DL algorithms. When the learning rate was set to 0.01 and 0.001, and the epoch values were updated to 500, 1000, and 2000 in the deep learning model for the ART simulated dataset, the median accuracy values of the ART models were as follows: 0.6320, 0.6800, and 0.7340 for epoch 0.01; and 0.6920, 0.7220, and 0.8020 for epoch 0.001, respectively. In comparison, the median accuracy values of the XGBoost and LightGBM models were 0.6990 and 0.6250 respectively. When the same process is repeated for Chr 22, the results are as follows: the median accuracy values of the DL models were 0.5290, 0.5420 and 0.5820 for epoch 0.01; and 0.5510, 0.5830 and 0.6040 for epoch 0.001, respectively. Additionally, the median accuracy values of the XGBoost and LightGBM models were 0.5760 and 0.5250, respectively. While the best classification estimates were obtained at 2000 epochs and a learning rate (LR) value of 0.001 for both real and simulated data, the XGBoost algorithm showed higher performance when the epoch value was 500 and the LR was 0.01. When dealing with class imbalance, the DL algorithm yielded similar and high Recall and Precision values. Conclusively, this study serves as a timely resource for genomic scientists, providing guidance on why, when, and how to effectively utilize deep learning/machine learning methods for the analysis of human genomic data. | 10.1186/s40537-023-00838-w | application of deep learning technique in next generation sequence experiments | in recent years, the widespread utilization of biological data processing technology has been driven by its cost-effectiveness. consequently, next-generation sequencing (ngs) has become an integral component of biological research. ngs technologies enable the sequencing of billions of nucleotides in the entire genome, transcriptome, or specific target regions. this sequencing generates vast data matrices. consequently, there is a growing demand for deep learning (dl) approaches, which employ multilayer artificial neural networks and systems capable of extracting meaningful information from these extensive data structures. in this study, the aim was to obtain optimized parameters and assess the prediction performance of deep learning and machine learning (ml) algorithms for binary classification in real and simulated whole genome data using a cloud-based system. the art-simulated data and paired-end ngs (whole genome) data of ch22, which includes ethnicity information, were evaluated using xgboost, lightgbm, and dl algorithms. when the learning rate was set to 0.01 and 0.001, and the epoch values were updated to 500, 1000, and 2000 in the deep learning model for the art simulated dataset, the median accuracy values of the art models were as follows: 0.6320, 0.6800, and 0.7340 for epoch 0.01; and 0.6920, 0.7220, and 0.8020 for epoch 0.001, respectively. in comparison, the median accuracy values of the xgboost and lightgbm models were 0.6990 and 0.6250 respectively. when the same process is repeated for chr 22, the results are as follows: the median accuracy values of the dl models were 0.5290, 0.5420 and 0.5820 for epoch 0.01; and 0.5510, 0.5830 and 0.6040 for epoch 0.001, respectively. additionally, the median accuracy values of the xgboost and lightgbm models were 0.5760 and 0.5250, respectively. while the best classification estimates were obtained at 2000 epochs and a learning rate (lr) value of 0.001 for both real and simulated data, the xgboost algorithm showed higher performance when the epoch value was 500 and the lr was 0.01. when dealing with class imbalance, the dl algorithm yielded similar and high recall and precision values. conclusively, this study serves as a timely resource for genomic scientists, providing guidance on why, when, and how to effectively utilize deep learning/machine learning methods for the analysis of human genomic data. | [
"recent years",
"the widespread utilization",
"biological data processing technology",
"its cost-effectiveness",
"(ngs",
"an integral component",
"biological research",
"ngs technologies",
"the sequencing",
"billions",
"nucleotides",
"the entire genome",
"specific target regions",
"vast data matrices",
"a growing demand",
"deep learning (dl) approaches",
"which",
"multilayer artificial neural networks",
"systems",
"meaningful information",
"these extensive data structures",
"this study",
"the aim",
"optimized parameters",
"the prediction performance",
"deep learning",
"machine learning",
"ml",
"algorithms",
"binary classification",
"real and simulated whole genome data",
"a cloud-based system",
"the art-simulated data",
"paired-end ngs",
"whole genome) data",
"ch22",
"which",
"ethnicity information",
"xgboost",
"lightgbm",
"the learning rate",
"the epoch values",
"the deep learning model",
"the art",
"the median accuracy values",
"the art models",
"epoch",
"epoch",
"comparison",
"the median accuracy values",
"the xgboost and lightgbm models",
"the same process",
"chr",
"the results",
"the median accuracy values",
"the dl models",
"epoch",
"epoch",
"the median accuracy values",
"the xgboost and lightgbm models",
"the best classification estimates",
"2000 epochs",
"a learning rate (lr) value",
"both real and simulated data",
"the xgboost algorithm",
"higher performance",
"the epoch value",
"the lr",
"class imbalance",
"the dl algorithm",
"similar and high recall",
"precision values",
"this study",
"a timely resource",
"genomic scientists",
"guidance",
"deep learning/machine learning methods",
"the analysis",
"human genomic data",
"recent years",
"billions",
"ch22",
"0.01",
"0.001",
"500, 1000",
"2000",
"0.6320",
"0.6800",
"0.7340",
"0.01",
"0.6920",
"0.7220",
"0.8020",
"0.001",
"0.6990",
"0.6250",
"chr",
"22",
"0.5290",
"0.5420",
"0.5820",
"0.01",
"0.5510",
"0.5830",
"0.6040",
"0.001",
"0.5760",
"0.5250",
"2000",
"0.001",
"500",
"0.01"
] |
Unsupervised deep learning for geometric feature detection and multilevel-multimodal image registration | [
"Mohamed Lajili",
"Zakaria Belhachmi",
"Maher Moakher",
"Anis Theljani"
] | Medical image registration is a crucial step in computer-assisted medical diagnosis, and has seen significant progress with the adoption of deep learning methods like convolutional neural networks (CNN). Creating a deep learning network for image registration is complex because humans can’t easily prepare or supervise the training data unless it’s very basic. This article presents an innovative approach to unsupervised deep learning-based multilevel image registration approach. We propose to develop a CNN to detect the geometric features, such as edges and thin structures, from images using a loss function derived from the Blake-Zisserman energy. This method enables the detection of discontinuities at different scales without relying on labeled data. Subsequently, we use this geometric information extracted from the input images, to define a second loss function and to perform our multimodal image registration process. Furthermore, we introduce a novel deep neural network architecture for multilevel image registration, offering enhanced precision and efficiency compared to traditional methods. Numerical simulations are employed to demonstrate the accuracy and relevance of our approach. We perform some numerical simulations to show the accuracy and the relevance of our approach for multimodal registration and its multilevel implementation. | 10.1007/s10489-024-05585-w | unsupervised deep learning for geometric feature detection and multilevel-multimodal image registration | medical image registration is a crucial step in computer-assisted medical diagnosis, and has seen significant progress with the adoption of deep learning methods like convolutional neural networks (cnn). creating a deep learning network for image registration is complex because humans can’t easily prepare or supervise the training data unless it’s very basic. this article presents an innovative approach to unsupervised deep learning-based multilevel image registration approach. we propose to develop a cnn to detect the geometric features, such as edges and thin structures, from images using a loss function derived from the blake-zisserman energy. this method enables the detection of discontinuities at different scales without relying on labeled data. subsequently, we use this geometric information extracted from the input images, to define a second loss function and to perform our multimodal image registration process. furthermore, we introduce a novel deep neural network architecture for multilevel image registration, offering enhanced precision and efficiency compared to traditional methods. numerical simulations are employed to demonstrate the accuracy and relevance of our approach. we perform some numerical simulations to show the accuracy and the relevance of our approach for multimodal registration and its multilevel implementation. | [
"medical image registration",
"a crucial step",
"computer-assisted medical diagnosis",
"significant progress",
"the adoption",
"deep learning methods",
"convolutional neural networks",
"cnn",
"a deep learning network",
"image registration",
"humans",
"the training data",
"it",
"this article",
"an innovative approach",
"unsupervised deep learning-based multilevel image registration approach",
"we",
"a cnn",
"the geometric features",
"edges",
"thin structures",
"images",
"a loss function",
"the blake-zisserman energy",
"this method",
"the detection",
"discontinuities",
"different scales",
"labeled data",
"we",
"this geometric information",
"the input images",
"a second loss function",
"our multimodal image registration process",
"we",
"a novel deep neural network architecture",
"multilevel image registration",
"enhanced precision",
"efficiency",
"traditional methods",
"numerical simulations",
"the accuracy",
"relevance",
"our approach",
"we",
"some numerical simulations",
"the accuracy",
"the relevance",
"our approach",
"multimodal registration",
"its multilevel implementation",
"cnn",
"cnn",
"second"
] |
Generative deep learning for data generation in natural hazard analysis: motivations, advances, challenges, and opportunities | [
"Zhengjing Ma",
"Gang Mei",
"Nengxiong Xu"
] | Data mining and analysis are critical for preventing or mitigating natural hazards. However, data availability in natural hazard analysis is experiencing unprecedented challenges due to economic, technical, and environmental constraints. Recently, generative deep learning has become an increasingly attractive solution to these challenges, which can augment, impute, or synthesize data based on these learned complex, high-dimensional probability distributions of data. Over the last several years, much research has demonstrated the remarkable capabilities of generative deep learning for addressing data-related problems in natural hazards analysis. Data processed by deep generative models can be utilized to describe the evolution or occurrence of natural hazards and contribute to subsequent natural hazard modeling. Here we present a comprehensive review concerning generative deep learning for data generation in natural hazard analysis. (1) We summarized the limitations associated with data availability in natural hazards analysis and identified the fundamental motivations for employing generative deep learning as a critical response to these challenges. (2) We discuss several deep generative models that have been applied to overcome the problems caused by limited data availability in natural hazards analysis. (3) We analyze advances in utilizing generative deep learning for data generation in natural hazard analysis. (4) We discuss challenges associated with leveraging generative deep learning in natural hazard analysis. (5) We explore further opportunities for leveraging generative deep learning in natural hazard analysis. This comprehensive review provides a detailed roadmap for scholars interested in applying generative models for data generation in natural hazard analysis. | 10.1007/s10462-024-10764-9 | generative deep learning for data generation in natural hazard analysis: motivations, advances, challenges, and opportunities | data mining and analysis are critical for preventing or mitigating natural hazards. however, data availability in natural hazard analysis is experiencing unprecedented challenges due to economic, technical, and environmental constraints. recently, generative deep learning has become an increasingly attractive solution to these challenges, which can augment, impute, or synthesize data based on these learned complex, high-dimensional probability distributions of data. over the last several years, much research has demonstrated the remarkable capabilities of generative deep learning for addressing data-related problems in natural hazards analysis. data processed by deep generative models can be utilized to describe the evolution or occurrence of natural hazards and contribute to subsequent natural hazard modeling. here we present a comprehensive review concerning generative deep learning for data generation in natural hazard analysis. (1) we summarized the limitations associated with data availability in natural hazards analysis and identified the fundamental motivations for employing generative deep learning as a critical response to these challenges. (2) we discuss several deep generative models that have been applied to overcome the problems caused by limited data availability in natural hazards analysis. (3) we analyze advances in utilizing generative deep learning for data generation in natural hazard analysis. (4) we discuss challenges associated with leveraging generative deep learning in natural hazard analysis. (5) we explore further opportunities for leveraging generative deep learning in natural hazard analysis. this comprehensive review provides a detailed roadmap for scholars interested in applying generative models for data generation in natural hazard analysis. | [
"data mining",
"analysis",
"natural hazards",
"data availability",
"natural hazard analysis",
"unprecedented challenges",
"economic, technical, and environmental constraints",
"generative deep learning",
"an increasingly attractive solution",
"these challenges",
"which",
"data",
"these learned complex, high-dimensional probability distributions",
"data",
"the last several years",
"much research",
"the remarkable capabilities",
"generative deep learning",
"data-related problems",
"natural hazards analysis",
"data",
"deep generative models",
"the evolution",
"occurrence",
"natural hazards",
"subsequent natural hazard modeling",
"we",
"a comprehensive review",
"generative deep learning",
"data generation",
"natural hazard analysis",
"we",
"the limitations",
"data availability",
"natural hazards analysis",
"the fundamental motivations",
"generative deep learning",
"a critical response",
"these challenges",
"we",
"several deep generative models",
"that",
"the problems",
"limited data availability",
"natural hazards analysis",
"we",
"advances",
"generative deep learning",
"data generation",
"natural hazard analysis",
"we",
"challenges",
"generative deep learning",
"natural hazard analysis",
"we",
"further opportunities",
"generative deep learning",
"natural hazard analysis",
"this comprehensive review",
"a detailed roadmap",
"scholars",
"generative models",
"data generation",
"natural hazard analysis",
"the last several years",
"1",
"2",
"3",
"4",
"5"
] |
A Novel Discrete Deep Learning–Based Cancer Classification Methodology | [
"Marzieh Soltani",
"Mehdi Khashei",
"Negar Bakhtiarvand"
] | Classification is one of the most well-known data mining branches used in diverse domains and fields. In the literature, many different classification techniques, such as statistical/intelligent, linear/nonlinear, fuzzy/crisp, shallow/deep, and single/hybrid, have been developed to cover data and systems with different characteristics. Intelligent classification approaches, especially deep learning classifiers, due to their unique features to provide accurate and efficient results, have recently attracted a lot of attention. However, in the learning process of the intelligent classifiers, a continuous distance-based cost function is used to estimate the connection weights, though the goal function in classification problems is discrete and using a continuous cost function in their learning process is unreasonable and inefficient. In this paper, a novel discrete learning–based methodology is proposed to estimate the connection weights of intelligent classifiers more accurately. In the proposed learning process, they are discretely adjusted and at once jumped to the target. This is in contrast to conventional continuous learning algorithms in which the connection weights are continuously adjusted and step by step near the target. In the present research, the proposed methodology is exemplarily applied to the deep neural network (DNN), which is one of the most recognized deep classification approaches, with a solid mathematical foundation and strong practical results in complex problems. Although the proposed methodology is just implemented on the DNN, it is a general methodology that can be similarly applied to other shallow and deep intelligent classification models. It can be generally demonstrated that the performance of the proposed discrete learning–based DNN (DIDNN) model, due to its consistency property, will not be worse than the conventional ones. The proposed DIDNN model is exemplarily evaluated on some well-known cancer classification benchmarks to illustrate the efficiency of the proposed model. The empirical results indicate that the proposed model outperforms the conventional versions of the selected deep approach in all data sets. Based on the performance analysis, the DIDNN model can improve the performance of the classic version by approximately 3.39%. Therefore, using this technique is an appropriate and effective alternative to conventional DNN-based models for classification purposes. | 10.1007/s12559-023-10170-3 | a novel discrete deep learning–based cancer classification methodology | classification is one of the most well-known data mining branches used in diverse domains and fields. in the literature, many different classification techniques, such as statistical/intelligent, linear/nonlinear, fuzzy/crisp, shallow/deep, and single/hybrid, have been developed to cover data and systems with different characteristics. intelligent classification approaches, especially deep learning classifiers, due to their unique features to provide accurate and efficient results, have recently attracted a lot of attention. however, in the learning process of the intelligent classifiers, a continuous distance-based cost function is used to estimate the connection weights, though the goal function in classification problems is discrete and using a continuous cost function in their learning process is unreasonable and inefficient. in this paper, a novel discrete learning–based methodology is proposed to estimate the connection weights of intelligent classifiers more accurately. in the proposed learning process, they are discretely adjusted and at once jumped to the target. this is in contrast to conventional continuous learning algorithms in which the connection weights are continuously adjusted and step by step near the target. in the present research, the proposed methodology is exemplarily applied to the deep neural network (dnn), which is one of the most recognized deep classification approaches, with a solid mathematical foundation and strong practical results in complex problems. although the proposed methodology is just implemented on the dnn, it is a general methodology that can be similarly applied to other shallow and deep intelligent classification models. it can be generally demonstrated that the performance of the proposed discrete learning–based dnn (didnn) model, due to its consistency property, will not be worse than the conventional ones. the proposed didnn model is exemplarily evaluated on some well-known cancer classification benchmarks to illustrate the efficiency of the proposed model. the empirical results indicate that the proposed model outperforms the conventional versions of the selected deep approach in all data sets. based on the performance analysis, the didnn model can improve the performance of the classic version by approximately 3.39%. therefore, using this technique is an appropriate and effective alternative to conventional dnn-based models for classification purposes. | [
"classification",
"the most well-known data mining branches",
"diverse domains",
"fields",
"the literature",
"many different classification techniques",
"single/hybrid",
"data",
"systems",
"different characteristics",
"intelligent classification approaches",
"especially deep learning classifiers",
"their unique features",
"accurate and efficient results",
"a lot",
"attention",
"the learning process",
"the intelligent classifiers",
"a continuous distance-based cost function",
"the connection weights",
"the goal function",
"classification problems",
"a continuous cost function",
"their learning process",
"this paper",
"a novel discrete learning",
"based methodology",
"the connection weights",
"intelligent classifiers",
"the proposed learning process",
"they",
"the target",
"this",
"contrast",
"conventional continuous learning algorithms",
"which",
"the connection weights",
"step",
"the target",
"the present research",
"the proposed methodology",
"the deep neural network",
"dnn",
"which",
"the most recognized deep classification approaches",
"a solid mathematical foundation",
"strong practical results",
"complex problems",
"the proposed methodology",
"the dnn",
"it",
"a general methodology",
"that",
"other shallow and deep intelligent classification models",
"it",
"the performance",
"the proposed discrete learning",
"based dnn (didnn) model",
"its consistency property",
"the conventional ones",
"the proposed didnn model",
"some well-known cancer classification benchmarks",
"the efficiency",
"the proposed model",
"the empirical results",
"the proposed model",
"the conventional versions",
"the selected deep approach",
"all data sets",
"the performance analysis",
"the didnn model",
"the performance",
"the classic version",
"approximately 3.39%",
"this technique",
"an appropriate and effective alternative",
"conventional dnn-based models",
"classification purposes",
"linear",
"approximately 3.39%"
] |
Deep reinforcement learning for inverting earthquake focal mechanism and its potential application to marine earthquakes | [
"Wenhuan Kuang",
"Zhihui Zou",
"Junhui Xing",
"Wei Wei"
] | Earthquake data are one of the key means by which to explore our planet. At a large scale, the layered structure of the Earth is revealed by the seismic waves of natural earthquakes that go deep into its inner core. At a local scale, seismology for exploration has successfully been employed to discover massive fossil energies. As the volume of recorded seismic data becomes greater, intelligent methods for processing such a volume of data are eagerly anticipated. In particular, earthquake focal mechanisms are important for assessing the severity of tsunamis, characterizing seismogenic faults, and investigating the stress perturbations that follow a major earthquake. Here, we report a novel deep reinforcement learning method for inverting the earthquake focal mechanism. Unlike more typical deep learning applications, which require a large training dataset, a deep reinforcement learning system learns by itself. We demonstrate the validity and efficacy of the proposed deep reinforcement learning method by applying it to the Mw 7.1 mainshock of the Ridgecrest earthquakes in southern California. In the foreseeable future, deep learning technologies may greatly contribute to our understanding of the oceanographic process. The proposed method may help us understand the mechanism of marine earthquakes. | 10.1007/s44295-024-00031-6 | deep reinforcement learning for inverting earthquake focal mechanism and its potential application to marine earthquakes | earthquake data are one of the key means by which to explore our planet. at a large scale, the layered structure of the earth is revealed by the seismic waves of natural earthquakes that go deep into its inner core. at a local scale, seismology for exploration has successfully been employed to discover massive fossil energies. as the volume of recorded seismic data becomes greater, intelligent methods for processing such a volume of data are eagerly anticipated. in particular, earthquake focal mechanisms are important for assessing the severity of tsunamis, characterizing seismogenic faults, and investigating the stress perturbations that follow a major earthquake. here, we report a novel deep reinforcement learning method for inverting the earthquake focal mechanism. unlike more typical deep learning applications, which require a large training dataset, a deep reinforcement learning system learns by itself. we demonstrate the validity and efficacy of the proposed deep reinforcement learning method by applying it to the mw 7.1 mainshock of the ridgecrest earthquakes in southern california. in the foreseeable future, deep learning technologies may greatly contribute to our understanding of the oceanographic process. the proposed method may help us understand the mechanism of marine earthquakes. | [
"earthquake data",
"the key means",
"which",
"our planet",
"a large scale",
"the layered structure",
"the earth",
"the seismic waves",
"natural earthquakes",
"that",
"its inner core",
"a local scale",
"seismology",
"exploration",
"massive fossil energies",
"the volume",
"recorded seismic data",
"greater, intelligent methods",
"such a volume",
"data",
"earthquake focal mechanisms",
"the severity",
"tsunamis",
"seismogenic faults",
"the stress perturbations",
"that",
"a major earthquake",
"we",
"a novel deep reinforcement learning method",
"the earthquake focal mechanism",
"more typical deep learning applications",
"which",
"a large training dataset",
"a deep reinforcement learning system",
"itself",
"we",
"the validity",
"efficacy",
"the proposed deep reinforcement learning method",
"it",
"the mw 7.1 mainshock",
"the ridgecrest earthquakes",
"southern california",
"the foreseeable future",
"deep learning technologies",
"our understanding",
"the oceanographic process",
"the proposed method",
"us",
"the mechanism",
"marine earthquakes",
"one",
"7.1",
"southern california"
] |
Advancements in hybrid approaches for brain tumor segmentation in MRI: a comprehensive review of machine learning and deep learning techniques | [
"Ravikumar Sajjanar",
"Umesh D. Dixit",
"Vittalkumar K Vagga"
] | Magnetic resonance imaging (MRI) brain tumour segmentation is essential for the diagnosis, planning, and follow-up of patients with brain tumours. In an effort to increase efficiency and accuracy, a number of machine learning and deep learning algorithms have been developed over time to automate the segmentation process. Hybrid strategies, which include the advantages of both machine learning and deep learning, have become more and more popular as viable options. This in-depth analysis covers the developments in hybrid techniques for MRI segmentation of brain tumours. The essential ideas of machine learning and deep learning approaches are then covered, with an emphasis on their individual advantages and disadvantages. After that, the review explores the numerous hybrid strategies put out in the literature. In hybrid approaches, various phases of the segmentation pipeline are combined with machine learning and deep learning techniques. Pre-processing, feature extraction, and post-processing are examples of these phases. The paper examines at various combinations of methods utilised at these phases, such as segmentation using deep learning models and feature extraction utilising conventional machine learning algorithms. The implementation of ensemble approaches, which integrate forecasts from various models to improve segmentation accuracy, is also explored. The research study also examines the properties of freely accessible brain tumour datasets, which are essential for developing and testing hybrid models. To address the difficulties of generalisation and robustness in brain tumour segmentation, it emphasises the necessity of vast, varied, and annotated datasets. Additionally, by contrasting them with conventional machine learning and deep learning techniques, the review analyses the effectiveness of hybrid approaches reported in the literature. This comprehensive research provides information on recent advancements in hybrid techniques for MRI segmenting brain tumours. It emphasises the potential for merging deep learning and machine learning methods to enhance the precision and effectiveness of brain tumour segmentation, ultimately assisting in improving patient diagnosis and treatment planning. | 10.1007/s11042-023-16654-6 | advancements in hybrid approaches for brain tumor segmentation in mri: a comprehensive review of machine learning and deep learning techniques | magnetic resonance imaging (mri) brain tumour segmentation is essential for the diagnosis, planning, and follow-up of patients with brain tumours. in an effort to increase efficiency and accuracy, a number of machine learning and deep learning algorithms have been developed over time to automate the segmentation process. hybrid strategies, which include the advantages of both machine learning and deep learning, have become more and more popular as viable options. this in-depth analysis covers the developments in hybrid techniques for mri segmentation of brain tumours. the essential ideas of machine learning and deep learning approaches are then covered, with an emphasis on their individual advantages and disadvantages. after that, the review explores the numerous hybrid strategies put out in the literature. in hybrid approaches, various phases of the segmentation pipeline are combined with machine learning and deep learning techniques. pre-processing, feature extraction, and post-processing are examples of these phases. the paper examines at various combinations of methods utilised at these phases, such as segmentation using deep learning models and feature extraction utilising conventional machine learning algorithms. the implementation of ensemble approaches, which integrate forecasts from various models to improve segmentation accuracy, is also explored. the research study also examines the properties of freely accessible brain tumour datasets, which are essential for developing and testing hybrid models. to address the difficulties of generalisation and robustness in brain tumour segmentation, it emphasises the necessity of vast, varied, and annotated datasets. additionally, by contrasting them with conventional machine learning and deep learning techniques, the review analyses the effectiveness of hybrid approaches reported in the literature. this comprehensive research provides information on recent advancements in hybrid techniques for mri segmenting brain tumours. it emphasises the potential for merging deep learning and machine learning methods to enhance the precision and effectiveness of brain tumour segmentation, ultimately assisting in improving patient diagnosis and treatment planning. | [
"mri",
"the diagnosis",
"planning",
"follow-up",
"patients",
"brain tumours",
"an effort",
"efficiency",
"accuracy",
"a number",
"machine learning",
"deep learning algorithms",
"time",
"the segmentation process",
"hybrid strategies",
"which",
"the advantages",
"both machine learning",
"deep learning",
"viable options",
"-depth",
"the developments",
"hybrid techniques",
"mri segmentation",
"brain tumours",
"the essential ideas",
"machine learning",
"deep learning approaches",
"an emphasis",
"their individual advantages",
"disadvantages",
"that",
"the review",
"the numerous hybrid strategies",
"the literature",
"hybrid approaches",
"various phases",
"the segmentation pipeline",
"machine learning",
"deep learning techniques",
"pre-processing, feature extraction",
"post-processing",
"examples",
"these phases",
"various combinations",
"methods",
"these phases",
"segmentation",
"deep learning models",
"extraction",
"conventional machine learning algorithms",
"the implementation",
"ensemble approaches",
"which",
"forecasts",
"various models",
"segmentation accuracy",
"the research study",
"the properties",
"freely accessible brain tumour datasets",
"which",
"hybrid models",
"the difficulties",
"generalisation",
"robustness",
"brain tumour segmentation",
"it",
"the necessity",
"datasets",
"them",
"conventional machine learning",
"deep learning techniques",
"the review",
"the effectiveness",
"hybrid approaches",
"the literature",
"this comprehensive research",
"information",
"recent advancements",
"hybrid techniques",
"mri segmenting brain tumours",
"it",
"the potential",
"deep learning and machine learning methods",
"the precision",
"effectiveness",
"brain tumour segmentation",
"patient diagnosis and treatment planning"
] |
Analysis of English Classroom Teaching Behavior Mode in Environmental Protection Field Based on Deep Learning | [
"Yuanyuan Li"
] | Learning is to use algorithms to enable machines to learn rules from a large amount of historical data, so as to intelligently identify new samples or predict the future. Deep learning can promote students’ understanding of knowledge, conduct in-depth processing of new knowledge, integrate it with the original knowledge, and apply it to new situations, solve intelligent audio–visual listening from the perspective of deep learning, and focus on cultivating students’ in-depth learning ability and individual differences in innovative thinking. As the main position of ecological education, schools should effectively strengthen the publicity and education of ecological ideas and low-carbon concepts, and integrate them into education and teaching to effectively improve students’ awareness of environmental protection. This study aims to explore the effectiveness of flipped classroom teaching model based on deep learning. Therefore, from the perspective of deep learning, this paper combs the theory of deep learning, constructs a new model of smart classroom, and provides ideas and directions for model reform. In this study, the flipped classroom teaching model based on deep learning was applied to English teaching, and an 8-week teaching experiment was conducted. In addition, this paper believes that it is of great practical significance to carry out environmental protection education with the help of English teaching. | 10.1007/s44196-024-00457-0 | analysis of english classroom teaching behavior mode in environmental protection field based on deep learning | learning is to use algorithms to enable machines to learn rules from a large amount of historical data, so as to intelligently identify new samples or predict the future. deep learning can promote students’ understanding of knowledge, conduct in-depth processing of new knowledge, integrate it with the original knowledge, and apply it to new situations, solve intelligent audio–visual listening from the perspective of deep learning, and focus on cultivating students’ in-depth learning ability and individual differences in innovative thinking. as the main position of ecological education, schools should effectively strengthen the publicity and education of ecological ideas and low-carbon concepts, and integrate them into education and teaching to effectively improve students’ awareness of environmental protection. this study aims to explore the effectiveness of flipped classroom teaching model based on deep learning. therefore, from the perspective of deep learning, this paper combs the theory of deep learning, constructs a new model of smart classroom, and provides ideas and directions for model reform. in this study, the flipped classroom teaching model based on deep learning was applied to english teaching, and an 8-week teaching experiment was conducted. in addition, this paper believes that it is of great practical significance to carry out environmental protection education with the help of english teaching. | [
"algorithms",
"machines",
"rules",
"a large amount",
"historical data",
"new samples",
"the future",
"deep learning",
"students’ understanding",
"knowledge",
"depth",
"new knowledge",
"it",
"the original knowledge",
"it",
"new situations",
"intelligent audio–visual listening",
"the perspective",
"deep learning",
"students",
"depth",
"individual differences",
"innovative thinking",
"the main position",
"ecological education",
"schools",
"the publicity",
"education",
"ecological ideas",
"low-carbon concepts",
"them",
"education",
"teaching",
"students’ awareness",
"environmental protection",
"this study",
"the effectiveness",
"flipped classroom teaching model",
"deep learning",
"the perspective",
"deep learning",
"this paper",
"the theory",
"deep learning",
"a new model",
"smart classroom",
"ideas",
"directions",
"model reform",
"this study",
"the flipped classroom teaching model",
"deep learning",
"english teaching",
"an 8-week teaching experiment",
"addition",
"this paper",
"it",
"great practical significance",
"environmental protection education",
"the help",
"english teaching",
"english",
"8-week",
"english"
] |
Deep learning-based point cloud upsampling: a review of recent trends | [
"Soonjo Kwon",
"Ji-Hyeon Hur",
"Hyungki Kim"
] | Point clouds are acquired primarily using 3D scanners and are used for product inspection and reverse engineering. The quality of the point cloud varies depending on the scanning environment and scanner specifications. The quality of the point cloud has a significant impact on the accuracy of automatic or manual modeling. In response, various point cloud post-processing technologies are being developed. Point cloud upsampling is a technique to improve the resolution of point clouds, and the purpose of upsampling is to generate additional points to express the target object more accurately and in higher detail. This technology is important in areas where high-resolution 3D representation is required, and approaches based on deep learning have been recently gaining attention. Deep learning-based point cloud upsampling research can be classified as surface consolidation or edge consolidation research depending on the target regions to be consolidated, and as supervised or self-supervised learning depending on the type of learning approaches. This study examines the latest research trends in deep learning-based point cloud sampling, analyzes the issues and limitations of each research category, and proposes future research directions.Graphical abstract | 10.1007/s42791-023-00058-6 | deep learning-based point cloud upsampling: a review of recent trends | point clouds are acquired primarily using 3d scanners and are used for product inspection and reverse engineering. the quality of the point cloud varies depending on the scanning environment and scanner specifications. the quality of the point cloud has a significant impact on the accuracy of automatic or manual modeling. in response, various point cloud post-processing technologies are being developed. point cloud upsampling is a technique to improve the resolution of point clouds, and the purpose of upsampling is to generate additional points to express the target object more accurately and in higher detail. this technology is important in areas where high-resolution 3d representation is required, and approaches based on deep learning have been recently gaining attention. deep learning-based point cloud upsampling research can be classified as surface consolidation or edge consolidation research depending on the target regions to be consolidated, and as supervised or self-supervised learning depending on the type of learning approaches. this study examines the latest research trends in deep learning-based point cloud sampling, analyzes the issues and limitations of each research category, and proposes future research directions.graphical abstract | [
"point clouds",
"3d scanners",
"product inspection",
"engineering",
"the quality",
"the point cloud",
"the scanning environment",
"scanner specifications",
"the quality",
"the point cloud",
"a significant impact",
"the accuracy",
"automatic or manual modeling",
"response",
"various point cloud post-processing technologies",
"point cloud upsampling",
"a technique",
"the resolution",
"point clouds",
"the purpose",
"upsampling",
"additional points",
"the target object",
"higher detail",
"this technology",
"areas",
"high-resolution 3d representation",
"approaches",
"deep learning",
"attention",
"deep learning-based point cloud upsampling research",
"surface consolidation",
"edge consolidation research",
"the target regions",
"supervised or self-supervised learning",
"the type",
"approaches",
"this study",
"the latest research trends",
"deep learning-based point cloud sampling",
"the issues",
"limitations",
"each research category",
"future research",
"directions.graphical abstract",
"3d",
"3d"
] |
Diagnosis of EV Gearbox Bearing Fault Using Deep Learning-Based Signal Processing | [
"Kicheol Jeong",
"Chulwoo Moon"
] | The gearbox of an electric vehicle operates under the high load torque and axial load of electric vehicles. In particular, the bearings that support the shaft of the gearbox are subjected to several tons of axial load, and as the mileage increases, fault occurs on bearing rolling elements frequently. Such bearing fault has a serious impact on driving comfort and vehicle safety, however, bearing faults are diagnosed by human experts nowadays, and algorithm-based electric vehicle bearing fault diagnosis has not been implemented. Therefore, in this paper, a deep learning-based bearing vibration signal processing method to diagnose bearing fault in electric vehicle gearboxes is proposed. The proposed method consists of a deep neural network learning stage and an application stage of the pre-trained neural network. In the deep neural network learning stage, supervised learning is carried out based on two acceleration sensors. In the neural network application stage, signal processing of a single accelerometer signal is performed through a pre-trained neural network. In conclusion, the pre-trained neural network makes bearing fault signals stand out and can utilize these signals to extract frequency characteristics of bearing fault. | 10.1007/s12239-024-00094-8 | diagnosis of ev gearbox bearing fault using deep learning-based signal processing | the gearbox of an electric vehicle operates under the high load torque and axial load of electric vehicles. in particular, the bearings that support the shaft of the gearbox are subjected to several tons of axial load, and as the mileage increases, fault occurs on bearing rolling elements frequently. such bearing fault has a serious impact on driving comfort and vehicle safety, however, bearing faults are diagnosed by human experts nowadays, and algorithm-based electric vehicle bearing fault diagnosis has not been implemented. therefore, in this paper, a deep learning-based bearing vibration signal processing method to diagnose bearing fault in electric vehicle gearboxes is proposed. the proposed method consists of a deep neural network learning stage and an application stage of the pre-trained neural network. in the deep neural network learning stage, supervised learning is carried out based on two acceleration sensors. in the neural network application stage, signal processing of a single accelerometer signal is performed through a pre-trained neural network. in conclusion, the pre-trained neural network makes bearing fault signals stand out and can utilize these signals to extract frequency characteristics of bearing fault. | [
"the gearbox",
"an electric vehicle",
"the high load torque",
"axial load",
"electric vehicles",
"the bearings",
"that",
"the shaft",
"the gearbox",
"several tons",
"axial load",
"the mileage increases",
"fault",
"rolling elements",
"fault",
"a serious impact",
"comfort",
"vehicle safety",
"faults",
"human experts",
"algorithm-based electric vehicle",
"fault diagnosis",
"this paper",
"a deep learning-based bearing vibration signal processing method",
"fault",
"electric vehicle gearboxes",
"the proposed method",
"a deep neural network learning stage",
"an application stage",
"the pre-trained neural network",
"the deep neural network learning stage",
"supervised learning",
"two acceleration sensors",
"the neural network application stage",
"signal processing",
"a single accelerometer signal",
"a pre-trained neural network",
"conclusion",
"the pre-trained neural network",
"fault signals",
"these signals",
"frequency characteristics",
"fault",
"several tons",
"two"
] |
Sentiment analysis using a deep ensemble learning model | [
"Muhammet Sinan Başarslan",
"Fatih Kayaalp"
] | The coronavirus pandemic has kept people away from social life and this has led to an increase in the use of social media over the past two years. Thanks to social media, people can now instantly share their thoughts on various topics such as their favourite movies, restaurants, hotels, etc. This has created a huge amount of data and many researchers from different sciences have focused on analysing this data. Natural Language Processing (NLP) is one of these areas of computer science that uses artificial technologies. Sentiment analysis is also one of the tasks of NLP, which is based on extracting emotions from huge post data. In this study, sentiment analysis was performed on two datasets of tweets about coronavirus and TripAdvisor hotel reviews. A frequency-based word representation method (Term Frequency-Inverse Document Frequency (TF-IDF)) and a prediction-based Word2Vec word embedding method were used to vectorise the datasets. Sentiment analysis models were then built using single machine learning methods (Decision Trees-DT, K-Nearest Neighbour-KNN, Naive Bayes-NB and Support Vector Machine-SVM), single deep learning methods (Long Short Term Memory-LSTM, Recurrent Neural Network-RNN) and heterogeneous ensemble learning methods (Stacking and Majority Voting) based on these single machine learning and deep learning methods. Accuracy was used as a performance measure. The heterogeneous model with stacking (LSTM-RNN) has outperformed the other models with accuracy values of 0.864 on the coronavirus dataset and 0.898 on the Trip Advisor dataset and they have been evaluated as promising results when compared to the literature. It has been observed that the use of single methods as an ensemble gives better results, which is consistent with the literature, which is a step forward in the detection of sentiments through posts. Investigating the performance of heterogeneous ensemble learning models based on different algorithms in sentiment analysis tasks is planned as future work. | 10.1007/s11042-023-17278-6 | sentiment analysis using a deep ensemble learning model | the coronavirus pandemic has kept people away from social life and this has led to an increase in the use of social media over the past two years. thanks to social media, people can now instantly share their thoughts on various topics such as their favourite movies, restaurants, hotels, etc. this has created a huge amount of data and many researchers from different sciences have focused on analysing this data. natural language processing (nlp) is one of these areas of computer science that uses artificial technologies. sentiment analysis is also one of the tasks of nlp, which is based on extracting emotions from huge post data. in this study, sentiment analysis was performed on two datasets of tweets about coronavirus and tripadvisor hotel reviews. a frequency-based word representation method (term frequency-inverse document frequency (tf-idf)) and a prediction-based word2vec word embedding method were used to vectorise the datasets. sentiment analysis models were then built using single machine learning methods (decision trees-dt, k-nearest neighbour-knn, naive bayes-nb and support vector machine-svm), single deep learning methods (long short term memory-lstm, recurrent neural network-rnn) and heterogeneous ensemble learning methods (stacking and majority voting) based on these single machine learning and deep learning methods. accuracy was used as a performance measure. the heterogeneous model with stacking (lstm-rnn) has outperformed the other models with accuracy values of 0.864 on the coronavirus dataset and 0.898 on the trip advisor dataset and they have been evaluated as promising results when compared to the literature. it has been observed that the use of single methods as an ensemble gives better results, which is consistent with the literature, which is a step forward in the detection of sentiments through posts. investigating the performance of heterogeneous ensemble learning models based on different algorithms in sentiment analysis tasks is planned as future work. | [
"people",
"social life",
"this",
"an increase",
"the use",
"social media",
"the past two years",
"social media",
"people",
"their thoughts",
"various topics",
"their favourite movies",
"restaurants",
"hotels",
"this",
"a huge amount",
"data",
"many researchers",
"different sciences",
"this data",
"natural language processing",
"nlp",
"these areas",
"computer science",
"that",
"artificial technologies",
"sentiment analysis",
"the tasks",
"nlp",
"which",
"emotions",
"huge post data",
"this study",
"sentiment analysis",
"two datasets",
"tweets",
"coronavirus and tripadvisor hotel reviews",
"a frequency-based word representation method",
"term frequency-inverse document frequency",
"tf-idf",
"a prediction-based word2vec word",
"embedding method",
"the datasets",
"sentiment analysis models",
"single machine learning methods",
"decision trees",
"dt",
"k-nearest neighbour-knn, naive bayes",
"vector machine-svm",
"single deep learning methods",
"long short term memory-lstm",
"recurrent neural network-rnn",
"heterogeneous ensemble learning methods",
"stacking and majority voting",
"these single machine learning",
"deep learning methods",
"accuracy",
"a performance measure",
"the heterogeneous model",
"lstm-rnn",
"the other models",
"accuracy values",
"the coronavirus dataset",
"the trip advisor dataset",
"they",
"results",
"the literature",
"it",
"the use",
"single methods",
"an ensemble",
"better results",
"which",
"the literature",
"which",
"a step",
"the detection",
"sentiments",
"posts",
"the performance",
"heterogeneous ensemble learning models",
"different algorithms",
"sentiment analysis tasks",
"future work",
"the past two years",
"two",
"accuracy",
"0.864",
"0.898"
] |
Deep Q-learning with hybrid quantum neural network on solving maze problems | [
"Hao-Yuan Chen",
"Yen-Jui Chang",
"Shih-Wei Liao",
"Ching-Ray Chang"
] | Quantum computing holds great potential for advancing the limitations of machine learning algorithms to handle higher dimensions of data and reduce overall training parameters in deep learning (DL) models. This study uses a trainable variational quantum circuit (VQC) on a gate-based quantum computing model to investigate the potential for quantum benefit in a model-free reinforcement learning problem. Through a comprehensive investigation and evaluation of the current model and capabilities of quantum computers, we designed and trained a novel hybrid quantum neural network based on the latest Qiskit and PyTorch framework. We compared its performance with a full-classical CNN with and without an incorporated VQC. Our research provides insights into the potential of deep quantum learning to solve a maze problem and, potentially, other reinforcement learning problems. We conclude that reinforcement learning problems can be practical with reasonable training epochs. Moreover, a comparative study of full-classical and hybrid quantum neural networks is discussed to understand these two approaches’ performance, advantages, and disadvantages to deep Q-learning problems, especially on larger-scale maze problems larger than 4\(\times \)4. | 10.1007/s42484-023-00137-w | deep q-learning with hybrid quantum neural network on solving maze problems | quantum computing holds great potential for advancing the limitations of machine learning algorithms to handle higher dimensions of data and reduce overall training parameters in deep learning (dl) models. this study uses a trainable variational quantum circuit (vqc) on a gate-based quantum computing model to investigate the potential for quantum benefit in a model-free reinforcement learning problem. through a comprehensive investigation and evaluation of the current model and capabilities of quantum computers, we designed and trained a novel hybrid quantum neural network based on the latest qiskit and pytorch framework. we compared its performance with a full-classical cnn with and without an incorporated vqc. our research provides insights into the potential of deep quantum learning to solve a maze problem and, potentially, other reinforcement learning problems. we conclude that reinforcement learning problems can be practical with reasonable training epochs. moreover, a comparative study of full-classical and hybrid quantum neural networks is discussed to understand these two approaches’ performance, advantages, and disadvantages to deep q-learning problems, especially on larger-scale maze problems larger than 4\(\times \)4. | [
"quantum computing",
"great potential",
"the limitations",
"machine learning algorithms",
"higher dimensions",
"data",
"overall training parameters",
"deep learning",
"(dl) models",
"this study",
"a trainable variational quantum circuit",
"vqc",
"a gate-based quantum computing model",
"the potential",
"quantum benefit",
"a model-free reinforcement learning problem",
"a comprehensive investigation",
"evaluation",
"the current model",
"capabilities",
"quantum computers",
"we",
"a novel hybrid quantum neural network",
"the latest qiskit and pytorch framework",
"we",
"its performance",
"a full-classical cnn",
"an incorporated vqc",
"our research",
"insights",
"the potential",
"deep quantum",
"a maze problem",
"potentially, other reinforcement learning problems",
"we",
"reinforcement learning problems",
"reasonable training epochs",
"a comparative study",
"full-classical and hybrid quantum neural networks",
"these two approaches’ performance",
"advantages",
"disadvantages",
"deep q-learning problems",
"larger-scale maze problems",
"4\\(\\times",
"quantum",
"quantum",
"quantum",
"quantum",
"quantum",
"two"
] |
Deep learning algorithms for hedging with frictions | [
"Xiaofei Shi",
"Daran Xu",
"Zhanhao Zhang"
] | This work studies the deep learning-based numerical algorithms for optimal hedging problems in markets with general convex transaction costs. Our main focus is on how these algorithms scale with the length of the trading time horizon. Based on the comparison results of the FBSDE solver by Han, Jentzen, and E (2018) and the Deep Hedging algorithm by Buehler, Gonon, Teichmann, and Wood (2019), we propose a Stable-Transfer Hedging (ST-Hedging) algorithm, to aggregate the convenience of the leading-order approximation formulas and the accuracy of the deep learning-based algorithms. Our ST-Hedging algorithm achieves the same state-of-the-art performance in short and moderately long time horizon as FBSDE solver and Deep Hedging, and generalize well to long time horizon when previous algorithms become suboptimal. With the transfer learning technique, ST-Hedging drastically reduce the training time, and shows great scalability to high-dimensional settings. This opens up new possibilities in model-based deep learning algorithms in economics, finance, and operational research, which takes advantage of the domain expert knowledge and the accuracy of the learning-based methods. | 10.1007/s42521-023-00075-z | deep learning algorithms for hedging with frictions | this work studies the deep learning-based numerical algorithms for optimal hedging problems in markets with general convex transaction costs. our main focus is on how these algorithms scale with the length of the trading time horizon. based on the comparison results of the fbsde solver by han, jentzen, and e (2018) and the deep hedging algorithm by buehler, gonon, teichmann, and wood (2019), we propose a stable-transfer hedging (st-hedging) algorithm, to aggregate the convenience of the leading-order approximation formulas and the accuracy of the deep learning-based algorithms. our st-hedging algorithm achieves the same state-of-the-art performance in short and moderately long time horizon as fbsde solver and deep hedging, and generalize well to long time horizon when previous algorithms become suboptimal. with the transfer learning technique, st-hedging drastically reduce the training time, and shows great scalability to high-dimensional settings. this opens up new possibilities in model-based deep learning algorithms in economics, finance, and operational research, which takes advantage of the domain expert knowledge and the accuracy of the learning-based methods. | [
"this work",
"the deep learning-based numerical algorithms",
"optimal hedging problems",
"markets",
"general convex transaction costs",
"our main focus",
"these algorithms",
"the length",
"the trading time horizon",
"the comparison results",
"the fbsde",
"han",
"jentzen",
"e",
"buehler",
"gonon",
"teichmann",
"wood",
"we",
"a stable-transfer hedging",
"st-hedging",
"the convenience",
"the leading-order approximation formulas",
"the accuracy",
"the deep learning-based algorithms",
"our st-hedging algorithm",
"the-art",
"short and moderately long time horizon",
"fbsde",
"long time horizon",
"previous algorithms",
"the transfer learning technique",
"st-hedging",
"the training time",
"great scalability",
"high-dimensional settings",
"this",
"new possibilities",
"model-based deep learning algorithms",
"economics",
"finance",
"operational research",
"which",
"advantage",
"the domain expert knowledge",
"the accuracy",
"the learning-based methods",
"han",
"2018",
"teichmann",
"2019"
] |
Comparative study and analysis on skin cancer detection using machine learning and deep learning algorithms | [
"V. Auxilia Osvin Nancy",
"P. Prabhavathy",
"Meenakshi S. Arya",
"B. Shamreen Ahamed"
] | Exposure to UV rays due to global warming can lead to sunburn and skin damage, ultimately resulting in skin cancer. Early prediction of this type of cancer is crucial. A detailed review in this paper explores various algorithms, including machine learning (ML) techniques as well as deep learning (DL) techniques. While deep learning strategies, particularly CNNs, are commonly employed for skin cancer identification and classification, there is also some usage of machine learning and hybrid approaches. These techniques have proven to be effective classifiers of skin lesions, offering promising results for early detection. The paper analyzes various researchers’ reviews on skin cancer diagnosis to identify a suitable methodology for improving diagnostic accuracy. A publicly available dataset of dermoscopic images retrieved from the ISIC archive has been trained and evaluated. Performance analysis is done, considering metrics such as test and validation accuracy. The results indicate that the RF(random forest) algorithm outperforms other machine learning algorithms in both scenarios, with accuracies of 58.57% without augmentation and 87.32% with augmentation. MobileNetv2, ensemble of Dense Net and Inceptionv3 exhibit superior performance. During training without augmentation, MobileNetv2 achieves an accuracy of 88.81%, while the ensemble model achieves an accuracy of 88.80%. With augmentation techniques applied, the accuracies improved to 97.58% and 97.50%, respectively. Furthermore, experiment with a customized convolutional neural network (CNN) model was also conducted, varying the number of layers and applying various hyperparameter tuning methodologies. Suitable architectures, including a CNN with 7 layers and batch normalization, a CNN with 5 layers, and a CNN with 3 layers were identified. These models achieved accuracies of 77.92%, 97.72%, and 98.02% on the raw data and augmentation datasets, respectively. The experimental results suggest that these techniques hold promise for integration into clinical settings, and further research and validation are necessary. The results highlight the effectiveness of transfer learning models, in achieving high accuracy rates. The findings support the future adoption of these techniques in clinical practice, pending further research and validation. | 10.1007/s11042-023-16422-6 | comparative study and analysis on skin cancer detection using machine learning and deep learning algorithms | exposure to uv rays due to global warming can lead to sunburn and skin damage, ultimately resulting in skin cancer. early prediction of this type of cancer is crucial. a detailed review in this paper explores various algorithms, including machine learning (ml) techniques as well as deep learning (dl) techniques. while deep learning strategies, particularly cnns, are commonly employed for skin cancer identification and classification, there is also some usage of machine learning and hybrid approaches. these techniques have proven to be effective classifiers of skin lesions, offering promising results for early detection. the paper analyzes various researchers’ reviews on skin cancer diagnosis to identify a suitable methodology for improving diagnostic accuracy. a publicly available dataset of dermoscopic images retrieved from the isic archive has been trained and evaluated. performance analysis is done, considering metrics such as test and validation accuracy. the results indicate that the rf(random forest) algorithm outperforms other machine learning algorithms in both scenarios, with accuracies of 58.57% without augmentation and 87.32% with augmentation. mobilenetv2, ensemble of dense net and inceptionv3 exhibit superior performance. during training without augmentation, mobilenetv2 achieves an accuracy of 88.81%, while the ensemble model achieves an accuracy of 88.80%. with augmentation techniques applied, the accuracies improved to 97.58% and 97.50%, respectively. furthermore, experiment with a customized convolutional neural network (cnn) model was also conducted, varying the number of layers and applying various hyperparameter tuning methodologies. suitable architectures, including a cnn with 7 layers and batch normalization, a cnn with 5 layers, and a cnn with 3 layers were identified. these models achieved accuracies of 77.92%, 97.72%, and 98.02% on the raw data and augmentation datasets, respectively. the experimental results suggest that these techniques hold promise for integration into clinical settings, and further research and validation are necessary. the results highlight the effectiveness of transfer learning models, in achieving high accuracy rates. the findings support the future adoption of these techniques in clinical practice, pending further research and validation. | [
"exposure",
"uv rays",
"global warming",
"sunburn",
"skin damage",
"skin cancer",
"early prediction",
"this type",
"cancer",
"a detailed review",
"this paper",
"various algorithms",
"machine learning",
") techniques",
"deep learning",
"(dl) techniques",
"deep learning strategies",
"particularly cnns",
"skin cancer identification",
"classification",
"some usage",
"machine learning",
"hybrid approaches",
"these techniques",
"effective classifiers",
"skin lesions",
"promising results",
"early detection",
"the paper",
"various researchers’ reviews",
"skin cancer diagnosis",
"a suitable methodology",
"diagnostic accuracy",
"a publicly available dataset",
"dermoscopic images",
"the isic archive",
"performance analysis",
"metrics",
"test",
"validation accuracy",
"the results",
"algorithm",
"other machine learning algorithms",
"both scenarios",
"accuracies",
"58.57%",
"augmentation",
"87.32%",
"augmentation",
"mobilenetv2",
"dense net",
"inceptionv3",
"superior performance",
"training",
"augmentation",
"mobilenetv2",
"an accuracy",
"88.81%",
"the ensemble model",
"an accuracy",
"88.80%",
"augmentation techniques",
"the accuracies",
"97.58%",
"97.50%",
"experiment",
"a customized convolutional neural network (cnn) model",
"the number",
"layers",
"various hyperparameter",
"methodologies",
"suitable architectures",
"a cnn",
"7 layers",
"batch normalization",
"a cnn",
"5 layers",
"a cnn",
"3 layers",
"these models",
"accuracies",
"77.92%",
"97.72%",
"98.02%",
"the raw data",
"augmentation datasets",
"the experimental results",
"these techniques",
"promise",
"integration",
"clinical settings",
"further research",
"validation",
"the results",
"the effectiveness",
"transfer learning models",
"high accuracy rates",
"the findings",
"the future adoption",
"these techniques",
"clinical practice",
"further research",
"validation",
"58.57%",
"87.32%",
"mobilenetv2",
"inceptionv3",
"mobilenetv2",
"88.81%",
"88.80%",
"97.58%",
"97.50%",
"cnn",
"cnn",
"7",
"cnn",
"5",
"cnn",
"3",
"77.92%",
"97.72%",
"98.02%"
] |
Enhancing trash classification in smart cities using federated deep learning | [
"Haroon Ahmed Khan",
"Syed Saud Naqvi",
"Abeer A. K. Alharbi",
"Salihah Alotaibi",
"Mohammed Alkhathami"
] | Efficient Waste management plays a crucial role to ensure clean and green environment in the smart cities. This study investigates the critical role of efficient trash classification in achieving sustainable solid waste management within smart city environments. We conduct a comparative analysis of various trash classification methods utilizing deep learning models built on convolutional neural networks (CNNs). Leveraging the PyTorch open-source framework and the TrashBox dataset, we perform experiments involving ten unique deep neural network models. Our approach aims to maximize training accuracy. Through extensive experimentation, we observe the consistent superiority of the ResNext-101 model compared to others, achieving exceptional training, validation, and test accuracies. These findings illuminate the potential of CNN-based techniques in significantly advancing trash classification for optimized solid waste management within smart city initiatives. Lastly, this study presents a distributed framework based on federated learning that can be used to optimize the performance of a combination of CNN models for trash detection. | 10.1038/s41598-024-62003-4 | enhancing trash classification in smart cities using federated deep learning | efficient waste management plays a crucial role to ensure clean and green environment in the smart cities. this study investigates the critical role of efficient trash classification in achieving sustainable solid waste management within smart city environments. we conduct a comparative analysis of various trash classification methods utilizing deep learning models built on convolutional neural networks (cnns). leveraging the pytorch open-source framework and the trashbox dataset, we perform experiments involving ten unique deep neural network models. our approach aims to maximize training accuracy. through extensive experimentation, we observe the consistent superiority of the resnext-101 model compared to others, achieving exceptional training, validation, and test accuracies. these findings illuminate the potential of cnn-based techniques in significantly advancing trash classification for optimized solid waste management within smart city initiatives. lastly, this study presents a distributed framework based on federated learning that can be used to optimize the performance of a combination of cnn models for trash detection. | [
"efficient waste management",
"a crucial role",
"clean and green environment",
"the smart cities",
"this study",
"the critical role",
"efficient trash classification",
"sustainable solid waste management",
"smart city environments",
"we",
"a comparative analysis",
"various trash classification methods",
"deep learning models",
"convolutional neural networks",
"cnns",
"the pytorch open-source framework",
"the trashbox dataset",
"we",
"experiments",
"ten unique deep neural network models",
"our approach",
"training accuracy",
"extensive experimentation",
"we",
"the consistent superiority",
"the resnext-101 model",
"others",
"exceptional training",
"validation",
"test accuracies",
"these findings",
"the potential",
"cnn-based techniques",
"significantly advancing trash classification",
"optimized solid waste management",
"smart city initiatives",
"this study",
"a distributed framework",
"federated learning",
"that",
"the performance",
"a combination",
"cnn models",
"trash detection",
"smart city",
"cnn",
"smart city",
"cnn"
] |
Interdisciplinary approach to identify language markers for post-traumatic stress disorder using machine learning and deep learning | [
"Robin Quillivic",
"Frédérique Gayraud",
"Yann Auxéméry",
"Laurent Vanni",
"Denis Peschanski",
"Francis Eustache",
"Jacques Dayan",
"Salma Mesmoudi"
] | Post-traumatic stress disorder (PTSD) lacks clear biomarkers in clinical practice. Language as a potential diagnostic biomarker for PTSD is investigated in this study. We analyze an original cohort of 148 individuals exposed to the November 13, 2015, terrorist attacks in Paris. The interviews, conducted 5–11 months after the event, include individuals from similar socioeconomic backgrounds exposed to the same incident, responding to identical questions and using uniform PTSD measures. Using this dataset to collect nuanced insights that might be clinically relevant, we propose a three-step interdisciplinary methodology that integrates expertise from psychiatry, linguistics, and the Natural Language Processing (NLP) community to examine the relationship between language and PTSD. The first step assesses a clinical psychiatrist's ability to diagnose PTSD using interview transcription alone. The second step uses statistical analysis and machine learning models to create language features based on psycholinguistic hypotheses and evaluate their predictive strength. The third step is the application of a hypothesis-free deep learning approach to the classification of PTSD in our cohort. Results show that the clinical psychiatrist achieved a diagnosis of PTSD with an AUC of 0.72. This is comparable to a gold standard questionnaire (Area Under Curve (AUC) ≈ 0.80). The machine learning model achieved a diagnostic AUC of 0.69. The deep learning approach achieved an AUC of 0.64. An examination of model error informs our discussion. Importantly, the study controls for confounding factors, establishes associations between language and DSM-5 subsymptoms, and integrates automated methods with qualitative analysis. This study provides a direct and methodologically robust description of the relationship between PTSD and language. Our work lays the groundwork for advancing early and accurate diagnosis and using linguistic markers to assess the effectiveness of pharmacological treatments and psychotherapies. | 10.1038/s41598-024-61557-7 | interdisciplinary approach to identify language markers for post-traumatic stress disorder using machine learning and deep learning | post-traumatic stress disorder (ptsd) lacks clear biomarkers in clinical practice. language as a potential diagnostic biomarker for ptsd is investigated in this study. we analyze an original cohort of 148 individuals exposed to the november 13, 2015, terrorist attacks in paris. the interviews, conducted 5–11 months after the event, include individuals from similar socioeconomic backgrounds exposed to the same incident, responding to identical questions and using uniform ptsd measures. using this dataset to collect nuanced insights that might be clinically relevant, we propose a three-step interdisciplinary methodology that integrates expertise from psychiatry, linguistics, and the natural language processing (nlp) community to examine the relationship between language and ptsd. the first step assesses a clinical psychiatrist's ability to diagnose ptsd using interview transcription alone. the second step uses statistical analysis and machine learning models to create language features based on psycholinguistic hypotheses and evaluate their predictive strength. the third step is the application of a hypothesis-free deep learning approach to the classification of ptsd in our cohort. results show that the clinical psychiatrist achieved a diagnosis of ptsd with an auc of 0.72. this is comparable to a gold standard questionnaire (area under curve (auc) ≈ 0.80). the machine learning model achieved a diagnostic auc of 0.69. the deep learning approach achieved an auc of 0.64. an examination of model error informs our discussion. importantly, the study controls for confounding factors, establishes associations between language and dsm-5 subsymptoms, and integrates automated methods with qualitative analysis. this study provides a direct and methodologically robust description of the relationship between ptsd and language. our work lays the groundwork for advancing early and accurate diagnosis and using linguistic markers to assess the effectiveness of pharmacological treatments and psychotherapies. | [
"post-traumatic stress disorder",
"ptsd",
"clear biomarkers",
"clinical practice",
"language",
"a potential diagnostic biomarker",
"ptsd",
"this study",
"we",
"an original cohort",
"148 individuals",
"the november",
"terrorist attacks",
"paris",
"the interviews",
"the event",
"individuals",
"similar socioeconomic backgrounds",
"the same incident",
"identical questions",
"uniform ptsd measures",
"this dataset",
"nuanced insights",
"that",
"we",
"a three-step interdisciplinary methodology",
"that",
"expertise",
"psychiatry",
"linguistics",
"the natural language processing (nlp) community",
"the relationship",
"language",
"ptsd",
"the first step",
"a clinical psychiatrist's ability",
"ptsd",
"interview transcription",
"the second step",
"statistical analysis",
"machine learning models",
"language features",
"psycholinguistic hypotheses",
"their predictive strength",
"the third step",
"the application",
"a hypothesis-free deep learning approach",
"the classification",
"ptsd",
"our cohort",
"results",
"the clinical psychiatrist",
"a diagnosis",
"ptsd",
"an auc",
"this",
"a gold standard questionnaire",
"area",
"curve",
"auc",
"≈",
"the machine learning model",
"a diagnostic auc",
"the deep learning approach",
"an auc",
"an examination",
"model error",
"our discussion",
"the study",
"factors",
"associations",
"language",
"dsm-5",
"subsymptoms",
"automated methods",
"qualitative analysis",
"this study",
"a direct and methodologically robust description",
"the relationship",
"ptsd",
"language",
"our work",
"the groundwork",
"early and accurate diagnosis",
"linguistic markers",
"the effectiveness",
"pharmacological treatments",
"psychotherapies",
"148",
"november 13, 2015",
"paris",
"5–11 months",
"three",
"first",
"second",
"third",
"0.72",
"0.69",
"0.64"
] |
Study of Q-learning and deep Q-network learning control for a rotary inverted pendulum system | [
"Zied Ben Hazem"
] | The rotary inverted pendulum system (RIPS) is an underactuated mechanical system with highly nonlinear dynamics and it is difficult to control a RIPS using the classic control models. In the last few years, reinforcement learning (RL) has become a popular nonlinear control method. RL has a powerful potential to control systems with high non-linearity and complex dynamics, such as RIPS. Nevertheless, RL control for RIPS has not been well studied and there is limited research on the development and evaluation of this control method. In this paper, RL control algorithms are developed for the swing-up and stabilization control of a single-link rotary inverted pendulum (SLRIP) and compared with classic control methods such as PID and LQR. A physical model of the SLRIP system is created using the MATLAB/Simscape Toolbox, the model is used as a dynamic simulation in MATLAB/Simulink to train the RL agents. An agent trainer system with Q-learning (QL) and deep Q-network learning (DQNL) is proposed for the data training. Furthermore, agent actions are actuating the horizontal arm of the system and states are the angles and velocities of the pendulum and the horizontal arm. The reward is computed according to the angles of the pendulum and horizontal arm. The reward is zero when the pendulum attends the upright position. The RL algorithms are used without a deep understanding of the classical controllers and are used to implement the agent. Finally, the outcome indicates the effectiveness of the QL and DQNL algorithms compared to the conventional PID and LQR controllers. | 10.1007/s42452-024-05690-y | study of q-learning and deep q-network learning control for a rotary inverted pendulum system | the rotary inverted pendulum system (rips) is an underactuated mechanical system with highly nonlinear dynamics and it is difficult to control a rips using the classic control models. in the last few years, reinforcement learning (rl) has become a popular nonlinear control method. rl has a powerful potential to control systems with high non-linearity and complex dynamics, such as rips. nevertheless, rl control for rips has not been well studied and there is limited research on the development and evaluation of this control method. in this paper, rl control algorithms are developed for the swing-up and stabilization control of a single-link rotary inverted pendulum (slrip) and compared with classic control methods such as pid and lqr. a physical model of the slrip system is created using the matlab/simscape toolbox, the model is used as a dynamic simulation in matlab/simulink to train the rl agents. an agent trainer system with q-learning (ql) and deep q-network learning (dqnl) is proposed for the data training. furthermore, agent actions are actuating the horizontal arm of the system and states are the angles and velocities of the pendulum and the horizontal arm. the reward is computed according to the angles of the pendulum and horizontal arm. the reward is zero when the pendulum attends the upright position. the rl algorithms are used without a deep understanding of the classical controllers and are used to implement the agent. finally, the outcome indicates the effectiveness of the ql and dqnl algorithms compared to the conventional pid and lqr controllers. | [
"the rotary inverted pendulum system",
"rips",
"an underactuated mechanical system",
"highly nonlinear dynamics",
"it",
"a rips",
"the classic control models",
"the last few years",
"reinforcement learning",
"(rl",
"a popular nonlinear control method",
"rl",
"a powerful potential",
"systems",
"high non-linearity and complex dynamics",
"rips",
"rl control",
"rips",
"limited research",
"the development",
"evaluation",
"this control method",
"this paper",
"rl control algorithms",
"the swing-up and stabilization control",
"a single-link rotary inverted pendulum",
"slrip",
"classic control methods",
"pid",
"lqr",
"a physical model",
"the slrip system",
"the matlab/simscape toolbox",
"the model",
"a dynamic simulation",
"matlab/simulink",
"the rl agents",
"an agent trainer system",
"q-learning",
"(ql",
"deep q-network learning",
"dqnl",
"the data training",
"agent actions",
"the horizontal arm",
"the system",
"states",
"the angles",
"velocities",
"the pendulum",
"the horizontal arm",
"the reward",
"the angles",
"the pendulum",
"horizontal arm",
"the reward",
"the pendulum",
"the upright position",
"the rl algorithms",
"a deep understanding",
"the classical controllers",
"the agent",
"the outcome",
"the effectiveness",
"the ql and dqnl algorithms",
"the conventional pid and lqr controllers",
"the last few years",
"zero"
] |
Air combat maneuver decision based on deep reinforcement learning with auxiliary reward | [
"Tingyu Zhang",
"Yongshuai Wang",
"Mingwei Sun",
"Zengqiang Chen"
] | For air combat maneuvering decision, the sparse reward during the application of deep reinforcement learning limits the exploration efficiency of the agents. To address this challenge, we propose an auxiliary reward function considering the impact of angle, range, and altitude. Furthermore, we investigate the influences of the network nodes, layers, and the learning rate on decision system, and reasonable parameter ranges are provided, which can serve as a guideline. Finally, four typical air combat scenarios demonstrate good adaptability and effectiveness of the proposed scheme, and the auxiliary reward significantly improves the learning ability of deep Q network (DQN) by leading the agents to explore more intently. Compared with the original deep deterministic policy gradient and soft actor critic algorithm, the proposed method exhibits superior exploration capability with higher reward, indicating that the trained agent can adapt to different air combats with good performance. | 10.1007/s00521-024-09720-z | air combat maneuver decision based on deep reinforcement learning with auxiliary reward | for air combat maneuvering decision, the sparse reward during the application of deep reinforcement learning limits the exploration efficiency of the agents. to address this challenge, we propose an auxiliary reward function considering the impact of angle, range, and altitude. furthermore, we investigate the influences of the network nodes, layers, and the learning rate on decision system, and reasonable parameter ranges are provided, which can serve as a guideline. finally, four typical air combat scenarios demonstrate good adaptability and effectiveness of the proposed scheme, and the auxiliary reward significantly improves the learning ability of deep q network (dqn) by leading the agents to explore more intently. compared with the original deep deterministic policy gradient and soft actor critic algorithm, the proposed method exhibits superior exploration capability with higher reward, indicating that the trained agent can adapt to different air combats with good performance. | [
"air combat maneuvering decision",
"the sparse",
"the application",
"deep reinforcement learning",
"the exploration efficiency",
"the agents",
"this challenge",
"we",
"an auxiliary reward function",
"the impact",
"angle",
"range",
"altitude",
"we",
"the influences",
"the network nodes",
"layers",
"the learning rate",
"decision system",
"reasonable parameter ranges",
"which",
"a guideline",
"four typical air combat scenarios",
"good adaptability",
"effectiveness",
"the proposed scheme",
"the auxiliary reward",
"the learning ability",
"deep q network",
"dqn",
"the agents",
"the original deep deterministic policy gradient",
"soft actor critic",
"algorithm",
"the proposed method",
"superior exploration capability",
"higher reward",
"the trained agent",
"different air combats",
"good performance",
"four"
] |
Deep learning prediction of renal anomalies for prenatal ultrasound diagnosis | [
"Olivier X. Miguel",
"Emily Kaczmarek",
"Inok Lee",
"Robin Ducharme",
"Alysha L. J. Dingwall-Harvey",
"Ruth Rennicks White",
"Brigitte Bonin",
"Richard I. Aviv",
"Steven Hawken",
"Christine M. Armour",
"Kevin Dick",
"Mark C. Walker"
] | Deep learning algorithms have demonstrated remarkable potential in clinical diagnostics, particularly in the field of medical imaging. In this study, we investigated the application of deep learning models in early detection of fetal kidney anomalies. To provide an enhanced interpretation of those models’ predictions, we proposed an adapted two-class representation and developed a multi-class model interpretation approach for problems with more than two labels and variable hierarchical grouping of labels. Additionally, we employed the explainable AI (XAI) visualization tools Grad-CAM and HiResCAM, to gain insights into model predictions and identify reasons for misclassifications. The study dataset consisted of 969 ultrasound images from unique patients; 646 control images and 323 cases of kidney anomalies, including 259 cases of unilateral urinary tract dilation and 64 cases of unilateral multicystic dysplastic kidney. The best performing model achieved a cross-validated area under the ROC curve of 91.28% ± 0.52%, with an overall accuracy of 84.03% ± 0.76%, sensitivity of 77.39% ± 1.99%, and specificity of 87.35% ± 1.28%. Our findings emphasize the potential of deep learning models in predicting kidney anomalies from limited prenatal ultrasound imagery. The proposed adaptations in model representation and interpretation represent a novel solution to multi-class prediction problems. | 10.1038/s41598-024-59248-4 | deep learning prediction of renal anomalies for prenatal ultrasound diagnosis | deep learning algorithms have demonstrated remarkable potential in clinical diagnostics, particularly in the field of medical imaging. in this study, we investigated the application of deep learning models in early detection of fetal kidney anomalies. to provide an enhanced interpretation of those models’ predictions, we proposed an adapted two-class representation and developed a multi-class model interpretation approach for problems with more than two labels and variable hierarchical grouping of labels. additionally, we employed the explainable ai (xai) visualization tools grad-cam and hirescam, to gain insights into model predictions and identify reasons for misclassifications. the study dataset consisted of 969 ultrasound images from unique patients; 646 control images and 323 cases of kidney anomalies, including 259 cases of unilateral urinary tract dilation and 64 cases of unilateral multicystic dysplastic kidney. the best performing model achieved a cross-validated area under the roc curve of 91.28% ± 0.52%, with an overall accuracy of 84.03% ± 0.76%, sensitivity of 77.39% ± 1.99%, and specificity of 87.35% ± 1.28%. our findings emphasize the potential of deep learning models in predicting kidney anomalies from limited prenatal ultrasound imagery. the proposed adaptations in model representation and interpretation represent a novel solution to multi-class prediction problems. | [
"deep learning algorithms",
"remarkable potential",
"clinical diagnostics",
"the field",
"medical imaging",
"this study",
"we",
"the application",
"deep learning models",
"early detection",
"fetal kidney anomalies",
"an enhanced interpretation",
"those models’ predictions",
"we",
"an adapted two-class representation",
"a multi-class model interpretation approach",
"problems",
"more than two labels",
"variable hierarchical grouping",
"labels",
"we",
"the explainable ai (xai) visualization tools",
"grad-cam",
"hirescam",
"insights",
"model predictions",
"reasons",
"misclassifications",
"the study dataset",
"969 ultrasound images",
"unique patients",
"646 control images",
"323 cases",
"kidney anomalies",
"259 cases",
"unilateral urinary tract dilation",
"64 cases",
"unilateral multicystic dysplastic kidney",
"the best performing model",
"a cross-validated area",
"the roc curve",
"91.28% ±",
"0.52%",
"an overall accuracy",
"84.03%",
"±",
"0.76%",
"sensitivity",
"77.39% ±",
"1.99%",
"specificity",
"87.35%",
"±",
"1.28%",
"our findings",
"the potential",
"deep learning models",
"kidney anomalies",
"limited prenatal ultrasound imagery",
"the proposed adaptations",
"model representation",
"interpretation",
"a novel solution",
"multi-class prediction problems",
"two",
"more than two",
"969",
"646",
"323",
"259",
"64",
"roc",
"91.28%",
"0.52%",
"84.03%",
"0.76%",
"77.39%",
"1.99%",
"87.35%",
"1.28%"
] |
A review of deep learning and Generative Adversarial Networks applications in medical image analysis | [
"D. N. Sindhura",
"Radhika M. Pai",
"Shyamasunder N. Bhat",
"Manohara M. M. Pai"
] | Nowadays, computer-aided decision support systems (CADs) for the analysis of images have been a perennial technique in the medical imaging field. In CADs, deep learning algorithms are widely used to perform tasks like classification, identification of patterns, detection, etc. Deep learning models learn feature representations from images rather than handcrafted features. Hence, deep learning models are quickly becoming the state-of-the-art method to achieve good performances in different computer-aided decision-support systems in medical applications. Similarly, deep learning-based generative models called Generative Adversarial Networks (GANs) have recently been developed as a novel method to produce realistic-looking synthetic data. GANs are used in different domains, including medical imaging generation. The common problems, like class imbalance and a small dataset, in healthcare are well addressed by GANs, and it is a leading area of research. Segmentation, reconstruction, detection, denoising, registration, etc. are the important applications of GANs. So in this work, the successes of deep learning methods in segmentation, classification, cell structure and fracture detection, computer-aided identification, and GANs in synthetic medical image generation, segmentation, reconstruction, detection, denoising, and registration in recent times are reviewed. Lately, the review article concludes by raising research directions for DL models and GANs in medical applications. | 10.1007/s00530-024-01349-1 | a review of deep learning and generative adversarial networks applications in medical image analysis | nowadays, computer-aided decision support systems (cads) for the analysis of images have been a perennial technique in the medical imaging field. in cads, deep learning algorithms are widely used to perform tasks like classification, identification of patterns, detection, etc. deep learning models learn feature representations from images rather than handcrafted features. hence, deep learning models are quickly becoming the state-of-the-art method to achieve good performances in different computer-aided decision-support systems in medical applications. similarly, deep learning-based generative models called generative adversarial networks (gans) have recently been developed as a novel method to produce realistic-looking synthetic data. gans are used in different domains, including medical imaging generation. the common problems, like class imbalance and a small dataset, in healthcare are well addressed by gans, and it is a leading area of research. segmentation, reconstruction, detection, denoising, registration, etc. are the important applications of gans. so in this work, the successes of deep learning methods in segmentation, classification, cell structure and fracture detection, computer-aided identification, and gans in synthetic medical image generation, segmentation, reconstruction, detection, denoising, and registration in recent times are reviewed. lately, the review article concludes by raising research directions for dl models and gans in medical applications. | [
", computer-aided decision support systems",
"cads",
"the analysis",
"images",
"a perennial technique",
"the medical imaging field",
"cads",
"deep learning algorithms",
"tasks",
"classification",
"identification",
"patterns",
"detection",
"deep learning models",
"feature representations",
"images",
"handcrafted features",
"deep learning models",
"the-art",
"good performances",
"different computer-aided decision-support systems",
"medical applications",
"deep learning-based generative models",
"generative adversarial networks",
"gans",
"a novel method",
"realistic-looking synthetic data",
"gans",
"different domains",
"medical imaging generation",
"the common problems",
"class imbalance",
"a small dataset",
"healthcare",
"gans",
"it",
"a leading area",
"research",
"segmentation",
"reconstruction",
"detection",
"denoising",
"registration",
"the important applications",
"gans",
"this work",
"the successes",
"deep learning methods",
"segmentation",
"classification",
"cell structure",
"fracture detection",
"computer-aided identification",
"gans",
"synthetic medical image generation",
"segmentation",
"reconstruction",
"detection",
"denoising",
"registration",
"recent times",
"the review article",
"research directions",
"dl models",
"gans",
"medical applications"
] |
Deep learning segmentation of fibrous cap in intravascular optical coherence tomography images | [
"Juhwan Lee",
"Justin N. Kim",
"Luis A. P. Dallan",
"Vladislav N. Zimin",
"Ammar Hoori",
"Neda S. Hassani",
"Mohamed H. E. Makhlouf",
"Giulio Guagliumi",
"Hiram G. Bezerra",
"David L. Wilson"
] | Thin-cap fibroatheroma (TCFA) is a prominent risk factor for plaque rupture. Intravascular optical coherence tomography (IVOCT) enables identification of fibrous cap (FC), measurement of FC thicknesses, and assessment of plaque vulnerability. We developed a fully-automated deep learning method for FC segmentation. This study included 32,531 images across 227 pullbacks from two registries (TRANSFORM-OCT and UHCMC). Images were semi-automatically labeled using our OCTOPUS with expert editing using established guidelines. We employed preprocessing including guidewire shadow detection, lumen segmentation, pixel-shifting, and Gaussian filtering on raw IVOCT (r,θ) images. Data were augmented in a natural way by changing θ in spiral acquisitions and by changing intensity and noise values. We used a modified SegResNet and comparison networks to segment FCs. We employed transfer learning from our existing much larger, fully-labeled calcification IVOCT dataset to reduce deep-learning training. Postprocessing with a morphological operation enhanced segmentation performance. Overall, our method consistently delivered better FC segmentation results (Dice: 0.837 ± 0.012) than other deep-learning methods. Transfer learning reduced training time by 84% and reduced the need for more training samples. Our method showed a high level of generalizability, evidenced by highly-consistent segmentations across five-fold cross-validation (sensitivity: 85.0 ± 0.3%, Dice: 0.846 ± 0.011) and the held-out test (sensitivity: 84.9%, Dice: 0.816) sets. In addition, we found excellent agreement of FC thickness with ground truth (2.95 ± 20.73 µm), giving clinically insignificant bias. There was excellent reproducibility in pre- and post-stenting pullbacks (average FC angle: 200.9 ± 128.0°/202.0 ± 121.1°). Our fully automated, deep-learning FC segmentation method demonstrated excellent performance, generalizability, and reproducibility on multi-center datasets. It will be useful for multiple research purposes and potentially for planning stent deployments that avoid placing a stent edge over an FC. | 10.1038/s41598-024-55120-7 | deep learning segmentation of fibrous cap in intravascular optical coherence tomography images | thin-cap fibroatheroma (tcfa) is a prominent risk factor for plaque rupture. intravascular optical coherence tomography (ivoct) enables identification of fibrous cap (fc), measurement of fc thicknesses, and assessment of plaque vulnerability. we developed a fully-automated deep learning method for fc segmentation. this study included 32,531 images across 227 pullbacks from two registries (transform-oct and uhcmc). images were semi-automatically labeled using our octopus with expert editing using established guidelines. we employed preprocessing including guidewire shadow detection, lumen segmentation, pixel-shifting, and gaussian filtering on raw ivoct (r,θ) images. data were augmented in a natural way by changing θ in spiral acquisitions and by changing intensity and noise values. we used a modified segresnet and comparison networks to segment fcs. we employed transfer learning from our existing much larger, fully-labeled calcification ivoct dataset to reduce deep-learning training. postprocessing with a morphological operation enhanced segmentation performance. overall, our method consistently delivered better fc segmentation results (dice: 0.837 ± 0.012) than other deep-learning methods. transfer learning reduced training time by 84% and reduced the need for more training samples. our method showed a high level of generalizability, evidenced by highly-consistent segmentations across five-fold cross-validation (sensitivity: 85.0 ± 0.3%, dice: 0.846 ± 0.011) and the held-out test (sensitivity: 84.9%, dice: 0.816) sets. in addition, we found excellent agreement of fc thickness with ground truth (2.95 ± 20.73 µm), giving clinically insignificant bias. there was excellent reproducibility in pre- and post-stenting pullbacks (average fc angle: 200.9 ± 128.0°/202.0 ± 121.1°). our fully automated, deep-learning fc segmentation method demonstrated excellent performance, generalizability, and reproducibility on multi-center datasets. it will be useful for multiple research purposes and potentially for planning stent deployments that avoid placing a stent edge over an fc. | [
"thin-cap fibroatheroma",
"(tcfa",
"a prominent risk factor",
"plaque rupture",
"intravascular optical coherence tomography",
"ivoct",
"identification",
"fibrous cap",
"fc",
"measurement",
"fc thicknesses",
"assessment",
"plaque vulnerability",
"we",
"a fully-automated deep learning method",
"fc segmentation",
"this study",
"32,531 images",
"227 pullbacks",
"two registries",
"transform",
"oct",
"images",
"our octopus",
"expert editing",
"established guidelines",
"we",
"preprocessing",
"guidewire shadow detection",
"lumen segmentation",
"gaussian filtering",
"raw ivoct (r,θ) images",
"data",
"a natural way",
"θ",
"spiral acquisitions",
"intensity",
"noise values",
"we",
"a modified segresnet",
"comparison networks",
"we",
"transfer",
"our existing much larger, fully-labeled calcification ivoct dataset",
"deep-learning training",
"a morphological operation enhanced segmentation performance",
"our method",
"better fc segmentation results",
"dice",
"0.837 ±",
"other deep-learning methods",
"transfer",
"reduced training time",
"84%",
"the need",
"more training samples",
"our method",
"a high level",
"generalizability",
"highly-consistent segmentations",
"five-fold cross",
"validation (sensitivity",
"85.0 ±",
"0.3%",
"dice",
"0.846 ±",
"the held-out test",
"84.9%",
"dice",
"addition",
"we",
"excellent agreement",
"fc thickness",
"ground truth",
"2.95 ±",
"µm",
"clinically insignificant bias",
"excellent reproducibility",
"pre- and post-stenting pullbacks",
"average fc angle",
"200.9 ± 128.0°",
"±",
"our fully automated, deep-learning fc segmentation method",
"excellent performance",
"generalizability",
"reproducibility",
"multi-center datasets",
"it",
"multiple research purposes",
"stent deployments",
"that",
"a stent edge",
"an fc",
"32,531",
"227",
"two",
"0.837 ±",
"84%",
"five-fold",
"85.0",
"0.3%",
"0.846",
"0.011",
"84.9%",
"0.816",
"2.95",
"20.73",
"200.9",
"128.0",
"121.1"
] |
Deep Q-learning with hybrid quantum neural network on solving maze problems | [
"Hao-Yuan Chen",
"Yen-Jui Chang",
"Shih-Wei Liao",
"Ching-Ray Chang"
] | Quantum computing holds great potential for advancing the limitations of machine learning algorithms to handle higher dimensions of data and reduce overall training parameters in deep learning (DL) models. This study uses a trainable variational quantum circuit (VQC) on a gate-based quantum computing model to investigate the potential for quantum benefit in a model-free reinforcement learning problem. Through a comprehensive investigation and evaluation of the current model and capabilities of quantum computers, we designed and trained a novel hybrid quantum neural network based on the latest Qiskit and PyTorch framework. We compared its performance with a full-classical CNN with and without an incorporated VQC. Our research provides insights into the potential of deep quantum learning to solve a maze problem and, potentially, other reinforcement learning problems. We conclude that reinforcement learning problems can be practical with reasonable training epochs. Moreover, a comparative study of full-classical and hybrid quantum neural networks is discussed to understand these two approaches’ performance, advantages, and disadvantages to deep Q-learning problems, especially on larger-scale maze problems larger than 4\(\times \)4. | 10.1007/s42484-023-00137-w | deep q-learning with hybrid quantum neural network on solving maze problems | quantum computing holds great potential for advancing the limitations of machine learning algorithms to handle higher dimensions of data and reduce overall training parameters in deep learning (dl) models. this study uses a trainable variational quantum circuit (vqc) on a gate-based quantum computing model to investigate the potential for quantum benefit in a model-free reinforcement learning problem. through a comprehensive investigation and evaluation of the current model and capabilities of quantum computers, we designed and trained a novel hybrid quantum neural network based on the latest qiskit and pytorch framework. we compared its performance with a full-classical cnn with and without an incorporated vqc. our research provides insights into the potential of deep quantum learning to solve a maze problem and, potentially, other reinforcement learning problems. we conclude that reinforcement learning problems can be practical with reasonable training epochs. moreover, a comparative study of full-classical and hybrid quantum neural networks is discussed to understand these two approaches’ performance, advantages, and disadvantages to deep q-learning problems, especially on larger-scale maze problems larger than 4\(\times \)4. | [
"quantum computing",
"great potential",
"the limitations",
"machine learning algorithms",
"higher dimensions",
"data",
"overall training parameters",
"deep learning",
"(dl) models",
"this study",
"a trainable variational quantum circuit",
"vqc",
"a gate-based quantum computing model",
"the potential",
"quantum benefit",
"a model-free reinforcement learning problem",
"a comprehensive investigation",
"evaluation",
"the current model",
"capabilities",
"quantum computers",
"we",
"a novel hybrid quantum neural network",
"the latest qiskit and pytorch framework",
"we",
"its performance",
"a full-classical cnn",
"an incorporated vqc",
"our research",
"insights",
"the potential",
"deep quantum",
"a maze problem",
"potentially, other reinforcement learning problems",
"we",
"reinforcement learning problems",
"reasonable training epochs",
"a comparative study",
"full-classical and hybrid quantum neural networks",
"these two approaches’ performance",
"advantages",
"disadvantages",
"deep q-learning problems",
"larger-scale maze problems",
"4\\(\\times",
"quantum",
"quantum",
"quantum",
"quantum",
"quantum",
"two"
] |
Diagnosis of EV Gearbox Bearing Fault Using Deep Learning-Based Signal Processing | [
"Kicheol Jeong",
"Chulwoo Moon"
] | The gearbox of an electric vehicle operates under the high load torque and axial load of electric vehicles. In particular, the bearings that support the shaft of the gearbox are subjected to several tons of axial load, and as the mileage increases, fault occurs on bearing rolling elements frequently. Such bearing fault has a serious impact on driving comfort and vehicle safety, however, bearing faults are diagnosed by human experts nowadays, and algorithm-based electric vehicle bearing fault diagnosis has not been implemented. Therefore, in this paper, a deep learning-based bearing vibration signal processing method to diagnose bearing fault in electric vehicle gearboxes is proposed. The proposed method consists of a deep neural network learning stage and an application stage of the pre-trained neural network. In the deep neural network learning stage, supervised learning is carried out based on two acceleration sensors. In the neural network application stage, signal processing of a single accelerometer signal is performed through a pre-trained neural network. In conclusion, the pre-trained neural network makes bearing fault signals stand out and can utilize these signals to extract frequency characteristics of bearing fault. | 10.1007/s12239-024-00094-8 | diagnosis of ev gearbox bearing fault using deep learning-based signal processing | the gearbox of an electric vehicle operates under the high load torque and axial load of electric vehicles. in particular, the bearings that support the shaft of the gearbox are subjected to several tons of axial load, and as the mileage increases, fault occurs on bearing rolling elements frequently. such bearing fault has a serious impact on driving comfort and vehicle safety, however, bearing faults are diagnosed by human experts nowadays, and algorithm-based electric vehicle bearing fault diagnosis has not been implemented. therefore, in this paper, a deep learning-based bearing vibration signal processing method to diagnose bearing fault in electric vehicle gearboxes is proposed. the proposed method consists of a deep neural network learning stage and an application stage of the pre-trained neural network. in the deep neural network learning stage, supervised learning is carried out based on two acceleration sensors. in the neural network application stage, signal processing of a single accelerometer signal is performed through a pre-trained neural network. in conclusion, the pre-trained neural network makes bearing fault signals stand out and can utilize these signals to extract frequency characteristics of bearing fault. | [
"the gearbox",
"an electric vehicle",
"the high load torque",
"axial load",
"electric vehicles",
"the bearings",
"that",
"the shaft",
"the gearbox",
"several tons",
"axial load",
"the mileage increases",
"fault",
"rolling elements",
"fault",
"a serious impact",
"comfort",
"vehicle safety",
"faults",
"human experts",
"algorithm-based electric vehicle",
"fault diagnosis",
"this paper",
"a deep learning-based bearing vibration signal processing method",
"fault",
"electric vehicle gearboxes",
"the proposed method",
"a deep neural network learning stage",
"an application stage",
"the pre-trained neural network",
"the deep neural network learning stage",
"supervised learning",
"two acceleration sensors",
"the neural network application stage",
"signal processing",
"a single accelerometer signal",
"a pre-trained neural network",
"conclusion",
"the pre-trained neural network",
"fault signals",
"these signals",
"frequency characteristics",
"fault",
"several tons",
"two"
] |
Comparative study and analysis on skin cancer detection using machine learning and deep learning algorithms | [
"V. Auxilia Osvin Nancy",
"P. Prabhavathy",
"Meenakshi S. Arya",
"B. Shamreen Ahamed"
] | Exposure to UV rays due to global warming can lead to sunburn and skin damage, ultimately resulting in skin cancer. Early prediction of this type of cancer is crucial. A detailed review in this paper explores various algorithms, including machine learning (ML) techniques as well as deep learning (DL) techniques. While deep learning strategies, particularly CNNs, are commonly employed for skin cancer identification and classification, there is also some usage of machine learning and hybrid approaches. These techniques have proven to be effective classifiers of skin lesions, offering promising results for early detection. The paper analyzes various researchers’ reviews on skin cancer diagnosis to identify a suitable methodology for improving diagnostic accuracy. A publicly available dataset of dermoscopic images retrieved from the ISIC archive has been trained and evaluated. Performance analysis is done, considering metrics such as test and validation accuracy. The results indicate that the RF(random forest) algorithm outperforms other machine learning algorithms in both scenarios, with accuracies of 58.57% without augmentation and 87.32% with augmentation. MobileNetv2, ensemble of Dense Net and Inceptionv3 exhibit superior performance. During training without augmentation, MobileNetv2 achieves an accuracy of 88.81%, while the ensemble model achieves an accuracy of 88.80%. With augmentation techniques applied, the accuracies improved to 97.58% and 97.50%, respectively. Furthermore, experiment with a customized convolutional neural network (CNN) model was also conducted, varying the number of layers and applying various hyperparameter tuning methodologies. Suitable architectures, including a CNN with 7 layers and batch normalization, a CNN with 5 layers, and a CNN with 3 layers were identified. These models achieved accuracies of 77.92%, 97.72%, and 98.02% on the raw data and augmentation datasets, respectively. The experimental results suggest that these techniques hold promise for integration into clinical settings, and further research and validation are necessary. The results highlight the effectiveness of transfer learning models, in achieving high accuracy rates. The findings support the future adoption of these techniques in clinical practice, pending further research and validation. | 10.1007/s11042-023-16422-6 | comparative study and analysis on skin cancer detection using machine learning and deep learning algorithms | exposure to uv rays due to global warming can lead to sunburn and skin damage, ultimately resulting in skin cancer. early prediction of this type of cancer is crucial. a detailed review in this paper explores various algorithms, including machine learning (ml) techniques as well as deep learning (dl) techniques. while deep learning strategies, particularly cnns, are commonly employed for skin cancer identification and classification, there is also some usage of machine learning and hybrid approaches. these techniques have proven to be effective classifiers of skin lesions, offering promising results for early detection. the paper analyzes various researchers’ reviews on skin cancer diagnosis to identify a suitable methodology for improving diagnostic accuracy. a publicly available dataset of dermoscopic images retrieved from the isic archive has been trained and evaluated. performance analysis is done, considering metrics such as test and validation accuracy. the results indicate that the rf(random forest) algorithm outperforms other machine learning algorithms in both scenarios, with accuracies of 58.57% without augmentation and 87.32% with augmentation. mobilenetv2, ensemble of dense net and inceptionv3 exhibit superior performance. during training without augmentation, mobilenetv2 achieves an accuracy of 88.81%, while the ensemble model achieves an accuracy of 88.80%. with augmentation techniques applied, the accuracies improved to 97.58% and 97.50%, respectively. furthermore, experiment with a customized convolutional neural network (cnn) model was also conducted, varying the number of layers and applying various hyperparameter tuning methodologies. suitable architectures, including a cnn with 7 layers and batch normalization, a cnn with 5 layers, and a cnn with 3 layers were identified. these models achieved accuracies of 77.92%, 97.72%, and 98.02% on the raw data and augmentation datasets, respectively. the experimental results suggest that these techniques hold promise for integration into clinical settings, and further research and validation are necessary. the results highlight the effectiveness of transfer learning models, in achieving high accuracy rates. the findings support the future adoption of these techniques in clinical practice, pending further research and validation. | [
"exposure",
"uv rays",
"global warming",
"sunburn",
"skin damage",
"skin cancer",
"early prediction",
"this type",
"cancer",
"a detailed review",
"this paper",
"various algorithms",
"machine learning",
") techniques",
"deep learning",
"(dl) techniques",
"deep learning strategies",
"particularly cnns",
"skin cancer identification",
"classification",
"some usage",
"machine learning",
"hybrid approaches",
"these techniques",
"effective classifiers",
"skin lesions",
"promising results",
"early detection",
"the paper",
"various researchers’ reviews",
"skin cancer diagnosis",
"a suitable methodology",
"diagnostic accuracy",
"a publicly available dataset",
"dermoscopic images",
"the isic archive",
"performance analysis",
"metrics",
"test",
"validation accuracy",
"the results",
"algorithm",
"other machine learning algorithms",
"both scenarios",
"accuracies",
"58.57%",
"augmentation",
"87.32%",
"augmentation",
"mobilenetv2",
"dense net",
"inceptionv3",
"superior performance",
"training",
"augmentation",
"mobilenetv2",
"an accuracy",
"88.81%",
"the ensemble model",
"an accuracy",
"88.80%",
"augmentation techniques",
"the accuracies",
"97.58%",
"97.50%",
"experiment",
"a customized convolutional neural network (cnn) model",
"the number",
"layers",
"various hyperparameter",
"methodologies",
"suitable architectures",
"a cnn",
"7 layers",
"batch normalization",
"a cnn",
"5 layers",
"a cnn",
"3 layers",
"these models",
"accuracies",
"77.92%",
"97.72%",
"98.02%",
"the raw data",
"augmentation datasets",
"the experimental results",
"these techniques",
"promise",
"integration",
"clinical settings",
"further research",
"validation",
"the results",
"the effectiveness",
"transfer learning models",
"high accuracy rates",
"the findings",
"the future adoption",
"these techniques",
"clinical practice",
"further research",
"validation",
"58.57%",
"87.32%",
"mobilenetv2",
"inceptionv3",
"mobilenetv2",
"88.81%",
"88.80%",
"97.58%",
"97.50%",
"cnn",
"cnn",
"7",
"cnn",
"5",
"cnn",
"3",
"77.92%",
"97.72%",
"98.02%"
] |
Deep palmprint recognition algorithm based on self-supervised learning and uncertainty loss | [
"Rui Fan",
"Xiaohong Han"
] | With the rapid development of deep learning technology, an increasing number of people are adopting palmprint recognition algorithms based on deep learning for identity authentication. However, these algorithms are susceptible to factors such as palm placement, light source, and insufficient data sampling, resulting in poor recognition accuracy. To address these issues, this paper proposes a new end-to-end deep palmprint recognition algorithm (SSLAUL), which introduces self-supervised representation learning based on contextual prediction, utilizing unlabeled palmprint data for pre-training before introducing the trained parameters into the downstream model for fine-tuning. An uncertainty loss function is introduced into the downstream model, using the homoskedastic uncertainty as a benchmark to do adaptive weight adjustment for different loss functions dynamically. Channel and spatial attention mechanisms are also introduced to extract highly discriminative local features. In this paper, the algorithm is validated on publicly available IITD, CASIA, and PolyU palmprint datasets. The method always achieves the best recognition performance compared to other state-of-the-art algorithms. | 10.1007/s11760-024-03104-5 | deep palmprint recognition algorithm based on self-supervised learning and uncertainty loss | with the rapid development of deep learning technology, an increasing number of people are adopting palmprint recognition algorithms based on deep learning for identity authentication. however, these algorithms are susceptible to factors such as palm placement, light source, and insufficient data sampling, resulting in poor recognition accuracy. to address these issues, this paper proposes a new end-to-end deep palmprint recognition algorithm (sslaul), which introduces self-supervised representation learning based on contextual prediction, utilizing unlabeled palmprint data for pre-training before introducing the trained parameters into the downstream model for fine-tuning. an uncertainty loss function is introduced into the downstream model, using the homoskedastic uncertainty as a benchmark to do adaptive weight adjustment for different loss functions dynamically. channel and spatial attention mechanisms are also introduced to extract highly discriminative local features. in this paper, the algorithm is validated on publicly available iitd, casia, and polyu palmprint datasets. the method always achieves the best recognition performance compared to other state-of-the-art algorithms. | [
"the rapid development",
"deep learning technology",
"an increasing number",
"people",
"palmprint recognition algorithms",
"deep learning",
"identity authentication",
"these algorithms",
"factors",
"palm placement",
"light source",
"insufficient data sampling",
"poor recognition accuracy",
"these issues",
"this paper",
"end",
"recognition algorithm",
"sslaul",
"which",
"self-supervised representation learning",
"contextual prediction",
"unlabeled palmprint data",
"pre",
"-",
"training",
"the trained parameters",
"the downstream model",
"fine-tuning",
"an uncertainty loss function",
"the downstream model",
"the homoskedastic uncertainty",
"a benchmark",
"adaptive weight adjustment",
"different loss functions",
"channel",
"spatial attention mechanisms",
"highly discriminative local features",
"this paper",
"the algorithm",
"publicly available iitd",
"casia",
"polyu",
"datasets",
"the method",
"the best recognition performance",
"the-art",
"casia"
] |
Challenges and practices of deep learning model reengineering: A case study on computer vision | [
"Wenxin Jiang",
"Vishnu Banna",
"Naveen Vivek",
"Abhinav Goel",
"Nicholas Synovic",
"George K. Thiruvathukal",
"James C. Davis"
] | ContextMany engineering organizations are reimplementing and extending deep neural networks from the research community. We describe this process as deep learning model reengineering. Deep learning model reengineering — reusing, replicating, adapting, and enhancing state-of-the-art deep learning approaches — is challenging for reasons including under-documented reference models, changing requirements, and the cost of implementation and testing.ObjectivePrior work has characterized the challenges of deep learning model development, but as yet we know little about the deep learning model reengineering process and its common challenges. Prior work has examined DL systems from a “product” view, examining defects from projects regardless of the engineers’ purpose. Our study is focused on reengineering activities from a “process” view, and focuses on engineers specifically engaged in the reengineering process.MethodOur goal is to understand the characteristics and challenges of deep learning model reengineering. We conducted a mixed-methods case study of this phenomenon, focusing on the context of computer vision. Our results draw from two data sources: defects reported in open-source reeengineering projects, and interviews conducted with practitioners and the leaders of a reengineering team. From the defect data source, we analyzed 348 defects from 27 open-source deep learning projects. Meanwhile, our reengineering team replicated 7 deep learning models over two years; we interviewed 2 open-source contributors, 4 practitioners, and 6 reengineering team leaders to understand their experiences.ResultsOur results describe how deep learning-based computer vision techniques are reengineered, quantitatively analyze the distribution of defects in this process, and qualitatively discuss challenges and practices. We found that most defects (58%) are reported by re-users, and that reproducibility-related defects tend to be discovered during training (68% of them are). Our analysis shows that most environment defects (88%) are interface defects, and most environment defects (46%) are caused by API defects. We found that training defects have diverse symptoms and root causes. We identified four main challenges in the DL reengineering process: model operationalization, performance debugging, portability of DL operations, and customized data pipeline. Integrating our quantitative and qualitative data, we propose a novel reengineering workflow.ConclusionsOur findings inform several conclusion, including: standardizing model reengineering practices, developing validation tools to support model reengineering, automated support beyond manual model reengineering, and measuring additional unknown aspects of model reengineering. | 10.1007/s10664-024-10521-0 | challenges and practices of deep learning model reengineering: a case study on computer vision | contextmany engineering organizations are reimplementing and extending deep neural networks from the research community. we describe this process as deep learning model reengineering. deep learning model reengineering — reusing, replicating, adapting, and enhancing state-of-the-art deep learning approaches — is challenging for reasons including under-documented reference models, changing requirements, and the cost of implementation and testing.objectiveprior work has characterized the challenges of deep learning model development, but as yet we know little about the deep learning model reengineering process and its common challenges. prior work has examined dl systems from a “product” view, examining defects from projects regardless of the engineers’ purpose. our study is focused on reengineering activities from a “process” view, and focuses on engineers specifically engaged in the reengineering process.methodour goal is to understand the characteristics and challenges of deep learning model reengineering. we conducted a mixed-methods case study of this phenomenon, focusing on the context of computer vision. our results draw from two data sources: defects reported in open-source reeengineering projects, and interviews conducted with practitioners and the leaders of a reengineering team. from the defect data source, we analyzed 348 defects from 27 open-source deep learning projects. meanwhile, our reengineering team replicated 7 deep learning models over two years; we interviewed 2 open-source contributors, 4 practitioners, and 6 reengineering team leaders to understand their experiences.resultsour results describe how deep learning-based computer vision techniques are reengineered, quantitatively analyze the distribution of defects in this process, and qualitatively discuss challenges and practices. we found that most defects (58%) are reported by re-users, and that reproducibility-related defects tend to be discovered during training (68% of them are). our analysis shows that most environment defects (88%) are interface defects, and most environment defects (46%) are caused by api defects. we found that training defects have diverse symptoms and root causes. we identified four main challenges in the dl reengineering process: model operationalization, performance debugging, portability of dl operations, and customized data pipeline. integrating our quantitative and qualitative data, we propose a novel reengineering workflow.conclusionsour findings inform several conclusion, including: standardizing model reengineering practices, developing validation tools to support model reengineering, automated support beyond manual model reengineering, and measuring additional unknown aspects of model reengineering. | [
"contextmany engineering organizations",
"deep neural networks",
"the research community",
"we",
"this process",
"deep learning model reengineering",
"deep learning model reengineering",
"the-art",
"reasons",
"under-documented reference models",
"changing requirements",
"the cost",
"implementation",
"testing.objectiveprior work",
"the challenges",
"deep learning model development",
"we",
"the deep learning model reengineering process",
"its common challenges",
"prior work",
"dl systems",
"a “product” view",
"defects",
"projects",
"the engineers’ purpose",
"our study",
"activities",
"a “process” view",
"engineers",
"the reengineering",
"process.methodour goal",
"the characteristics",
"challenges",
"deep learning model reengineering",
"we",
"a mixed-methods case study",
"this phenomenon",
"the context",
"computer vision",
"our results",
"two data sources",
"defects",
"open-source reeengineering projects",
"interviews",
"practitioners",
"the leaders",
"a reengineering team",
"the defect data source",
"we",
"348 defects",
"27 open-source deep learning projects",
"our reengineering team",
"7 deep learning models",
"two years",
"we",
"2 open-source contributors",
"4 practitioners",
"6 reengineering team leaders",
"their experiences.resultsour results",
"how deep learning-based computer vision techniques",
"the distribution",
"defects",
"this process",
"challenges",
"practices",
"we",
"most defects",
"58%",
"re",
"-",
"users",
"reproducibility-related defects",
"training",
"68%",
"them",
"our analysis",
"most environment defects",
"88%",
"interface defects",
"most environment defects",
"46%",
"api defects",
"we",
"training defects",
"diverse symptoms",
"root causes",
"we",
"four main challenges",
"the dl reengineering process",
"model operationalization",
"performance debugging",
"portability",
"dl operations",
"customized data pipeline",
"our quantitative and qualitative data",
"we",
"a novel",
"workflow.conclusionsour findings",
"several conclusion",
"standardizing model reengineering practices",
"validation tools",
"model reengineering",
"automated support",
"manual model reengineering",
"additional unknown aspects",
"model reengineering",
"two",
"348",
"27",
"7",
"two years",
"2",
"4",
"6",
"58%",
"68%",
"88%",
"46%",
"four"
] |
Study of Q-learning and deep Q-network learning control for a rotary inverted pendulum system | [
"Zied Ben Hazem"
] | The rotary inverted pendulum system (RIPS) is an underactuated mechanical system with highly nonlinear dynamics and it is difficult to control a RIPS using the classic control models. In the last few years, reinforcement learning (RL) has become a popular nonlinear control method. RL has a powerful potential to control systems with high non-linearity and complex dynamics, such as RIPS. Nevertheless, RL control for RIPS has not been well studied and there is limited research on the development and evaluation of this control method. In this paper, RL control algorithms are developed for the swing-up and stabilization control of a single-link rotary inverted pendulum (SLRIP) and compared with classic control methods such as PID and LQR. A physical model of the SLRIP system is created using the MATLAB/Simscape Toolbox, the model is used as a dynamic simulation in MATLAB/Simulink to train the RL agents. An agent trainer system with Q-learning (QL) and deep Q-network learning (DQNL) is proposed for the data training. Furthermore, agent actions are actuating the horizontal arm of the system and states are the angles and velocities of the pendulum and the horizontal arm. The reward is computed according to the angles of the pendulum and horizontal arm. The reward is zero when the pendulum attends the upright position. The RL algorithms are used without a deep understanding of the classical controllers and are used to implement the agent. Finally, the outcome indicates the effectiveness of the QL and DQNL algorithms compared to the conventional PID and LQR controllers. | 10.1007/s42452-024-05690-y | study of q-learning and deep q-network learning control for a rotary inverted pendulum system | the rotary inverted pendulum system (rips) is an underactuated mechanical system with highly nonlinear dynamics and it is difficult to control a rips using the classic control models. in the last few years, reinforcement learning (rl) has become a popular nonlinear control method. rl has a powerful potential to control systems with high non-linearity and complex dynamics, such as rips. nevertheless, rl control for rips has not been well studied and there is limited research on the development and evaluation of this control method. in this paper, rl control algorithms are developed for the swing-up and stabilization control of a single-link rotary inverted pendulum (slrip) and compared with classic control methods such as pid and lqr. a physical model of the slrip system is created using the matlab/simscape toolbox, the model is used as a dynamic simulation in matlab/simulink to train the rl agents. an agent trainer system with q-learning (ql) and deep q-network learning (dqnl) is proposed for the data training. furthermore, agent actions are actuating the horizontal arm of the system and states are the angles and velocities of the pendulum and the horizontal arm. the reward is computed according to the angles of the pendulum and horizontal arm. the reward is zero when the pendulum attends the upright position. the rl algorithms are used without a deep understanding of the classical controllers and are used to implement the agent. finally, the outcome indicates the effectiveness of the ql and dqnl algorithms compared to the conventional pid and lqr controllers. | [
"the rotary inverted pendulum system",
"rips",
"an underactuated mechanical system",
"highly nonlinear dynamics",
"it",
"a rips",
"the classic control models",
"the last few years",
"reinforcement learning",
"(rl",
"a popular nonlinear control method",
"rl",
"a powerful potential",
"systems",
"high non-linearity and complex dynamics",
"rips",
"rl control",
"rips",
"limited research",
"the development",
"evaluation",
"this control method",
"this paper",
"rl control algorithms",
"the swing-up and stabilization control",
"a single-link rotary inverted pendulum",
"slrip",
"classic control methods",
"pid",
"lqr",
"a physical model",
"the slrip system",
"the matlab/simscape toolbox",
"the model",
"a dynamic simulation",
"matlab/simulink",
"the rl agents",
"an agent trainer system",
"q-learning",
"(ql",
"deep q-network learning",
"dqnl",
"the data training",
"agent actions",
"the horizontal arm",
"the system",
"states",
"the angles",
"velocities",
"the pendulum",
"the horizontal arm",
"the reward",
"the angles",
"the pendulum",
"horizontal arm",
"the reward",
"the pendulum",
"the upright position",
"the rl algorithms",
"a deep understanding",
"the classical controllers",
"the agent",
"the outcome",
"the effectiveness",
"the ql and dqnl algorithms",
"the conventional pid and lqr controllers",
"the last few years",
"zero"
] |
Hybrid deep learning approach to improve classification of low-volume high-dimensional data | [
"Pegah Mavaie",
"Lawrence Holder",
"Michael K. Skinner"
] | BackgroundThe performance of machine learning classification methods relies heavily on the choice of features. In many domains, feature generation can be labor-intensive and require domain knowledge, and feature selection methods do not scale well in high-dimensional datasets. Deep learning has shown success in feature generation but requires large datasets to achieve high classification accuracy. Biology domains typically exhibit these challenges with numerous handcrafted features (high-dimensional) and small amounts of training data (low volume).MethodA hybrid learning approach is proposed that first trains a deep network on the training data, extracts features from the deep network, and then uses these features to re-express the data for input to a non-deep learning method, which is trained to perform the final classification.ResultsThe approach is systematically evaluated to determine the best layer of the deep learning network from which to extract features and the threshold on training data volume that prefers this approach. Results from several domains show that this hybrid approach outperforms standalone deep and non-deep learning methods, especially on low-volume, high-dimensional datasets. The diverse collection of datasets further supports the robustness of the approach across different domains.ConclusionsThe hybrid approach combines the strengths of deep and non-deep learning paradigms to achieve high performance on high-dimensional, low volume learning tasks that are typical in biology domains. | 10.1186/s12859-023-05557-w | hybrid deep learning approach to improve classification of low-volume high-dimensional data | backgroundthe performance of machine learning classification methods relies heavily on the choice of features. in many domains, feature generation can be labor-intensive and require domain knowledge, and feature selection methods do not scale well in high-dimensional datasets. deep learning has shown success in feature generation but requires large datasets to achieve high classification accuracy. biology domains typically exhibit these challenges with numerous handcrafted features (high-dimensional) and small amounts of training data (low volume).methoda hybrid learning approach is proposed that first trains a deep network on the training data, extracts features from the deep network, and then uses these features to re-express the data for input to a non-deep learning method, which is trained to perform the final classification.resultsthe approach is systematically evaluated to determine the best layer of the deep learning network from which to extract features and the threshold on training data volume that prefers this approach. results from several domains show that this hybrid approach outperforms standalone deep and non-deep learning methods, especially on low-volume, high-dimensional datasets. the diverse collection of datasets further supports the robustness of the approach across different domains.conclusionsthe hybrid approach combines the strengths of deep and non-deep learning paradigms to achieve high performance on high-dimensional, low volume learning tasks that are typical in biology domains. | [
"backgroundthe performance",
"classification methods",
"the choice",
"features",
"many domains",
"feature generation",
"domain knowledge",
"feature selection methods",
"high-dimensional datasets",
"deep learning",
"success",
"feature generation",
"large datasets",
"high classification accuracy",
"biology domains",
"these challenges",
"numerous handcrafted features",
"small amounts",
"training data",
"low volume).methoda hybrid learning approach",
"a deep network",
"the training data",
"features",
"the deep network",
"these features",
"the data",
"input",
"a non-deep learning method",
"which",
"the final classification.resultsthe approach",
"the best layer",
"the deep learning network",
"which",
"features",
"the threshold",
"training data volume",
"that",
"this approach",
"results",
"several domains",
"this hybrid approach",
"deep and non-deep learning methods",
"low-volume, high-dimensional datasets",
"the diverse collection",
"datasets",
"the robustness",
"the approach",
"different domains.conclusionsthe hybrid approach",
"the strengths",
"deep and non-deep learning paradigms",
"high performance",
"high-dimensional, low volume learning tasks",
"that",
"biology domains",
"first"
] |
Air combat maneuver decision based on deep reinforcement learning with auxiliary reward | [
"Tingyu Zhang",
"Yongshuai Wang",
"Mingwei Sun",
"Zengqiang Chen"
] | For air combat maneuvering decision, the sparse reward during the application of deep reinforcement learning limits the exploration efficiency of the agents. To address this challenge, we propose an auxiliary reward function considering the impact of angle, range, and altitude. Furthermore, we investigate the influences of the network nodes, layers, and the learning rate on decision system, and reasonable parameter ranges are provided, which can serve as a guideline. Finally, four typical air combat scenarios demonstrate good adaptability and effectiveness of the proposed scheme, and the auxiliary reward significantly improves the learning ability of deep Q network (DQN) by leading the agents to explore more intently. Compared with the original deep deterministic policy gradient and soft actor critic algorithm, the proposed method exhibits superior exploration capability with higher reward, indicating that the trained agent can adapt to different air combats with good performance. | 10.1007/s00521-024-09720-z | air combat maneuver decision based on deep reinforcement learning with auxiliary reward | for air combat maneuvering decision, the sparse reward during the application of deep reinforcement learning limits the exploration efficiency of the agents. to address this challenge, we propose an auxiliary reward function considering the impact of angle, range, and altitude. furthermore, we investigate the influences of the network nodes, layers, and the learning rate on decision system, and reasonable parameter ranges are provided, which can serve as a guideline. finally, four typical air combat scenarios demonstrate good adaptability and effectiveness of the proposed scheme, and the auxiliary reward significantly improves the learning ability of deep q network (dqn) by leading the agents to explore more intently. compared with the original deep deterministic policy gradient and soft actor critic algorithm, the proposed method exhibits superior exploration capability with higher reward, indicating that the trained agent can adapt to different air combats with good performance. | [
"air combat maneuvering decision",
"the sparse",
"the application",
"deep reinforcement learning",
"the exploration efficiency",
"the agents",
"this challenge",
"we",
"an auxiliary reward function",
"the impact",
"angle",
"range",
"altitude",
"we",
"the influences",
"the network nodes",
"layers",
"the learning rate",
"decision system",
"reasonable parameter ranges",
"which",
"a guideline",
"four typical air combat scenarios",
"good adaptability",
"effectiveness",
"the proposed scheme",
"the auxiliary reward",
"the learning ability",
"deep q network",
"dqn",
"the agents",
"the original deep deterministic policy gradient",
"soft actor critic",
"algorithm",
"the proposed method",
"superior exploration capability",
"higher reward",
"the trained agent",
"different air combats",
"good performance",
"four"
] |
Deep learning-based classification and application test of multiple crop leaf diseases using transfer learning and the attention mechanism | [
"Yifu Zhang",
"Qian Sun",
"Ji Chen",
"Huini Zhou"
] | Crop diseases are among the major natural disasters in agricultural production that seriously restrict the growth and development of crops, threatening food security. Timely classification, accurate identification, and the application of methods suitable for the situation can effectively prevent and control crop diseases, improving the quality of agricultural products. Considering the huge variety of crops, diseases, and differences in the characteristics of diseases during each stage, the current convolutional neural network models based on deep learning need to meet the higher requirement of classifying crop diseases accurately. It is necessary to introduce a new architecture scheme to improve the recognition effect. Therefore, in this study, we optimized the deep learning-based classification model for multiple crop leaf diseases using combined transfer learning and the attention mechanism, the modified model was deployed in the smartphone for testing. Dataset that containing 10 types of crops, 61 types of diseases, and different degrees was established, the algorithm structure based on ResNet50 was designed using transfer learning and the SE attention mechanism. The classification performances of different improvement methods were compared by model training. Result indicates that the average accuracy of the proposed TL-SE-ResNet50 model is increased by 7.7%, reaching 96.32%. The model was also integrated and implemented in the smartphone and the test result of the application reaches 94.8%, and the average response time is 882 ms. The improved model proposed has a good effect on the identification of diseases and their condition of multiple crops, and the application can meet the portable usage needs of farmers. This study can provide reference for more crop disease management research in agricultural production. | 10.1007/s00607-024-01308-8 | deep learning-based classification and application test of multiple crop leaf diseases using transfer learning and the attention mechanism | crop diseases are among the major natural disasters in agricultural production that seriously restrict the growth and development of crops, threatening food security. timely classification, accurate identification, and the application of methods suitable for the situation can effectively prevent and control crop diseases, improving the quality of agricultural products. considering the huge variety of crops, diseases, and differences in the characteristics of diseases during each stage, the current convolutional neural network models based on deep learning need to meet the higher requirement of classifying crop diseases accurately. it is necessary to introduce a new architecture scheme to improve the recognition effect. therefore, in this study, we optimized the deep learning-based classification model for multiple crop leaf diseases using combined transfer learning and the attention mechanism, the modified model was deployed in the smartphone for testing. dataset that containing 10 types of crops, 61 types of diseases, and different degrees was established, the algorithm structure based on resnet50 was designed using transfer learning and the se attention mechanism. the classification performances of different improvement methods were compared by model training. result indicates that the average accuracy of the proposed tl-se-resnet50 model is increased by 7.7%, reaching 96.32%. the model was also integrated and implemented in the smartphone and the test result of the application reaches 94.8%, and the average response time is 882 ms. the improved model proposed has a good effect on the identification of diseases and their condition of multiple crops, and the application can meet the portable usage needs of farmers. this study can provide reference for more crop disease management research in agricultural production. | [
"crop diseases",
"the major natural disasters",
"agricultural production",
"that",
"the growth",
"development",
"crops",
"food security",
"timely classification",
"accurate identification",
"the application",
"methods",
"the situation",
"crop diseases",
"the quality",
"agricultural products",
"the huge variety",
"crops",
"diseases",
"differences",
"the characteristics",
"diseases",
"each stage",
"the current convolutional neural network models",
"deep learning",
"the higher requirement",
"crop diseases",
"it",
"a new architecture scheme",
"the recognition effect",
"this study",
"we",
"the deep learning-based classification model",
"multiple crop leaf diseases",
"combined transfer learning",
"the attention mechanism",
"the modified model",
"the smartphone",
"testing",
"10 types",
"crops",
"61 types",
"diseases",
"different degrees",
"the algorithm structure",
"resnet50",
"transfer learning",
"the se attention mechanism",
"the classification performances",
"different improvement methods",
"model training",
"result",
"the average accuracy",
"the proposed tl-se-resnet50 model",
"7.7%",
"96.32%",
"the model",
"the smartphone",
"the test result",
"the application",
"94.8%",
"the average response time",
"882 ms",
"the improved model",
"a good effect",
"the identification",
"diseases",
"their condition",
"multiple crops",
"the application",
"the portable usage needs",
"farmers",
"this study",
"reference",
"more crop disease management research",
"agricultural production",
"10",
"61",
"resnet50",
"7.7%",
"96.32%",
"94.8%",
"882"
] |
Sparse subspace clustering incorporated deep convolutional transform learning for hyperspectral band selection | [
"Anurag Goel",
"Angshul Majumdar"
] | This work delves into a research area with a limited number of studies, that of convolutional filter-learning based clustering. Since clustering is an unsupervised formulation, one cannot formulate it around a conventional convolutional neural network since the latter is inherently supervised. This work, therefore, builds upon the recently developed framework of deep convolutional transform learning. The sparse subspace clustering formulation is embedded into deep convolutional transform learning to get an end-to-end clustering architecture. The proposed approach has been applied to the problem of hyperspectral band selection. Comparison with a variety of state-of-the-art techniques in this domain shows that our proposed approach improves over them in terms of the classification metrics. | 10.1007/s12145-024-01312-8 | sparse subspace clustering incorporated deep convolutional transform learning for hyperspectral band selection | this work delves into a research area with a limited number of studies, that of convolutional filter-learning based clustering. since clustering is an unsupervised formulation, one cannot formulate it around a conventional convolutional neural network since the latter is inherently supervised. this work, therefore, builds upon the recently developed framework of deep convolutional transform learning. the sparse subspace clustering formulation is embedded into deep convolutional transform learning to get an end-to-end clustering architecture. the proposed approach has been applied to the problem of hyperspectral band selection. comparison with a variety of state-of-the-art techniques in this domain shows that our proposed approach improves over them in terms of the classification metrics. | [
"this work",
"a research area",
"a limited number",
"studies",
"that",
"convolutional filter-learning based clustering",
"an unsupervised formulation",
"one",
"it",
"a conventional convolutional neural network",
"the latter",
"this work",
"the recently developed framework",
"the sparse",
"clustering formulation",
"deep convolutional transform",
"end",
"the proposed approach",
"the problem",
"hyperspectral band selection",
"comparison",
"a variety",
"the-art",
"this domain",
"our proposed approach",
"them",
"terms",
"the classification metrics"
] |
Effectiveness evaluation of sprint sports techniques and tactics based on deep learning | [
"Jiankui Yan"
] | The assessment of sprint-like sports techniques and tactics involves evaluating their underlying running principles, structures, functions, methods, and real-world applications using existing technology. Particularly in the context of the fast-evolving sprint domain, a deficiency in evaluation expertise can result in lagging behind. Using deep learning algorithms, useful features, and information can be automatically extracted from extensive training and competition data, providing a more accurate and objective basis for evaluating sprint sports techniques and tactics. We can reduce manual intervention and tedious workloads by applying deep learning algorithms in automated processing and analysis, enabling intelligent training and competition. Deep learning algorithms can uncover and analyze patterns and trends in data, assisting coaches and athletes in identifying potential problems and areas for improvement and providing a foundation for athletes to develop targeted training plans and competition strategies. Therefore, studying the role of Deep Learning (DL) in the technological process is of great significance. This paper aims to effectively combine DL algorithms with sprint skill evaluation, utilizing DL algorithms for mining and leveraging them to achieve optimal sprint skill performance through effective tactical evaluation. This study trains a new model based on the classic VGGNet-16 model and compares different CNN models. The experiment reveals that the CaffeNet model is more susceptible to objects that do not entirely contain the target, resulting in an enlarging search box and eventually losing track of the target. The experimental results demonstrate that maintaining a step frequency of approximately 95% is crucial for achieving the highest speed. The SDP algorithm was tested on the VOC2007 dataset and achieved good results, with a mean average precision value of 69.4%, significantly surpassing the benchmark Fast R-CNN algorithm. Additionally, we have implemented a traditional tracking algorithm, the CamShift algorithm, based on color features. | 10.1007/s11761-024-00411-0 | effectiveness evaluation of sprint sports techniques and tactics based on deep learning | the assessment of sprint-like sports techniques and tactics involves evaluating their underlying running principles, structures, functions, methods, and real-world applications using existing technology. particularly in the context of the fast-evolving sprint domain, a deficiency in evaluation expertise can result in lagging behind. using deep learning algorithms, useful features, and information can be automatically extracted from extensive training and competition data, providing a more accurate and objective basis for evaluating sprint sports techniques and tactics. we can reduce manual intervention and tedious workloads by applying deep learning algorithms in automated processing and analysis, enabling intelligent training and competition. deep learning algorithms can uncover and analyze patterns and trends in data, assisting coaches and athletes in identifying potential problems and areas for improvement and providing a foundation for athletes to develop targeted training plans and competition strategies. therefore, studying the role of deep learning (dl) in the technological process is of great significance. this paper aims to effectively combine dl algorithms with sprint skill evaluation, utilizing dl algorithms for mining and leveraging them to achieve optimal sprint skill performance through effective tactical evaluation. this study trains a new model based on the classic vggnet-16 model and compares different cnn models. the experiment reveals that the caffenet model is more susceptible to objects that do not entirely contain the target, resulting in an enlarging search box and eventually losing track of the target. the experimental results demonstrate that maintaining a step frequency of approximately 95% is crucial for achieving the highest speed. the sdp algorithm was tested on the voc2007 dataset and achieved good results, with a mean average precision value of 69.4%, significantly surpassing the benchmark fast r-cnn algorithm. additionally, we have implemented a traditional tracking algorithm, the camshift algorithm, based on color features. | [
"the assessment",
"sprint-like sports techniques",
"tactics",
"their underlying running principles",
"structures",
"functions",
"methods",
"real-world applications",
"existing technology",
"the context",
"the fast-evolving sprint domain",
"a deficiency",
"evaluation expertise",
"deep learning algorithms",
"useful features",
"information",
"extensive training",
"competition data",
"a more accurate and objective basis",
"sprint sports techniques",
"tactics",
"we",
"manual intervention",
"tedious workloads",
"deep learning algorithms",
"automated processing",
"analysis",
"intelligent training",
"competition",
"deep learning algorithms",
"patterns",
"trends",
"data",
"coaches",
"athletes",
"potential problems",
"areas",
"improvement",
"a foundation",
"athletes",
"targeted training plans",
"competition strategies",
"the role",
"deep learning",
"dl",
"the technological process",
"great significance",
"this paper",
"dl algorithms",
"sprint skill evaluation",
"dl algorithms",
"them",
"optimal sprint skill performance",
"effective tactical evaluation",
"this study",
"a new model",
"the classic vggnet-16 model",
"different cnn models",
"the experiment",
"the caffenet model",
"objects",
"that",
"the target",
"an enlarging search box",
"track",
"the target",
"the experimental results",
"a step frequency",
"approximately 95%",
"the highest speed",
"the sdp algorithm",
"the voc2007 dataset",
"good results",
"a mean average precision value",
"69.4%",
"the benchmark fast r-cnn algorithm",
"we",
"a traditional tracking algorithm",
"the camshift algorithm",
"color features",
"vggnet-16",
"cnn",
"approximately 95%",
"69.4%"
] |
Image super-resolution reconstruction based on deep dictionary learning and A+ | [
"Yi Huang",
"Weixin Bian",
"Biao Jie",
"Zhiqiang Zhu",
"Wenhu Li"
] | The method of image super-resolution reconstruction through the dictionary usually only uses a single-layer dictionary, which not only cannot extract the deep features of the image but also requires a large trained dictionary if the reconstruction effect is to be better. This paper proposes a new deep dictionary learning model. Firstly, after preprocessing the images of the training set, the dictionary is trained by the deep dictionary learning method, and the adjusted anchored neighborhood regression method is used for image super-resolution reconstruction. The proposed algorithm is compared with several classical algorithms on Set5 dataset and Set14 dataset. The visualization and quantification results show that the proposed method improves PSNR and SSIM, effectively reduces the dictionary size and saves reconstruction time compared with traditional super-resolution algorithms. | 10.1007/s11760-023-02936-x | image super-resolution reconstruction based on deep dictionary learning and a+ | the method of image super-resolution reconstruction through the dictionary usually only uses a single-layer dictionary, which not only cannot extract the deep features of the image but also requires a large trained dictionary if the reconstruction effect is to be better. this paper proposes a new deep dictionary learning model. firstly, after preprocessing the images of the training set, the dictionary is trained by the deep dictionary learning method, and the adjusted anchored neighborhood regression method is used for image super-resolution reconstruction. the proposed algorithm is compared with several classical algorithms on set5 dataset and set14 dataset. the visualization and quantification results show that the proposed method improves psnr and ssim, effectively reduces the dictionary size and saves reconstruction time compared with traditional super-resolution algorithms. | [
"the method",
"image super-resolution reconstruction",
"the dictionary",
"a single-layer dictionary",
"which",
"the deep features",
"the image",
"a large trained dictionary",
"the reconstruction effect",
"this paper",
"a new deep dictionary learning model",
"the images",
"the training set",
"the dictionary",
"the deep dictionary learning method",
"the adjusted anchored neighborhood regression method",
"image super-resolution reconstruction",
"the proposed algorithm",
"several classical algorithms",
"set5 dataset",
"the visualization and quantification results",
"the proposed method",
"psnr",
"ssim",
"the dictionary size",
"reconstruction time",
"traditional super-resolution algorithms",
"firstly",
"set5"
] |
Efficient deep learning-based approach for malaria detection using red blood cell smears | [
"Muhammad Mujahid",
"Furqan Rustam",
"Rahman Shafique",
"Elizabeth Caro Montero",
"Eduardo Silva Alvarado",
"Isabel de la Torre Diez",
"Imran Ashraf"
] | Malaria is an extremely malignant disease and is caused by the bites of infected female mosquitoes. This disease is not only infectious among humans, but among animals as well. Malaria causes mild symptoms like fever, headache, sweating and vomiting, and muscle discomfort; severe symptoms include coma, seizures, and kidney failure. The timely identification of malaria parasites is a challenging and chaotic endeavor for health staff. An expert technician examines the schematic blood smears of infected red blood cells through a microscope. The conventional methods for identifying malaria are not efficient. Machine learning approaches are effective for simple classification challenges but not for complex tasks. Furthermore, machine learning involves rigorous feature engineering to train the model and detect patterns in the features. On the other hand, deep learning works well with complex tasks and automatically extracts low and high-level features from the images to detect disease. In this paper, EfficientNet, a deep learning-based approach for detecting Malaria, is proposed that uses red blood cell images. Experiments are carried out and performance comparison is made with pre-trained deep learning models. In addition, k-fold cross-validation is also used to substantiate the results of the proposed approach. Experiments show that the proposed approach is 97.57% accurate in detecting Malaria from red blood cell images and can be beneficial practically for medical healthcare staff. | 10.1038/s41598-024-63831-0 | efficient deep learning-based approach for malaria detection using red blood cell smears | malaria is an extremely malignant disease and is caused by the bites of infected female mosquitoes. this disease is not only infectious among humans, but among animals as well. malaria causes mild symptoms like fever, headache, sweating and vomiting, and muscle discomfort; severe symptoms include coma, seizures, and kidney failure. the timely identification of malaria parasites is a challenging and chaotic endeavor for health staff. an expert technician examines the schematic blood smears of infected red blood cells through a microscope. the conventional methods for identifying malaria are not efficient. machine learning approaches are effective for simple classification challenges but not for complex tasks. furthermore, machine learning involves rigorous feature engineering to train the model and detect patterns in the features. on the other hand, deep learning works well with complex tasks and automatically extracts low and high-level features from the images to detect disease. in this paper, efficientnet, a deep learning-based approach for detecting malaria, is proposed that uses red blood cell images. experiments are carried out and performance comparison is made with pre-trained deep learning models. in addition, k-fold cross-validation is also used to substantiate the results of the proposed approach. experiments show that the proposed approach is 97.57% accurate in detecting malaria from red blood cell images and can be beneficial practically for medical healthcare staff. | [
"malaria",
"an extremely malignant disease",
"the bites",
"infected female mosquitoes",
"this disease",
"humans",
"animals",
"malaria",
"mild symptoms",
"fever",
"headache",
"sweating",
"vomiting",
"muscle discomfort",
"severe symptoms",
"coma",
"seizures",
"kidney failure",
"the timely identification",
"malaria parasites",
"a challenging and chaotic endeavor",
"health staff",
"an expert technician",
"the schematic blood smears",
"infected red blood cells",
"a microscope",
"the conventional methods",
"malaria",
"machine learning approaches",
"simple classification challenges",
"complex tasks",
"machine learning",
"rigorous feature engineering",
"the model",
"patterns",
"the features",
"the other hand",
"deep learning",
"complex tasks",
"low and high-level features",
"the images",
"disease",
"this paper",
"efficientnet",
"a deep learning-based approach",
"malaria",
"that",
"red blood cell images",
"experiments",
"performance comparison",
"pre-trained deep learning models",
"addition",
"cross",
"-",
"validation",
"the results",
"the proposed approach",
"experiments",
"the proposed approach",
"97.57%",
"malaria",
"red blood cell images",
"medical healthcare staff",
"97.57%"
] |
Optical electrocardiogram based heart disease prediction using hybrid deep learning | [
"Avinash L. Golande",
"T. Pavankumar"
] | The diagnosis and categorization of cardiac disease using the low-cost tool electrocardiogram (ECG) becomes an intriguing study topic when contemplating intelligent healthcare applications. An ECG-based cardiac disease prediction system must be automated, accurate, and lightweight. The deep learning methods recently achieved automation and accuracy across multiple domains. However, applying deep learning for automatic ECG-based heart disease classification is a challenging research problem. Because using solely deep learning approaches failed to detect all of the important beats from the input ECG signal, a hybrid strategy is necessary to improve detection efficiency. The main objective of the proposed model is to enhance the ECG-based heart disease classification efficiency using a hybrid feature engineering approach. The proposed model consists of pre-processing, hybrid feature engineering, and classification. Pre-processing an ECG aims to eliminate powerline and baseline interference without disrupting the heartbeat. To efficiently classify data, we design a hybrid approach using a conventional ECG beats extraction algorithm and Convolutional Neural Network (CNN)-based features. For heart disease prediction, the hybrid feature vector is fed successively into the deep learning classifier Long Term Short Memory (LSTM). The results of the simulations show that the proposed model reduces both the number of diagnostic errors and the amount of time spent on each one when compared to the existing methods. | 10.1186/s40537-023-00820-6 | optical electrocardiogram based heart disease prediction using hybrid deep learning | the diagnosis and categorization of cardiac disease using the low-cost tool electrocardiogram (ecg) becomes an intriguing study topic when contemplating intelligent healthcare applications. an ecg-based cardiac disease prediction system must be automated, accurate, and lightweight. the deep learning methods recently achieved automation and accuracy across multiple domains. however, applying deep learning for automatic ecg-based heart disease classification is a challenging research problem. because using solely deep learning approaches failed to detect all of the important beats from the input ecg signal, a hybrid strategy is necessary to improve detection efficiency. the main objective of the proposed model is to enhance the ecg-based heart disease classification efficiency using a hybrid feature engineering approach. the proposed model consists of pre-processing, hybrid feature engineering, and classification. pre-processing an ecg aims to eliminate powerline and baseline interference without disrupting the heartbeat. to efficiently classify data, we design a hybrid approach using a conventional ecg beats extraction algorithm and convolutional neural network (cnn)-based features. for heart disease prediction, the hybrid feature vector is fed successively into the deep learning classifier long term short memory (lstm). the results of the simulations show that the proposed model reduces both the number of diagnostic errors and the amount of time spent on each one when compared to the existing methods. | [
"the diagnosis",
"categorization",
"cardiac disease",
"the low-cost tool electrocardiogram",
"ecg",
"an intriguing study topic",
"intelligent healthcare applications",
"an ecg-based cardiac disease prediction system",
"the deep learning methods",
"automation",
"accuracy",
"multiple domains",
"deep learning",
"automatic ecg-based heart disease classification",
"a challenging research problem",
"solely deep learning approaches",
"all",
"the important beats",
"the input ecg signal",
"a hybrid strategy",
"detection efficiency",
"the main objective",
"the proposed model",
"the ecg-based heart disease classification efficiency",
"a hybrid feature engineering approach",
"the proposed model",
"pre-processing, hybrid feature engineering",
"classification",
"an ecg",
"powerline",
"baseline interference",
"the heartbeat",
"data",
"we",
"a hybrid approach",
"a conventional ecg beats extraction algorithm",
"convolutional neural network",
"cnn)-based features",
"heart disease prediction",
"the hybrid feature vector",
"the deep learning classifier long term short memory",
"lstm",
"the results",
"the simulations",
"the proposed model",
"both the number",
"diagnostic errors",
"the amount",
"time",
"each one",
"the existing methods",
"fed"
] |
Deep learning-based automated assessment of canine hip dysplasia | [
"Cátia Loureiro",
"Lio Gonçalves",
"Pedro Leite",
"Pedro Franco-Gonçalo",
"Ana Inês Pereira",
"Bruno Colaço",
"Sofia Alves-Pimenta",
"Fintan McEvoy",
"Mário Ginja",
"Vítor Filipe"
] | Radiographic canine hip dysplasia (CHD) diagnosis is crucial for breeding selection and disease management, delaying progression and alleviating the associated pain. Radiography is the primary imaging modality for CHD diagnosis, and visual assessment of radiographic features is sometimes used for accurate diagnosis. Specifically, alterations in femoral neck shape are crucial radiographic signs, with existing literature suggesting that dysplastic hips have a greater femoral neck thickness (FNT). In this study we aimed to develop a three-stage deep learning-based system that can automatically identify and quantify a femoral neck thickness index (FNTi) as a key metric to improve CHD diagnosis. Our system trained a keypoint detection model and a segmentation model to determine landmark and boundary coordinates of the femur and acetabulum, respectively. We then executed a series of mathematical operations to calculate the FNTi. The keypoint detection model achieved a mean absolute error (MAE) of 0.013 during training, while the femur segmentation results achieved a dice score (DS) of 0.978. Our three-stage deep learning-based system achieved an intraclass correlation coefficient of 0.86 (95% confidence interval) and showed no significant differences in paired t-test compared to a specialist (p > 0.05). As far as we know, this is the initial study to thoroughly measure FNTi by applying computer vision and deep learning-based approaches, which can provide reliable support in CHD diagnosis. | 10.1007/s11042-024-20072-7 | deep learning-based automated assessment of canine hip dysplasia | radiographic canine hip dysplasia (chd) diagnosis is crucial for breeding selection and disease management, delaying progression and alleviating the associated pain. radiography is the primary imaging modality for chd diagnosis, and visual assessment of radiographic features is sometimes used for accurate diagnosis. specifically, alterations in femoral neck shape are crucial radiographic signs, with existing literature suggesting that dysplastic hips have a greater femoral neck thickness (fnt). in this study we aimed to develop a three-stage deep learning-based system that can automatically identify and quantify a femoral neck thickness index (fnti) as a key metric to improve chd diagnosis. our system trained a keypoint detection model and a segmentation model to determine landmark and boundary coordinates of the femur and acetabulum, respectively. we then executed a series of mathematical operations to calculate the fnti. the keypoint detection model achieved a mean absolute error (mae) of 0.013 during training, while the femur segmentation results achieved a dice score (ds) of 0.978. our three-stage deep learning-based system achieved an intraclass correlation coefficient of 0.86 (95% confidence interval) and showed no significant differences in paired t-test compared to a specialist (p > 0.05). as far as we know, this is the initial study to thoroughly measure fnti by applying computer vision and deep learning-based approaches, which can provide reliable support in chd diagnosis. | [
"radiographic canine hip dysplasia (chd) diagnosis",
"selection",
"disease management",
"progression",
"the associated pain",
"radiography",
"the primary imaging modality",
"chd diagnosis",
"visual assessment",
"radiographic features",
"accurate diagnosis",
"alterations",
"femoral neck shape",
"crucial radiographic signs",
"existing literature",
"dysplastic hips",
"a greater femoral neck thickness",
"fnt",
"this study",
"we",
"a three-stage deep learning-based system",
"that",
"a femoral neck",
"thickness index",
"a key metric",
"chd diagnosis",
"our system",
"a keypoint detection model",
"a segmentation model",
"landmark",
"boundary coordinates",
"the femur",
"acetabulum",
"we",
"a series",
"mathematical operations",
"the keypoint detection model",
"a mean absolute error",
"mae",
"training",
"the femur segmentation results",
"a dice score",
"ds",
"our three-stage deep learning-based system",
"an intraclass correlation",
"coefficient",
"0.86 (95% confidence interval",
"no significant differences",
"paired t-test",
"a specialist",
"(p",
"we",
"this",
"the initial study",
"computer vision",
"deep learning-based approaches",
"which",
"reliable support",
"chd diagnosis",
"three",
"0.013",
"0.978",
"three",
"0.86",
"95%",
"0.05"
] |
An Explainable Deep Learning Approach for Oral Cancer Detection | [
"P. Ashok Babu",
"Anjani Kumar Rai",
"Janjhyam Venkata Naga Ramesh",
"A. Nithyasri",
"S. Sangeetha",
"Pravin R. Kshirsagar",
"A. Rajendran",
"A. Rajaram",
"S. Dilipkumar"
] | With a high death rate, oral cancer is a major worldwide health problem, particularly in low- and middle-income nations. Timely detection and diagnosis are crucial for effective prevention and treatment. To address this challenge, there is a growing need for automated detection systems to aid healthcare professionals. Regular dental examinations play a vital role in early detection. Transfer learning, which leverages knowledge from related domains, can enhance performance in target categories. This study presents a unique approach to the early detection and diagnosis of oral cancer that makes use of the exceptional sensory capabilities of the mouth. Deep neural networks, particularly those based on automated systems, are employed to identify intricate patterns associated with the disease. By combining various transfer learning approaches and conducting comparative analyses, an optimal learning rate is achieved. The categorization analysis of the reference results is presented in detail. Our preliminary findings demonstrate that deep learning effectively addresses this challenging problem, with the Inception-V3 algorithm exhibiting superior accuracy compared to other algorithms. | 10.1007/s42835-023-01654-1 | an explainable deep learning approach for oral cancer detection | with a high death rate, oral cancer is a major worldwide health problem, particularly in low- and middle-income nations. timely detection and diagnosis are crucial for effective prevention and treatment. to address this challenge, there is a growing need for automated detection systems to aid healthcare professionals. regular dental examinations play a vital role in early detection. transfer learning, which leverages knowledge from related domains, can enhance performance in target categories. this study presents a unique approach to the early detection and diagnosis of oral cancer that makes use of the exceptional sensory capabilities of the mouth. deep neural networks, particularly those based on automated systems, are employed to identify intricate patterns associated with the disease. by combining various transfer learning approaches and conducting comparative analyses, an optimal learning rate is achieved. the categorization analysis of the reference results is presented in detail. our preliminary findings demonstrate that deep learning effectively addresses this challenging problem, with the inception-v3 algorithm exhibiting superior accuracy compared to other algorithms. | [
"a high death rate",
"oral cancer",
"a major worldwide health problem",
"low- and middle-income nations",
"timely detection",
"diagnosis",
"effective prevention",
"treatment",
"this challenge",
"a growing need",
"automated detection systems",
"healthcare professionals",
"regular dental examinations",
"a vital role",
"early detection",
"transfer learning",
"which",
"knowledge",
"related domains",
"performance",
"target categories",
"this study",
"a unique approach",
"the early detection",
"diagnosis",
"oral cancer",
"that",
"use",
"the exceptional sensory capabilities",
"the mouth",
"deep neural networks",
"particularly those",
"automated systems",
"intricate patterns",
"the disease",
"various transfer",
"approaches",
"comparative analyses",
"an optimal learning rate",
"the categorization analysis",
"the reference results",
"detail",
"our preliminary findings",
"this challenging problem",
"the inception-v3 algorithm",
"superior accuracy",
"other algorithms"
] |
Semantic speech analysis using machine learning and deep learning techniques: a comprehensive review | [
"Suryakant Tyagi",
"Sándor Szénási"
] | Human cognitive functions such as perception, attention, learning, memory, reasoning, and problem-solving are all significantly influenced by emotion. Emotion has a particularly potent impact on attention, modifying its selectivity in particular and influencing behavior and action motivation. Artificial Emotional Intelligence (AEI) technologies enable computers to understand a user's emotional state and respond appropriately. These systems enable a realistic dialogue between people and machines. The current generation of adaptive user interference technologies is built on techniques from data analytics and machine learning (ML), namely deep learning (DL) artificial neural networks (ANN) from multimodal data, such as videos of facial expressions, stance, and gesture, voice, and bio-physiological data (such as eye movement, ECG, respiration, EEG, FMRT, EMG, eye tracking). In this study, we reviewed existing literature based on ML and data analytics techniques being used to detect emotions in speech. The efficacy of data analytics and ML techniques in this unique area of multimodal data processing and extracting emotions from speech. This study analyzes how emotional chatbots, facial expressions, images, and social media texts can be effective in detecting emotions. PRISMA methodology is used to review the existing survey. Support Vector Machines (SVM), Naïve Bayes (NB), Random Forests (RF), Recurrent Neural Networks (RNN), Logistic Regression (LR), etc., are commonly used ML techniques for emotion extraction purposes. This study provides a new taxonomy about the application of ML in SER. The result shows that Long-Short Term Memory (LSTM) and Convolutional Neural Networks (CNN) are found to be the most useful methodology for this purpose. | 10.1007/s11042-023-17769-6 | semantic speech analysis using machine learning and deep learning techniques: a comprehensive review | human cognitive functions such as perception, attention, learning, memory, reasoning, and problem-solving are all significantly influenced by emotion. emotion has a particularly potent impact on attention, modifying its selectivity in particular and influencing behavior and action motivation. artificial emotional intelligence (aei) technologies enable computers to understand a user's emotional state and respond appropriately. these systems enable a realistic dialogue between people and machines. the current generation of adaptive user interference technologies is built on techniques from data analytics and machine learning (ml), namely deep learning (dl) artificial neural networks (ann) from multimodal data, such as videos of facial expressions, stance, and gesture, voice, and bio-physiological data (such as eye movement, ecg, respiration, eeg, fmrt, emg, eye tracking). in this study, we reviewed existing literature based on ml and data analytics techniques being used to detect emotions in speech. the efficacy of data analytics and ml techniques in this unique area of multimodal data processing and extracting emotions from speech. this study analyzes how emotional chatbots, facial expressions, images, and social media texts can be effective in detecting emotions. prisma methodology is used to review the existing survey. support vector machines (svm), naïve bayes (nb), random forests (rf), recurrent neural networks (rnn), logistic regression (lr), etc., are commonly used ml techniques for emotion extraction purposes. this study provides a new taxonomy about the application of ml in ser. the result shows that long-short term memory (lstm) and convolutional neural networks (cnn) are found to be the most useful methodology for this purpose. | [
"human cognitive functions",
"perception",
"attention",
"learning",
"memory",
"reasoning",
"problem-solving",
"emotion",
"emotion",
"a particularly potent impact",
"attention",
"its selectivity",
"behavior",
"action motivation",
"artificial emotional intelligence (aei) technologies",
"computers",
"a user's emotional state",
"these systems",
"a realistic dialogue",
"people",
"machines",
"the current generation",
"adaptive user",
"interference technologies",
"techniques",
"data analytics",
"machine learning",
"ml",
"namely deep learning",
"(dl) artificial neural networks",
"ann",
"multimodal data",
"videos",
"facial expressions",
"stance",
"gesture",
"voice",
"bio-physiological data",
"eye movement",
"ecg",
"respiration",
"fmrt",
"emg",
"eye tracking",
"this study",
"we",
"existing literature",
"ml and data analytics techniques",
"emotions",
"speech",
"the efficacy",
"data analytics",
"ml",
"techniques",
"this unique area",
"multimodal data processing",
"emotions",
"speech",
"this study",
"emotional chatbots",
"facial expressions",
"images",
"social media texts",
"emotions",
"prisma methodology",
"the existing survey",
"support vector machines",
"svm",
"naïve bayes",
"random forests",
"neural networks",
"rnn",
"logistic regression",
"lr",
"ml techniques",
"emotion extraction purposes",
"this study",
"a new taxonomy",
"the application",
"ml",
"ser",
"the result",
"long-short term memory",
"lstm",
"convolutional neural networks",
"cnn",
"the most useful methodology",
"this purpose",
"naïve bayes (nb",
"cnn"
] |
An Explainable Deep Learning Approach for Oral Cancer Detection | [
"P. Ashok Babu",
"Anjani Kumar Rai",
"Janjhyam Venkata Naga Ramesh",
"A. Nithyasri",
"S. Sangeetha",
"Pravin R. Kshirsagar",
"A. Rajendran",
"A. Rajaram",
"S. Dilipkumar"
] | With a high death rate, oral cancer is a major worldwide health problem, particularly in low- and middle-income nations. Timely detection and diagnosis are crucial for effective prevention and treatment. To address this challenge, there is a growing need for automated detection systems to aid healthcare professionals. Regular dental examinations play a vital role in early detection. Transfer learning, which leverages knowledge from related domains, can enhance performance in target categories. This study presents a unique approach to the early detection and diagnosis of oral cancer that makes use of the exceptional sensory capabilities of the mouth. Deep neural networks, particularly those based on automated systems, are employed to identify intricate patterns associated with the disease. By combining various transfer learning approaches and conducting comparative analyses, an optimal learning rate is achieved. The categorization analysis of the reference results is presented in detail. Our preliminary findings demonstrate that deep learning effectively addresses this challenging problem, with the Inception-V3 algorithm exhibiting superior accuracy compared to other algorithms. | 10.1007/s42835-023-01654-1 | an explainable deep learning approach for oral cancer detection | with a high death rate, oral cancer is a major worldwide health problem, particularly in low- and middle-income nations. timely detection and diagnosis are crucial for effective prevention and treatment. to address this challenge, there is a growing need for automated detection systems to aid healthcare professionals. regular dental examinations play a vital role in early detection. transfer learning, which leverages knowledge from related domains, can enhance performance in target categories. this study presents a unique approach to the early detection and diagnosis of oral cancer that makes use of the exceptional sensory capabilities of the mouth. deep neural networks, particularly those based on automated systems, are employed to identify intricate patterns associated with the disease. by combining various transfer learning approaches and conducting comparative analyses, an optimal learning rate is achieved. the categorization analysis of the reference results is presented in detail. our preliminary findings demonstrate that deep learning effectively addresses this challenging problem, with the inception-v3 algorithm exhibiting superior accuracy compared to other algorithms. | [
"a high death rate",
"oral cancer",
"a major worldwide health problem",
"low- and middle-income nations",
"timely detection",
"diagnosis",
"effective prevention",
"treatment",
"this challenge",
"a growing need",
"automated detection systems",
"healthcare professionals",
"regular dental examinations",
"a vital role",
"early detection",
"transfer learning",
"which",
"knowledge",
"related domains",
"performance",
"target categories",
"this study",
"a unique approach",
"the early detection",
"diagnosis",
"oral cancer",
"that",
"use",
"the exceptional sensory capabilities",
"the mouth",
"deep neural networks",
"particularly those",
"automated systems",
"intricate patterns",
"the disease",
"various transfer",
"approaches",
"comparative analyses",
"an optimal learning rate",
"the categorization analysis",
"the reference results",
"detail",
"our preliminary findings",
"this challenging problem",
"the inception-v3 algorithm",
"superior accuracy",
"other algorithms"
] |
Deep Learning Based Video Compression Techniques with Future Research Issues | [
"Helen K. Joy",
"Manjunath R. Kounte",
"Arunkumar Chandrasekhar",
"Manoranjan Paul"
] | The advancements in the domain of video coding technologies are tremendously fluctuating in recent years. As the public got acquainted with the creation and availability of videos through internet boom and video acquisition devices including mobile phones, camera etc., the necessity of video compression become crucial. The resolution variance (4 K, 2 K etc.), framerate, display is some of the features that glorifies the importance of compression. Improving compression ratio with better efficiency and quality was the focus and it has many stumbling blocks to achieve it. The era of artificial intelligence, neural network, and especially deep learning provided light in the path of video processing area, particularly in compression. The paper mainly focuses on a precise, organized, meticulous review of the impact of deep learning on video compression. The content adaptivity quality of deep learning marks its importance in video compression to traditional signal processing. The development of intelligent and self-trained steps in video compression with deep learning is reviewed in detail. The relevant and noteworthy work that arose in each step of compression is inculcated in this paper. A detailed survey in the development of intra- prediction, inter-prediction, in-loop filtering, quantization, and entropy coding in hand with deep learning techniques are pointed along with envisages ideas in each field. The future scope of enhancement in various stages of compression and relevant research scope to explore with Deep Learning is emphasized. | 10.1007/s11277-023-10558-2 | deep learning based video compression techniques with future research issues | the advancements in the domain of video coding technologies are tremendously fluctuating in recent years. as the public got acquainted with the creation and availability of videos through internet boom and video acquisition devices including mobile phones, camera etc., the necessity of video compression become crucial. the resolution variance (4 k, 2 k etc.), framerate, display is some of the features that glorifies the importance of compression. improving compression ratio with better efficiency and quality was the focus and it has many stumbling blocks to achieve it. the era of artificial intelligence, neural network, and especially deep learning provided light in the path of video processing area, particularly in compression. the paper mainly focuses on a precise, organized, meticulous review of the impact of deep learning on video compression. the content adaptivity quality of deep learning marks its importance in video compression to traditional signal processing. the development of intelligent and self-trained steps in video compression with deep learning is reviewed in detail. the relevant and noteworthy work that arose in each step of compression is inculcated in this paper. a detailed survey in the development of intra- prediction, inter-prediction, in-loop filtering, quantization, and entropy coding in hand with deep learning techniques are pointed along with envisages ideas in each field. the future scope of enhancement in various stages of compression and relevant research scope to explore with deep learning is emphasized. | [
"the advancements",
"the domain",
"video coding technologies",
"recent years",
"the public",
"the creation",
"availability",
"videos",
"internet boom and video acquisition devices",
"mobile phones",
"the necessity",
"video compression",
"the resolution variance",
"4 k",
"framerate, display",
"some",
"the features",
"that",
"the importance",
"compression",
"compression ratio",
"better efficiency",
"quality",
"the focus",
"it",
"many stumbling blocks",
"it",
"the era",
"artificial intelligence",
"neural network",
"especially deep learning",
"light",
"the path",
"video processing area",
"compression",
"the paper",
"a precise, organized, meticulous review",
"the impact",
"deep learning",
"video compression",
"the content adaptivity quality",
"deep learning",
"its importance",
"video compression",
"traditional signal processing",
"the development",
"intelligent and self-trained steps",
"video compression",
"deep learning",
"detail",
"the relevant and noteworthy work",
"that",
"each step",
"compression",
"this paper",
"a detailed survey",
"the development",
"intra- prediction",
"inter",
"-",
"loop",
"quantization",
"hand",
"deep learning techniques",
"envisages ideas",
"each field",
"the future scope",
"enhancement",
"various stages",
"compression",
"relevant research scope",
"deep learning",
"recent years",
"4",
"2"
] |
Dermatological disease prediction and diagnosis system using deep learning | [
"Neda Fatima",
"Syed Afzal Murtaza Rizvi",
"Major Syed Bilal Abbas Rizvi"
] | The prevalence of skin illnesses is higher than that of other diseases. Fungal infection, bacteria, allergies, viruses, genetic factors, and environmental factors are among important causative factors that have continuously escalated the degree and incidence of skin diseases. Medical technology based on lasers and photonics has made it possible to identify skin illnesses considerably more rapidly and correctly. However, the cost of such a diagnosis is currently limited and prohibitively high and restricted to developed areas. The present paper develops a holistic, critical, and important skin disease prediction system that utilizes machine learning and deep learning algorithms to accurately identify up to 20 different skin diseases with a high F1 score and efficiency. Deep learning algorithms like Xception, Inception-v3, Resnet50, DenseNet121, and Inception-ResNet-v2 were employed to accurately classify diseases based on the images. The training and testing have been performed on an enlarged dataset, and classification was performed for 20 diseases. The algorithm developed was free from any inherent bias and treated all classes equally. The present model, which was trained using the Xception algorithm, is highly efficient and accurate for 20 different skin conditions, with a dataset of over 10,000 photos. The developed system was able to classify 20 different dermatological diseases with high accuracy and precision. | 10.1007/s11845-023-03578-1 | dermatological disease prediction and diagnosis system using deep learning | the prevalence of skin illnesses is higher than that of other diseases. fungal infection, bacteria, allergies, viruses, genetic factors, and environmental factors are among important causative factors that have continuously escalated the degree and incidence of skin diseases. medical technology based on lasers and photonics has made it possible to identify skin illnesses considerably more rapidly and correctly. however, the cost of such a diagnosis is currently limited and prohibitively high and restricted to developed areas. the present paper develops a holistic, critical, and important skin disease prediction system that utilizes machine learning and deep learning algorithms to accurately identify up to 20 different skin diseases with a high f1 score and efficiency. deep learning algorithms like xception, inception-v3, resnet50, densenet121, and inception-resnet-v2 were employed to accurately classify diseases based on the images. the training and testing have been performed on an enlarged dataset, and classification was performed for 20 diseases. the algorithm developed was free from any inherent bias and treated all classes equally. the present model, which was trained using the xception algorithm, is highly efficient and accurate for 20 different skin conditions, with a dataset of over 10,000 photos. the developed system was able to classify 20 different dermatological diseases with high accuracy and precision. | [
"the prevalence",
"skin illnesses",
"that",
"other diseases",
"fungal infection",
"bacteria",
"allergies",
"viruses",
"genetic factors",
"environmental factors",
"important causative factors",
"that",
"the degree",
"incidence",
"skin diseases",
"medical technology",
"lasers",
"photonics",
"it",
"skin illnesses",
"the cost",
"such a diagnosis",
"developed areas",
"the present paper",
"a holistic, critical, and important skin disease prediction system",
"that",
"machine learning",
"deep learning algorithms",
"up to 20 different skin diseases",
"a high f1 score",
"efficiency",
"algorithms",
"xception",
"inception-v3",
"resnet50",
"densenet121",
"inception-resnet-v2",
"diseases",
"the images",
"the training",
"testing",
"an enlarged dataset",
"classification",
"20 diseases",
"the algorithm",
"any inherent bias",
"all classes",
"the present model",
"which",
"the xception",
"algorithm",
"20 different skin conditions",
"a dataset",
"over 10,000 photos",
"the developed system",
"20 different dermatological diseases",
"high accuracy",
"precision",
"up to 20",
"resnet50",
"20",
"20",
"over 10,000",
"20"
] |
Deep SqueezeNet learning model for diagnosis and prediction of maize leaf diseases | [
"Prasannavenkatesan Theerthagiri",
"A. Usha Ruby",
"J. George Chellin Chandran",
"Tanvir Habib Sardar",
"Ahamed Shafeeq B. M."
] | The maize leaf diseases create severe yield reductions and critical problems. The maize leaf disease should be discovered early, perfectly identified, and precisely diagnosed to make greater yield. This work studies three main leaf diseases: common rust, blight, and grey leaf spot. This approach involves pre-processing, including sampling and labelling, while ensuring class balance and preventing overfitting via the SMOTE algorithm. The maize leaf dataset with augmentation was used to classify these diseases using several deep-learning pre-trained networks, including VGG16, Resnet34, Resnet50, and SqueezeNet. The model was evaluated using a maize leaf dataset that included various leaf classes, mini-batch sizes, and input sizes. Performance measures, recall, precision, accuracy, F1-score, and confusion matrix were computed for each network. The SqueezeNet learning model produces an accuracy of 97% in classifying four different classes of plant leaf datasets. Comparatively, the SqueezeNet learning model has improved accuracy by 2–5% and reduced the mean square error by 4–11% over VGG16, Resnet34, and Resnet50 deep learning models. | 10.1186/s40537-024-00972-z | deep squeezenet learning model for diagnosis and prediction of maize leaf diseases | the maize leaf diseases create severe yield reductions and critical problems. the maize leaf disease should be discovered early, perfectly identified, and precisely diagnosed to make greater yield. this work studies three main leaf diseases: common rust, blight, and grey leaf spot. this approach involves pre-processing, including sampling and labelling, while ensuring class balance and preventing overfitting via the smote algorithm. the maize leaf dataset with augmentation was used to classify these diseases using several deep-learning pre-trained networks, including vgg16, resnet34, resnet50, and squeezenet. the model was evaluated using a maize leaf dataset that included various leaf classes, mini-batch sizes, and input sizes. performance measures, recall, precision, accuracy, f1-score, and confusion matrix were computed for each network. the squeezenet learning model produces an accuracy of 97% in classifying four different classes of plant leaf datasets. comparatively, the squeezenet learning model has improved accuracy by 2–5% and reduced the mean square error by 4–11% over vgg16, resnet34, and resnet50 deep learning models. | [
"the maize leaf diseases",
"severe yield reductions",
"critical problems",
"the maize leaf disease",
"greater yield",
"this work",
"three main leaf diseases",
"common rust",
"blight",
"grey leaf spot",
"this approach",
"labelling",
"class balance",
"the smote algorithm",
"the maize leaf dataset",
"augmentation",
"these diseases",
"several deep-learning pre-trained networks",
"vgg16",
"resnet34",
"resnet50",
"squeezenet",
"the model",
"a maize leaf dataset",
"that",
"various leaf classes",
"mini-batch sizes",
"input sizes",
"performance measures",
"recall",
"precision",
"accuracy",
"f1-score",
"confusion matrix",
"each network",
"the squeezenet learning model",
"an accuracy",
"97%",
"four different classes",
"plant leaf datasets",
"the squeezenet learning model",
"accuracy",
"2–5%",
"the mean square error",
"4–11%",
"vgg16",
"resnet34",
"deep learning models",
"three",
"grey leaf spot",
"resnet34",
"resnet50",
"97%",
"four",
"2–5%",
"4–11%",
"resnet34",
"resnet50"
] |
Review and perspective on bioinformatics tools using machine learning and deep learning for predicting antiviral peptides | [
"Nicolás Lefin",
"Lisandra Herrera-Belén",
"Jorge G. Farias",
"Jorge F. Beltrán"
] | Viruses constitute a constant threat to global health and have caused millions of human and animal deaths throughout human history. Despite advances in the discovery of antiviral compounds that help fight these pathogens, finding a solution to this problem continues to be a task that consumes time and financial resources. Currently, artificial intelligence (AI) has revolutionized many areas of the biological sciences, making it possible to decipher patterns in amino acid sequences that encode different functions and activities. Within the field of AI, machine learning, and deep learning algorithms have been used to discover antimicrobial peptides. Due to their effectiveness and specificity, antimicrobial peptides (AMPs) hold excellent promise for treating various infections caused by pathogens. Antiviral peptides (AVPs) are a specific type of AMPs that have activity against certain viruses. Unlike the research focused on the development of tools and methods for the prediction of antimicrobial peptides, those related to the prediction of AVPs are still scarce. Given the significance of AVPs as potential pharmaceutical options for human and animal health and the ongoing AI revolution, we have reviewed and summarized the current machine learning and deep learning-based tools and methods available for predicting these types of peptides. | 10.1007/s11030-023-10718-3 | review and perspective on bioinformatics tools using machine learning and deep learning for predicting antiviral peptides | viruses constitute a constant threat to global health and have caused millions of human and animal deaths throughout human history. despite advances in the discovery of antiviral compounds that help fight these pathogens, finding a solution to this problem continues to be a task that consumes time and financial resources. currently, artificial intelligence (ai) has revolutionized many areas of the biological sciences, making it possible to decipher patterns in amino acid sequences that encode different functions and activities. within the field of ai, machine learning, and deep learning algorithms have been used to discover antimicrobial peptides. due to their effectiveness and specificity, antimicrobial peptides (amps) hold excellent promise for treating various infections caused by pathogens. antiviral peptides (avps) are a specific type of amps that have activity against certain viruses. unlike the research focused on the development of tools and methods for the prediction of antimicrobial peptides, those related to the prediction of avps are still scarce. given the significance of avps as potential pharmaceutical options for human and animal health and the ongoing ai revolution, we have reviewed and summarized the current machine learning and deep learning-based tools and methods available for predicting these types of peptides. | [
"viruses",
"a constant threat",
"global health",
"millions",
"human and animal deaths",
"human history",
"advances",
"the discovery",
"antiviral compounds",
"that",
"these pathogens",
"a solution",
"this problem",
"a task",
"that",
"time",
"financial resources",
"artificial intelligence",
"ai",
"many areas",
"the biological sciences",
"it",
"patterns",
"amino acid sequences",
"that",
"different functions",
"activities",
"the field",
"ai",
", machine learning",
"deep learning algorithms",
"antimicrobial peptides",
"their effectiveness",
"specificity",
"antimicrobial peptides",
"amps",
"excellent promise",
"various infections",
"pathogens",
"antiviral peptides",
"avps",
"a specific type",
"amps",
"that",
"activity",
"certain viruses",
"the research",
"the development",
"tools",
"methods",
"the prediction",
"antimicrobial peptides",
"those",
"the prediction",
"avps",
"the significance",
"avps",
"potential pharmaceutical options",
"human and animal health",
"the ongoing ai revolution",
"we",
"the current machine learning",
"deep learning-based tools",
"methods",
"these types",
"peptides",
"millions",
"avps"
] |
A review of deep learning and Generative Adversarial Networks applications in medical image analysis | [
"D. N. Sindhura",
"Radhika M. Pai",
"Shyamasunder N. Bhat",
"Manohara M. M. Pai"
] | Nowadays, computer-aided decision support systems (CADs) for the analysis of images have been a perennial technique in the medical imaging field. In CADs, deep learning algorithms are widely used to perform tasks like classification, identification of patterns, detection, etc. Deep learning models learn feature representations from images rather than handcrafted features. Hence, deep learning models are quickly becoming the state-of-the-art method to achieve good performances in different computer-aided decision-support systems in medical applications. Similarly, deep learning-based generative models called Generative Adversarial Networks (GANs) have recently been developed as a novel method to produce realistic-looking synthetic data. GANs are used in different domains, including medical imaging generation. The common problems, like class imbalance and a small dataset, in healthcare are well addressed by GANs, and it is a leading area of research. Segmentation, reconstruction, detection, denoising, registration, etc. are the important applications of GANs. So in this work, the successes of deep learning methods in segmentation, classification, cell structure and fracture detection, computer-aided identification, and GANs in synthetic medical image generation, segmentation, reconstruction, detection, denoising, and registration in recent times are reviewed. Lately, the review article concludes by raising research directions for DL models and GANs in medical applications. | 10.1007/s00530-024-01349-1 | a review of deep learning and generative adversarial networks applications in medical image analysis | nowadays, computer-aided decision support systems (cads) for the analysis of images have been a perennial technique in the medical imaging field. in cads, deep learning algorithms are widely used to perform tasks like classification, identification of patterns, detection, etc. deep learning models learn feature representations from images rather than handcrafted features. hence, deep learning models are quickly becoming the state-of-the-art method to achieve good performances in different computer-aided decision-support systems in medical applications. similarly, deep learning-based generative models called generative adversarial networks (gans) have recently been developed as a novel method to produce realistic-looking synthetic data. gans are used in different domains, including medical imaging generation. the common problems, like class imbalance and a small dataset, in healthcare are well addressed by gans, and it is a leading area of research. segmentation, reconstruction, detection, denoising, registration, etc. are the important applications of gans. so in this work, the successes of deep learning methods in segmentation, classification, cell structure and fracture detection, computer-aided identification, and gans in synthetic medical image generation, segmentation, reconstruction, detection, denoising, and registration in recent times are reviewed. lately, the review article concludes by raising research directions for dl models and gans in medical applications. | [
", computer-aided decision support systems",
"cads",
"the analysis",
"images",
"a perennial technique",
"the medical imaging field",
"cads",
"deep learning algorithms",
"tasks",
"classification",
"identification",
"patterns",
"detection",
"deep learning models",
"feature representations",
"images",
"handcrafted features",
"deep learning models",
"the-art",
"good performances",
"different computer-aided decision-support systems",
"medical applications",
"deep learning-based generative models",
"generative adversarial networks",
"gans",
"a novel method",
"realistic-looking synthetic data",
"gans",
"different domains",
"medical imaging generation",
"the common problems",
"class imbalance",
"a small dataset",
"healthcare",
"gans",
"it",
"a leading area",
"research",
"segmentation",
"reconstruction",
"detection",
"denoising",
"registration",
"the important applications",
"gans",
"this work",
"the successes",
"deep learning methods",
"segmentation",
"classification",
"cell structure",
"fracture detection",
"computer-aided identification",
"gans",
"synthetic medical image generation",
"segmentation",
"reconstruction",
"detection",
"denoising",
"registration",
"recent times",
"the review article",
"research directions",
"dl models",
"gans",
"medical applications"
] |
Deep learning-based fishing ground prediction with multiple environmental factors | [
"Mingyang Xie",
"Bin Liu",
"Xinjun Chen"
] | Improving the accuracy of fishing ground prediction for oceanic economic species has always been one of the most concerning issues in fisheries research. Recent studies have confirmed that deep learning has achieved superior results over traditional methods in the era of big data. However, the deep learning-based fishing ground prediction model with a single environment suffers from the problem that the area of the fishing ground is too large and not concentrated. In this study, we developed a deep learning-based fishing ground prediction model with multiple environmental factors using neon flying squid (Ommastrephes bartramii) in Northwest Pacific Ocean as an example. Based on the modified U-Net model, the approach involves the sea surface temperature, sea surface height, sea surface salinity, and chlorophyll a as inputs, and the center fishing ground as the output. The model is trained with data from July to November in 2002–2019, and tested with data of 2020. We considered and compared five temporal scales (3, 6, 10, 15, and 30 days) and seven multiple environmental factor combinations. By comparing different cases, we found that the optimal temporal scale is 30 days, and the optimal multiple environmental factor combination contained SST and Chl a. The inclusion of multiple factors in the model greatly improved the concentration of the center fishing ground. The selection of a suitable combination of multiple environmental factors is beneficial to the precise spatial distribution of fishing grounds. This study deepens the understanding of the mechanism of environmental field influence on fishing grounds from the perspective of artificial intelligence and fishery science. | 10.1007/s42995-024-00222-4 | deep learning-based fishing ground prediction with multiple environmental factors | improving the accuracy of fishing ground prediction for oceanic economic species has always been one of the most concerning issues in fisheries research. recent studies have confirmed that deep learning has achieved superior results over traditional methods in the era of big data. however, the deep learning-based fishing ground prediction model with a single environment suffers from the problem that the area of the fishing ground is too large and not concentrated. in this study, we developed a deep learning-based fishing ground prediction model with multiple environmental factors using neon flying squid (ommastrephes bartramii) in northwest pacific ocean as an example. based on the modified u-net model, the approach involves the sea surface temperature, sea surface height, sea surface salinity, and chlorophyll a as inputs, and the center fishing ground as the output. the model is trained with data from july to november in 2002–2019, and tested with data of 2020. we considered and compared five temporal scales (3, 6, 10, 15, and 30 days) and seven multiple environmental factor combinations. by comparing different cases, we found that the optimal temporal scale is 30 days, and the optimal multiple environmental factor combination contained sst and chl a. the inclusion of multiple factors in the model greatly improved the concentration of the center fishing ground. the selection of a suitable combination of multiple environmental factors is beneficial to the precise spatial distribution of fishing grounds. this study deepens the understanding of the mechanism of environmental field influence on fishing grounds from the perspective of artificial intelligence and fishery science. | [
"the accuracy",
"fishing ground prediction",
"oceanic economic species",
"the most concerning issues",
"fisheries research",
"recent studies",
"deep learning",
"superior results",
"traditional methods",
"the era",
"big data",
"the deep learning-based fishing ground prediction model",
"a single environment",
"the problem",
"the area",
"the fishing ground",
"this study",
"we",
"a deep learning-based fishing ground prediction model",
"multiple environmental factors",
"neon flying squid",
"ommastrephes",
"northwest pacific ocean",
"an example",
"the modified u-net model",
"the approach",
"the sea surface temperature",
"sea surface height",
"sea surface salinity",
"a",
"inputs",
"the center fishing ground",
"the output",
"the model",
"data",
"july",
"november",
"data",
"we",
"five temporal scales",
"30 days",
"seven multiple environmental factor combinations",
"different cases",
"we",
"the optimal temporal scale",
"30 days",
"the optimal multiple environmental factor combination",
"sst and chl a.",
"the inclusion",
"multiple factors",
"the model",
"the concentration",
"the center fishing ground",
"the selection",
"a suitable combination",
"multiple environmental factors",
"the precise spatial distribution",
"fishing grounds",
"this study",
"the understanding",
"the mechanism",
"environmental field influence",
"fishing grounds",
"the perspective",
"artificial intelligence",
"fishery science",
"july",
"november",
"2002–2019",
"2020",
"five",
"3",
"6",
"10",
"15",
"30 days",
"seven",
"30 days",
"chl a."
] |
DEEPBIN: Deep Learning Based Garbage Classification for Households Using Sustainable Natural Technologies | [
"Yu Song",
"Xin He",
"Xiwang Tang",
"Bo Yin",
"Jie Du",
"Jiali Liu",
"Zhongbao Zhao",
"Shigang Geng"
] | Today, things that are accessible worldwide are upgrading to innovative technology. In this research, an intelligent garbage system will be designed with State-of-the-art methods using deep learning technologies. Garbage is highly produced due to urbanization and the rising population in urban areas. It is essential to manage daily trash from homes and living environments. This research aims to provide an intelligent IoT-based garbage bin system, and classification is done using Deep learning techniques. This smart bin is capable of sensing more varieties of garbage from home. Though there are more technologies successfully implemented with IoT and machine learning, there is still a need for sustainable natural technologies to manage daily waste. The innovative IoT-based garbage system uses various sensors like humidity, temperature, gas, and liquid sensors to identify the garbage condition. Initially, the Smart Garbage Bin system is designed, and then the data are collected using a garbage annotation application. Next, the deep learning method is used for object detection and classification of garbage images. Arithmetic Optimization Algorithm (AOA) with Improved RefineDet (IRD) is used for object detection. Next, the EfficientNet-B0 model is used for the classification of garbage images. The garbage content is identified, and the content is prepared to train the deep learning model to perform efficient classification tasks. For result evaluation, smart bins are deployed in real-time, and accuracy is estimated. Furthermore, fine-tuning region-specific litter photos led to enhanced categorization. | 10.1007/s10723-023-09722-6 | deepbin: deep learning based garbage classification for households using sustainable natural technologies | today, things that are accessible worldwide are upgrading to innovative technology. in this research, an intelligent garbage system will be designed with state-of-the-art methods using deep learning technologies. garbage is highly produced due to urbanization and the rising population in urban areas. it is essential to manage daily trash from homes and living environments. this research aims to provide an intelligent iot-based garbage bin system, and classification is done using deep learning techniques. this smart bin is capable of sensing more varieties of garbage from home. though there are more technologies successfully implemented with iot and machine learning, there is still a need for sustainable natural technologies to manage daily waste. the innovative iot-based garbage system uses various sensors like humidity, temperature, gas, and liquid sensors to identify the garbage condition. initially, the smart garbage bin system is designed, and then the data are collected using a garbage annotation application. next, the deep learning method is used for object detection and classification of garbage images. arithmetic optimization algorithm (aoa) with improved refinedet (ird) is used for object detection. next, the efficientnet-b0 model is used for the classification of garbage images. the garbage content is identified, and the content is prepared to train the deep learning model to perform efficient classification tasks. for result evaluation, smart bins are deployed in real-time, and accuracy is estimated. furthermore, fine-tuning region-specific litter photos led to enhanced categorization. | [
"things",
"that",
"innovative technology",
"this research",
"an intelligent garbage system",
"the-art",
"deep learning technologies",
"garbage",
"urbanization",
"the rising population",
"urban areas",
"it",
"daily trash",
"homes",
"living environments",
"this research",
"an intelligent iot-based garbage bin system",
"classification",
"deep learning techniques",
"this smart bin",
"more varieties",
"garbage",
"home",
"more technologies",
"iot and machine learning",
"a need",
"sustainable natural technologies",
"daily waste",
"the innovative iot-based garbage system",
"various sensors",
"humidity",
"temperature",
"gas",
"liquid sensors",
"the garbage condition",
"the smart garbage bin system",
"the data",
"a garbage annotation application",
"the deep learning method",
"object detection",
"classification",
"garbage images",
"arithmetic optimization algorithm",
"aoa",
"improved refinedet",
"ird",
"object detection",
"the efficientnet-b0 model",
"the classification",
"garbage images",
"the garbage content",
"the content",
"the deep learning model",
"efficient classification tasks",
"result evaluation",
"smart bins",
"real-time",
"accuracy",
"fine-tuning region-specific litter photos",
"enhanced categorization",
"today",
"daily",
"bin system",
"daily",
"bin system"
] |
Epileptic Seizures Detection Using iEEG Signals and Deep Learning Models | [
"Nourane Abderrahim",
"Amira Echtioui",
"Rafik Khemakhem",
"Wassim Zouch",
"Mohamed Ghorbel",
"Ahmed Ben Hamida"
] | Epilepsy is a common neurological disorder that affects millions of people worldwide, and many patients do not respond well to traditional anti-epileptic drugs. To improve the lives of these patients, there is a need to develop accurate methods for predicting epileptic seizures. Seizure prediction involves classifying preictal and interictal states, which is a challenging classification problem. Deep learning techniques, such as convolutional neural networks (CNNs), have shown great promise in analyzing and classifying EEG signals related to epilepsy. In this study, we proposed four deep learning models (S-CNN, Modif-CNN, CNN-SVM, and Comb-2CNN) to classify epilepsy states, which we evaluated on an iEEG dataset from the American Epilepsy Society database. Our models achieved high accuracy rates, with the S-CNN and Comb-2CNN models achieving 96.53%, CNN-SVM achieving 96.99%, and the Modif-CNN model achieving 97.96% in our experiments. These findings suggest that deep learning models could be an effective approach for classifying epilepsy states and could potentially improve seizure prediction methods, ultimately enhancing the quality of life for people with epilepsy. | 10.1007/s00034-023-02527-8 | epileptic seizures detection using ieeg signals and deep learning models | epilepsy is a common neurological disorder that affects millions of people worldwide, and many patients do not respond well to traditional anti-epileptic drugs. to improve the lives of these patients, there is a need to develop accurate methods for predicting epileptic seizures. seizure prediction involves classifying preictal and interictal states, which is a challenging classification problem. deep learning techniques, such as convolutional neural networks (cnns), have shown great promise in analyzing and classifying eeg signals related to epilepsy. in this study, we proposed four deep learning models (s-cnn, modif-cnn, cnn-svm, and comb-2cnn) to classify epilepsy states, which we evaluated on an ieeg dataset from the american epilepsy society database. our models achieved high accuracy rates, with the s-cnn and comb-2cnn models achieving 96.53%, cnn-svm achieving 96.99%, and the modif-cnn model achieving 97.96% in our experiments. these findings suggest that deep learning models could be an effective approach for classifying epilepsy states and could potentially improve seizure prediction methods, ultimately enhancing the quality of life for people with epilepsy. | [
"epilepsy",
"a common neurological disorder",
"that",
"millions",
"people",
"many patients",
"traditional anti-epileptic drugs",
"the lives",
"these patients",
"a need",
"accurate methods",
"epileptic seizures",
"seizure prediction",
"preictal and interictal states",
"which",
"a challenging classification problem",
"deep learning techniques",
"convolutional neural networks",
"cnns",
"great promise",
"eeg signals",
"epilepsy",
"this study",
"we",
"four deep learning models",
"s",
"cnn",
"modif-cnn, cnn-svm",
"comb-2cnn",
"epilepsy states",
"which",
"we",
"an ieeg dataset",
"the american epilepsy society database",
"our models",
"high accuracy rates",
"the s-cnn and comb-2cnn models",
"96.53%",
"cnn-svm",
"96.99%",
"the modif-cnn model",
"97.96%",
"our experiments",
"these findings",
"deep learning models",
"an effective approach",
"epilepsy states",
"seizure prediction methods",
"the quality",
"life",
"people",
"epilepsy",
"millions",
"four",
"modif-cnn",
"cnn",
"american",
"comb-2cnn",
"96.53%",
"cnn",
"96.99%",
"modif",
"97.96%"
] |
Real-time deep learning-based model predictive control of a 3-DOF biped robot leg | [
"Haitham El-Hussieny"
] | Our research utilized deep learning to enhance the control of a 3 Degrees of Freedom biped robot leg. We created a dynamic model based on a detailed joint angles and actuator torques dataset. This model was then integrated into a Model Predictive Control (MPC) framework, allowing for precise trajectory tracking without the need for traditional analytical dynamic models. By incorporating specific constraints within the MPC, we met operational and safety standards. The experimental results demonstrate the effectiveness of deep learning models in improving robotic control, leading to precise trajectory tracking and suggesting potential for further integration of deep learning into robotic system control. This approach not only outperforms traditional control methods in accuracy and efficiency but also opens the way for new research in robotics, highlighting the potential of utilizing deep learning models in predictive control techniques. | 10.1038/s41598-024-66104-y | real-time deep learning-based model predictive control of a 3-dof biped robot leg | our research utilized deep learning to enhance the control of a 3 degrees of freedom biped robot leg. we created a dynamic model based on a detailed joint angles and actuator torques dataset. this model was then integrated into a model predictive control (mpc) framework, allowing for precise trajectory tracking without the need for traditional analytical dynamic models. by incorporating specific constraints within the mpc, we met operational and safety standards. the experimental results demonstrate the effectiveness of deep learning models in improving robotic control, leading to precise trajectory tracking and suggesting potential for further integration of deep learning into robotic system control. this approach not only outperforms traditional control methods in accuracy and efficiency but also opens the way for new research in robotics, highlighting the potential of utilizing deep learning models in predictive control techniques. | [
"our research",
"deep learning",
"the control",
"a 3 degrees",
"freedom biped robot leg",
"we",
"a dynamic model",
"a detailed joint angles",
"actuator torques",
"this model",
"a model predictive control",
"mpc",
"framework",
"precise trajectory tracking",
"the need",
"traditional analytical dynamic models",
"specific constraints",
"the mpc",
"we",
"operational and safety standards",
"the experimental results",
"the effectiveness",
"deep learning models",
"robotic control",
"precise trajectory tracking",
"potential",
"further integration",
"deep learning",
"robotic system control",
"this approach",
"traditional control methods",
"accuracy",
"efficiency",
"the way",
"new research",
"robotics",
"the potential",
"deep learning models",
"predictive control techniques",
"3 degrees"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.