title (string, 31–206 chars) | authors (sequence, 1–85 items) | abstract (string, 428–3.21k chars) | doi (string, 21–31 chars) | cleaned_title (string, 31–206 chars) | cleaned_abstract (string, 428–3.21k chars) | key_phrases (sequence, 19–150 items)
---|---|---|---|---|---|---
Investigating Deep Learning for Early Detection and Decision-Making in Alzheimer’s Disease: A Comprehensive Review | [
"Ghazala Hcini",
"Imen Jdey",
"Habib Dhahri"
] | Alzheimer’s disease (AD) is a neurodegenerative disorder that affects millions of people worldwide, making early detection essential for effective intervention. This review paper provides a comprehensive analysis of the use of deep learning techniques, specifically convolutional neural networks (CNN) and vision transformers (ViT), for the classification of AD using brain imaging data. While previous reviews have covered similar topics, this paper offers a unique perspective by providing a detailed comparison of CNN and ViT for AD classification, highlighting the strengths and limitations of each approach. Additionally, the review presents an updated and thorough analysis of the most recent studies in the field, including the latest advancements in CNN and ViT architectures, training methods, and performance evaluation metrics. Furthermore, the paper discusses the ethical considerations and challenges associated with the use of deep learning models for AD classification, such as the need for interpretability and the potential for bias. By addressing these issues, this review aims to provide valuable insights for future research and clinical applications, ultimately advancing the field of AD classification using deep learning techniques. | 10.1007/s11063-024-11600-5 | investigating deep learning for early detection and decision-making in alzheimer’s disease: a comprehensive review | alzheimer’s disease (ad) is a neurodegenerative disorder that affects millions of people worldwide, making early detection essential for effective intervention. this review paper provides a comprehensive analysis of the use of deep learning techniques, specifically convolutional neural networks (cnn) and vision transformers (vit), for the classification of ad using brain imaging data. while previous reviews have covered similar topics, this paper offers a unique perspective by providing a detailed comparison of cnn and vit for ad classification, highlighting the strengths and limitations of each approach. additionally, the review presents an updated and thorough analysis of the most recent studies in the field, including the latest advancements in cnn and vit architectures, training methods, and performance evaluation metrics. furthermore, the paper discusses the ethical considerations and challenges associated with the use of deep learning models for ad classification, such as the need for interpretability and the potential for bias. by addressing these issues, this review aims to provide valuable insights for future research and clinical applications, ultimately advancing the field of ad classification using deep learning techniques. | [
"alzheimer’s disease",
"ad",
"a neurodegenerative disorder",
"that",
"millions",
"people",
"early detection",
"effective intervention",
"this review paper",
"a comprehensive analysis",
"the use",
"deep learning techniques",
"specifically convolutional neural networks",
"cnn",
"vision transformers",
"vit",
"the classification",
"ad",
"brain imaging data",
"previous reviews",
"similar topics",
"this paper",
"a unique perspective",
"a detailed comparison",
"cnn",
"vit",
"ad classification",
"the strengths",
"limitations",
"each approach",
"the review",
"an updated and thorough analysis",
"the most recent studies",
"the field",
"the latest advancements",
"cnn",
"vit architectures",
"training methods",
"performance evaluation metrics",
"the paper",
"the ethical considerations",
"challenges",
"the use",
"deep learning models",
"ad classification",
"the need",
"interpretability",
"the potential",
"bias",
"these issues",
"this review",
"valuable insights",
"future research and clinical applications",
"the field",
"ad classification",
"deep learning techniques",
"millions",
"cnn",
"cnn",
"vit",
"cnn",
"vit"
] |
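The row above compares CNNs and ViTs for AD classification. As an illustration only (not code from the review), the sketch below builds one representative of each family in PyTorch and re-heads both for a hypothetical three-class task (CN / MCI / AD is an assumed label set); torchvision's ResNet-18 and ViT-B/16 stand in for the architectures surveyed.

```python
# A minimal sketch: one CNN and one ViT, re-headed for an assumed 3-class AD task.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed label set: CN / MCI / AD

def build_cnn(num_classes: int = NUM_CLASSES) -> nn.Module:
    m = models.resnet18(weights=None)                # CNN side of the comparison
    m.fc = nn.Linear(m.fc.in_features, num_classes)  # replace classification head
    return m

def build_vit(num_classes: int = NUM_CLASSES) -> nn.Module:
    m = models.vit_b_16(weights=None)                # ViT side of the comparison
    m.heads.head = nn.Linear(m.heads.head.in_features, num_classes)
    return m

x = torch.randn(2, 3, 224, 224)                      # stand-in brain-image batch
for name, net in [("cnn", build_cnn()), ("vit", build_vit())]:
    with torch.no_grad():
        print(name, net(x).shape)                    # both emit (2, 3) class logits
```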
Phase unwrapping based on deep learning in light field fringe projection 3D measurement | [
"Xinjun Zhu",
"Haichuan Zhao",
"Mengkai Yuan",
"Zhizhi Zhang",
"Hongyi Wang",
"Limei Song"
] | Phase unwrapping plays a key role in fringe projection three-dimensional (3D) measurement technology. We propose a new method to achieve phase unwrapping in camera array light field fringe projection 3D measurement based on deep learning. A multi-stream convolutional neural network (CNN) is proposed to learn the mapping relationship between camera array light field wrapped phases and fringe orders of the expected central view, and is used to predict the fringe order to achieve the phase unwrapping. Experiments are performed on the light field fringe projection data generated by the simulated camera array fringe projection measurement system in Blender and by the experimental 3×3 camera array light field fringe projection system. The performance of the proposed network with light field wrapped phases using multiple directions as network input data is studied, and the advantages of phase unwrapping based on deep learning in light field fringe projection are demonstrated. | 10.1007/s11801-023-3002-4 | phase unwrapping based on deep learning in light field fringe projection 3d measurement | phase unwrapping plays a key role in fringe projection three-dimensional (3d) measurement technology. we propose a new method to achieve phase unwrapping in camera array light field fringe projection 3d measurement based on deep learning. a multi-stream convolutional neural network (cnn) is proposed to learn the mapping relationship between camera array light field wrapped phases and fringe orders of the expected central view, and is used to predict the fringe order to achieve the phase unwrapping. experiments are performed on the light field fringe projection data generated by the simulated camera array fringe projection measurement system in blender and by the experimental 3×3 camera array light field fringe projection system. the performance of the proposed network with light field wrapped phases using multiple directions as network input data is studied, and the advantages of phase unwrapping based on deep learning in light field fringe projection are demonstrated. | [
"phase",
"the key roles",
"fringe projection three-dimensional (3d) measurement technology",
"we",
"a new method",
"phase",
"camera array light",
"fringe projection 3d measurement",
"deep learning",
"a multi-stream convolutional neural network",
"cnn",
"the mapping relationship",
"camera array light",
"phases",
"fringe orders",
"the expected central view",
"the fringe order",
"the phase",
"experiments",
"the light field fringe projection data",
"the simulated camera array fringe projection measurement system",
"blender",
"the experimental 3×3 camera array light field fringe projection system",
"the performance",
"the proposed network",
"light field",
"phases",
"multiple directions",
"network input data",
"the advantages",
"phase",
"deep learning",
"light filed fringe projection",
"three",
"3d",
"3d",
"cnn",
"3×3"
] |
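The abstract above reduces phase unwrapping to predicting a per-pixel fringe order k, after which the absolute phase follows from Phi = phi_wrapped + 2π·k. The sketch below is a toy illustration of that idea, not the authors' network; the multi-stream layout, the 3×3 view count, and the fringe-order range are all assumptions.

```python
# Toy multi-stream CNN: predict per-pixel fringe order k, then unwrap the
# central view via Phi = phi_wrapped + 2*pi*k. Sizes are assumptions.
import torch
import torch.nn as nn

N_VIEWS, MAX_ORDER = 9, 16   # assumed 3x3 camera array, assumed order range

class FringeOrderNet(nn.Module):
    """One conv stem per light-field view, fused by summation."""
    def __init__(self):
        super().__init__()
        self.stems = nn.ModuleList(
            [nn.Conv2d(1, 16, 3, padding=1) for _ in range(N_VIEWS)])
        self.head = nn.Sequential(
            nn.ReLU(), nn.Conv2d(16, MAX_ORDER, 3, padding=1))

    def forward(self, views):                       # views: (B, N_VIEWS, H, W)
        feats = sum(stem(views[:, i:i+1]) for i, stem in enumerate(self.stems))
        return self.head(feats)                     # per-pixel fringe-order logits

wrapped = torch.rand(1, N_VIEWS, 64, 64) * 2 * torch.pi - torch.pi
logits = FringeOrderNet()(wrapped)
k = logits.argmax(dim=1)                            # predicted fringe-order map
central_wrapped = wrapped[:, N_VIEWS // 2]          # central view, as in the paper
unwrapped = central_wrapped + 2 * torch.pi * k      # absolute (unwrapped) phase
print(unwrapped.shape)
```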
Development and application of a deep learning-based comprehensive early diagnostic model for chronic obstructive pulmonary disease | [
"Zecheng Zhu",
"Shunjin Zhao",
"Jiahui Li",
"Yuting Wang",
"Luopiao Xu",
"Yubing Jia",
"Zihan Li",
"Wenyuan Li",
"Gang Chen",
"Xifeng Wu"
] | Background: Chronic obstructive pulmonary disease (COPD) is a frequently diagnosed yet treatable condition, provided it is identified early and managed effectively. This study aims to develop an advanced COPD diagnostic model by integrating deep learning and radiomics features. Methods: We utilized a dataset comprising CT images from 2,983 participants, of which 2,317 participants also provided epidemiological data through questionnaires. Deep learning features were extracted using a Variational Autoencoder, and radiomics features were obtained using the PyRadiomics package. Multi-Layer Perceptrons were used to construct models based on deep learning and radiomics features independently, as well as a fusion model integrating both. Subsequently, epidemiological questionnaire data were incorporated to establish a more comprehensive model. The diagnostic performance of standalone models, the fusion model and the comprehensive model was evaluated and compared using metrics including accuracy, precision, recall, F1-score, Brier score, receiver operating characteristic curves, and area under the curve (AUC). Results: The fusion model exhibited outstanding performance with an AUC of 0.952, surpassing the standalone models based solely on deep learning features (AUC = 0.844) or radiomics features (AUC = 0.944). Notably, the comprehensive model, incorporating deep learning features, radiomics features, and questionnaire variables, demonstrated the highest diagnostic performance among all models, yielding an AUC of 0.971. Conclusion: We developed and implemented a data fusion strategy to construct a state-of-the-art COPD diagnostic model integrating deep learning features, radiomics features, and questionnaire variables. Our data fusion strategy proved effective, and the model can be easily deployed in clinical settings. Trial registration: Not applicable. This study is NOT a clinical trial; it does not report the results of a health care intervention on human participants. | 10.1186/s12931-024-02793-3 | development and application of a deep learning-based comprehensive early diagnostic model for chronic obstructive pulmonary disease | background: chronic obstructive pulmonary disease (copd) is a frequently diagnosed yet treatable condition, provided it is identified early and managed effectively. this study aims to develop an advanced copd diagnostic model by integrating deep learning and radiomics features. methods: we utilized a dataset comprising ct images from 2,983 participants, of which 2,317 participants also provided epidemiological data through questionnaires. deep learning features were extracted using a variational autoencoder, and radiomics features were obtained using the pyradiomics package. multi-layer perceptrons were used to construct models based on deep learning and radiomics features independently, as well as a fusion model integrating both. subsequently, epidemiological questionnaire data were incorporated to establish a more comprehensive model. the diagnostic performance of standalone models, the fusion model and the comprehensive model was evaluated and compared using metrics including accuracy, precision, recall, f1-score, brier score, receiver operating characteristic curves, and area under the curve (auc). results: the fusion model exhibited outstanding performance with an auc of 0.952, surpassing the standalone models based solely on deep learning features (auc = 0.844) or radiomics features (auc = 0.944). 
notably, the comprehensive model, incorporating deep learning features, radiomics features, and questionnaire variables, demonstrated the highest diagnostic performance among all models, yielding an auc of 0.971. conclusion: we developed and implemented a data fusion strategy to construct a state-of-the-art copd diagnostic model integrating deep learning features, radiomics features, and questionnaire variables. our data fusion strategy proved effective, and the model can be easily deployed in clinical settings. trial registration: not applicable. this study is not a clinical trial; it does not report the results of a health care intervention on human participants. | [
"backgroundchronic obstructive pulmonary disease",
"copd",
"a frequently diagnosed yet treatable condition",
"it",
"this study",
"an advanced copd diagnostic model",
"deep learning and radiomics features.methodswe",
"a dataset",
"ct images",
"2,983 participants",
"which",
"2,317 participants",
"epidemiological data",
"questionnaires",
"deep learning features",
"a variational autoencoder",
"radiomics features",
"the pyradiomics package",
"multi-layer perceptrons",
"models",
"deep learning",
"radiomics",
"a fusion model",
"both",
"epidemiological questionnaire data",
"a more comprehensive model",
"the diagnostic performance",
"standalone models",
"the fusion model",
"the comprehensive model",
"metrics",
"accuracy",
"precision",
"recall, f1-score",
"brier score",
"the curve",
"auc).resultsthe fusion model",
"outstanding performance",
"an auc",
"the standalone models",
"deep learning features",
"auc",
"radiomics features",
"auc =",
"the comprehensive model",
"deep learning features",
"radiomics features",
"questionnaire variables",
"the highest diagnostic performance",
"all models",
"an auc",
"0.971.conclusionwe",
"a data fusion strategy",
"the-art",
"deep learning features",
"radiomics features",
"questionnaire variables",
"our data fusion strategy",
"the model",
"clinical settings.trial registrationnot",
"this study",
"a clinical trial",
"it",
"the results",
"a health care intervention",
"human participants",
"2,983",
"2,317",
"0.952",
"0.844",
"0.944",
"0.971.conclusionwe"
] |
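To make the late-fusion design above concrete, here is a minimal sketch (synthetic data; all feature dimensions are assumptions, not the study's) that concatenates deep, radiomics, and questionnaire feature blocks and trains an MLP, in the spirit of the paper's comprehensive model.

```python
# Feature-level fusion sketch: VAE-style deep features + radiomics +
# questionnaire variables, concatenated and fed to an MLP (all synthetic).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500                                    # toy cohort size, not the study's
deep_feats = rng.normal(size=(n, 64))      # e.g. VAE latent codes
radiomics = rng.normal(size=(n, 100))      # e.g. PyRadiomics descriptors
questionnaire = rng.normal(size=(n, 10))   # e.g. encoded epidemiological items
y = rng.integers(0, 2, size=n)             # COPD / non-COPD labels (synthetic)

X = np.hstack([deep_feats, radiomics, questionnaire])  # late fusion by concat
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128, 32), max_iter=300,
                    random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```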
A multi-agent adaptive deep learning framework for online intrusion detection | [
"Mahdi Soltani",
"Khashayar Khajavi",
"Mahdi Jafari Siavoshani",
"Amir Hossein Jahangir"
] | Network security analyzers use intrusion detection systems (IDSes) to distinguish malicious traffic from benign traffic. Deep learning-based (DL-based) IDSes have been proposed to auto-extract high-level features and eliminate the time-consuming and costly signature extraction process. However, this new generation of IDSes still needs to overcome a number of challenges to be employed in practical environments. One of the main issues of an applicable IDS is facing traffic concept drift, which manifests itself as new (i.e., zero-day) attacks, in addition to the changing behavior of benign users/applications. Furthermore, a practical DL-based IDS needs to conform to a distributed (i.e., multi-sensor) architecture in order to yield more accurate detections, create collective attack knowledge based on the observations of different sensors, and also handle big data challenges for supporting high-throughput networks. This paper proposes a novel multi-agent network intrusion detection framework to address the above shortcomings, considering a more practical scenario (i.e., online adaptable IDSes). This framework employs continual deep anomaly detectors for adapting each agent to the changing attack/benign patterns in its local traffic. In addition, a federated learning approach is proposed for sharing and exchanging local knowledge between different agents. Furthermore, the proposed framework implements sequential packet labeling for each flow, which provides an attack probability score for the flow by gradually observing each flow packet and updating its estimation. We evaluate the proposed framework by employing different deep models (including CNN-based and LSTM-based) over the CIC-IDS2017 and CSE-CIC-IDS2018 datasets. Through extensive evaluations and experiments, we show that the proposed distributed framework is well adapted to traffic concept drift. More precisely, our results indicate that the CNN-based models are well suited for continually adapting to traffic concept drift (i.e., achieving an average detection rate of above 95% while needing just 128 new flows for the updating phase), and the LSTM-based models are a good candidate for sequential packet labeling in practical online IDSes (i.e., detecting intrusions by just observing their first 15 packets). | 10.1186/s42400-023-00199-0 | a multi-agent adaptive deep learning framework for online intrusion detection | network security analyzers use intrusion detection systems (idses) to distinguish malicious traffic from benign traffic. deep learning-based (dl-based) idses have been proposed to auto-extract high-level features and eliminate the time-consuming and costly signature extraction process. however, this new generation of idses still needs to overcome a number of challenges to be employed in practical environments. one of the main issues of an applicable ids is facing traffic concept drift, which manifests itself as new (i.e., zero-day) attacks, in addition to the changing behavior of benign users/applications. furthermore, a practical dl-based ids needs to conform to a distributed (i.e., multi-sensor) architecture in order to yield more accurate detections, create collective attack knowledge based on the observations of different sensors, and also handle big data challenges for supporting high-throughput networks. this paper proposes a novel multi-agent network intrusion detection framework to address the above shortcomings, considering a more practical scenario (i.e., online adaptable idses). 
this framework employs continual deep anomaly detectors for adapting each agent to the changing attack/benign patterns in its local traffic. in addition, a federated learning approach is proposed for sharing and exchanging local knowledge between different agents. furthermore, the proposed framework implements sequential packet labeling for each flow, which provides an attack probability score for the flow by gradually observing each flow packet and updating its estimation. we evaluate the proposed framework by employing different deep models (including cnn-based and lstm-based) over the cic-ids2017 and cse-cic-ids2018 datasets. through extensive evaluations and experiments, we show that the proposed distributed framework is well adapted to traffic concept drift. more precisely, our results indicate that the cnn-based models are well suited for continually adapting to traffic concept drift (i.e., achieving an average detection rate of above 95% while needing just 128 new flows for the updating phase), and the lstm-based models are a good candidate for sequential packet labeling in practical online idses (i.e., detecting intrusions by just observing their first 15 packets). | [
"the network security analyzers",
"intrusion detection systems",
"(idses",
"malicious traffic",
"benign ones",
"the deep learning-based (dl-based) idses",
"auto-extract high-level features",
"the time-consuming and costly signature extraction process",
"this new generation",
"idses",
"a number",
"challenges",
"practical environments",
"the main issues",
"an applicable ids",
"traffic concept drift",
"which",
"itself",
"new (i.e. , zero-day",
"attacks",
"addition",
"the changing behavior",
"benign users/applications",
"a practical dl-based ids",
"a distributed (i.e. , multi-sensor) architecture",
"order",
"more accurate detections",
"a collective attack knowledge",
"the observations",
"different sensors",
"big data challenges",
"high throughput networks",
"this paper",
"a novel multi-agent network intrusion detection framework",
"the above shortcomings",
"a more practical scenario",
"(i.e., online adaptable idses",
"this framework",
"continual deep anomaly detectors",
"each agent",
"the changing attack/benign patterns",
"its local traffic",
"addition",
"a federated learning approach",
"local knowledge",
"different agents",
"the proposed framework",
"sequential packet labeling",
"each flow",
"which",
"an attack probability score",
"the flow",
"each flow packet",
"its estimation",
"we",
"the proposed framework",
"different deep models",
"the cic",
"-ids2017",
"cse-cic-ids2018",
"datasets",
"extensive evaluations",
"experiments",
"we",
"the proposed distributed framework",
"the traffic concept drift",
"our results",
"the cnn-based models",
"the traffic concept drift",
"an average detection rate",
"above 95%",
"just 128 new flows",
"the updating phase",
"the lstm-based models",
"a good candidate",
"sequential packet labeling",
"practical online idses",
"intrusions",
"their first 15 packets",
"one",
"zero-day",
"cnn",
"cnn",
"95%",
"just 128",
"first",
"15"
] |
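The sequential packet labeling described above — a running attack-probability score refined packet by packet — maps naturally onto a recurrent model. A minimal sketch follows (assumed feature size and hidden width, not the paper's architecture), using an LSTM read out after every packet of a 15-packet flow.

```python
# Sequential packet labeling sketch: an LSTM consumes a flow packet by packet
# and refines the flow's attack probability after each one. Sizes assumed.
import torch
import torch.nn as nn

PKT_FEATS = 20   # assumed per-packet feature vector size

class SequentialLabeler(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(PKT_FEATS, 64, batch_first=True)
        self.score = nn.Linear(64, 1)

    def forward(self, flow):                     # flow: (B, T, PKT_FEATS)
        hs, _ = self.lstm(flow)                  # hidden state after each packet
        return torch.sigmoid(self.score(hs))     # (B, T, 1) running attack prob.

flow = torch.randn(1, 15, PKT_FEATS)             # first 15 packets, as evaluated
probs = SequentialLabeler()(flow).squeeze(-1)
for t, p in enumerate(probs[0], start=1):        # estimate updated per packet
    print(f"after packet {t:2d}: attack prob = {p.item():.3f}")
```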
Addressing data imbalance challenges in oral cavity histopathological whole slide images with advanced deep learning techniques | [
"Tabasum Majeed",
"Tariq Ahmad Masoodi",
"Muzafar Ahmad Macha",
"Muzafar Rasool Bhat",
"Khalid Muzaffar",
"Assif Assad"
] | Oral Cavity Squamous Cell Carcinoma (OCSCC) represents a common form of head and neck cancer originating from the mucosal lining of the oral cavity, often detected in advanced stages. Traditional detection methods rely on analyzing hematoxylin and eosin (H&E)-stained histopathological whole-slide images, which are time-consuming and require expert pathology skills. Hence, automated analysis is urgently needed to expedite diagnosis and improve patient outcomes. Deep learning, through automated feature extraction, offers a promising avenue for capturing high-level abstract features with greater accuracy than traditional methods. However, the imbalance in class distribution within datasets significantly affects the performance of deep learning models during training, necessitating specialized approaches. To address the issue, various methods have been proposed at both data and algorithmic levels. This study investigates strategies to mitigate class imbalance by employing a publicly available OCSCC imbalance dataset. We evaluated undersampling methods (Near Miss, Edited Nearest Neighbors) and oversampling techniques (SMOTE, Deep SMOTE, ADASYN) integrated with transfer learning across different imbalance ratios (0.1, 0.15, 0.20, 0.30). Our findings demonstrate the effectiveness of SMOTE in improving test performance, highlighting the efficacy of strategic oversampling combined with transfer learning in classifying imbalanced medical datasets. This enhances OCSCC diagnostic accuracy, streamlines clinical decisions, and reduces reliance on costly histopathological tests. | 10.1007/s13198-024-02440-6 | addressing data imbalance challenges in oral cavity histopathological whole slide images with advanced deep learning techniques | oral cavity squamous cell carcinoma (ocscc) represents a common form of head and neck cancer originating from the mucosal lining of the oral cavity, often detected in advanced stages. traditional detection methods rely on analyzing hematoxylin and eosin (h&e)-stained histopathological whole-slide images, which are time-consuming and require expert pathology skills. hence, automated analysis is urgently needed to expedite diagnosis and improve patient outcomes. deep learning, through automated feature extraction, offers a promising avenue for capturing high-level abstract features with greater accuracy than traditional methods. however, the imbalance in class distribution within datasets significantly affects the performance of deep learning models during training, necessitating specialized approaches. to address the issue, various methods have been proposed at both data and algorithmic levels. this study investigates strategies to mitigate class imbalance by employing a publicly available ocscc imbalance dataset. we evaluated undersampling methods (near miss, edited nearest neighbors) and oversampling techniques (smote, deep smote, adasyn) integrated with transfer learning across different imbalance ratios (0.1, 0.15, 0.20, 0.30). our findings demonstrate the effectiveness of smote in improving test performance, highlighting the efficacy of strategic oversampling combined with transfer learning in classifying imbalanced medical datasets. this enhances ocscc diagnostic accuracy, streamlines clinical decisions, and reduces reliance on costly histopathological tests. | [
"oral cavity squamous cell carcinoma",
"ocscc",
"a common form",
"head and neck cancer",
"the mucosal lining",
"the oral cavity",
"advanced stages",
"traditional detection methods",
"hematoxylin",
"eosin",
"histopathological whole-slide images",
"which",
"expert pathology skills",
"automated analysis",
"diagnosis",
"patient outcomes",
"deep learning",
"automated feature extraction",
"a promising avenue",
"high-level abstract features",
"greater accuracy",
"traditional methods",
"the imbalance",
"class distribution",
"datasets",
"the performance",
"deep learning models",
"training",
"specialized approaches",
"the issue",
"various methods",
"both data",
"algorithmic levels",
"this study",
"strategies",
"class imbalance",
"a publicly available ocscc imbalance dataset",
"we",
"undersampling methods",
"near miss",
"neighbors",
"techniques",
"smote",
"deep smote",
"adasyn",
"different imbalance ratios",
"our findings",
"the effectiveness",
"smote",
"test performance",
"the efficacy",
"strategic oversampling",
"imbalanced medical datasets",
"diagnostic accuracy",
"clinical decisions",
"reliance",
"costly histopathological tests",
"hematoxylin",
"0.1",
"0.15",
"0.20",
"0.30"
] |
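The core rebalancing step the study evaluates can be shown in a few lines. The sketch below applies imbalanced-learn's SMOTE to synthetic minority-class feature vectors at roughly the 0.1 imbalance ratio studied; the embedding dimension and class counts are placeholders, and real use would apply it to features from a transfer-learning backbone.

```python
# SMOTE rebalancing sketch on synthetic embeddings (imbalance ratio ~0.1).
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X_maj = rng.normal(0.0, 1.0, size=(900, 128))   # majority-class embeddings
X_min = rng.normal(1.0, 1.0, size=(90, 128))    # minority class, ratio ~0.1
X = np.vstack([X_maj, X_min])
y = np.array([0] * 900 + [1] * 90)

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))         # classes equalized by SMOTE
```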
Deep learning implementation of image segmentation in agricultural applications: a comprehensive review | [
"Lian Lei",
"Qiliang Yang",
"Ling Yang",
"Tao Shen",
"Ruoxi Wang",
"Chengbiao Fu"
] | Image segmentation is a crucial task in computer vision, which divides a digital image into multiple segments and objects. In agriculture, image segmentation is extensively used for crop and soil monitoring, predicting the best times to sow, fertilize, and harvest, estimating crop yield, and detecting plant diseases. However, image segmentation faces difficulties in agriculture, such as the challenges of disease staging recognition, labeling inconsistency, and changes in plant morphology with the environment. Consequently, we have conducted a comprehensive review of image segmentation techniques based on deep learning, exploring the development and prospects of image segmentation in agriculture. Deep learning-based image segmentation solutions widely used in agriculture are categorized into eight main groups: encoder-decoder structures, multi-scale and pyramid-based methods, dilated convolutional networks, visual attention models, generative adversarial networks, graph neural networks, instance segmentation networks, and transformer-based models. In addition, the applications of image segmentation methods in agriculture are presented, such as plant disease detection, weed identification, crop growth monitoring, crop yield estimation, and counting. Furthermore, a collection of publicly available plant image segmentation datasets has been reviewed, and the evaluation and comparison of performance for image segmentation algorithms have been conducted on benchmark datasets. Finally, there is a discussion of the challenges and future prospects of image segmentation in agriculture. | 10.1007/s10462-024-10775-6 | deep learning implementation of image segmentation in agricultural applications: a comprehensive review | image segmentation is a crucial task in computer vision, which divides a digital image into multiple segments and objects. in agriculture, image segmentation is extensively used for crop and soil monitoring, predicting the best times to sow, fertilize, and harvest, estimating crop yield, and detecting plant diseases. however, image segmentation faces difficulties in agriculture, such as the challenges of disease staging recognition, labeling inconsistency, and changes in plant morphology with the environment. consequently, we have conducted a comprehensive review of image segmentation techniques based on deep learning, exploring the development and prospects of image segmentation in agriculture. deep learning-based image segmentation solutions widely used in agriculture are categorized into eight main groups: encoder-decoder structures, multi-scale and pyramid-based methods, dilated convolutional networks, visual attention models, generative adversarial networks, graph neural networks, instance segmentation networks, and transformer-based models. in addition, the applications of image segmentation methods in agriculture are presented, such as plant disease detection, weed identification, crop growth monitoring, crop yield estimation, and counting. furthermore, a collection of publicly available plant image segmentation datasets has been reviewed, and the evaluation and comparison of performance for image segmentation algorithms have been conducted on benchmark datasets. finally, there is a discussion of the challenges and future prospects of image segmentation in agriculture. | [
"image segmentation",
"a crucial task",
"computer vision",
"which",
"a digital image",
"multiple segments",
"objects",
"agriculture",
"image segmentation",
"crop",
"soil monitoring",
"the best times",
"crop yield",
"plant diseases",
"image segmentation",
"difficulties",
"agriculture",
"the challenges",
"disease staging recognition",
"labeling inconsistency",
"changes",
"plant morphology",
"the environment",
"we",
"a comprehensive review",
"image segmentation techniques",
"deep learning",
"the development",
"prospects",
"image segmentation",
"agriculture",
"deep learning-based image segmentation solutions",
"agriculture",
"eight main groups",
"encoder-decoder structures",
"-scale and pyramid-based methods",
"dilated convolutional networks",
"visual attention models",
"generative adversarial networks",
"graph neural networks",
"instance segmentation networks",
"transformer-based models",
"addition",
"the applications",
"image segmentation methods",
"agriculture",
"plant disease detection",
"identification",
"crop growth monitoring",
"crop yield estimation",
"a collection",
"publicly available plant image segmentation datasets",
"the evaluation",
"comparison",
"performance",
"image segmentation algorithms",
"benchmark datasets",
"a discussion",
"the challenges",
"future prospects",
"image segmentation",
"agriculture",
"eight"
] |
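As a concrete anchor for the first of the eight families listed above, here is a minimal encoder-decoder segmentation network (toy channel widths and an assumed two-class crop/background task, not a model from the review).

```python
# Minimal encoder-decoder segmentation sketch (assumed sizes and classes).
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, n_classes: int = 2):      # e.g. crop vs background
        super().__init__()
        self.enc = nn.Sequential(                # downsample: H,W -> H/2,W/2
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(                # upsample back to H,W
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1))         # per-pixel class logits

    def forward(self, x):
        return self.dec(self.enc(x))

x = torch.randn(1, 3, 128, 128)                  # a field/plant image stand-in
print(TinyEncoderDecoder()(x).shape)             # (1, 2, 128, 128)
```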
COVID-19 Fake News Detection using Deep Learning Model | [
"Mahabuba Akhter",
"Syed Md. Minhaz Hossain",
"Rizma Sijana Nigar",
"Srabanti Paul",
"Khaleque Md. Aashiq Kamal",
"Anik Sen",
"Iqbal H. Sarker"
] | People may now receive and share information more quickly and easily than ever due to the widespread use of mobile networked devices. However, this can occasionally lead to the spread of false information. Such information is being disseminated widely, which may cause people to make incorrect decisions about potentially crucial topics. This occurred in 2020, the year of the fatal and extremely contagious Coronavirus Disease (COVID-19) outbreak. The spread of false information about COVID-19 on social media has already been labeled as an “infodemic” by the World Health Organization (WHO), causing serious difficulties for governments attempting to control the pandemic. Consequently, it is crucial to have a model for detecting fake news related to COVID-19. In this paper, we present an effective Convolutional Neural Network (CNN)-based deep learning model using word embedding. For selecting the best CNN architecture, we take into account the optimal values of model hyper-parameters using grid search. Further, for measuring the effectiveness of our proposed CNN model, various state-of-the-art machine learning algorithms are evaluated for COVID-19 fake news detection. Among them, the CNN performs best, with 96.19% mean accuracy, 95% mean F1-score, and an area under the ROC curve (AUC) of 0.985. | 10.1007/s40745-023-00507-y | covid-19 fake news detection using deep learning model | people may now receive and share information more quickly and easily than ever due to the widespread use of mobile networked devices. however, this can occasionally lead to the spread of false information. such information is being disseminated widely, which may cause people to make incorrect decisions about potentially crucial topics. this occurred in 2020, the year of the fatal and extremely contagious coronavirus disease (covid-19) outbreak. the spread of false information about covid-19 on social media has already been labeled as an “infodemic” by the world health organization (who), causing serious difficulties for governments attempting to control the pandemic. consequently, it is crucial to have a model for detecting fake news related to covid-19. in this paper, we present an effective convolutional neural network (cnn)-based deep learning model using word embedding. for selecting the best cnn architecture, we take into account the optimal values of model hyper-parameters using grid search. further, for measuring the effectiveness of our proposed cnn model, various state-of-the-art machine learning algorithms are evaluated for covid-19 fake news detection. among them, the cnn performs best, with 96.19% mean accuracy, 95% mean f1-score, and an area under the roc curve (auc) of 0.985. | [
"people",
"information",
"the widespread use",
"mobile networked devices",
"this",
"the spread",
"false information",
"such information",
"which",
"people",
"incorrect decisions",
"potentially crucial topics",
"this",
"the year",
"the fatal and extremely contagious coronavirus disease",
"covid-19",
"the spread",
"false information",
"covid-19",
"social media",
"the world health organization",
"who",
"serious difficulties",
"governments",
"it",
"a model",
"fake news",
"covid-19",
"this paper",
"we",
"an effective convolutional neural network",
"cnn)-based deep learning model",
"word",
"the best cnn architecture",
"we",
"account",
"the optimal values",
"model hyper-parameters",
"grid search",
"the effectiveness",
"our proposed cnn model",
"the-art",
"covid-19 fake news detection",
"them",
"cnn",
"96.19%",
"mean accuracy",
"95%",
"0.985 area",
"roc curve",
"auc",
"2020",
"the year",
"covid-19",
"covid-19",
"the world health organization",
"covid-19",
"cnn",
"cnn",
"covid-19",
"cnn",
"96.19%",
"95%",
"0.985",
"roc"
] |
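A minimal sketch of the model family described above — word embeddings feeding a 1-D convolutional classifier for binary fake-news labels. The vocabulary size, embedding width, and filter settings are placeholders for the values the paper tuned via grid search, not the paper's chosen configuration.

```python
# Word-embedding + 1-D CNN text classifier sketch (hyper-parameters assumed).
import torch
import torch.nn as nn

VOCAB, EMB, MAXLEN = 10_000, 100, 60             # assumed hyper-parameters

class TextCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.conv = nn.Conv1d(EMB, 64, kernel_size=5)
        self.fc = nn.Linear(64, 1)

    def forward(self, tokens):                    # tokens: (B, MAXLEN) word ids
        x = self.emb(tokens).transpose(1, 2)      # (B, EMB, MAXLEN)
        x = torch.relu(self.conv(x)).max(dim=2).values  # global max pooling
        return torch.sigmoid(self.fc(x))          # P(fake)

posts = torch.randint(0, VOCAB, (4, MAXLEN))      # 4 tokenized posts
print(TextCNN()(posts).squeeze(-1))               # fake-news probabilities
```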
Model-based deep learning framework for accelerated optical projection tomography | [
"Marcos Obando",
"Andrea Bassi",
"Nicolas Ducros",
"Germán Mato",
"Teresa M. Correia"
] | In this work, we propose a model-based deep learning reconstruction algorithm for optical projection tomography (ToMoDL), to greatly reduce acquisition and reconstruction times. The proposed method iterates over a data consistency step and an image domain artefact removal step achieved by a convolutional neural network. A preprocessing stage is also included to avoid potential misalignments between the sample center of rotation and the detector. The algorithm is trained using a database of wild-type zebrafish (Danio rerio) at different stages of development to minimise the mean square error for a fixed number of iterations. Using a cross-validation scheme, we compare the results to other reconstruction methods, such as filtered backprojection, compressed sensing and a direct deep learning method where the pseudo-inverse solution is corrected by a U-Net. The proposed method performs equally well or better than the alternatives. For a highly reduced number of projections, only the U-Net method provides images comparable to those obtained with ToMoDL. However, ToMoDL has a much better performance if the amount of data available for training is limited, given that the number of network trainable parameters is smaller. | 10.1038/s41598-023-47650-3 | model-based deep learning framework for accelerated optical projection tomography | in this work, we propose a model-based deep learning reconstruction algorithm for optical projection tomography (tomodl), to greatly reduce acquisition and reconstruction times. the proposed method iterates over a data consistency step and an image domain artefact removal step achieved by a convolutional neural network. a preprocessing stage is also included to avoid potential misalignments between the sample center of rotation and the detector. the algorithm is trained using a database of wild-type zebrafish (danio rerio) at different stages of development to minimise the mean square error for a fixed number of iterations. using a cross-validation scheme, we compare the results to other reconstruction methods, such as filtered backprojection, compressed sensing and a direct deep learning method where the pseudo-inverse solution is corrected by a u-net. the proposed method performs equally well or better than the alternatives. for a highly reduced number of projections, only the u-net method provides images comparable to those obtained with tomodl. however, tomodl has a much better performance if the amount of data available for training is limited, given that the number of network trainable parameters is smaller. | [
"this work",
"we",
"a model-based deep learning reconstruction algorithm",
"optical projection tomography",
"acquisition and reconstruction times",
"the proposed method",
"a data consistency step",
"an image domain artefact removal step",
"a convolutional neural network",
"a preprocessing stage",
"potential misalignments",
"the sample center",
"rotation",
"the detector",
"the algorithm",
"a database",
"wild-type zebrafish",
"danio rerio",
"different stages",
"development",
"the mean square error",
"a fixed number",
"iterations",
"a cross-validation scheme",
"we",
"the results",
"other reconstruction methods",
"filtered backprojection",
"compressed sensing",
"a direct deep learning method",
"the pseudo-inverse solution",
"a u",
"-",
"net",
"the proposed method",
"the alternatives",
"a highly reduced number",
"projections",
"only the u-net method",
"images",
"those",
"a much better performance",
"the amount",
"data",
"training",
"the number",
"network trainable parameters",
"danio rerio"
] |
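ToMoDL's structure — alternating a data-consistency step against the forward model with a learned artifact-removal step — can be sketched with a toy linear operator standing in for the OPT projection model. All sizes and the fixed iteration count below are assumptions, and the denoiser is untrained; this shows the unrolled scheme, not the authors' trained network.

```python
# Unrolled model-based reconstruction sketch: gradient data-consistency step
# interleaved with a learned artifact-removal step (toy forward operator A).
import torch
import torch.nn as nn

N, M, ITERS, STEP = 64, 48, 5, 0.1               # toy sizes / fixed iterations

A = torch.randn(M, N) / M ** 0.5                 # stand-in projection operator
x_true = torch.randn(N)
y = A @ x_true                                   # measured data (toy sinogram)

denoiser = nn.Sequential(nn.Linear(N, N), nn.ReLU(), nn.Linear(N, N))

x = torch.zeros(N)
for _ in range(ITERS):
    x = x - STEP * A.T @ (A @ x - y)             # data-consistency gradient step
    x = denoiser(x)                              # learned artifact-removal step
print("data residual:", torch.norm(A @ x - y).item())
```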
Mammography using low-frequency electromagnetic fields with deep learning | [
"Hamid Akbari-Chelaresi",
"Dawood Alsaedi",
"Seyed Hossein Mirjahanmardi",
"Mohamed El Badawe",
"Ali M. Albishi",
"Vahid Nayyeri",
"Omar M. Ramahi"
] | In this paper, a novel technique for detecting female breast anomalous tissues is presented and validated through numerical simulations. The technique, to a high degree, resembles X-ray mammography; however, instead of using X-rays for obtaining images of the breast, low-frequency electromagnetic fields are leveraged. To capture breast impressions, a metasurface, which can be thought of as analogous to X-rays film, has been employed. To achieve deep and sufficient penetration within the breast tissues, the source of excitation is a simple narrow-band dipole antenna operating at 200 MHz. The metasurface is designed to operate at the same frequency. The detection mechanism is based on comparing the impressions obtained from the breast under examination to the reference case (healthy breasts) using machine learning techniques. Using this system, not only would it be possible to detect tumors (benign or malignant), but one can also determine the location and size of the tumors. Remarkably, deep learning models were found to achieve very high classification accuracy. | 10.1038/s41598-023-40494-x | mammography using low-frequency electromagnetic fields with deep learning | in this paper, a novel technique for detecting female breast anomalous tissues is presented and validated through numerical simulations. the technique, to a high degree, resembles x-ray mammography; however, instead of using x-rays for obtaining images of the breast, low-frequency electromagnetic fields are leveraged. to capture breast impressions, a metasurface, which can be thought of as analogous to x-rays film, has been employed. to achieve deep and sufficient penetration within the breast tissues, the source of excitation is a simple narrow-band dipole antenna operating at 200 mhz. the metasurface is designed to operate at the same frequency. the detection mechanism is based on comparing the impressions obtained from the breast under examination to the reference case (healthy breasts) using machine learning techniques. using this system, not only would it be possible to detect tumors (benign or malignant), but one can also determine the location and size of the tumors. remarkably, deep learning models were found to achieve very high classification accuracy. | [
"this paper",
"a novel technique",
"female breast anomalous tissues",
"numerical simulations",
"the technique",
"a high degree",
"x-ray mammography",
"x",
"-",
"rays",
"images",
"the breast, low-frequency electromagnetic fields",
"breast impressions",
"a metasurface",
"which",
"x-rays film",
"deep and sufficient penetration",
"the breast tissues",
"the source",
"excitation",
"a simple narrow-band dipole antenna",
"200 mhz",
"the metasurface",
"the same frequency",
"the detection mechanism",
"the impressions",
"the breast",
"examination",
"the reference case",
"healthy breasts",
"machine learning techniques",
"this system",
"it",
"tumors",
"one",
"the location",
"size",
"the tumors",
"deep learning models",
"very high classification accuracy",
"dipole antenna",
"200"
] |
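A minimal sketch of the detection mechanism described above: the metasurface impression of a breast under test is differenced against the healthy reference impression and a classifier reads the result. Everything here is synthetic — the cell count, the perturbation model, and the random-forest stand-in for the paper's deep models are all assumptions.

```python
# Reference-comparison detection sketch on synthetic metasurface impressions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
CELLS = 64                                       # assumed number of cells
reference = rng.normal(size=CELLS)               # healthy-breast impression

def impression(has_tumor: bool) -> np.ndarray:
    x = reference + rng.normal(scale=0.05, size=CELLS)  # measurement noise
    if has_tumor:
        x[10:14] += 0.5                          # a tumor perturbs a few cells
    return x

labels = [False] * 100 + [True] * 100
X = np.array([impression(t) - reference for t in labels])  # difference features
y = np.array(labels, dtype=int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```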
Taxonomy of deep learning-based intrusion detection system approaches in fog computing: a systematic review | [
"Sepide Najafli",
"Abolfazl Toroghi Haghighat",
"Babak Karasfi"
] | The Internet of Things (IoT) has been used in various domains. Fundamental security issues must be addressed to accelerate and develop the Internet of Things. An intrusion detection system (IDS) is an essential element in network security designed to detect and determine the type of attacks. The use of deep learning (DL) shows promising results in the design of IDS based on IoT. DL facilitates analytics and learning in the dynamic IoT domain. Some deep learning-based IDS in IoT sensors cannot be executed because of resource restrictions. Although cloud computing could overcome limitations, the distance between the cloud and the end IoT sensors causes high communication costs, security problems and delays. Fog computing has been presented to handle these issues and can bring resources to the edge of the network. Many studies have been conducted to investigate IDS based on IoT. Our goal is to investigate and classify deep learning-based IDS on fog processing. In this paper, researchers can access comprehensive resources in this field. Therefore, first, we provide a complete classification of IDS in IoT. Then practical and important proposed IDSs in the fog environment are discussed in three groups (binary, multi-class, and hybrid), and the advantages and disadvantages of each approach are examined. The results show that most of the studied methods consider hybrid strategies (binary and multi-class). In addition, in the reviewed papers, the average accuracy obtained with the binary method is better than with the multi-class method. Finally, we highlight some challenges and future directions for the next research in IDS techniques. | 10.1007/s10115-024-02162-y | taxonomy of deep learning-based intrusion detection system approaches in fog computing: a systematic review | the internet of things (iot) has been used in various domains. fundamental security issues must be addressed to accelerate and develop the internet of things. an intrusion detection system (ids) is an essential element in network security designed to detect and determine the type of attacks. the use of deep learning (dl) shows promising results in the design of ids based on iot. dl facilitates analytics and learning in the dynamic iot domain. some deep learning-based ids in iot sensors cannot be executed because of resource restrictions. although cloud computing could overcome limitations, the distance between the cloud and the end iot sensors causes high communication costs, security problems and delays. fog computing has been presented to handle these issues and can bring resources to the edge of the network. many studies have been conducted to investigate ids based on iot. our goal is to investigate and classify deep learning-based ids on fog processing. in this paper, researchers can access comprehensive resources in this field. therefore, first, we provide a complete classification of ids in iot. then practical and important proposed idss in the fog environment are discussed in three groups (binary, multi-class, and hybrid), and the advantages and disadvantages of each approach are examined. the results show that most of the studied methods consider hybrid strategies (binary and multi-class). in addition, in the reviewed papers, the average accuracy obtained with the binary method is better than with the multi-class method. finally, we highlight some challenges and future directions for the next research in ids techniques. | [
"the internet",
"things",
"iot",
"various aspects",
"fundamental security issues",
"the internet",
"things",
"an intrusion detection system",
"ids",
"an essential element",
"network security",
"the type",
"attacks",
"the use",
"deep learning",
"dl",
"results",
"the design",
"ids",
"iot",
"dl",
"analytics",
"the dynamic iot domain",
"some deep learning-based ids",
"iot sensors",
"resource restrictions",
"cloud computing",
"limitations",
"the distance",
"the cloud",
"the end iot sensors",
"high communication costs",
"security problems",
"delays",
"fog computing",
"these issues",
"resources",
"the edge",
"the network",
"many studies",
"ids",
"iot",
"our goal",
"deep learning-based ids",
"fog processing",
"this paper",
"researchers",
"comprehensive resources",
"this field",
"we",
"a complete classification",
"ids",
"iot",
"practical and important proposed idss",
"the fog environment",
"three groups",
"the advantages",
"disadvantages",
"each approach",
"the results",
"the studied methods",
"hybrid strategies",
"binary and multi-class",
"addition",
"the reviewed papers",
"the average accuracy",
"the binary method",
"the multi",
"-",
"class",
"we",
"some challenges",
"future directions",
"the next research",
"ids techniques",
"fog computing",
"first",
"three"
] |
Deep learning-based pathway-centric approach to characterize recurrent hepatocellular carcinoma after liver transplantation | [
"Jeffrey To",
"Soumita Ghosh",
"Xun Zhao",
"Elisa Pasini",
"Sandra Fischer",
"Gonzalo Sapisochin",
"Anand Ghanekar",
"Elmar Jaeckel",
"Mamatha Bhat"
] | Background: Liver transplantation (LT) is offered as a cure for hepatocellular carcinoma (HCC); however, 15–20% develop recurrence post-transplant, which tends to be aggressive. In this study, we examined the transcriptome profiles of patients with recurrent HCC to identify differentially expressed genes (DEGs), the involved pathways, biological functions, and potential gene signatures of recurrent HCC post-transplant using deep machine learning (ML) methodology. Materials and methods: We analyzed the transcriptomic profiles of primary and recurrent tumor samples from 7 pairs of patients who underwent LT. Following differential gene expression analysis, we performed pathway enrichment, gene ontology (GO) analyses and protein-protein interactions (PPIs) with top 10 hub gene networks. We also predicted the landscape of infiltrating immune cells using Cibersortx. We next developed pathway and GO term-based deep learning models leveraging primary tissue gene expression data from The Cancer Genome Atlas (TCGA) to identify gene signatures in recurrent HCC. Results: The PI3K/Akt signaling pathway and cytokine-mediated signaling pathway were particularly activated in HCC recurrence. The recurrent tumors exhibited upregulation of an immune-escape related gene, CD274, in the top 10 hub gene analysis. Significantly higher infiltration of monocytes and lower M1 macrophages were found in recurrent HCC tumors. Our deep learning approach identified a 20-gene signature in recurrent HCC. Amongst the 20 genes, through multiple analyses, IL6 was found to be significantly associated with HCC recurrence. Conclusion: Our deep learning approach identified PI3K/Akt signaling as potentially regulating cytokine-mediated functions and the expression of immune escape genes, leading to alterations in the pattern of immune cell infiltration. In conclusion, IL6 was identified to play an important role in HCC recurrence. | 10.1186/s40246-024-00624-6 | deep learning-based pathway-centric approach to characterize recurrent hepatocellular carcinoma after liver transplantation | background: liver transplantation (lt) is offered as a cure for hepatocellular carcinoma (hcc); however, 15–20% develop recurrence post-transplant, which tends to be aggressive. in this study, we examined the transcriptome profiles of patients with recurrent hcc to identify differentially expressed genes (degs), the involved pathways, biological functions, and potential gene signatures of recurrent hcc post-transplant using deep machine learning (ml) methodology. materials and methods: we analyzed the transcriptomic profiles of primary and recurrent tumor samples from 7 pairs of patients who underwent lt. following differential gene expression analysis, we performed pathway enrichment, gene ontology (go) analyses and protein-protein interactions (ppis) with top 10 hub gene networks. we also predicted the landscape of infiltrating immune cells using cibersortx. we next developed pathway and go term-based deep learning models leveraging primary tissue gene expression data from the cancer genome atlas (tcga) to identify gene signatures in recurrent hcc. results: the pi3k/akt signaling pathway and cytokine-mediated signaling pathway were particularly activated in hcc recurrence. the recurrent tumors exhibited upregulation of an immune-escape related gene, cd274, in the top 10 hub gene analysis. significantly higher infiltration of monocytes and lower m1 macrophages were found in recurrent hcc tumors. our deep learning approach identified a 20-gene signature in recurrent hcc. 
amongst the 20 genes, through multiple analyses, il6 was found to be significantly associated with hcc recurrence. conclusion: our deep learning approach identified pi3k/akt signaling as potentially regulating cytokine-mediated functions and the expression of immune escape genes, leading to alterations in the pattern of immune cell infiltration. in conclusion, il6 was identified to play an important role in hcc recurrence. | [
"backgroundliver transplantation",
"lt",
"a cure",
"hepatocellular carcinoma",
"hcc",
"15–20%",
"recurrence",
"transplant",
"which",
"this study",
"we",
"the transcriptome profiles",
"patients",
"recurrent hcc",
"differentially expressed genes",
"the involved pathways",
"biological functions",
"potential gene signatures",
"recurrent hcc post",
"-",
"deep machine learning",
"(ml) methodology.materials",
"methodswe",
"the transcriptomic profiles",
"primary and recurrent tumor samples",
"7 pairs",
"patients",
"who",
"lt",
"differential gene expression analysis",
"we",
"pathway enrichment",
"gene ontology",
"analyses",
"protein-protein interactions",
"top 10 hub gene networks",
"we",
"the landscape",
"infiltrating immune cells",
"cibersortx",
"we",
"term-based deep learning models",
"primary tissue gene expression data",
"the cancer genome atlas",
"(tcga",
"gene signatures",
"recurrent hcc.resultsthe pi3k/akt",
"pathway",
"hcc recurrence",
"the recurrent tumors",
"upregulation",
"an immune-escape related gene",
"cd274",
"the top 10 hub gene analysis",
"significantly higher infiltration",
"monocytes",
"lower m1 macrophages",
"recurrent hcc tumors",
"our deep learning approach",
"a 20-gene signature",
"recurrent hcc",
"the 20 genes",
"multiple analysis",
"il6",
"hcc",
"recurrence.conclusionour deep learning approach",
"pi3k/akt",
"cytokine-mediated functions",
"the expression",
"immune escape genes",
"alterations",
"the pattern",
"immune cell infiltration",
"conclusion",
"il6",
"an important role",
"hcc recurrence",
"15–20%",
"7",
"10",
"10",
"20",
"20"
] |
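As a simplified stand-in for the signature-derivation step above (the paper uses pathway/GO-based deep models on TCGA expression data; this sketch uses synthetic data and an L1-penalized linear model), one can rank genes by fitted importance and keep a top-20 set.

```python
# Toy gene-signature derivation: fit a sparse model on expression profiles,
# rank genes by coefficient magnitude, keep the top 20 (all data synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 500
X = rng.normal(size=(n_samples, n_genes))        # toy expression matrix
y = rng.integers(0, 2, size=n_samples)           # recurrent vs non-recurrent

model = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
importance = np.abs(model.coef_).ravel()
signature = np.argsort(importance)[::-1][:20]    # indices of a 20-gene set
print("signature gene indices:", signature)
```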
Deep learning models for perception of brightness related illusions | [
"Amrita Mukherjee",
"Avijit Paul",
"Kuntal Ghosh"
] | Illusions are like holes in our effortless visual mechanism through which we can peep into the internal mechanisms of the brain. Scientists attempted to explain underlying physiological, physical, and cognitive mechanisms of illusions by the receptive field hierarchical organizations, information sampling, filtering, etc. Some antagonistic illusions cannot be explained by them and for this, deep learning networks were used recently as a model for illusion perception. To further broaden the scope of the perceptual functionality in the brightness contrast genre, handle the background removal effects on some illusions that reduce the illusory effects, and replicate the antagonistic illusions with the same parameter setup, we have used Convolutional Neural Network, Autoencoder, U-Net, and U-Net++ models for replicating the visual illusions. The networks are specialized in low-level vision tasks like De-noising, De-blurring, and a combination of both. A high number of brightness contrast visual illusions are tested on all the networks and most of the outcomes significantly matched human perceptions. Overall, our method will guide the development of neurobiological frameworks which might enrich the computational neuroscience study by distilling some biological principles. On the other hand, the machine learning community will benefit from knowing the inherent flaws of the networks so that the true image of reality can be taken into consideration, especially in imaging situations where experts too can be deceived. | 10.1007/s10489-024-05658-w | deep learning models for perception of brightness related illusions | illusions are like holes in our effortless visual mechanism through which we can peep into the internal mechanisms of the brain. scientists attempted to explain underlying physiological, physical, and cognitive mechanisms of illusions by the receptive field hierarchical organizations, information sampling, filtering, etc. some antagonistic illusions cannot be explained by them and for this, deep learning networks were used recently as a model for illusion perception. to further broaden the scope of the perceptual functionality in the brightness contrast genre, handle the background removal effects on some illusions that reduce the illusory effects, and replicate the antagonistic illusions with the same parameter setup, we have used convolutional neural network, autoencoder, u-net, and u-net++ models for replicating the visual illusions. the networks are specialized in low-level vision tasks like de-noising, de-blurring, and a combination of both. a high number of brightness contrast visual illusions are tested on all the networks and most of the outcomes significantly matched human perceptions. overall, our method will guide the development of neurobiological frameworks which might enrich the computational neuroscience study by distilling some biological principles. on the other hand, the machine learning community will benefit from knowing the inherent flaws of the networks so that the true image of reality can be taken into consideration, especially in imaging situations where experts too can be deceived. | [
"illusions",
"holes",
"our effortless visual mechanism",
"which",
"we",
"the internal mechanisms",
"the brain",
"scientists",
"underlying physiological, physical, and cognitive mechanisms",
"illusions",
"the receptive field hierarchical organizations",
"information sampling",
"filtering",
"some antagonistic illusions",
"them",
"this",
"deep learning networks",
"a model",
"illusion perception",
"the scope",
"the perceptual functionality",
"the brightness contrast genre",
"the background removal effects",
"some illusions",
"that",
"the illusory effects",
"the antagonistic illusions",
"the same parameter setup",
"we",
"convolutional neural network",
"autoencoder",
"-",
"net",
"u-net++ models",
"the visual illusions",
"the networks",
"low-level vision tasks",
"de",
"de",
"-",
"a combination",
"both",
"a high number",
"brightness contrast visual illusions",
"all the networks",
"the outcomes",
"human perceptions",
"our method",
"the development",
"neurobiological frameworks",
"which",
"the computational neuroscience study",
"some biological principles",
"the other hand",
"the machine learning community",
"the inherent flaws",
"the networks",
"the true image",
"reality",
"consideration",
"imaging situations",
"experts"
] |
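A minimal sketch of the model family probed above: a small convolutional autoencoder trained on a de-noising objective, which can afterwards be fed brightness-illusion stimuli to inspect its responses. The sizes and the single illustrative training step are assumptions, not the study's setup.

```python
# De-noising autoencoder sketch (toy sizes); after training on natural image
# patches, illusion stimuli can be passed through to probe its responses.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        return self.dec(self.enc(x))

model = DenoisingAE()
clean = torch.rand(8, 1, 64, 64)                  # stand-in training patches
noisy = clean + 0.1 * torch.randn_like(clean)
loss = nn.functional.mse_loss(model(noisy), clean)  # de-noising objective
loss.backward()                                   # one illustrative update step
# Once trained, comparing the model's output on an illusion image against the
# physical brightness values probes whether it reproduces the illusory shift.
```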
Exploiting biochemical data to improve osteosarcoma diagnosis with deep learning | [
"Shidong Wang",
"Yangyang Shen",
"Fanwei Zeng",
"Meng Wang",
"Bohan Li",
"Dian Shen",
"Xiaodong Tang",
"Beilun Wang"
] | Early and accurate diagnosis of osteosarcomas (OS) is of great clinical significance, and machine learning (ML) based methods are increasingly adopted. However, current ML-based methods for osteosarcoma diagnosis consider only X-ray images, usually fail to generalize to new cases, and lack explainability. In this paper, we seek to explore the capability of deep learning models in diagnosing primary OS, with higher accuracy, explainability, and generality. Concretely, we analyze the added value of integrating the biochemical data, i.e., alkaline phosphatase (ALP) and lactate dehydrogenase (LDH), and design a model that incorporates the numerical features of ALP and LDH and the visual features of X-ray imaging through a late fusion approach in the feature space. We evaluate this model on real-world clinic data with 848 patients aged from 4 to 81. The experimental results reveal the effectiveness of incorporating ALP and LDH simultaneously in a late fusion approach, with the accuracy of the considered 2608 cases increased to 97.17%, compared to 94.35% in the baseline. Grad-CAM visualizations consistent with orthopedic specialists further justified the model’s explainability. | 10.1007/s13755-024-00288-5 | exploiting biochemical data to improve osteosarcoma diagnosis with deep learning | early and accurate diagnosis of osteosarcomas (os) is of great clinical significance, and machine learning (ml) based methods are increasingly adopted. however, current ml-based methods for osteosarcoma diagnosis consider only x-ray images, usually fail to generalize to new cases, and lack explainability. in this paper, we seek to explore the capability of deep learning models in diagnosing primary os, with higher accuracy, explainability, and generality. concretely, we analyze the added value of integrating the biochemical data, i.e., alkaline phosphatase (alp) and lactate dehydrogenase (ldh), and design a model that incorporates the numerical features of alp and ldh and the visual features of x-ray imaging through a late fusion approach in the feature space. we evaluate this model on real-world clinic data with 848 patients aged from 4 to 81. the experimental results reveal the effectiveness of incorporating alp and ldh simultaneously in a late fusion approach, with the accuracy of the considered 2608 cases increased to 97.17%, compared to 94.35% in the baseline. grad-cam visualizations consistent with orthopedic specialists further justified the model’s explainability. | [
"early and accurate diagnosis",
"great clinical significance",
"machine learning (ml) based methods",
"current ml-based methods",
"osteosarcoma diagnosis",
"only x-ray images",
"new cases",
"lack explainability",
"this paper",
"we",
"the capability",
"deep learning models",
"higher accuracy",
"explainability",
"generality",
"we",
"the added value",
"the biochemical data",
"i.e., alkaline phosphatase",
"alp",
"lactate dehydrogenase",
"ldh",
"a model",
"that",
"the numerical features",
"alp",
"ldh",
"the visual features",
"x",
"-ray imaging",
"a late fusion approach",
"the feature space",
"we",
"this model",
"real-world clinic data",
"848 patients",
"the experimental results",
"the effectiveness",
"alp",
"ldh",
"a late fusion approach",
"the accuracy",
"the considered 2608 cases",
"97.17%",
"94.35%",
"the baseline",
"grad-cam visualizations",
"orthopedic specialists",
"the model’s explainability",
"848",
"4",
"81",
"2608",
"97.17%",
"94.35%"
] |
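A minimal sketch of the late-fusion idea described in the preceding osteosarcoma abstract: CNN features from the X-ray are concatenated in feature space with the two biochemical markers (ALP, LDH) before classification. The backbone, feature sizes, and class count below are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    """Fuse CNN features from an X-ray image with numeric ALP/LDH values."""
    def __init__(self, num_classes: int = 2, img_feat_dim: int = 64):
        super().__init__()
        # Small CNN branch for the X-ray image (1-channel input assumed).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, img_feat_dim), nn.ReLU(),
        )
        # MLP branch for the two biochemical markers (ALP, LDH).
        self.bio = nn.Sequential(nn.Linear(2, 16), nn.ReLU())
        # Late fusion: concatenate in feature space, then classify.
        self.head = nn.Linear(img_feat_dim + 16, num_classes)

    def forward(self, image, biochem):
        z = torch.cat([self.cnn(image), self.bio(biochem)], dim=1)
        return self.head(z)

# Smoke test with random tensors standing in for real clinic data.
model = LateFusionNet()
logits = model(torch.randn(4, 1, 224, 224), torch.randn(4, 2))
print(logits.shape)  # torch.Size([4, 2])
```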
Deep learning-based predictive classification of functional subpopulations of hematopoietic stem cells and multipotent progenitors | [
"Shen Wang",
"Jianzhong Han",
"Jingru Huang",
"Khayrul Islam",
"Yuheng Shi",
"Yuyuan Zhou",
"Dongwook Kim",
"Jane Zhou",
"Zhaorui Lian",
"Yaling Liu",
"Jian Huang"
] | BackgroundHematopoietic stem cells (HSCs) and multipotent progenitors (MPPs) play a pivotal role in maintaining lifelong hematopoiesis. The distinction between stem cells and other progenitors, as well as the assessment of their functions, has long been a central focus in stem cell research. In recent years, deep learning has emerged as a powerful tool for cell image analysis and classification/prediction.MethodsIn this study, we explored the feasibility of employing deep learning techniques to differentiate murine HSCs and MPPs based solely on their morphology, as observed through light microscopy (DIC) images.ResultsAfter rigorous training and validation using extensive image datasets, we successfully developed a three-class classifier, referred to as the LSM model, capable of reliably distinguishing long-term HSCs, short-term HSCs, and MPPs. The LSM model extracts intrinsic morphological features unique to different cell types, irrespective of the methods used for cell identification and isolation, such as surface markers or intracellular GFP markers. Furthermore, employing the same deep learning framework, we created a two-class classifier that effectively discriminates between aged HSCs and young HSCs. This discovery is particularly significant as both cell types share identical surface markers yet serve distinct functions. This classifier holds the potential to offer a novel, rapid, and efficient means of assessing the functional states of HSCs, thus obviating the need for time-consuming transplantation experiments.ConclusionOur study represents the pioneering use of deep learning to differentiate HSCs and MPPs under steady-state conditions. This novel and robust deep learning-based platform will provide a basis for the future development of a new generation stem cell identification and separation system. It may also provide new insight into the molecular mechanisms underlying stem cell self-renewal. | 10.1186/s13287-024-03682-8 | deep learning-based predictive classification of functional subpopulations of hematopoietic stem cells and multipotent progenitors | backgroundhematopoietic stem cells (hscs) and multipotent progenitors (mpps) play a pivotal role in maintaining lifelong hematopoiesis. the distinction between stem cells and other progenitors, as well as the assessment of their functions, has long been a central focus in stem cell research. in recent years, deep learning has emerged as a powerful tool for cell image analysis and classification/prediction.methodsin this study, we explored the feasibility of employing deep learning techniques to differentiate murine hscs and mpps based solely on their morphology, as observed through light microscopy (dic) images.resultsafter rigorous training and validation using extensive image datasets, we successfully developed a three-class classifier, referred to as the lsm model, capable of reliably distinguishing long-term hscs, short-term hscs, and mpps. the lsm model extracts intrinsic morphological features unique to different cell types, irrespective of the methods used for cell identification and isolation, such as surface markers or intracellular gfp markers. furthermore, employing the same deep learning framework, we created a two-class classifier that effectively discriminates between aged hscs and young hscs. this discovery is particularly significant as both cell types share identical surface markers yet serve distinct functions. 
this classifier holds the potential to offer a novel, rapid, and efficient means of assessing the functional states of hscs, thus obviating the need for time-consuming transplantation experiments.conclusionour study represents the pioneering use of deep learning to differentiate hscs and mpps under steady-state conditions. this novel and robust deep learning-based platform will provide a basis for the future development of a new generation stem cell identification and separation system. it may also provide new insight into the molecular mechanisms underlying stem cell self-renewal. | [
"backgroundhematopoietic stem cells",
"hscs",
"multipotent progenitors",
"mpps",
"a pivotal role",
"lifelong hematopoiesis",
"the distinction",
"stem cells",
"other progenitors",
"the assessment",
"their functions",
"a central focus",
"stem cell research",
"recent years",
"deep learning",
"a powerful tool",
"cell image analysis",
"classification/prediction.methodsin",
"we",
"the feasibility",
"deep learning techniques",
"murine hscs",
"mpps",
"their morphology",
"light microscopy (dic) images.resultsafter rigorous training",
"validation",
"extensive image datasets",
"we",
"a three-class classifier",
"the lsm model",
"reliably distinguishing long-term hscs",
"short-term hscs",
"mpps",
"the lsm model",
"intrinsic morphological features",
"different cell types",
"the methods",
"cell identification",
"isolation",
"surface markers",
"intracellular gfp markers",
"the same deep learning framework",
"we",
"a two-class classifier",
"that",
"aged hscs",
"young hscs",
"this discovery",
"both cell types",
"identical surface markers",
"distinct functions",
"this classifier",
"the potential",
"a novel, rapid, and efficient means",
"the functional states",
"hscs",
"the need",
"time-consuming transplantation",
"experiments.conclusionour study",
"the pioneering use",
"deep learning",
"hscs",
"mpps",
"steady-state conditions",
"this novel and robust deep learning-based platform",
"a basis",
"the future development",
"a new generation stem cell identification",
"separation system",
"it",
"new insight",
"the molecular mechanisms",
"stem cell self-renewal",
"recent years",
"murine hscs",
"three",
"two"
] |
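The three-class "LSM" classifier in the preceding abstract is not specified architecturally; a generic CNN over grayscale DIC crops, sketched below, shows the basic shape of such a model. Layer sizes and input resolution are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a three-class LSM-style classifier over DIC
# image crops; the real architecture and hyperparameters are not given.
CLASSES = ["LT-HSC", "ST-HSC", "MPP"]

lsm_like = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, len(CLASSES)),
)

x = torch.randn(8, 1, 128, 128)        # batch of grayscale DIC crops
probs = torch.softmax(lsm_like(x), 1)  # per-cell class probabilities
print(probs.argmax(dim=1))             # predicted class indices
```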
Deep source transfer learning for the estimation of internal brain dynamics using scalp EEG | [
"Haitao Yu",
"Zhiwen Hu",
"Quanfa Zhao",
"Jing Liu"
] | Electroencephalography (EEG) provides high temporal resolution neural data for brain-computer interfacing via noninvasive electrophysiological recording. Estimating the internal brain activity by means of source imaging techniques can further improve the spatial resolution of EEG and enhance the reliability of neural decoding and brain-computer interaction. In this work, we propose a novel EEG data-driven source imaging scheme for precise and efficient estimation of macroscale spatiotemporal brain dynamics across thalamus and cortical regions with deep learning methods. A deep source imaging framework with a convolutional-recurrent neural network is designed to estimate the internal brain dynamics from high-density EEG recordings. Moreover, a brain model including 210 cortical regions and 16 thalamic nuclei is established based on human brain connectome to provide synthetic training data, which manifests intrinsic characteristics of underlying brain dynamics in spontaneous, stimulation-evoked, and pathological states. Transfer learning algorithm is further applied to the trained network to reduce the dynamical differences between synthetic and realistic EEG. Extensive experiments exhibit that the proposed deep-learning method can accurately estimate the spatial and temporal activity of brain sources and achieves superior performance compared to the state-of-the-art approaches. Moreover, the EEG data-driven source imaging framework is effective in the location of seizure onset zone in epilepsy and reconstruction of dynamical thalamocortical interactions during sensory processing of acupuncture stimulation, implying its applicability in brain-computer interfacing for neuroscience research and clinical applications. | 10.1007/s11571-024-10149-2 | deep source transfer learning for the estimation of internal brain dynamics using scalp eeg | electroencephalography (eeg) provides high temporal resolution neural data for brain-computer interfacing via noninvasive electrophysiological recording. estimating the internal brain activity by means of source imaging techniques can further improve the spatial resolution of eeg and enhance the reliability of neural decoding and brain-computer interaction. in this work, we propose a novel eeg data-driven source imaging scheme for precise and efficient estimation of macroscale spatiotemporal brain dynamics across thalamus and cortical regions with deep learning methods. a deep source imaging framework with a convolutional-recurrent neural network is designed to estimate the internal brain dynamics from high-density eeg recordings. moreover, a brain model including 210 cortical regions and 16 thalamic nuclei is established based on human brain connectome to provide synthetic training data, which manifests intrinsic characteristics of underlying brain dynamics in spontaneous, stimulation-evoked, and pathological states. transfer learning algorithm is further applied to the trained network to reduce the dynamical differences between synthetic and realistic eeg. extensive experiments exhibit that the proposed deep-learning method can accurately estimate the spatial and temporal activity of brain sources and achieves superior performance compared to the state-of-the-art approaches. 
moreover, the eeg data-driven source imaging framework is effective in the location of seizure onset zone in epilepsy and reconstruction of dynamical thalamocortical interactions during sensory processing of acupuncture stimulation, implying its applicability in brain-computer interfacing for neuroscience research and clinical applications. | [
"electroencephalography",
"(eeg",
"high temporal resolution neural data",
"brain-computer",
"noninvasive electrophysiological recording",
"the internal brain activity",
"means",
"source",
"imaging techniques",
"the spatial resolution",
"eeg",
"the reliability",
"neural decoding and brain-computer interaction",
"this work",
"we",
"a novel eeg data-driven source",
"scheme",
"precise and efficient estimation",
"macroscale spatiotemporal brain dynamics",
"thalamus and cortical regions",
"deep learning methods",
"a deep source imaging framework",
"a convolutional-recurrent neural network",
"the internal brain dynamics",
"high-density eeg recordings",
"a brain model",
"210 cortical regions",
"16 thalamic nuclei",
"human brain connectome",
"synthetic training data",
"which",
"intrinsic characteristics",
"underlying brain dynamics",
"spontaneous, stimulation-evoked, and pathological states",
"transfer learning algorithm",
"the trained network",
"the dynamical differences",
"synthetic and realistic eeg",
"extensive experiments",
"the proposed deep-learning method",
"the spatial and temporal activity",
"brain sources",
"superior performance",
"the-art",
"the eeg data-driven source imaging framework",
"the location",
"seizure onset zone",
"epilepsy",
"reconstruction",
"dynamical thalamocortical interactions",
"sensory processing",
"acupuncture stimulation",
"its applicability",
"brain-computer",
"neuroscience research and clinical applications",
"electroencephalography",
"210",
"16"
] |
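The convolutional-recurrent mapping from scalp EEG to source dynamics described above can be sketched as follows; the output dimension of 226 matches the paper's 210 cortical regions plus 16 thalamic nuclei, but the layer choices are assumptions. In the paper, such a network would first be trained on synthetic data and then adapted to real EEG via transfer learning.

```python
import torch
import torch.nn as nn

class ConvRecurrentSourceNet(nn.Module):
    """Sketch of a convolutional-recurrent mapping from scalp EEG
    (channels x time) to source time courses (time x regions)."""
    def __init__(self, n_electrodes=64, n_sources=226, hidden=128):
        super().__init__()
        # Temporal convolution mixes electrodes into a latent code per step.
        self.conv = nn.Conv1d(n_electrodes, hidden, kernel_size=5, padding=2)
        # Recurrence models the temporal dynamics of the latent code.
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_sources)

    def forward(self, eeg):                 # eeg: (batch, electrodes, time)
        h = torch.relu(self.conv(eeg))      # (batch, hidden, time)
        h, _ = self.gru(h.transpose(1, 2))  # (batch, time, hidden)
        return self.out(h)                  # (batch, time, sources)

net = ConvRecurrentSourceNet()
src = net(torch.randn(2, 64, 500))  # 2 trials, 64 channels, 500 samples
print(src.shape)                    # torch.Size([2, 500, 226])
```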
Fully dynamic reorder policies with deep reinforcement learning for multi-echelon inventory management | [
"Patric Hammler",
"Nicolas Riesterer",
"Torsten Braun"
] | The operation of inventory systems plays an important role in the success of manufacturing companies, making it a highly relevant domain for optimization. In particular, the domain lends itself to being approached via Deep Reinforcement Learning (DRL) models due to it requiring sequential reorder decisions based on uncertainty to minimize cost. In this paper, we evaluate state-of-the-art optimization approaches to determine whether Deep Reinforcement Learning can be applied to the multi-echelon inventory optimization (MEIO) framework in a practically feasible manner to generate fully dynamic reorder policies. We investigate how it performs in comparison to an optimized static reorder policy, how robust it is when it comes to structural changes in the environment, and whether the use of DRL is safe in terms of risk in real-world applications. Our results show promising performance for DRL with potential for improvement in terms of minimizing risky behavior. | 10.1007/s00287-023-01556-6 | fully dynamic reorder policies with deep reinforcement learning for multi-echelon inventory management | the operation of inventory systems plays an important role in the success of manufacturing companies, making it a highly relevant domain for optimization. in particular, the domain lends itself to being approached via deep reinforcement learning (drl) models due to it requiring sequential reorder decisions based on uncertainty to minimize cost. in this paper, we evaluate state-of-the-art optimization approaches to determine whether deep reinforcement learning can be applied to the multi-echelon inventory optimization (meio) framework in a practically feasible manner to generate fully dynamic reorder policies. we investigate how it performs in comparison to an optimized static reorder policy, how robust it is when it comes to structural changes in the environment, and whether the use of drl is safe in terms of risk in real-world applications. our results show promising performance for drl with potential for improvement in terms of minimizing risky behavior. | [
"the operation",
"inventory systems",
"an important role",
"the success",
"manufacturing companies",
"it",
"optimization",
"the domain",
"itself",
"deep reinforcement learning (drl) models",
"it",
"sequential reorder decisions",
"uncertainty",
"cost",
"this paper",
"we",
"the-art",
"deep reinforcement learning",
"the multi-echelon inventory optimization",
"meio) framework",
"a practically feasible manner",
"fully dynamic reorder policies",
"we",
"it",
"comparison",
"an optimized static reorder policy",
"it",
"it",
"structural changes",
"the environment",
"the use",
"drl",
"terms",
"risk",
"real-world applications",
"our results",
"promising performance",
"drl",
"potential",
"improvement",
"terms",
"risky behavior"
] |
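To make the reorder-policy setting above concrete, here is a toy single-echelon inventory step with holding and stockout costs; the paper's environment is multi-echelon and its policies are learned by DRL rather than the hand-written base-stock-style rule used here. All costs and the demand distribution are invented for illustration.

```python
import random

def step(inventory, order, demand, holding_cost=1.0, stockout_cost=5.0):
    """One period of a toy inventory model: receive the order, serve
    stochastic demand, and pay holding/stockout costs."""
    inventory += order
    unmet = max(demand - inventory, 0)
    inventory = max(inventory - demand, 0)
    cost = holding_cost * inventory + stockout_cost * unmet
    return inventory, -cost  # reward = negative cost, as minimized by DRL

# A fully dynamic policy conditions the order on the observed state each
# period, unlike a static reorder rule with fixed parameters.
inv, total = 10, 0.0
policy = lambda inv: max(12 - inv, 0)   # illustrative base-stock-style rule
for _ in range(52):
    inv, r = step(inv, policy(inv), demand=random.randint(0, 6))
    total += r
print(round(total, 1))
```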
New hybrid deep learning models for multi-target NILM disaggregation | [
"Jamila Ouzine",
"Manal Marzouq",
"Saad Dosse Bennani",
"Khadija Lahrech",
"Hakim EL Fadili"
] | Non-Intrusive Load Monitoring (NILM) technique or energy disaggregation is a technique used to detect the appliance’s states and estimate their individual energy consumption, given the aggregated data through the main smart meter. Indeed, energy efficiency is the main goal of the NILM techniques, which can be achieved by providing energy disaggregation feedback to the consumers. Unlike single models where training must be performed for each appliance, this work proposes multi-target disaggregation which is more appropriate due to the drastic reduction of resources when training is performed for all target appliances simultaneously. For this purpose, new hybrid models are proposed by combining well-known deep learning models: Convolutional Neural Network (CNN), Denoising Autoencoder (DAE), Recurrent Neural Network (RNN), and Long Short-Term Memory network (LSTM). An implementation and detailed comparative study is then suggested between the proposed hybrid deep learning models and conventional single models in terms of various performance metrics on the UK-Domestic Appliance-Level Electricity (UKDALE) benchmarking database. The experimental results show that the proposed hybrid models provide the best disaggregation performances for multi-target disaggregation compared to single models. Specifically, the CNN-LSTM and the DAE-LSTM are the best hybrid models with the highest overall F1-score of 78.90% and 72.94% respectively. | 10.1007/s12053-023-10161-1 | new hybrid deep learning models for multi-target nilm disaggregation | non-intrusive load monitoring (nilm) technique or energy disaggregation is a technique used to detect the appliance’s states and estimate their individual energy consumption, given the aggregated data through the main smart meter. indeed, energy efficiency is the main goal of the nilm techniques, which can be achieved by providing energy disaggregation feedback to the consumers. unlike single models where training must be performed for each appliance, this work proposes multi-target disaggregation which is more appropriate due to the drastic reduction of resources when training is performed for all target appliances simultaneously. for this purpose, new hybrid models are proposed by combining well-known deep learning models: convolutional neural network (cnn), denoising autoencoder (dae), recurrent neural network (rnn), and long short-term memory network (lstm). an implementation and detailed comparative study is then suggested between the proposed hybrid deep learning models and conventional single models in terms of various performance metrics on the uk-domestic appliance-level electricity (ukdale) benchmarking database. the experimental results show that the proposed hybrid models provide the best disaggregation performances for multi-target disaggregation compared to single models. specifically, the cnn-lstm and the dae-lstm are the best hybrid models with the highest overall f1-score of 78.90% and 72.94% respectively. | [
"non-intrusive load monitoring (nilm) technique or energy disaggregation",
"a technique",
"the appliance’s states",
"their individual energy consumption",
"the aggregated data",
"the main smart meter",
"energy efficiency",
"the main goal",
"the nilm techniques",
"which",
"energy disaggregation feedback",
"the consumers",
"single models",
"training",
"each appliance",
"this work",
"multi-target disaggregation",
"which",
"the drastic reduction",
"resources",
"training",
"all target appliances",
"this purpose",
"new hybrid models",
"well-known deep learning models",
"convolutional neural network",
"cnn",
"autoencoder",
"dae",
"recurrent neural network",
"rnn",
"long short-term memory network",
"lstm",
"an implementation",
"detailed comparative study",
"the proposed hybrid deep learning models",
"conventional single models",
"terms",
"various performance metrics",
"the uk-domestic appliance-level electricity",
"(ukdale",
"benchmarking database",
"the experimental results",
"the proposed hybrid models",
"the best disaggregation performances",
"multi-target disaggregation",
"single models",
"the cnn-lstm",
"the dae-lstm",
"the best hybrid models",
"the highest overall f1-score",
"78.90%",
"72.94%",
"cnn",
"dae",
"uk",
"cnn",
"78.90%",
"72.94%"
] |
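A hedged sketch of the hybrid CNN-LSTM idea from the preceding NILM abstract: convolutional layers extract local patterns from an aggregate mains window, a bidirectional LSTM models their temporal context, and one output per target appliance gives the multi-target disaggregation. The window length, appliance count, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMDisaggregator(nn.Module):
    """Hybrid CNN-LSTM sketch for multi-target NILM: one aggregate mains
    window in, one power estimate per target appliance out."""
    def __init__(self, n_appliances=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_appliances)

    def forward(self, mains):                # (batch, 1, window)
        f = self.cnn(mains).transpose(1, 2)  # (batch, window, 32)
        f, _ = self.lstm(f)
        return self.head(f)                  # per-step, per-appliance power

model = CNNLSTMDisaggregator()
y = model(torch.randn(4, 1, 600))  # 600-sample aggregate window
print(y.shape)                     # torch.Size([4, 600, 5])
```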
Identifying depression in the United States veterans using deep learning algorithms, NHANES 2005–2018 | [
"Zihan Qu",
"Yashan Wang",
"Dingjie Guo",
"Guangliang He",
"Chuanying Sui",
"Yuqing Duan",
"Xin Zhang",
"Linwei Lan",
"Hengyu Meng",
"Yajing Wang",
"Xin Liu"
] | BackgroundDepression is a common mental health problem among veterans, with high mortality. Despite the numerous conducted investigations, the prediction and identification of risk factors for depression are still severely limited. This study used a deep learning algorithm to identify depression in veterans and its factors associated with clinical manifestations.MethodsOur data originated from the National Health and Nutrition Examination Survey (2005–2018). A dataset of 2,546 veterans was identified using deep learning and five traditional machine learning algorithms with 10-fold cross-validation. Model performance was assessed by examining the area under the subject operating characteristic curve (AUC), accuracy, recall, specificity, precision, and F1 score.ResultsDeep learning had the highest AUC (0.891, 95%CI 0.869–0.914) and specificity (0.906) in identifying depression in veterans. Further study on depression among veterans of different ages showed that the AUC values for deep learning were 0.929 (95%CI 0.904–0.955) in the middle-aged group and 0.924(95%CI 0.900-0.948) in the older age group. In addition to general health conditions, sleep difficulties, memory impairment, work incapacity, income, BMI, and chronic diseases, factors such as vitamins E and C, and palmitic acid were also identified as important influencing factors.ConclusionsCompared with traditional machine learning methods, deep learning algorithms achieved optimal performance, making it conducive for identifying depression and its risk factors among veterans. | 10.1186/s12888-023-05109-9 | identifying depression in the united states veterans using deep learning algorithms, nhanes 2005–2018 | backgrounddepression is a common mental health problem among veterans, with high mortality. despite the numerous conducted investigations, the prediction and identification of risk factors for depression are still severely limited. this study used a deep learning algorithm to identify depression in veterans and its factors associated with clinical manifestations.methodsour data originated from the national health and nutrition examination survey (2005–2018). a dataset of 2,546 veterans was identified using deep learning and five traditional machine learning algorithms with 10-fold cross-validation. model performance was assessed by examining the area under the subject operating characteristic curve (auc), accuracy, recall, specificity, precision, and f1 score.resultsdeep learning had the highest auc (0.891, 95%ci 0.869–0.914) and specificity (0.906) in identifying depression in veterans. further study on depression among veterans of different ages showed that the auc values for deep learning were 0.929 (95%ci 0.904–0.955) in the middle-aged group and 0.924(95%ci 0.900-0.948) in the older age group. in addition to general health conditions, sleep difficulties, memory impairment, work incapacity, income, bmi, and chronic diseases, factors such as vitamins e and c, and palmitic acid were also identified as important influencing factors.conclusionscompared with traditional machine learning methods, deep learning algorithms achieved optimal performance, making it conducive for identifying depression and its risk factors among veterans. | [
"backgrounddepression",
"a common mental health problem",
"veterans",
"high mortality",
"the numerous conducted investigations",
"the prediction",
"identification",
"risk factors",
"depression",
"this study",
"a deep learning algorithm",
"depression",
"veterans",
"its factors",
"clinical manifestations.methodsour data",
"the national health and nutrition examination survey",
"a dataset",
"2,546 veterans",
"deep learning",
"five traditional machine learning algorithms",
"10-fold cross",
"validation",
"model performance",
"the area",
"the subject operating characteristic curve",
"auc",
"accuracy",
"recall",
"specificity",
"precision",
"f1 score.resultsdeep learning",
"the highest auc",
"specificity",
"depression",
"veterans",
"further study",
"depression",
"veterans",
"different ages",
"the auc values",
"deep learning",
"the middle-aged group",
"0.924(95%ci",
"the older age group",
"addition",
"general health conditions",
"sleep difficulties",
"memory impairment",
"work incapacity",
"income",
"bmi",
"chronic diseases",
"factors",
"vitamins e",
"c",
"palmitic acid",
"important influencing",
"traditional machine learning methods",
"deep learning algorithms",
"optimal performance",
"it",
"depression",
"its risk factors",
"veterans",
"2005–2018",
"2,546",
"five",
"10-fold",
"0.891",
"95%ci 0.869–0.914",
"0.906",
"0.929",
"95%ci 0.904–0.955",
"0.900-0.948"
] |
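The evaluation protocol in the preceding study, 10-fold cross-validation with AUC as the headline metric, can be illustrated on synthetic tabular data as below. The shallow scikit-learn MLP is only a stand-in for the paper's deep model, and the feature generator is invented.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for NHANES-style tabular features (the real study
# uses survey variables such as sleep, BMI, income, and nutrient intake).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

aucs = []
for train, test in StratifiedKFold(n_splits=10, shuffle=True,
                                   random_state=0).split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                        random_state=0).fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
print(f"mean AUC over 10 folds: {np.mean(aucs):.3f}")
```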
Designing observables for measurements with deep learning | [
"Owen Long",
"Benjamin Nachman"
] | Many analyses in particle and nuclear physics use simulations to infer fundamental, effective, or phenomenological parameters of the underlying physics models. When the inference is performed with unfolded cross sections, the observables are designed using physics intuition and heuristics. We propose to design targeted observables with machine learning. Unfolded, differential cross sections in a neural network output contain the most information about parameters of interest and can be well-measured by construction. The networks are trained using a custom loss function that rewards outputs that are sensitive to the parameter(s) of interest while simultaneously penalizing outputs that are different between particle-level and detector-level (to minimize detector distortions). We demonstrate this idea in simulation using two physics models for inclusive measurements in deep inelastic scattering. We find that the new approach is more sensitive than classical observables at distinguishing the two models and also has a reduced unfolding uncertainty due to the reduced detector distortions. | 10.1140/epjc/s10052-024-13135-4 | designing observables for measurements with deep learning | many analyses in particle and nuclear physics use simulations to infer fundamental, effective, or phenomenological parameters of the underlying physics models. when the inference is performed with unfolded cross sections, the observables are designed using physics intuition and heuristics. we propose to design targeted observables with machine learning. unfolded, differential cross sections in a neural network output contain the most information about parameters of interest and can be well-measured by construction. the networks are trained using a custom loss function that rewards outputs that are sensitive to the parameter(s) of interest while simultaneously penalizing outputs that are different between particle-level and detector-level (to minimize detector distortions). we demonstrate this idea in simulation using two physics models for inclusive measurements in deep inelastic scattering. we find that the new approach is more sensitive than classical observables at distinguishing the two models and also has a reduced unfolding uncertainty due to the reduced detector distortions. | [
"many analyses",
"particle",
"nuclear physics",
"simulations",
"fundamental, effective, or phenomenological parameters",
"the underlying physics models",
"the inference",
"unfolded cross sections",
"the observables",
"physics intuition",
"heuristics",
"we",
"targeted observables",
"machine learning",
"unfolded, differential cross sections",
"a neural network output",
"the most information",
"parameters",
"interest",
"construction",
"the networks",
"a custom loss function",
"that",
"outputs",
"that",
"the parameter(s",
"interest",
"outputs",
"that",
"particle-level",
"detector-level",
"detector distortions",
"we",
"this idea",
"simulation",
"two physics models",
"inclusive measurements",
"deep inelastic scattering",
"we",
"the new approach",
"classical observables",
"the two models",
"a reduced unfolding uncertainty",
"the reduced detector distortions",
"two",
"two"
] |
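The custom loss described in the preceding abstract has two competing terms: one rewarding outputs that separate the physics models, one penalizing disagreement between particle-level and detector-level outputs. A minimal sketch, with the trade-off weight and both terms chosen illustratively rather than taken from the paper:

```python
import torch
import torch.nn.functional as F

def observable_loss(f_particle, f_detector, model_label, lam=1.0):
    """Two-term objective: reward outputs that distinguish the two physics
    models (sensitivity term) while penalizing particle-level vs
    detector-level disagreement (distortion term)."""
    sensitivity = F.binary_cross_entropy(f_detector, model_label)
    distortion = F.mse_loss(f_particle, f_detector)
    return sensitivity + lam * distortion

# f(x) in (0, 1) for paired particle-/detector-level events.
f_part = torch.sigmoid(torch.randn(32))
f_det = torch.sigmoid(torch.randn(32))
labels = torch.randint(0, 2, (32,)).float()  # which model generated the event
print(observable_loss(f_part, f_det, labels))
```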
DEL-Thyroid: deep ensemble learning framework for detection of thyroid cancer progression through genomic mutation | [
"Asghar Ali Shah",
"Ali Daud",
"Amal Bukhari",
"Bader Alshemaimri",
"Muhammad Ahsan",
"Rehmana Younis"
] | Genes, expressed as sequences of nucleotides, are susceptible to mutations, some of which can lead to cancer. Machine learning and deep learning methods have emerged as vital tools in identifying mutations associated with cancer. Thyroid cancer ranks as the 5th most prevalent cancer in the USA, with thousands diagnosed annually. This paper presents an ensemble learning model leveraging deep learning techniques such as Long Short-Term Memory (LSTM), Gated Recurrent Units (GRUs), and Bi-directional LSTM (Bi-LSTM) to detect thyroid cancer mutations early. The model is trained on a dataset sourced from asia.ensembl.org and IntOGen.org, consisting of 633 samples with 969 mutations across 41 genes, collected from individuals of various demographics. Feature extraction encompasses techniques including Hahn moments, central moments, raw moments, and various matrix-based methods. Evaluation employs three testing methods: self-consistency test (SCT), independent set test (IST), and 10-fold cross-validation test (10-FCVT). The proposed ensemble learning model demonstrates promising performance, achieving 96% accuracy in the independent set test (IST). Statistical measures such as training accuracy, testing accuracy, recall, sensitivity, specificity, Mathew's Correlation Coefficient (MCC), loss, training accuracy, F1 Score, and Cohen's kappa are utilized for comprehensive evaluation. | 10.1186/s12911-024-02604-1 | del-thyroid: deep ensemble learning framework for detection of thyroid cancer progression through genomic mutation | genes, expressed as sequences of nucleotides, are susceptible to mutations, some of which can lead to cancer. machine learning and deep learning methods have emerged as vital tools in identifying mutations associated with cancer. thyroid cancer ranks as the 5th most prevalent cancer in the usa, with thousands diagnosed annually. this paper presents an ensemble learning model leveraging deep learning techniques such as long short-term memory (lstm), gated recurrent units (grus), and bi-directional lstm (bi-lstm) to detect thyroid cancer mutations early. the model is trained on a dataset sourced from asia.ensembl.org and intogen.org, consisting of 633 samples with 969 mutations across 41 genes, collected from individuals of various demographics. feature extraction encompasses techniques including hahn moments, central moments, raw moments, and various matrix-based methods. evaluation employs three testing methods: self-consistency test (sct), independent set test (ist), and 10-fold cross-validation test (10-fcvt). the proposed ensemble learning model demonstrates promising performance, achieving 96% accuracy in the independent set test (ist). statistical measures such as training accuracy, testing accuracy, recall, sensitivity, specificity, mathew's correlation coefficient (mcc), loss, training accuracy, f1 score, and cohen's kappa are utilized for comprehensive evaluation. | [
"genes",
"sequences",
"nucleotides",
"mutations",
"some",
"which",
"cancer",
"machine learning",
"deep learning methods",
"vital tools",
"mutations",
"cancer",
"thyroid cancer",
"the 5th most prevalent cancer",
"the usa",
"thousands",
"this paper",
"an ensemble learning model",
"deep learning techniques",
"long short-term memory",
"lstm",
"gated recurrent units",
"grus",
"bi-directional lstm",
"bi",
"-",
"lstm",
"thyroid cancer mutations",
"the model",
"a dataset",
"intogen.org",
"633 samples",
"969 mutations",
"41 genes",
"individuals",
"various demographics",
"feature extraction",
"techniques",
"hahn moments",
"central moments",
"raw moments",
"various matrix-based methods",
"evaluation",
"three testing methods",
"self-consistency test",
"sct",
"independent set test",
"ist",
"10-fold cross-validation test",
"the proposed ensemble learning model",
"performance",
"96% accuracy",
"the independent set test",
"(ist",
"statistical measures",
"training accuracy",
"testing accuracy",
"recall",
"sensitivity",
"specificity",
"mathew's correlation coefficient",
"mcc",
"loss",
"training accuracy",
"f1 score",
"cohen's kappa",
"comprehensive evaluation",
"5th",
"thousands",
"annually",
"633",
"969",
"41",
"three",
"sct",
"10-fold",
"10-fcvt",
"96%",
"cohen"
] |
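One way to realize the LSTM/GRU/Bi-LSTM ensemble named above is soft voting over recurrent classifiers, sketched below. The feature dimensions, sequence length, and binary target are assumptions; the paper's feature extraction (moment-based descriptors of mutation sequences) is not reproduced here.

```python
import torch
import torch.nn as nn

class RecurrentHead(nn.Module):
    """One ensemble member: a recurrent encoder over per-position
    sequence features followed by a linear classifier."""
    def __init__(self, rnn):
        super().__init__()
        self.rnn = rnn
        d = rnn.hidden_size * (2 if rnn.bidirectional else 1)
        self.fc = nn.Linear(d, 2)  # e.g. mutated vs. benign

    def forward(self, x):
        out, _ = self.rnn(x)
        return self.fc(out[:, -1])  # classify from the last time step

members = [
    RecurrentHead(nn.LSTM(8, 32, batch_first=True)),
    RecurrentHead(nn.GRU(8, 32, batch_first=True)),
    RecurrentHead(nn.LSTM(8, 32, batch_first=True, bidirectional=True)),
]
x = torch.randn(4, 50, 8)  # 4 sequences, 50 positions, 8 features each
# Soft-voting ensemble: average the members' class probabilities.
probs = torch.stack([torch.softmax(m(x), 1) for m in members]).mean(0)
print(probs.argmax(dim=1))
```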
Advancing horizons in remote sensing: a comprehensive survey of deep learning models and applications in image classification and beyond | [
"Sidike Paheding",
"Ashraf Saleem",
"Mohammad Faridul Haque Siddiqui",
"Nathir Rawashdeh",
"Almabrok Essa",
"Abel A. Reyes"
] | In recent years, deep learning has significantly reshaped numerous fields and applications, fundamentally altering how we tackle a variety of challenges. Areas such as natural language processing (NLP), computer vision, healthcare, network security, wide-area surveillance, and precision agriculture have leveraged the merits of the deep learning era. Particularly, deep learning has significantly improved the analysis of remote sensing images, with a continuous increase in the number of researchers and contributions to the field. The high impact of deep learning development is complemented by rapid advancements and the availability of data from a variety of sensors, including high-resolution RGB, thermal, LiDAR, and multi-/hyperspectral cameras, as well as emerging sensing platforms such as satellites and aerial vehicles that can be captured by multi-temporal, multi-sensor, and sensing devices with a wider view. This study aims to present an extensive survey that encapsulates widely used deep learning strategies for tackling image classification challenges in remote sensing. It encompasses an exploration of remote sensing imaging platforms, sensor varieties, practical applications, and prospective developments in the field. | 10.1007/s00521-024-10165-7 | advancing horizons in remote sensing: a comprehensive survey of deep learning models and applications in image classification and beyond | in recent years, deep learning has significantly reshaped numerous fields and applications, fundamentally altering how we tackle a variety of challenges. areas such as natural language processing (nlp), computer vision, healthcare, network security, wide-area surveillance, and precision agriculture have leveraged the merits of the deep learning era. particularly, deep learning has significantly improved the analysis of remote sensing images, with a continuous increase in the number of researchers and contributions to the field. the high impact of deep learning development is complemented by rapid advancements and the availability of data from a variety of sensors, including high-resolution rgb, thermal, lidar, and multi-/hyperspectral cameras, as well as emerging sensing platforms such as satellites and aerial vehicles that can be captured by multi-temporal, multi-sensor, and sensing devices with a wider view. this study aims to present an extensive survey that encapsulates widely used deep learning strategies for tackling image classification challenges in remote sensing. it encompasses an exploration of remote sensing imaging platforms, sensor varieties, practical applications, and prospective developments in the field. | [
"recent years",
"deep learning",
"numerous fields",
"applications",
"we",
"a variety",
"challenges",
"areas",
"natural language processing",
"nlp",
"computer vision",
"healthcare",
"network security",
"wide-area surveillance",
"precision agriculture",
"the merits",
"the deep learning era",
"deep learning",
"the analysis",
"remote sensing images",
"a continuous increase",
"the number",
"researchers",
"contributions",
"the field",
"the high impact",
"deep learning development",
"rapid advancements",
"the availability",
"data",
"a variety",
"sensors",
"high-resolution rgb, thermal, lidar, and multi-/hyperspectral cameras, as well as emerging sensing platforms",
"satellites",
"aerial vehicles",
"that",
"-",
"sensor",
"devices",
"a wider view",
"this study",
"an extensive survey",
"that",
"widely used deep learning strategies",
"image classification challenges",
"remote sensing",
"it",
"an exploration",
"remote sensing imaging platforms",
"sensor varieties",
"practical applications",
"prospective developments",
"the field",
"recent years",
"healthcare"
] |
Boosting in-transit entertainment: deep reinforcement learning for intelligent multimedia caching in bus networks | [
"Dan Lan",
"Incheol Shin"
] | Multimedia content delivery in advanced networks faces exponential growth in data volumes, rendering existing solutions obsolete. This research investigates deep reinforcement learning (DRL) for autonomous optimization without extensive datasets. The work analyzes two prominent DRL algorithms, i.e., Dueling Deep Q-Network (DDQN) and Deep Q-Network (DQN) for multimedia delivery in simulated bus networks. DDQN utilizes a novel “dueling” architecture to estimate state value and action advantages, accelerating learning separately. DQN employs deep neural networks to approximate optimal policies. The environment simulates urban buses with passenger file requests and cache sizes modeled on actual data. Comparative analysis evaluates cumulative rewards and losses over 1500 training episodes to analyze learning efficiency, stability, and performance. Results demonstrate DDQN’s superior convergence and 32% higher cumulative rewards than DQN. However, DQN showed potential for gains over successive runs despite inconsistencies. It establishes DRL’s promise for automated decision-making while revealing enhancements to improve DQN. Further research should evaluate generalizability across problem domains, investigate hybrid models, and test physical systems. DDQN emerged as the most efficient algorithm, highlighting DRL’s potential to enable intelligent agents that optimize multimedia delivery. | 10.1007/s00500-023-09354-8 | boosting in-transit entertainment: deep reinforcement learning for intelligent multimedia caching in bus networks | multimedia content delivery in advanced networks faces exponential growth in data volumes, rendering existing solutions obsolete. this research investigates deep reinforcement learning (drl) for autonomous optimization without extensive datasets. the work analyzes two prominent drl algorithms, i.e., dueling deep q-network (ddqn) and deep q-network (dqn) for multimedia delivery in simulated bus networks. ddqn utilizes a novel “dueling” architecture to estimate state value and action advantages, accelerating learning separately. dqn employs deep neural networks to approximate optimal policies. the environment simulates urban buses with passenger file requests and cache sizes modeled on actual data. comparative analysis evaluates cumulative rewards and losses over 1500 training episodes to analyze learning efficiency, stability, and performance. results demonstrate ddqn’s superior convergence and 32% higher cumulative rewards than dqn. however, dqn showed potential for gains over successive runs despite inconsistencies. it establishes drl’s promise for automated decision-making while revealing enhancements to improve dqn. further research should evaluate generalizability across problem domains, investigate hybrid models, and test physical systems. ddqn emerged as the most efficient algorithm, highlighting drl’s potential to enable intelligent agents that optimize multimedia delivery. | [
"multimedia content delivery",
"advanced networks",
"exponential growth",
"data volumes",
"this research investigates",
"deep reinforcement learning",
"drl",
"autonomous optimization",
"extensive datasets",
"the work",
"two prominent drl algorithms",
"deep q-network (ddqn",
"deep q-network",
"dqn",
"multimedia delivery",
"simulated bus networks",
"ddqn",
"a novel “dueling” architecture",
"state value",
"action advantages",
"dqn",
"deep neural networks",
"optimal policies",
"the environment",
"urban buses",
"passenger file requests",
"cache sizes",
"actual data",
"comparative analysis",
"cumulative rewards",
"losses",
"1500 training episodes",
"learning efficiency",
"stability",
"performance",
"results",
"ddqn’s superior convergence",
"32% higher cumulative rewards",
"dqn",
"dqn",
"potential",
"gains",
"successive runs",
"inconsistencies",
"it",
"drl’s promise",
"automated decision-making",
"enhancements",
"dqn",
"further research",
"generalizability",
"problem domains",
"hybrid models",
"physical systems",
"ddqn",
"the most efficient algorithm",
"drl’s potential",
"intelligent agents",
"that",
"multimedia delivery",
"two",
"1500",
"32%"
] |
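The architectural difference between the two algorithms compared above is the dueling head, which estimates state value and action advantages separately and recombines them. A minimal sketch; the state and action dimensions for the caching task are assumptions.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Separating state value from action advantages is what distinguishes
    the dueling architecture from a plain DQN head."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, s):
        h = self.trunk(s)
        a = self.advantage(h)
        return self.value(h) + a - a.mean(dim=1, keepdim=True)

# Toy cache-control state: e.g. request frequencies plus cache occupancy.
q = DuelingQNet(state_dim=10, n_actions=4)
print(q(torch.randn(2, 10)))  # Q-values for 4 caching actions
```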
Parameter-Free Reduction of the Estimation Bias in Deep Reinforcement Learning for Deterministic Policy Gradients | [
"Baturay Saglam",
"Furkan Burak Mutlu",
"Dogan Can Cicek",
"Suleyman Serdar Kozat"
] | Approximation of the value functions in value-based deep reinforcement learning induces overestimation bias, resulting in suboptimal policies. We show that when the reinforcement signals received by the agents have a high variance, deep actor-critic approaches that overcome the overestimation bias lead to a substantial underestimation bias. We first address the detrimental issues in the existing approaches that aim to overcome such underestimation error. Then, through extensive statistical analysis, we introduce a novel, parameter-free Deep Q-learning variant to reduce this underestimation bias in deterministic policy gradients. By sampling the weights of a linear combination of two approximate critics from a highly shrunk estimation bias interval, our Q-value update rule is not affected by the variance of the rewards received by the agents throughout learning. We test the performance of the introduced improvement on a set of MuJoCo and Box2D continuous control tasks and demonstrate that it outperforms the existing approaches and improves the baseline actor-critic algorithm in most of the environments tested. | 10.1007/s11063-024-11461-y | parameter-free reduction of the estimation bias in deep reinforcement learning for deterministic policy gradients | approximation of the value functions in value-based deep reinforcement learning induces overestimation bias, resulting in suboptimal policies. we show that when the reinforcement signals received by the agents have a high variance, deep actor-critic approaches that overcome the overestimation bias lead to a substantial underestimation bias. we first address the detrimental issues in the existing approaches that aim to overcome such underestimation error. then, through extensive statistical analysis, we introduce a novel, parameter-free deep q-learning variant to reduce this underestimation bias in deterministic policy gradients. by sampling the weights of a linear combination of two approximate critics from a highly shrunk estimation bias interval, our q-value update rule is not affected by the variance of the rewards received by the agents throughout learning. we test the performance of the introduced improvement on a set of mujoco and box2d continuous control tasks and demonstrate that it outperforms the existing approaches and improves the baseline actor-critic algorithm in most of the environments tested. | [
"approximation",
"the value functions",
"value-based deep reinforcement learning",
"overestimation bias",
"suboptimal policies",
"we",
"the reinforcement signals",
"the agents",
"a high variance",
"deep actor-critic approaches",
"that",
"the overestimation bias",
"a substantial underestimation bias",
"we",
"the detrimental issues",
"the existing approaches",
"that",
"such underestimation error",
"extensive statistical analysis",
"we",
"a novel, parameter-free deep q-learning variant",
"this underestimation bias",
"deterministic policy gradients",
"the weights",
"a linear combination",
"two approximate critics",
"a highly shrunk estimation bias interval",
"our q-value update rule",
"the variance",
"the rewards",
"the agents",
"we",
"the performance",
"the introduced improvement",
"a set",
"mujoco",
"box2d continuous control tasks",
"it",
"the existing approaches",
"the baseline actor-critic algorithm",
"the environments",
"first",
"two"
] |
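The core update described above replaces the pessimistic minimum over two critics (which can underestimate when rewards are noisy) with a linear combination whose weight is sampled from a shrunk interval. A sketch with illustrative interval bounds, not the paper's derived values:

```python
import torch

def target_q(q1, q2, lo=0.4, hi=0.6):
    """Blend two critic estimates with a weight sampled from a shrunk
    interval, instead of taking min(q1, q2) as in TD3-style updates.
    The interval bounds here are illustrative."""
    w = torch.empty_like(q1).uniform_(lo, hi)
    return w * q1 + (1.0 - w) * q2

q1 = torch.tensor([10.0, 12.0, 8.0])   # critic 1 estimates
q2 = torch.tensor([9.0, 11.0, 10.0])   # critic 2 estimates
print(target_q(q1, q2))                # lies between min and max per entry
```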
Advanced deep learning and large language models for suicide ideation detection on social media | [
"Mohammed Qorich",
"Rajae El Ouazzani"
] | Recently, suicide ideations represent a worldwide health concern and pose many anticipation challenges. Actually, the prevalence of expressing self-destructive thoughts especially on forums and social media requires effective monitoring for suicide prevention, and early intervention. Meanwhile, deep learning techniques and Large Language Models (LLMs) have emerged as promising tools in diverse Natural Language Processing (NLP) tasks, including sentiment analysis and text classification. In this paper, we propose a deep learning model incorporating triple models of word embeddings, as well as various fine-tuned LLMs, to identify suicidal thoughts in Reddit posts. In effect, we implemented a Bidirectional Long Short-Term Memory (BiLSTM), and a Convolutional Neural Network (CNN) model to categorize posts associated with non-suicidal and suicidal thoughts. Besides, through the combination of Word2Vec, FastText and GloVe embeddings, our models learn intricate patterns and prevalent nuances in suicide-related language. Furthermore, we employed a merged version of CNN and BiLSTM models, entitled C-BiLSTM, and several LLMs, including pre-trained Bidirectional Encoder Representations from Transformers (BERT) models and a Generative Pre-training Transformer (GPT) model. The analysis of all our proposed models shows that our C-BiLSTM model with triple word embedding and our GPT model got the best performance compared to deep learning and LLMs baseline models, reaching accuracies of 94.5% and 97.69%, respectively. In fact, our best model’s capacity to extract meaningful interdependencies among words significantly promotes its classification performance. This analysis contributes to a deeper understanding of the psychological factors and linguistic markers indicative of suicidal thoughts, thereby informing future research and intervention strategies. | 10.1007/s13748-024-00326-z | advanced deep learning and large language models for suicide ideation detection on social media | recently, suicide ideations represent a worldwide health concern and pose many anticipation challenges. actually, the prevalence of expressing self-destructive thoughts especially on forums and social media requires effective monitoring for suicide prevention, and early intervention. meanwhile, deep learning techniques and large language models (llms) have emerged as promising tools in diverse natural language processing (nlp) tasks, including sentiment analysis and text classification. in this paper, we propose a deep learning model incorporating triple models of word embeddings, as well as various fine-tuned llms, to identify suicidal thoughts in reddit posts. in effect, we implemented a bidirectional long short-term memory (bilstm), and a convolutional neural network (cnn) model to categorize posts associated with non-suicidal and suicidal thoughts. besides, through the combination of word2vec, fasttext and glove embeddings, our models learn intricate patterns and prevalent nuances in suicide-related language. furthermore, we employed a merged version of cnn and bilstm models, entitled c-bilstm, and several llms, including pre-trained bidirectional encoder representations from transformers (bert) models and a generative pre-training transformer (gpt) model. the analysis of all our proposed models shows that our c-bilstm model with triple word embedding and our gpt model got the best performance compared to deep learning and llms baseline models, reaching accuracies of 94.5% and 97.69%, respectively. 
in fact, our best model’s capacity to extract meaningful interdependencies among words significantly promotes its classification performance. this analysis contributes to a deeper understanding of the psychological factors and linguistic markers indicative of suicidal thoughts, thereby informing future research and intervention strategies. | [
"suicide ideations",
"a worldwide health concern",
"many anticipation challenges",
"the prevalence",
"self-destructive thoughts",
"forums",
"social media",
"effective monitoring",
"suicide prevention",
"early intervention",
"deep learning techniques",
"large language models",
"llms",
"promising tools",
"nlp",
"sentiment analysis",
"text classification",
"this paper",
"we",
"a deep learning model",
"triple models",
"word embeddings",
"various fine-tuned llms",
"suicidal thoughts",
"reddit posts",
"effect",
"we",
"a bidirectional long short-term memory",
"bilstm",
"a convolutional neural network (cnn) model",
"posts",
"-suicidal and suicidal thoughts",
"the combination",
"word2vec, fasttext and glove embeddings",
"our models",
"intricate patterns",
"prevalent nuances",
"suicide-related language",
"we",
"a merged version",
"cnn",
"bilstm models",
"c-bilstm",
"several llms",
"pre-trained bidirectional encoder representations",
"transformers",
"(bert) models",
"a generative pre-training transformer",
"(gpt) model",
"the analysis",
"all our proposed models",
"our c-bilstm model",
"triple word",
"our gpt model",
"the best performance",
"deep learning and llms baseline models",
"accuracies",
"94.5%",
"97.69%",
"fact",
"our best model’s capacity",
"meaningful interdependencies",
"words",
"its classification performance",
"this analysis",
"a deeper understanding",
"the psychological factors",
"linguistic markers",
"suicidal thoughts",
"future research and intervention strategies",
"cnn",
"cnn",
"gpt",
"gpt",
"94.5%",
"97.69%"
] |
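The C-BiLSTM hybrid named in the preceding abstract can be sketched as a convolution over word vectors followed by a bidirectional LSTM and a binary head. The 300-d input stands in for concatenated Word2Vec/FastText/GloVe embeddings (e.g. three 100-d vectors per token); all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class CBiLSTM(nn.Module):
    """C-BiLSTM-style sketch: convolution over concatenated word vectors,
    then a bidirectional LSTM and a binary classifier."""
    def __init__(self, emb_dim=300, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(128, hidden, batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 2)  # suicidal vs. non-suicidal

    def forward(self, emb):                    # (batch, seq, emb_dim)
        h = torch.relu(self.conv(emb.transpose(1, 2)))
        h, _ = self.bilstm(h.transpose(1, 2))
        return self.fc(h[:, -1])

model = CBiLSTM()
# Precomputed embeddings for a batch of 8 posts, 40 tokens each.
print(model(torch.randn(8, 40, 300)).shape)  # torch.Size([8, 2])
```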
Comparative study of machine learning and deep learning techniques for fault diagnosis in suspension system | [
"P. Arun Balaji",
"V. Sugumaran"
] | Comfort and stability are the prime reasons to own an automobile (car). Suspension system of an automobile plays a major role in providing comfort, stability and control. Over a period of time, internal components in the suspension system exhibit faults due to fatigue and wear. Hence, it is essential to perform fault diagnosis such that the performance of the suspension components is restored. However, high instrumentation cost, skilled labor requirement and expertise in the particular field of study are certain drawbacks of traditional fault diagnosis techniques. Such challenges have made industrialists and the research communities look for advanced fault diagnosis techniques. Advancements in machine learning and deep learning techniques can be used to fulfill the need of a high degree intelligent fault diagnosis system. In the current study, the performance of machine learning (ML) classifiers are compared with the performance of deep learning (DL) models and the best performing model among them is adopted to detect faults in the automobile suspension system. A total of eight test conditions, namely strut external damage, strut mount failure, ball joint worn out, control arm bush worn out, control arm ball joint worn out, strut worn out, low wheel pressure and good condition, were considered in the study. The vibration measurements were acquired for three load conditions. Among all the techniques considered for classification, the pre-trained VGG16 model outperformed other DL and ML models with an overall classification accuracy of 98.10%. | 10.1007/s40430-023-04145-6 | comparative study of machine learning and deep learning techniques for fault diagnosis in suspension system | comfort and stability are the prime reasons to own an automobile (car). suspension system of an automobile plays a major role in providing comfort, stability and control. over a period of time, internal components in the suspension system exhibit faults due to fatigue and wear. hence, it is essential to perform fault diagnosis such that the performance of the suspension components is restored. however, high instrumentation cost, skilled labor requirement and expertise in the particular field of study are certain drawbacks of traditional fault diagnosis techniques. such challenges have made industrialists and the research communities look for advanced fault diagnosis techniques. advancements in machine learning and deep learning techniques can be used to fulfill the need of a high degree intelligent fault diagnosis system. in the current study, the performance of machine learning (ml) classifiers are compared with the performance of deep learning (dl) models and the best performing model among them is adopted to detect faults in the automobile suspension system. a total of eight test conditions, namely strut external damage, strut mount failure, ball joint worn out, control arm bush worn out, control arm ball joint worn out, strut worn out, low wheel pressure and good condition, were considered in the study. the vibration measurements were acquired for three load conditions. among all the techniques considered for classification, the pre-trained vgg16 model outperformed other dl and ml models with an overall classification accuracy of 98.10%. | [
"comfort",
"stability",
"the prime reasons",
"an automobile (car",
"suspension system",
"an automobile",
"a major role",
"comfort",
"stability",
"control",
"a period",
"time",
"the suspension system",
"exhibit faults",
"fatigue",
"it",
"fault diagnosis",
"the performance",
"the suspension components",
"high instrumentation cost",
"skilled labor requirement",
"expertise",
"the particular field",
"study",
"certain drawbacks",
"traditional fault diagnosis techniques",
"such challenges",
"industrialists",
"the research communities",
"advanced fault diagnosis techniques",
"advancements",
"machine learning",
"deep learning techniques",
"the need",
"a high degree intelligent fault diagnosis system",
"the current study",
"the performance",
"machine learning (ml) classifiers",
"the performance",
"deep learning (dl) models",
"the best performing model",
"them",
"faults",
"the automobile suspension system",
"a total",
"eight test conditions",
"namely strut external damage",
"strut mount failure",
"ball joint",
"control arm bush",
"control arm ball joint",
"strut",
"low wheel pressure",
"good condition",
"the study",
"the vibration measurements",
"three load conditions",
"all the techniques",
"classification",
"the pre-trained vgg16 model",
"other dl and ml models",
"an overall classification accuracy",
"98.10%",
"eight",
"strut mount",
"bush",
"three",
"98.10%"
] |
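The best-performing model in the preceding study is a pre-trained VGG16 adapted to the eight suspension conditions. A hedged transfer-learning sketch; how the vibration measurements are rendered as images (e.g. spectrograms) and the training schedule are not specified here.

```python
import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 8  # seven fault states + good condition

# Load ImageNet weights (torchvision >= 0.13 weights API assumed).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False                     # freeze conv backbone
vgg.classifier[6] = nn.Linear(4096, N_CLASSES)  # new output head

x = torch.randn(2, 3, 224, 224)  # batch of vibration-derived images
print(vgg(x).shape)              # torch.Size([2, 8])
```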
A Deep Learning Framework for Monitoring Audience Engagement in Online Video Events | [
"Alexandros Vrochidis",
"Nikolaos Dimitriou",
"Stelios Krinidis",
"Savvas Panagiotidis",
"Stathis Parcharidis",
"Dimitrios Tzovaras"
] | This paper introduces a deep learning methodology for analyzing audience engagement in online video events. The proposed deep learning framework consists of six layers and starts with keyframe extraction from the video stream and the participants’ face detection. Subsequently, the head pose and emotion per participant are estimated using the HopeNet and JAA-Net deep architectures. Complementary to video analysis, the audio signal is also processed using a neural network that follows the DenseNet-121 architecture. Its purpose is to detect events related to audience engagement, including speech, pauses, and applause. With the combined analysis of video and audio streams, the interest and attention of each participant are inferred more accurately. An experimental evaluation is performed on a newly generated dataset consisting of recordings from online video events, where the proposed framework achieves promising results. Concretely, the F1 scores were 79.21% for interest estimation according to pose, 65.38% for emotion estimation, and 80% for sound event detection. The proposed framework has applications in online educational events, where it can help tutors assess audience engagement and comprehension while hinting at points in their lectures that may require further clarification. It is effective for video streaming platforms that want to provide video recommendations to online users according to audience engagement. | 10.1007/s44196-024-00512-w | a deep learning framework for monitoring audience engagement in online video events | this paper introduces a deep learning methodology for analyzing audience engagement in online video events. the proposed deep learning framework consists of six layers and starts with keyframe extraction from the video stream and the participants’ face detection. subsequently, the head pose and emotion per participant are estimated using the hopenet and jaa-net deep architectures. complementary to video analysis, the audio signal is also processed using a neural network that follows the densenet-121 architecture. its purpose is to detect events related to audience engagement, including speech, pauses, and applause. with the combined analysis of video and audio streams, the interest and attention of each participant are inferred more accurately. an experimental evaluation is performed on a newly generated dataset consisting of recordings from online video events, where the proposed framework achieves promising results. concretely, the f1 scores were 79.21% for interest estimation according to pose, 65.38% for emotion estimation, and 80% for sound event detection. the proposed framework has applications in online educational events, where it can help tutors assess audience engagement and comprehension while hinting at points in their lectures that may require further clarification. it is effective for video streaming platforms that want to provide video recommendations to online users according to audience engagement. | [
"this paper",
"a deep learning methodology",
"audience engagement",
"online video events",
"the proposed deep learning framework",
"six layers",
"keyframe extraction",
"the video stream",
"the participants",
"face detection",
"the head",
"emotion",
"participant",
"the hopenet and jaa-net deep architectures",
"video analysis",
"the audio signal",
"a neural network",
"that",
"the densenet-121 architecture",
"its purpose",
"events",
"audience engagement",
"speech",
"pauses",
"applause",
"the combined analysis",
"video and audio streams",
"the interest",
"attention",
"each participant",
"an experimental evaluation",
"a newly generated dataset",
"recordings",
"online video events",
"the proposed framework",
"promising results",
"the f1 scores",
"79.21%",
"interest estimation",
"emotion estimation",
"sound event detection",
"the proposed framework",
"applications",
"online educational events",
"it",
"tutors",
"audience engagement",
"comprehension",
"points",
"their lectures",
"that",
"further clarification",
"it",
"video streaming platforms",
"that",
"video recommendations",
"online users",
"audience engagement",
"six",
"79.21%",
"65.38%",
"80%"
] |
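The framework above fuses head pose, facial emotion, and detected audio events into a per-participant engagement estimate. The paper's exact combination rule is not spelled out in this record, so the following is only a hedged illustration of a late-fusion step; the weights, score ranges, and audio priors are all assumptions.

```python
# Illustrative late fusion of the three modalities named in the abstract.
# The HopeNet-style pose score, JAA-Net-style emotion score, and
# DenseNet-121-style audio event are stand-ins, not the authors' outputs.
from dataclasses import dataclass

@dataclass
class FrameObservation:
    pose_interest: float    # e.g. 1.0 when facing the screen, 0.0 when turned away
    emotion_valence: float  # in [0, 1], from a facial-emotion model
    audio_event: str        # "speech", "pause", or "applause"

def engagement_score(obs: FrameObservation,
                     w_pose: float = 0.5,
                     w_emotion: float = 0.3,
                     w_audio: float = 0.2) -> float:
    # Map the detected audio event to a crude engagement prior (assumed values).
    audio_prior = {"applause": 1.0, "speech": 0.7, "pause": 0.3}.get(obs.audio_event, 0.5)
    return w_pose * obs.pose_interest + w_emotion * obs.emotion_valence + w_audio * audio_prior

print(engagement_score(FrameObservation(0.9, 0.6, "speech")))  # 0.77
```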
Automated detection and recognition system for chewable food items using advanced deep learning models | [
"Yogesh Kumar",
"Apeksha Koul",
"Kamini",
"Marcin Woźniak",
"Jana Shafi",
"Muhammad Fazal Ijaz"
] | Identifying and recognizing the food on the basis of its eating sounds is a challenging task, as it plays an important role in avoiding allergic foods, providing dietary preferences to people who are restricted to a particular diet, showcasing its cultural significance, etc. In this research paper, the aim is to design a novel methodology that helps to identify food items by analyzing their eating sounds using various deep learning models. To achieve this objective, a system has been proposed that extracts meaningful features from food-eating sounds with the help of signal processing techniques and deep learning models for classifying them into their respective food classes. Initially, 1200 audio files for 20 labeled food items have been collected and visualized to find relationships between the sound files of different food items. Later, to extract meaningful features, various techniques such as spectrograms, spectral rolloff, spectral bandwidth, and mel-frequency cepstral coefficients are used for the cleaning of audio files as well as to capture the unique characteristics of different food items. In the next phase, various deep learning models like GRU, LSTM, InceptionResNetV2, and the customized CNN model have been trained to learn spectral and temporal patterns in audio signals. Besides this, the models have also been hybridized, i.e., Bidirectional LSTM + GRU and RNN + Bidirectional LSTM, and RNN + Bidirectional GRU, to analyze their performance for the same labeled data in order to associate particular patterns of sound with their corresponding class of food item. During evaluation, the highest accuracy, precision, F1 score, and recall have been obtained by GRU with 99.28%, Bidirectional LSTM + GRU with 97.7% as well as 97.3%, and RNN + Bidirectional LSTM with 97.45%, respectively. The results of this study demonstrate that deep learning models have the potential to precisely identify foods on the basis of their sound by computing the best outcomes. | 10.1038/s41598-024-57077-z | automated detection and recognition system for chewable food items using advanced deep learning models | identifying and recognizing the food on the basis of its eating sounds is a challenging task, as it plays an important role in avoiding allergic foods, providing dietary preferences to people who are restricted to a particular diet, showcasing its cultural significance, etc. in this research paper, the aim is to design a novel methodology that helps to identify food items by analyzing their eating sounds using various deep learning models. to achieve this objective, a system has been proposed that extracts meaningful features from food-eating sounds with the help of signal processing techniques and deep learning models for classifying them into their respective food classes. initially, 1200 audio files for 20 labeled food items have been collected and visualized to find relationships between the sound files of different food items. later, to extract meaningful features, various techniques such as spectrograms, spectral rolloff, spectral bandwidth, and mel-frequency cepstral coefficients are used for the cleaning of audio files as well as to capture the unique characteristics of different food items. in the next phase, various deep learning models like gru, lstm, inceptionresnetv2, and the customized cnn model have been trained to learn spectral and temporal patterns in audio signals. besides this, the models have also been hybridized, i.e., 
bidirectional lstm + gru and rnn + bidirectional lstm, and rnn + bidirectional gru, to analyze their performance for the same labeled data in order to associate particular patterns of sound with their corresponding class of food item. during evaluation, the highest accuracy, precision, f1 score, and recall have been obtained by gru with 99.28%, bidirectional lstm + gru with 97.7% as well as 97.3%, and rnn + bidirectional lstm with 97.45%, respectively. the results of this study demonstrate that deep learning models have the potential to precisely identify foods on the basis of their sound by computing the best outcomes. | [
"the food",
"the basis",
"its eating sounds",
"a challenging task",
"it",
"an important role",
"allergic foods",
"dietary preferences",
"people",
"who",
"a particular diet",
"its cultural significance",
"this research paper",
"the aim",
"a novel methodology",
"that",
"food items",
"their eating sounds",
"various deep learning models",
"this objective",
"a system",
"meaningful features",
"the help",
"signal processing techniques",
"deep learning models",
"them",
"their respective food classes",
"1200 audio files",
"20 food items",
"relationships",
"the sound files",
"different food items",
"meaningful features",
"various techniques",
"spectrograms",
"spectral rolloff",
"spectral",
"mel-frequency cepstral coefficients",
"the cleaning",
"audio files",
"the unique characteristics",
"different food items",
"the next phase",
"various deep learning models",
"gru",
"lstm",
"inceptionresnetv2",
"the customized cnn model",
"spectral and temporal patterns",
"audio signals",
"this",
"the models",
"i.e. bidirectional lstm",
"bidirectional lstm",
"bidirectional gru",
"their performance",
"the same labeled data",
"order",
"particular patterns",
"sound",
"their corresponding class",
"food item",
"evaluation",
"the highest accuracy",
"precision",
"f1 score",
"recall",
"gru",
"99.28%",
"bidirectional lstm",
"97.7%",
"97.3%",
"bidirectional lstm",
"97.45%",
"the results",
"this study",
"deep learning models",
"the potential",
"foods",
"the basis",
"their sound",
"the best outcomes",
"1200",
"20",
"mel",
"inceptionresnetv2",
"cnn",
"99.28%",
"97.7%",
"97.3%",
"+ bidirectional lstm",
"97.45%"
] |
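A minimal sketch of the kind of pipeline the abstract describes: MFCC features extracted from an eating-sound clip feeding a stacked GRU classifier over the 20 food classes. The sampling rate, coefficient count, and layer sizes below are assumptions rather than the paper's settings.

```python
# Hedged sketch: MFCC front end (librosa) + GRU classifier (Keras).
import numpy as np
import librosa
import tensorflow as tf

def mfcc_features(path: str, n_mfcc: int = 40) -> np.ndarray:
    y, sr = librosa.load(path, sr=22050)              # load and resample the clip
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T                                      # (time_steps, n_mfcc)

NUM_CLASSES = 20  # 20 food items, per the abstract

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 40)),           # variable-length MFCC sequences
    tf.keras.layers.GRU(128, return_sequences=True),
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```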
Rapid diagnosis of celiac disease based on plasma Raman spectroscopy combined with deep learning | [
"Tian Shi",
"Jiahe Li",
"Na Li",
"Cheng Chen",
"Chen Chen",
"Chenjie Chang",
"Shenglong Xue",
"Weidong Liu",
"Ainur Maimaiti Reyim",
"Feng Gao",
"Xiaoyi Lv"
] | Celiac Disease (CD) is a primary malabsorption syndrome resulting from the interplay of genetic, immune, and dietary factors. CD negatively impacts daily activities and may lead to conditions such as osteoporosis, malignancies in the small intestine, ulcerative jejunitis, and enteritis, ultimately causing severe malnutrition. Therefore, an effective and rapid differentiation between healthy individuals and those with celiac disease is crucial for early diagnosis and treatment. This study utilizes Raman spectroscopy combined with deep learning models to achieve a non-invasive, rapid, and accurate diagnostic method for celiac disease and healthy controls. A total of 59 plasma samples, comprising 29 celiac disease cases and 30 healthy controls, were collected for experimental purposes. Convolutional Neural Network (CNN), Multi-Scale Convolutional Neural Network (MCNN), Residual Network (ResNet), and Deep Residual Shrinkage Network (DRSN) classification models were employed. The accuracy rates for these models were found to be 86.67%, 90.76%, 86.67% and 95.00%, respectively. Comparative validation results revealed that the DRSN model exhibited the best performance, with an AUC value and accuracy of 97.60% and 95%, respectively. This confirms the superiority of Raman spectroscopy combined with deep learning in the diagnosis of celiac disease. | 10.1038/s41598-024-64621-4 | rapid diagnosis of celiac disease based on plasma raman spectroscopy combined with deep learning | celiac disease (cd) is a primary malabsorption syndrome resulting from the interplay of genetic, immune, and dietary factors. cd negatively impacts daily activities and may lead to conditions such as osteoporosis, malignancies in the small intestine, ulcerative jejunitis, and enteritis, ultimately causing severe malnutrition. therefore, an effective and rapid differentiation between healthy individuals and those with celiac disease is crucial for early diagnosis and treatment. this study utilizes raman spectroscopy combined with deep learning models to achieve a non-invasive, rapid, and accurate diagnostic method for celiac disease and healthy controls. a total of 59 plasma samples, comprising 29 celiac disease cases and 30 healthy controls, were collected for experimental purposes. convolutional neural network (cnn), multi-scale convolutional neural network (mcnn), residual network (resnet), and deep residual shrinkage network (drsn) classification models were employed. the accuracy rates for these models were found to be 86.67%, 90.76%, 86.67% and 95.00%, respectively. comparative validation results revealed that the drsn model exhibited the best performance, with an auc value and accuracy of 97.60% and 95%, respectively. this confirms the superiority of raman spectroscopy combined with deep learning in the diagnosis of celiac disease. | [
"celiac disease",
"cd",
"a primary malabsorption syndrome",
"the interplay",
"genetic, immune, and dietary factors",
"cd",
"daily activities",
"conditions",
"osteoporosis",
"malignancies",
"the small intestine",
"ulcerative jejunitis",
"enteritis",
"severe malnutrition",
"an effective and rapid differentiation",
"healthy individuals",
"those",
"celiac disease",
"early diagnosis",
"treatment",
"this study",
"raman spectroscopy",
"deep learning models",
"a non-invasive, rapid, and accurate diagnostic method",
"celiac disease",
"healthy controls",
"a total",
"59 plasma samples",
"29 celiac disease cases",
"30 healthy controls",
"experimental purposes",
"convolutional neural network",
"cnn",
"multi-scale convolutional neural network",
"mcnn",
"residual network",
"resnet",
"deep residual shrinkage network",
"drsn) classification models",
"the accuracy rates",
"these models",
"86.67%",
"90.76%",
"86.67%",
"95.00%",
"comparative validation results",
"the drsn model",
"the best performance",
"an auc value",
"accuracy",
"97.60%",
"95%",
"this",
"the superiority",
"raman spectroscopy",
"deep learning",
"the diagnosis",
"celiac disease",
"daily",
"59",
"29",
"30",
"cnn",
"86.67%",
"90.76%",
"86.67%",
"95.00%",
"97.60% and",
"95%"
] |
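A plasma Raman spectrum is a one-dimensional intensity vector, so the CNN baseline above can be sketched as a small 1D convolutional network. This is a generic sketch under an assumed spectrum length and assumed layer sizes; the best-performing DRSN additionally uses residual shrinkage blocks, which are not reproduced here.

```python
# Hedged sketch: 1D CNN for binary celiac-vs-control spectral classification.
import torch
import torch.nn as nn

class RamanCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),               # global pooling over wavenumbers
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_points), one intensity channel per spectrum
        return self.classifier(self.features(x).squeeze(-1))

model = RamanCNN()
logits = model(torch.randn(4, 1, 1024))  # four dummy spectra, assumed 1024 points each
print(logits.shape)                       # torch.Size([4, 2])
```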
A deep learning model for brain segmentation across pediatric and adult populations | [
"Jaime Simarro",
"Maria Ines Meyer",
"Simon Van Eyndhoven",
"Thanh Vân Phan",
"Thibo Billiet",
"Diana M. Sima",
"Els Ortibus"
] | Automated quantification of brain tissues on MR images has greatly contributed to the diagnosis and follow-up of neurological pathologies across various life stages. However, existing solutions are specifically designed for certain age ranges, limiting their applicability in monitoring brain development from infancy to late adulthood. This retrospective study aims to develop and validate a brain segmentation model across pediatric and adult populations. First, we trained a deep learning model to segment tissues and brain structures using T1-weighted MR images from 390 patients (age range: 2–81 years) across four different datasets. Subsequently, the model was validated on a cohort of 280 patients from six distinct test datasets (age range: 4–90 years). In the initial experiment, the proposed deep learning-based pipeline, icobrain-dl, demonstrated segmentation accuracy comparable to both pediatric and adult-specific models across diverse age groups. Subsequently, we evaluated intra- and inter-scanner variability in measurements of various tissues and structures in both pediatric and adult populations computed by icobrain-dl. Results demonstrated significantly higher reproducibility compared to similar brain quantification tools, including childmetrix, FastSurfer, and the medical device icobrain v5.9 (p-value< 0.01). Finally, we explored the potential clinical applications of icobrain-dl measurements in diagnosing pediatric patients with Cerebral Visual Impairment and adult patients with Alzheimer’s Disease. | 10.1038/s41598-024-61798-6 | a deep learning model for brain segmentation across pediatric and adult populations | automated quantification of brain tissues on mr images has greatly contributed to the diagnosis and follow-up of neurological pathologies across various life stages. however, existing solutions are specifically designed for certain age ranges, limiting their applicability in monitoring brain development from infancy to late adulthood. this retrospective study aims to develop and validate a brain segmentation model across pediatric and adult populations. first, we trained a deep learning model to segment tissues and brain structures using t1-weighted mr images from 390 patients (age range: 2–81 years) across four different datasets. subsequently, the model was validated on a cohort of 280 patients from six distinct test datasets (age range: 4–90 years). in the initial experiment, the proposed deep learning-based pipeline, icobrain-dl, demonstrated segmentation accuracy comparable to both pediatric and adult-specific models across diverse age groups. subsequently, we evaluated intra- and inter-scanner variability in measurements of various tissues and structures in both pediatric and adult populations computed by icobrain-dl. results demonstrated significantly higher reproducibility compared to similar brain quantification tools, including childmetrix, fastsurfer, and the medical device icobrain v5.9 (p-value< 0.01). finally, we explored the potential clinical applications of icobrain-dl measurements in diagnosing pediatric patients with cerebral visual impairment and adult patients with alzheimer’s disease. | [
"automated quantification",
"brain tissues",
"mr images",
"the diagnosis",
"follow-up",
"neurological pathologies",
"various life stages",
"existing solutions",
"certain age ranges",
"their applicability",
"brain development",
"infancy",
"late adulthood",
"this retrospective study",
"a brain segmentation model",
"pediatric and adult populations",
"we",
"a deep learning model",
"segment tissues",
"brain structures",
"mr images",
"390 patients",
"age range",
"2–81 years",
"four different datasets",
"the model",
"a cohort",
"280 patients",
"six distinct test datasets",
"age range",
"4–90 years",
"the initial experiment",
"the proposed deep learning-based pipeline",
"icobrain-dl",
"segmentation accuracy",
"both pediatric and adult-specific models",
"diverse age groups",
"we",
"intra- and inter-scanner variability",
"measurements",
"various tissues",
"structures",
"both pediatric and adult populations",
"icobrain-dl. results",
"significantly higher reproducibility",
"similar brain quantification tools",
"childmetrix",
"fastsurfer",
"the medical device icobrain v5.9",
"p-value",
"we",
"the potential clinical applications",
"icobrain-dl measurements",
"pediatric patients",
"cerebral visual impairment",
"adult patients",
"disease",
"first",
"390",
"2–81 years",
"four",
"280",
"six",
"4–90 years"
] |
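Segmentation accuracy and scan-rescan reproducibility of the kind evaluated above are commonly quantified with the Dice overlap between two label maps. The exact metrics used by the paper are not listed in this record, so the snippet below is a generic NumPy illustration with an assumed label scheme.

```python
# Dice coefficient between two segmentations, per tissue label.
import numpy as np

def dice(seg_a: np.ndarray, seg_b: np.ndarray, label: int) -> float:
    """Dice overlap of one label between two segmentations of the same scan."""
    a, b = seg_a == label, seg_b == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

scan = np.random.randint(0, 4, size=(8, 8, 8))  # toy labels: 0=bg, 1=GM, 2=WM, 3=CSF (assumed)
rescan = scan.copy()
rescan[0, 0, 0] = 0                              # simulate a tiny scan-rescan difference
print(round(dice(scan, rescan, label=1), 4))
```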
Phase unwrapping based on deep learning in light field fringe projection 3D measurement | [
"Xinjun Zhu",
"Haichuan Zhao",
"Mengkai Yuan",
"Zhizhi Zhang",
"Hongyi Wang",
"Limei Song"
] | Phase unwrapping plays one of the key roles in fringe projection three-dimensional (3D) measurement technology. We propose a new method to achieve phase unwrapping in camera array light field fringe projection 3D measurement based on deep learning. A multi-stream convolutional neural network (CNN) is proposed to learn the mapping relationship between camera array light field wrapped phases and fringe orders of the expected central view, and is used to predict the fringe order to achieve the phase unwrapping. Experiments are performed on the light field fringe projection data generated by the simulated camera array fringe projection measurement system in Blender and by the experimental 3×3 camera array light field fringe projection system. The performance of the proposed network with light field wrapped phases using multiple directions as network input data is studied, and the advantages of phase unwrapping based on deep learning in light field fringe projection are demonstrated. | 10.1007/s11801-023-3002-4 | phase unwrapping based on deep learning in light field fringe projection 3d measurement | phase unwrapping plays one of the key roles in fringe projection three-dimensional (3d) measurement technology. we propose a new method to achieve phase unwrapping in camera array light field fringe projection 3d measurement based on deep learning. a multi-stream convolutional neural network (cnn) is proposed to learn the mapping relationship between camera array light field wrapped phases and fringe orders of the expected central view, and is used to predict the fringe order to achieve the phase unwrapping. experiments are performed on the light field fringe projection data generated by the simulated camera array fringe projection measurement system in blender and by the experimental 3×3 camera array light field fringe projection system. the performance of the proposed network with light field wrapped phases using multiple directions as network input data is studied, and the advantages of phase unwrapping based on deep learning in light field fringe projection are demonstrated. | [
"phase",
"the key roles",
"fringe projection three-dimensional (3d) measurement technology",
"we",
"a new method",
"phase",
"camera array light",
"fringe projection 3d measurement",
"deep learning",
"a multi-stream convolutional neural network",
"cnn",
"the mapping relationship",
"camera array light",
"phases",
"fringe orders",
"the expected central view",
"the fringe order",
"the phase",
"experiments",
"the light field fringe projection data",
"the simulated camera array fringe projection measurement system",
"blender",
"the experimental 3×3 camera array light field fringe projection system",
"the performance",
"the proposed network",
"light field",
"phases",
"multiple directions",
"network input data",
"the advantages",
"phase",
"deep learning",
"light filed fringe projection",
"three",
"3d",
"3d",
"cnn",
"3×3"
] |
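The relation behind fringe-order prediction is that the absolute phase equals the wrapped phase plus 2π times the fringe order k. Once a network predicts k per pixel, unwrapping is a one-line operation; the sketch below assumes the fringe-order map is already available and does not reproduce the paper's multi-stream CNN.

```python
# Phase unwrapping given a predicted per-pixel fringe order k.
import numpy as np

def unwrap_phase(wrapped: np.ndarray, fringe_order: np.ndarray) -> np.ndarray:
    """Absolute phase = wrapped phase + 2*pi*k."""
    return wrapped + 2.0 * np.pi * fringe_order

wrapped = np.random.uniform(-np.pi, np.pi, size=(4, 4))
k = np.random.randint(0, 8, size=(4, 4))   # the number of fringe periods is an assumption
absolute = unwrap_phase(wrapped, k)
# Rewrapping the absolute phase recovers the wrapped input exactly:
print(np.allclose((absolute + np.pi) % (2 * np.pi) - np.pi, wrapped))  # True
```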
An experimental evaluation of deep reinforcement learning algorithms for HVAC control | [
"Antonio Manjavacas",
"Alejandro Campoy-Nieves",
"Javier Jiménez-Raboso",
"Miguel Molina-Solana",
"Juan Gómez-Romero"
] | Heating, ventilation, and air conditioning (HVAC) systems are a major driver of energy consumption in commercial and residential buildings. Recent studies have shown that Deep Reinforcement Learning (DRL) algorithms can outperform traditional reactive controllers. However, DRL-based solutions are generally designed for ad hoc setups and lack standardization for comparison. To fill this gap, this paper provides a critical and reproducible evaluation, in terms of comfort and energy consumption, of several state-of-the-art DRL algorithms for HVAC control. The study examines the controllers’ robustness, adaptability, and trade-off between optimization goals by using the Sinergym framework. The results obtained confirm the potential of DRL algorithms, such as SAC and TD3, in complex scenarios and reveal several challenges related to generalization and incremental learning. | 10.1007/s10462-024-10819-x | an experimental evaluation of deep reinforcement learning algorithms for hvac control | heating, ventilation, and air conditioning (hvac) systems are a major driver of energy consumption in commercial and residential buildings. recent studies have shown that deep reinforcement learning (drl) algorithms can outperform traditional reactive controllers. however, drl-based solutions are generally designed for ad hoc setups and lack standardization for comparison. to fill this gap, this paper provides a critical and reproducible evaluation, in terms of comfort and energy consumption, of several state-of-the-art drl algorithms for hvac control. the study examines the controllers’ robustness, adaptability, and trade-off between optimization goals by using the sinergym framework. the results obtained confirm the potential of drl algorithms, such as sac and td3, in complex scenarios and reveal several challenges related to generalization and incremental learning. | [
"heating, ventilation, and air conditioning (hvac) systems",
"a major driver",
"energy consumption",
"commercial and residential buildings",
"recent studies",
"drl",
"traditional reactive controllers",
"drl-based solutions",
"ad hoc setups",
"lack standardization",
"comparison",
"this gap",
"this paper",
"a critical and reproducible evaluation",
"terms",
"comfort and energy consumption",
"the-art",
"hvac control",
"the study",
"the controllers’ robustness",
"adaptability",
"trade-off",
"optimization goals",
"the sinergym framework",
"the results",
"the potential",
"drl algorithms",
"sac",
"td3",
"complex scenarios",
"several challenges",
"generalization",
"incremental learning"
] |
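As a hedged sketch of the experimental setup described above: a SAC agent trained on a Sinergym building environment through stable-baselines3. The environment id below is illustrative, since registered names vary across Sinergym versions; check your installed version for the exact ids.

```python
# Hedged sketch: SAC on a Sinergym HVAC environment via stable-baselines3.
import gymnasium as gym
import sinergym                     # importing sinergym registers the Eplus-* envs
from stable_baselines3 import SAC

env = gym.make("Eplus-5zone-hot-continuous-v1")  # assumed id, version-dependent
model = SAC("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)              # tiny budget, for illustration only

obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```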
Development and application of a deep learning-based comprehensive early diagnostic model for chronic obstructive pulmonary disease | [
"Zecheng Zhu",
"Shunjin Zhao",
"Jiahui Li",
"Yuting Wang",
"Luopiao Xu",
"Yubing Jia",
"Zihan Li",
"Wenyuan Li",
"Gang Chen",
"Xifeng Wu"
] | Background: Chronic obstructive pulmonary disease (COPD) is a frequently diagnosed yet treatable condition, provided it is identified early and managed effectively. This study aims to develop an advanced COPD diagnostic model by integrating deep learning and radiomics features. Methods: We utilized a dataset comprising CT images from 2,983 participants, of which 2,317 participants also provided epidemiological data through questionnaires. Deep learning features were extracted using a Variational Autoencoder, and radiomics features were obtained using the PyRadiomics package. Multi-Layer Perceptrons were used to construct models based on deep learning and radiomics features independently, as well as a fusion model integrating both. Subsequently, epidemiological questionnaire data were incorporated to establish a more comprehensive model. The diagnostic performance of standalone models, the fusion model and the comprehensive model was evaluated and compared using metrics including accuracy, precision, recall, F1-score, Brier score, receiver operating characteristic curves, and area under the curve (AUC). Results: The fusion model exhibited outstanding performance with an AUC of 0.952, surpassing the standalone models based solely on deep learning features (AUC = 0.844) or radiomics features (AUC = 0.944). Notably, the comprehensive model, incorporating deep learning features, radiomics features, and questionnaire variables, demonstrated the highest diagnostic performance among all models, yielding an AUC of 0.971. Conclusion: We developed and implemented a data fusion strategy to construct a state-of-the-art COPD diagnostic model integrating deep learning features, radiomics features, and questionnaire variables. Our data fusion strategy proved effective, and the model can be easily deployed in clinical settings. Trial registration: Not applicable. This study is NOT a clinical trial; it does not report the results of a health care intervention on human participants. | 10.1186/s12931-024-02793-3 | development and application of a deep learning-based comprehensive early diagnostic model for chronic obstructive pulmonary disease | background: chronic obstructive pulmonary disease (copd) is a frequently diagnosed yet treatable condition, provided it is identified early and managed effectively. this study aims to develop an advanced copd diagnostic model by integrating deep learning and radiomics features. methods: we utilized a dataset comprising ct images from 2,983 participants, of which 2,317 participants also provided epidemiological data through questionnaires. deep learning features were extracted using a variational autoencoder, and radiomics features were obtained using the pyradiomics package. multi-layer perceptrons were used to construct models based on deep learning and radiomics features independently, as well as a fusion model integrating both. subsequently, epidemiological questionnaire data were incorporated to establish a more comprehensive model. the diagnostic performance of standalone models, the fusion model and the comprehensive model was evaluated and compared using metrics including accuracy, precision, recall, f1-score, brier score, receiver operating characteristic curves, and area under the curve (auc). results: the fusion model exhibited outstanding performance with an auc of 0.952, surpassing the standalone models based solely on deep learning features (auc = 0.844) or radiomics features (auc = 0.944). 
notably, the comprehensive model, incorporating deep learning features, radiomics features, and questionnaire variables, demonstrated the highest diagnostic performance among all models, yielding an auc of 0.971. conclusion: we developed and implemented a data fusion strategy to construct a state-of-the-art copd diagnostic model integrating deep learning features, radiomics features, and questionnaire variables. our data fusion strategy proved effective, and the model can be easily deployed in clinical settings. trial registration: not applicable. this study is not a clinical trial; it does not report the results of a health care intervention on human participants. | [
"backgroundchronic obstructive pulmonary disease",
"copd",
"a frequently diagnosed yet treatable condition",
"it",
"this study",
"an advanced copd diagnostic model",
"deep learning and radiomics features.methodswe",
"a dataset",
"ct images",
"2,983 participants",
"which",
"2,317 participants",
"epidemiological data",
"questionnaires",
"deep learning features",
"a variational autoencoder",
"radiomics features",
"the pyradiomics package",
"multi-layer perceptrons",
"models",
"deep learning",
"radiomics",
"a fusion model",
"both",
"epidemiological questionnaire data",
"a more comprehensive model",
"the diagnostic performance",
"standalone models",
"the fusion model",
"the comprehensive model",
"metrics",
"accuracy",
"precision",
"recall, f1-score",
"brier score",
"the curve",
"auc).resultsthe fusion model",
"outstanding performance",
"an auc",
"the standalone models",
"deep learning features",
"auc",
"radiomics features",
"auc =",
"the comprehensive model",
"deep learning features",
"radiomics features",
"questionnaire variables",
"the highest diagnostic performance",
"all models",
"an auc",
"0.971.conclusionwe",
"a data fusion strategy",
"the-art",
"deep learning features",
"radiomics features",
"questionnaire variables",
"our data fusion strategy",
"the model",
"clinical settings.trial registrationnot",
"this study",
"a clinical trial",
"it",
"the results",
"a health care intervention",
"human participants",
"2,983",
"2,317",
"0.952",
"0.844",
"0.944",
"0.971.conclusionwe"
] |
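The fusion step above amounts to concatenating three feature blocks (VAE-derived deep features, radiomics features, and questionnaire variables) before a small classifier. A minimal PyTorch sketch follows; all dimensionalities and layer sizes are assumptions, not values taken from the paper.

```python
# Hedged sketch of feature-level fusion with an MLP head.
import torch
import torch.nn as nn

class FusionMLP(nn.Module):
    def __init__(self, d_deep: int = 128, d_radiomics: int = 100, d_quest: int = 20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_deep + d_radiomics + d_quest, 128),
            nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, 2),          # COPD vs. non-COPD logits
        )

    def forward(self, deep, radiomics, quest):
        return self.net(torch.cat([deep, radiomics, quest], dim=1))

model = FusionMLP()
logits = model(torch.randn(8, 128), torch.randn(8, 100), torch.randn(8, 20))
print(logits.shape)  # torch.Size([8, 2])
```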
A multi-agent adaptive deep learning framework for online intrusion detection | [
"Mahdi Soltani",
"Khashayar Khajavi",
"Mahdi Jafari Siavoshani",
"Amir Hossein Jahangir"
] | The network security analyzers use intrusion detection systems (IDSes) to distinguish malicious traffic from benign ones. The deep learning-based (DL-based) IDSes are proposed to auto-extract high-level features and eliminate the time-consuming and costly signature extraction process. However, this new generation of IDSes still needs to overcome a number of challenges to be employed in practical environments. One of the main issues of an applicable IDS is facing traffic concept drift, which manifests itself as new (i.e. , zero-day) attacks, in addition to the changing behavior of benign users/applications. Furthermore, a practical DL-based IDS needs to be conformed to a distributed (i.e. , multi-sensor) architecture in order to yield more accurate detections, create a collective attack knowledge based on the observations of different sensors, and also handle big data challenges for supporting high throughput networks. This paper proposes a novel multi-agent network intrusion detection framework to address the above shortcomings, considering a more practical scenario (i.e., online adaptable IDSes). This framework employs continual deep anomaly detectors for adapting each agent to the changing attack/benign patterns in its local traffic. In addition, a federated learning approach is proposed for sharing and exchanging local knowledge between different agents. Furthermore, the proposed framework implements sequential packet labeling for each flow, which provides an attack probability score for the flow by gradually observing each flow packet and updating its estimation. We evaluate the proposed framework by employing different deep models (including CNN-based and LSTM-based) over the CIC-IDS2017 and CSE-CIC-IDS2018 datasets. Through extensive evaluations and experiments, we show that the proposed distributed framework is well adapted to the traffic concept drift. More precisely, our results indicate that the CNN-based models are well suited for continually adapting to the traffic concept drift (i.e. , achieving an average detection rate of above 95% while needing just 128 new flows for the updating phase), and the LSTM-based models are a good candidate for sequential packet labeling in practical online IDSes (i.e. , detecting intrusions by just observing their first 15 packets). | 10.1186/s42400-023-00199-0 | a multi-agent adaptive deep learning framework for online intrusion detection | the network security analyzers use intrusion detection systems (idses) to distinguish malicious traffic from benign ones. the deep learning-based (dl-based) idses are proposed to auto-extract high-level features and eliminate the time-consuming and costly signature extraction process. however, this new generation of idses still needs to overcome a number of challenges to be employed in practical environments. one of the main issues of an applicable ids is facing traffic concept drift, which manifests itself as new (i.e. , zero-day) attacks, in addition to the changing behavior of benign users/applications. furthermore, a practical dl-based ids needs to be conformed to a distributed (i.e. , multi-sensor) architecture in order to yield more accurate detections, create a collective attack knowledge based on the observations of different sensors, and also handle big data challenges for supporting high throughput networks. this paper proposes a novel multi-agent network intrusion detection framework to address the above shortcomings, considering a more practical scenario (i.e., online adaptable idses). 
this framework employs continual deep anomaly detectors for adapting each agent to the changing attack/benign patterns in its local traffic. in addition, a federated learning approach is proposed for sharing and exchanging local knowledge between different agents. furthermore, the proposed framework implements sequential packet labeling for each flow, which provides an attack probability score for the flow by gradually observing each flow packet and updating its estimation. we evaluate the proposed framework by employing different deep models (including cnn-based and lstm-based) over the cic-ids2017 and cse-cic-ids2018 datasets. through extensive evaluations and experiments, we show that the proposed distributed framework is well adapted to the traffic concept drift. more precisely, our results indicate that the cnn-based models are well suited for continually adapting to the traffic concept drift (i.e. , achieving an average detection rate of above 95% while needing just 128 new flows for the updating phase), and the lstm-based models are a good candidate for sequential packet labeling in practical online idses (i.e. , detecting intrusions by just observing their first 15 packets). | [
"the network security analyzers",
"intrusion detection systems",
"(idses",
"malicious traffic",
"benign ones",
"the deep learning-based (dl-based) idses",
"auto-extract high-level features",
"the time-consuming and costly signature extraction process",
"this new generation",
"idses",
"a number",
"challenges",
"practical environments",
"the main issues",
"an applicable ids",
"traffic concept drift",
"which",
"itself",
"new (i.e. , zero-day",
"attacks",
"addition",
"the changing behavior",
"benign users/applications",
"a practical dl-based ids",
"a distributed (i.e. , multi-sensor) architecture",
"order",
"more accurate detections",
"a collective attack knowledge",
"the observations",
"different sensors",
"big data challenges",
"high throughput networks",
"this paper",
"a novel multi-agent network intrusion detection framework",
"the above shortcomings",
"a more practical scenario",
"(i.e., online adaptable idses",
"this framework",
"continual deep anomaly detectors",
"each agent",
"the changing attack/benign patterns",
"its local traffic",
"addition",
"a federated learning approach",
"local knowledge",
"different agents",
"the proposed framework",
"sequential packet labeling",
"each flow",
"which",
"an attack probability score",
"the flow",
"each flow packet",
"its estimation",
"we",
"the proposed framework",
"different deep models",
"the cic",
"-ids2017",
"cse-cic-ids2018",
"datasets",
"extensive evaluations",
"experiments",
"we",
"the proposed distributed framework",
"the traffic concept drift",
"our results",
"the cnn-based models",
"the traffic concept drift",
"an average detection rate",
"above 95%",
"just 128 new flows",
"the updating phase",
"the lstm-based models",
"a good candidate",
"sequential packet labeling",
"practical online idses",
"intrusions",
"their first 15 packets",
"one",
"zero-day",
"cnn",
"cnn",
"95%",
"just 128",
"first",
"15"
] |
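The knowledge-sharing step in the framework above follows a federated learning pattern. The toy FedAvg round below averages per-agent model parameters with equal weights; both the weighting and the linear stand-in models are assumptions, not the paper's exact aggregation protocol.

```python
# Toy FedAvg round over per-sensor IDS models.
import copy
import torch
import torch.nn as nn

def federated_average(agent_models: list) -> dict:
    """Average the state dicts of per-agent models (equal weighting assumed)."""
    avg = copy.deepcopy(agent_models[0].state_dict())
    for key in avg:
        stacked = torch.stack([m.state_dict()[key].float() for m in agent_models])
        avg[key] = stacked.mean(dim=0)
    return avg

agents = [nn.Linear(10, 2) for _ in range(3)]  # stand-ins for per-sensor detectors
global_state = federated_average(agents)
for m in agents:                               # broadcast the merged knowledge back
    m.load_state_dict(global_state)
```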
Addressing data imbalance challenges in oral cavity histopathological whole slide images with advanced deep learning techniques | [
"Tabasum Majeed",
"Tariq Ahmad Masoodi",
"Muzafar Ahmad Macha",
"Muzafar Rasool Bhat",
"Khalid Muzaffar",
"Assif Assad"
] | Oral Cavity Squamous Cell Carcinoma (OCSCC) represents a common form of head and neck cancer originating from the mucosal lining of the oral cavity, often detected in advanced stages. Traditional detection methods rely on analyzing hematoxylin and eosin (H&E)-stained histopathological whole-slide images, which are time-consuming and require expert pathology skills. Hence, automated analysis is urgently needed to expedite diagnosis and improve patient outcomes. Deep learning, through automated feature extraction, offers a promising avenue for capturing high-level abstract features with greater accuracy than traditional methods. However, the imbalance in class distribution within datasets significantly affects the performance of deep learning models during training, necessitating specialized approaches. To address the issue, various methods have been proposed at both data and algorithmic levels. This study investigates strategies to mitigate class imbalance by employing a publicly available OCSCC imbalance dataset. We evaluated undersampling methods (Near Miss, Edited Nearest Neighbors) and oversampling techniques (SMOTE, Deep SMOTE, ADASYN) integrated with transfer learning across different imbalance ratios (0.1, 0.15, 0.20, 0.30). Our findings demonstrate the effectiveness of SMOTE in improving test performance, highlighting the efficacy of strategic oversampling combined with transfer learning in classifying imbalanced medical datasets. This enhances OCSCC diagnostic accuracy, streamlines clinical decisions, and reduces reliance on costly histopathological tests. | 10.1007/s13198-024-02440-6 | addressing data imbalance challenges in oral cavity histopathological whole slide images with advanced deep learning techniques | oral cavity squamous cell carcinoma (ocscc) represents a common form of head and neck cancer originating from the mucosal lining of the oral cavity, often detected in advanced stages. traditional detection methods rely on analyzing hematoxylin and eosin (h&e)-stained histopathological whole-slide images, which are time-consuming and require expert pathology skills. hence, automated analysis is urgently needed to expedite diagnosis and improve patient outcomes. deep learning, through automated feature extraction, offers a promising avenue for capturing high-level abstract features with greater accuracy than traditional methods. however, the imbalance in class distribution within datasets significantly affects the performance of deep learning models during training, necessitating specialized approaches. to address the issue, various methods have been proposed at both data and algorithmic levels. this study investigates strategies to mitigate class imbalance by employing a publicly available ocscc imbalance dataset. we evaluated undersampling methods (near miss, edited nearest neighbors) and oversampling techniques (smote, deep smote, adasyn) integrated with transfer learning across different imbalance ratios (0.1, 0.15, 0.20, 0.30). our findings demonstrate the effectiveness of smote in improving test performance, highlighting the efficacy of strategic oversampling combined with transfer learning in classifying imbalanced medical datasets. this enhances ocscc diagnostic accuracy, streamlines clinical decisions, and reduces reliance on costly histopathological tests. | [
"oral cavity squamous cell carcinoma",
"ocscc",
"a common form",
"head and neck cancer",
"the mucosal lining",
"the oral cavity",
"advanced stages",
"traditional detection methods",
"hematoxylin",
"eosin",
"histopathological whole-slide images",
"which",
"expert pathology skills",
"automated analysis",
"diagnosis",
"patient outcomes",
"deep learning",
"automated feature extraction",
"a promising avenue",
"high-level abstract features",
"greater accuracy",
"traditional methods",
"the imbalance",
"class distribution",
"datasets",
"the performance",
"deep learning models",
"training",
"specialized approaches",
"the issue",
"various methods",
"both data",
"algorithmic levels",
"this study",
"strategies",
"class imbalance",
"a publicly available ocscc imbalance dataset",
"we",
"undersampling methods",
"near miss",
"neighbors",
"techniques",
"smote",
"deep smote",
"adasyn",
"different imbalance ratios",
"our findings",
"the effectiveness",
"smote",
"test performance",
"the efficacy",
"strategic oversampling",
"imbalanced medical datasets",
"diagnostic accuracy",
"clinical decisions",
"reliance",
"costly histopathological tests",
"hematoxylin",
"0.1",
"0.15",
"0.20",
"0.30"
] |
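The best-performing strategy reported above pairs SMOTE oversampling with transfer learning. The snippet below sketches only the oversampling step, applied to feature vectors such as CNN embeddings of slide patches; the embedding size, sample counts, and the 1:1 target ratio are assumptions.

```python
# Hedged sketch: SMOTE on extracted feature vectors with imbalanced-learn.
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 512))         # assumed 512-d embeddings of patches
y = np.array([0] * 900 + [1] * 100)      # roughly a 0.1 imbalance ratio, as studied

smote = SMOTE(sampling_strategy=1.0, random_state=0)  # resample minority to 1:1
X_res, y_res = smote.fit_resample(X, y)
print(np.bincount(y), "->", np.bincount(y_res))       # [900 100] -> [900 900]
```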
COVID-19 Fake News Detection using Deep Learning Model | [
"Mahabuba Akhter",
"Syed Md. Minhaz Hossain",
"Rizma Sijana Nigar",
"Srabanti Paul",
"Khaleque Md. Aashiq Kamal",
"Anik Sen",
"Iqbal H. Sarker"
] | People may now receive and share information more quickly and easily than ever due to the widespread use of mobile networked devices. However, this can occasionally lead to the spread of false information. Such information is being disseminated widely, which may cause people to make incorrect decisions about potentially crucial topics. This occurred in 2020, the year of the fatal and extremely contagious Coronavirus Disease (COVID-19) outbreak. The spread of false information about COVID-19 on social media has already been labeled as an “infodemic” by the World Health Organization (WHO), causing serious difficulties for governments attempting to control the pandemic. Consequently, it is crucial to have a model for detecting fake news related to COVID-19. In this paper, we present an effective Convolutional Neural Network (CNN)-based deep learning model using word embedding. For selecting the best CNN architecture, we take into account the optimal values of model hyper-parameters using grid search. Further, for measuring the effectiveness of our proposed CNN model, various state-of-the-art machine learning algorithms are conducted for COVID-19 fake news detection. Among them, CNN outperforms with 96.19% mean accuracy, 95% mean F1-score, and 0.985 area under ROC curve (AUC). | 10.1007/s40745-023-00507-y | covid-19 fake news detection using deep learning model | people may now receive and share information more quickly and easily than ever due to the widespread use of mobile networked devices. however, this can occasionally lead to the spread of false information. such information is being disseminated widely, which may cause people to make incorrect decisions about potentially crucial topics. this occurred in 2020, the year of the fatal and extremely contagious coronavirus disease (covid-19) outbreak. the spread of false information about covid-19 on social media has already been labeled as an “infodemic” by the world health organization (who), causing serious difficulties for governments attempting to control the pandemic. consequently, it is crucial to have a model for detecting fake news related to covid-19. in this paper, we present an effective convolutional neural network (cnn)-based deep learning model using word embedding. for selecting the best cnn architecture, we take into account the optimal values of model hyper-parameters using grid search. further, for measuring the effectiveness of our proposed cnn model, various state-of-the-art machine learning algorithms are conducted for covid-19 fake news detection. among them, cnn outperforms with 96.19% mean accuracy, 95% mean f1-score, and 0.985 area under roc curve (auc). | [
"people",
"information",
"the widespread use",
"mobile networked devices",
"this",
"the spread",
"false information",
"such information",
"which",
"people",
"incorrect decisions",
"potentially crucial topics",
"this",
"the year",
"the fatal and extremely contagious coronavirus disease",
"covid-19",
"the spread",
"false information",
"covid-19",
"social media",
"the world health organization",
"who",
"serious difficulties",
"governments",
"it",
"a model",
"fake news",
"covid-19",
"this paper",
"we",
"an effective convolutional neural network",
"cnn)-based deep learning model",
"word",
"the best cnn architecture",
"we",
"account",
"the optimal values",
"model hyper-parameters",
"grid search",
"the effectiveness",
"our proposed cnn model",
"the-art",
"covid-19 fake news detection",
"them",
"cnn",
"96.19%",
"mean accuracy",
"95%",
"0.985 area",
"roc curve",
"auc",
"2020",
"the year",
"covid-19",
"covid-19",
"the world health organization",
"covid-19",
"cnn",
"cnn",
"covid-19",
"cnn",
"96.19%",
"95%",
"0.985",
"roc"
] |
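A minimal sketch of a CNN-with-word-embeddings text classifier of the kind described above. The vocabulary size, sequence length, and filter settings are placeholders; the paper selects its hyper-parameters by grid search rather than fixing them as done here.

```python
# Hedged sketch: Conv1D text classifier for binary fake/real news.
import tensorflow as tf

VOCAB, MAXLEN = 20000, 200  # assumed vocabulary size and padded sequence length

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAXLEN,)),
    tf.keras.layers.Embedding(VOCAB, 128),              # learned word embeddings
    tf.keras.layers.Conv1D(128, 5, activation="relu"),  # n-gram-like feature maps
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # fake vs. real
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
```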
An Outlook for Deep Learning in Ecosystem Science | [
"George L. W. Perry",
"Rupert Seidl",
"André M. Bellvé",
"Werner Rammer"
] | Rapid advances in hardware and software, accompanied by public- and private-sector investment, have led to a new generation of data-driven computational tools. Recently, there has been a particular focus on deep learning—a class of machine learning algorithms that uses deep neural networks to identify patterns in large and heterogeneous datasets. These developments have been accompanied by both hype and scepticism by ecologists and others. This review describes the context in which deep learning methods have emerged, the deep learning methods most relevant to ecosystem ecologists, and some of the problem domains they have been applied to. Deep learning methods have high predictive performance in a range of ecological contexts, leveraging the large data resources now available. Furthermore, deep learning tools offer ecosystem ecologists new ways to learn about ecosystem dynamics. In particular, recent advances in interpretable machine learning and in developing hybrid approaches combining deep learning and mechanistic models provide a bridge between pure prediction and causal explanation. We conclude by looking at the opportunities that deep learning tools offer ecosystem ecologists and assess the challenges in interpretability that deep learning applications pose. | 10.1007/s10021-022-00789-y | an outlook for deep learning in ecosystem science | rapid advances in hardware and software, accompanied by public- and private-sector investment, have led to a new generation of data-driven computational tools. recently, there has been a particular focus on deep learning—a class of machine learning algorithms that uses deep neural networks to identify patterns in large and heterogeneous datasets. these developments have been accompanied by both hype and scepticism by ecologists and others. this review describes the context in which deep learning methods have emerged, the deep learning methods most relevant to ecosystem ecologists, and some of the problem domains they have been applied to. deep learning methods have high predictive performance in a range of ecological contexts, leveraging the large data resources now available. furthermore, deep learning tools offer ecosystem ecologists new ways to learn about ecosystem dynamics. in particular, recent advances in interpretable machine learning and in developing hybrid approaches combining deep learning and mechanistic models provide a bridge between pure prediction and causal explanation. we conclude by looking at the opportunities that deep learning tools offer ecosystem ecologists and assess the challenges in interpretability that deep learning applications pose. | [
"rapid advances",
"hardware",
"software",
"public-",
"private-sector investment",
"a new generation",
"data-driven computational tools",
"a particular focus",
"deep learning",
"a class",
"machine learning algorithms",
"that",
"deep neural networks",
"patterns",
"large and heterogeneous datasets",
"these developments",
"both hype",
"scepticism",
"ecologists",
"others",
"this review",
"the context",
"which",
"deep learning methods",
"the deep learning methods",
"ecosystem ecologists",
"some",
"the problem domains",
"they",
"deep learning methods",
"high predictive performance",
"a range",
"ecological contexts",
"the large data resources",
"deep learning tools",
"ecosystem",
"new ways",
"ecosystem dynamics",
"recent advances",
"interpretable machine learning",
"hybrid approaches",
"deep learning",
"mechanistic models",
"a bridge",
"pure prediction",
"causal explanation",
"we",
"the opportunities",
"deep learning tools",
"ecosystem ecologists",
"the challenges",
"interpretability",
"deep learning applications"
] |
A multi-agent adaptive deep learning framework for online intrusion detection | [
"Mahdi Soltani",
"Khashayar Khajavi",
"Mahdi Jafari Siavoshani",
"Amir Hossein Jahangir"
] | The network security analyzers use intrusion detection systems (IDSes) to distinguish malicious traffic from benign ones. The deep learning-based (DL-based) IDSes are proposed to auto-extract high-level features and eliminate the time-consuming and costly signature extraction process. However, this new generation of IDSes still needs to overcome a number of challenges to be employed in practical environments. One of the main issues of an applicable IDS is facing traffic concept drift, which manifests itself as new (i.e. , zero-day) attacks, in addition to the changing behavior of benign users/applications. Furthermore, a practical DL-based IDS needs to be conformed to a distributed (i.e. , multi-sensor) architecture in order to yield more accurate detections, create a collective attack knowledge based on the observations of different sensors, and also handle big data challenges for supporting high throughput networks. This paper proposes a novel multi-agent network intrusion detection framework to address the above shortcomings, considering a more practical scenario (i.e., online adaptable IDSes). This framework employs continual deep anomaly detectors for adapting each agent to the changing attack/benign patterns in its local traffic. In addition, a federated learning approach is proposed for sharing and exchanging local knowledge between different agents. Furthermore, the proposed framework implements sequential packet labeling for each flow, which provides an attack probability score for the flow by gradually observing each flow packet and updating its estimation. We evaluate the proposed framework by employing different deep models (including CNN-based and LSTM-based) over the CIC-IDS2017 and CSE-CIC-IDS2018 datasets. Through extensive evaluations and experiments, we show that the proposed distributed framework is well adapted to the traffic concept drift. More precisely, our results indicate that the CNN-based models are well suited for continually adapting to the traffic concept drift (i.e. , achieving an average detection rate of above 95% while needing just 128 new flows for the updating phase), and the LSTM-based models are a good candidate for sequential packet labeling in practical online IDSes (i.e. , detecting intrusions by just observing their first 15 packets). | 10.1186/s42400-023-00199-0 | a multi-agent adaptive deep learning framework for online intrusion detection | the network security analyzers use intrusion detection systems (idses) to distinguish malicious traffic from benign ones. the deep learning-based (dl-based) idses are proposed to auto-extract high-level features and eliminate the time-consuming and costly signature extraction process. however, this new generation of idses still needs to overcome a number of challenges to be employed in practical environments. one of the main issues of an applicable ids is facing traffic concept drift, which manifests itself as new (i.e. , zero-day) attacks, in addition to the changing behavior of benign users/applications. furthermore, a practical dl-based ids needs to be conformed to a distributed (i.e. , multi-sensor) architecture in order to yield more accurate detections, create a collective attack knowledge based on the observations of different sensors, and also handle big data challenges for supporting high throughput networks. this paper proposes a novel multi-agent network intrusion detection framework to address the above shortcomings, considering a more practical scenario (i.e., online adaptable idses). 
this framework employs continual deep anomaly detectors for adapting each agent to the changing attack/benign patterns in its local traffic. in addition, a federated learning approach is proposed for sharing and exchanging local knowledge between different agents. furthermore, the proposed framework implements sequential packet labeling for each flow, which provides an attack probability score for the flow by gradually observing each flow packet and updating its estimation. we evaluate the proposed framework by employing different deep models (including cnn-based and lstm-based) over the cic-ids2017 and cse-cic-ids2018 datasets. through extensive evaluations and experiments, we show that the proposed distributed framework is well adapted to the traffic concept drift. more precisely, our results indicate that the cnn-based models are well suited for continually adapting to the traffic concept drift (i.e. , achieving an average detection rate of above 95% while needing just 128 new flows for the updating phase), and the lstm-based models are a good candidate for sequential packet labeling in practical online idses (i.e. , detecting intrusions by just observing their first 15 packets). | [
"the network security analyzers",
"intrusion detection systems",
"(idses",
"malicious traffic",
"benign ones",
"the deep learning-based (dl-based) idses",
"auto-extract high-level features",
"the time-consuming and costly signature extraction process",
"this new generation",
"idses",
"a number",
"challenges",
"practical environments",
"the main issues",
"an applicable ids",
"traffic concept drift",
"which",
"itself",
"new (i.e. , zero-day",
"attacks",
"addition",
"the changing behavior",
"benign users/applications",
"a practical dl-based ids",
"a distributed (i.e. , multi-sensor) architecture",
"order",
"more accurate detections",
"a collective attack knowledge",
"the observations",
"different sensors",
"big data challenges",
"high throughput networks",
"this paper",
"a novel multi-agent network intrusion detection framework",
"the above shortcomings",
"a more practical scenario",
"(i.e., online adaptable idses",
"this framework",
"continual deep anomaly detectors",
"each agent",
"the changing attack/benign patterns",
"its local traffic",
"addition",
"a federated learning approach",
"local knowledge",
"different agents",
"the proposed framework",
"sequential packet labeling",
"each flow",
"which",
"an attack probability score",
"the flow",
"each flow packet",
"its estimation",
"we",
"the proposed framework",
"different deep models",
"the cic",
"-ids2017",
"cse-cic-ids2018",
"datasets",
"extensive evaluations",
"experiments",
"we",
"the proposed distributed framework",
"the traffic concept drift",
"our results",
"the cnn-based models",
"the traffic concept drift",
"an average detection rate",
"above 95%",
"just 128 new flows",
"the updating phase",
"the lstm-based models",
"a good candidate",
"sequential packet labeling",
"practical online idses",
"intrusions",
"their first 15 packets",
"one",
"zero-day",
"cnn",
"cnn",
"95%",
"just 128",
"first",
"15"
] |
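The multi-agent IDS entry above reports that LSTM-based models can score a flow packet by packet, flagging intrusions within the first 15 packets. A minimal sketch of that sequential packet-labeling idea, assuming each packet has already been reduced to a fixed-length feature vector; the feature size, hidden width, and single-layer LSTM are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class PacketLabeler(nn.Module):
    """Emits an updated attack-probability score after each packet of a flow."""
    def __init__(self, n_features: int = 32, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, packets: torch.Tensor) -> torch.Tensor:
        # packets: (batch, seq_len, n_features), one row per observed packet
        h, _ = self.lstm(packets)            # hidden state after every packet
        return torch.sigmoid(self.head(h))   # (batch, seq_len, 1) running scores

model = PacketLabeler()
flow = torch.randn(1, 15, 32)                # e.g. the first 15 packets of one flow
scores = model(flow)
print(scores[0, -1].item())                  # attack-probability after packet 15
```

Because the score is refreshed after every packet, a flow can be flagged as soon as the running probability crosses a threshold, without waiting for flow termination.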
Deep learning implementation of image segmentation in agricultural applications: a comprehensive review | [
"Lian Lei",
"Qiliang Yang",
"Ling Yang",
"Tao Shen",
"Ruoxi Wang",
"Chengbiao Fu"
] | Image segmentation is a crucial task in computer vision, which divides a digital image into multiple segments and objects. In agriculture, image segmentation is extensively used for crop and soil monitoring, predicting the best times to sow, fertilize, and harvest, estimating crop yield, and detecting plant diseases. However, image segmentation faces difficulties in agriculture, such as the challenges of disease staging recognition, labeling inconsistency, and changes in plant morphology with the environment. Consequently, we have conducted a comprehensive review of image segmentation techniques based on deep learning, exploring the development and prospects of image segmentation in agriculture. Deep learning-based image segmentation solutions widely used in agriculture are categorized into eight main groups: encoder-decoder structures, multi-scale and pyramid-based methods, dilated convolutional networks, visual attention models, generative adversarial networks, graph neural networks, instance segmentation networks, and transformer-based models. In addition, the applications of image segmentation methods in agriculture are presented, such as plant disease detection, weed identification, crop growth monitoring, crop yield estimation, and counting. Furthermore, a collection of publicly available plant image segmentation datasets has been reviewed, and the evaluation and comparison of performance for image segmentation algorithms have been conducted on benchmark datasets. Finally, there is a discussion of the challenges and future prospects of image segmentation in agriculture. | 10.1007/s10462-024-10775-6 | deep learning implementation of image segmentation in agricultural applications: a comprehensive review | image segmentation is a crucial task in computer vision, which divides a digital image into multiple segments and objects. in agriculture, image segmentation is extensively used for crop and soil monitoring, predicting the best times to sow, fertilize, and harvest, estimating crop yield, and detecting plant diseases. however, image segmentation faces difficulties in agriculture, such as the challenges of disease staging recognition, labeling inconsistency, and changes in plant morphology with the environment. consequently, we have conducted a comprehensive review of image segmentation techniques based on deep learning, exploring the development and prospects of image segmentation in agriculture. deep learning-based image segmentation solutions widely used in agriculture are categorized into eight main groups: encoder-decoder structures, multi-scale and pyramid-based methods, dilated convolutional networks, visual attention models, generative adversarial networks, graph neural networks, instance segmentation networks, and transformer-based models. in addition, the applications of image segmentation methods in agriculture are presented, such as plant disease detection, weed identification, crop growth monitoring, crop yield estimation, and counting. furthermore, a collection of publicly available plant image segmentation datasets has been reviewed, and the evaluation and comparison of performance for image segmentation algorithms have been conducted on benchmark datasets. finally, there is a discussion of the challenges and future prospects of image segmentation in agriculture. | [
"image segmentation",
"a crucial task",
"computer vision",
"which",
"a digital image",
"multiple segments",
"objects",
"agriculture",
"image segmentation",
"crop",
"soil monitoring",
"the best times",
"crop yield",
"plant diseases",
"image segmentation",
"difficulties",
"agriculture",
"the challenges",
"disease staging recognition",
"labeling inconsistency",
"changes",
"plant morphology",
"the environment",
"we",
"a comprehensive review",
"image segmentation techniques",
"deep learning",
"the development",
"prospects",
"image segmentation",
"agriculture",
"deep learning-based image segmentation solutions",
"agriculture",
"eight main groups",
"encoder-decoder structures",
"-scale and pyramid-based methods",
"dilated convolutional networks",
"visual attention models",
"generative adversarial networks",
"graph neural networks",
"instance segmentation networks",
"transformer-based models",
"addition",
"the applications",
"image segmentation methods",
"agriculture",
"plant disease detection",
"identification",
"crop growth monitoring",
"crop yield estimation",
"a collection",
"publicly available plant image segmentation datasets",
"the evaluation",
"comparison",
"performance",
"image segmentation algorithms",
"benchmark datasets",
"a discussion",
"the challenges",
"future prospects",
"image segmentation",
"agriculture",
"eight"
] |
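Encoder-decoder structures are the first of the eight groups surveyed in the entry above; a minimal PyTorch sketch of that pattern maps an RGB field image to a per-pixel class map. The two-level depth, channel widths, and three-class setup (e.g. background/crop/weed) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.enc = nn.Sequential(                        # downsample H,W -> H/4,W/4
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(                        # upsample back to H,W
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):                                # x: (B, 3, H, W)
        return self.dec(self.enc(x))                     # (B, n_classes, H, W) logits

net = TinyEncoderDecoder()
logits = net(torch.randn(1, 3, 128, 128))
mask = logits.argmax(dim=1)                              # per-pixel class prediction
```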
Model-based deep learning framework for accelerated optical projection tomography | [
"Marcos Obando",
"Andrea Bassi",
"Nicolas Ducros",
"Germán Mato",
"Teresa M. Correia"
] | In this work, we propose a model-based deep learning reconstruction algorithm for optical projection tomography (ToMoDL), to greatly reduce acquisition and reconstruction times. The proposed method iterates over a data consistency step and an image domain artefact removal step achieved by a convolutional neural network. A preprocessing stage is also included to avoid potential misalignments between the sample center of rotation and the detector. The algorithm is trained using a database of wild-type zebrafish (Danio rerio) at different stages of development to minimise the mean square error for a fixed number of iterations. Using a cross-validation scheme, we compare the results to other reconstruction methods, such as filtered backprojection, compressed sensing and a direct deep learning method where the pseudo-inverse solution is corrected by a U-Net. The proposed method performs equally well or better than the alternatives. For a highly reduced number of projections, only the U-Net method provides images comparable to those obtained with ToMoDL. However, ToMoDL has a much better performance if the amount of data available for training is limited, given that the number of network trainable parameters is smaller. | 10.1038/s41598-023-47650-3 | model-based deep learning framework for accelerated optical projection tomography | in this work, we propose a model-based deep learning reconstruction algorithm for optical projection tomography (tomodl), to greatly reduce acquisition and reconstruction times. the proposed method iterates over a data consistency step and an image domain artefact removal step achieved by a convolutional neural network. a preprocessing stage is also included to avoid potential misalignments between the sample center of rotation and the detector. the algorithm is trained using a database of wild-type zebrafish (danio rerio) at different stages of development to minimise the mean square error for a fixed number of iterations. using a cross-validation scheme, we compare the results to other reconstruction methods, such as filtered backprojection, compressed sensing and a direct deep learning method where the pseudo-inverse solution is corrected by a u-net. the proposed method performs equally well or better than the alternatives. for a highly reduced number of projections, only the u-net method provides images comparable to those obtained with tomodl. however, tomodl has a much better performance if the amount of data available for training is limited, given that the number of network trainable parameters is smaller. | [
"this work",
"we",
"a model-based deep learning reconstruction algorithm",
"optical projection tomography",
"acquisition and reconstruction times",
"the proposed method",
"a data consistency step",
"an image domain artefact removal step",
"a convolutional neural network",
"a preprocessing stage",
"potential misalignments",
"the sample center",
"rotation",
"the detector",
"the algorithm",
"a database",
"wild-type zebrafish",
"danio rerio",
"different stages",
"development",
"the mean square error",
"a fixed number",
"iterations",
"a cross-validation scheme",
"we",
"the results",
"other reconstruction methods",
"filtered backprojection",
"compressed sensing",
"a direct deep learning method",
"the pseudo-inverse solution",
"a u",
"-",
"net",
"the proposed method",
"the alternatives",
"a highly reduced number",
"projections",
"only the u-net method",
"images",
"those",
"a much better performance",
"the amount",
"data",
"training",
"the number",
"network trainable parameters",
"danio rerio"
] |
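The reconstruction loop above alternates a data-consistency step with a CNN artefact-removal step. A schematic of that control flow in the spirit of MoDL-style unrolling, where a dense random matrix stands in for the optical projection operator and an untrained toy CNN stands in for the learned denoiser; the step size, iteration counts, and regularization weight are all assumptions.

```python
import torch
import torch.nn as nn

n, m = 64 * 64, 3000                        # image pixels, number of measurements
A = torch.randn(m, n) / m ** 0.5            # stand-in for the projection operator
x_true = torch.rand(n)
b = A @ x_true                              # simulated projection data

denoiser = nn.Sequential(                   # toy artefact-removal CNN (untrained)
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1)
)

lam, step, x = 0.1, 1e-2, torch.zeros(n)
for _ in range(5):                          # unrolled outer iterations
    z = denoiser(x.view(1, 1, 64, 64)).flatten().detach()  # artefact-removal step
    for _ in range(10):                     # data consistency: gradient descent on
        grad = A.T @ (A @ x - b) + lam * (x - z)           # ||Ax-b||^2 + lam*||x-z||^2
        x = x - step * grad
print(torch.norm(x - x_true).item())        # reconstruction error of the sketch
```

In the trained version, the denoiser weights are learned end-to-end through the unrolled iterations, which is what lets the method cope with far fewer projections than filtered backprojection.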
Mammography using low-frequency electromagnetic fields with deep learning | [
"Hamid Akbari-Chelaresi",
"Dawood Alsaedi",
"Seyed Hossein Mirjahanmardi",
"Mohamed El Badawe",
"Ali M. Albishi",
"Vahid Nayyeri",
"Omar M. Ramahi"
] | In this paper, a novel technique for detecting female breast anomalous tissues is presented and validated through numerical simulations. The technique, to a high degree, resembles X-ray mammography; however, instead of using X-rays for obtaining images of the breast, low-frequency electromagnetic fields are leveraged. To capture breast impressions, a metasurface, which can be thought of as analogous to X-rays film, has been employed. To achieve deep and sufficient penetration within the breast tissues, the source of excitation is a simple narrow-band dipole antenna operating at 200 MHz. The metasurface is designed to operate at the same frequency. The detection mechanism is based on comparing the impressions obtained from the breast under examination to the reference case (healthy breasts) using machine learning techniques. Using this system, not only would it be possible to detect tumors (benign or malignant), but one can also determine the location and size of the tumors. Remarkably, deep learning models were found to achieve very high classification accuracy. | 10.1038/s41598-023-40494-x | mammography using low-frequency electromagnetic fields with deep learning | in this paper, a novel technique for detecting female breast anomalous tissues is presented and validated through numerical simulations. the technique, to a high degree, resembles x-ray mammography; however, instead of using x-rays for obtaining images of the breast, low-frequency electromagnetic fields are leveraged. to capture breast impressions, a metasurface, which can be thought of as analogous to x-rays film, has been employed. to achieve deep and sufficient penetration within the breast tissues, the source of excitation is a simple narrow-band dipole antenna operating at 200 mhz. the metasurface is designed to operate at the same frequency. the detection mechanism is based on comparing the impressions obtained from the breast under examination to the reference case (healthy breasts) using machine learning techniques. using this system, not only would it be possible to detect tumors (benign or malignant), but one can also determine the location and size of the tumors. remarkably, deep learning models were found to achieve very high classification accuracy. | [
"this paper",
"a novel technique",
"female breast anomalous tissues",
"numerical simulations",
"the technique",
"a high degree",
"x-ray mammography",
"x",
"-",
"rays",
"images",
"the breast, low-frequency electromagnetic fields",
"breast impressions",
"a metasurface",
"which",
"x-rays film",
"deep and sufficient penetration",
"the breast tissues",
"the source",
"excitation",
"a simple narrow-band dipole antenna",
"200 mhz",
"the metasurface",
"the same frequency",
"the detection mechanism",
"the impressions",
"the breast",
"examination",
"the reference case",
"healthy breasts",
"machine learning techniques",
"this system",
"it",
"tumors",
"one",
"the location",
"size",
"the tumors",
"deep learning models",
"very high classification accuracy",
"dipole antenna",
"200"
] |
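The detection mechanism above compares metasurface impressions of the breast under examination against a healthy reference. A minimal sketch of that comparison step, assuming impressions arrive as small multi-channel arrays (one channel per measurement configuration); the difference-image CNN and every size in it are illustrative assumptions rather than the authors' network.

```python
import torch
import torch.nn as nn

class ImpressionClassifier(nn.Module):
    """Binary healthy/anomalous decision from a metasurface impression."""
    def __init__(self, n_channels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, impression, reference):
        # classify the deviation from the healthy reference impression
        return torch.sigmoid(self.net(impression - reference))

clf = ImpressionClassifier()
meas = torch.randn(1, 4, 32, 32)    # impression of the breast under examination
ref = torch.randn(1, 4, 32, 32)     # reference impression (healthy case)
print(clf(meas, ref).item())        # probability of anomalous tissue
```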
Taxonomy of deep learning-based intrusion detection system approaches in fog computing: a systematic review | [
"Sepide Najafli",
"Abolfazl Toroghi Haghighat",
"Babak Karasfi"
] | The Internet of Things (IoT) has been used in various aspects. Fundamental security issues must be addressed to accelerate and develop the Internet of Things. An intrusion detection system (IDS) is an essential element in network security designed to detect and determine the type of attacks. The use of deep learning (DL) shows promising results in the design of IDS based on IoT. DL facilitates analytics and learning in the dynamic IoT domain. Some deep learning-based IDS in IOT sensors cannot be executed, because of resource restrictions. Although cloud computing could overcome limitations, the distance between the cloud and the end IoT sensors causes high communication costs, security problems and delays. Fog computing has been presented to handle these issues and can bring resources to the edge of the network. Many studies have been conducted to investigate IDS based on IoT. Our goal is to investigate and classify deep learning-based IDS on fog processing. In this paper, researchers can access comprehensive resources in this field. Therefore, first, we provide a complete classification of IDS in IoT. Then practical and important proposed IDSs in the fog environment are discussed in three groups (binary, multi-class, and hybrid), and are examined the advantages and disadvantages of each approach. The results show that most of the studied methods consider hybrid strategies (binary and multi-class). In addition, in the reviewed papers the average Accuracy obtained in the binary method is better than the multi-class. Finally, we highlight some challenges and future directions for the next research in IDS techniques. | 10.1007/s10115-024-02162-y | taxonomy of deep learning-based intrusion detection system approaches in fog computing: a systematic review | the internet of things (iot) has been used in various aspects. fundamental security issues must be addressed to accelerate and develop the internet of things. an intrusion detection system (ids) is an essential element in network security designed to detect and determine the type of attacks. the use of deep learning (dl) shows promising results in the design of ids based on iot. dl facilitates analytics and learning in the dynamic iot domain. some deep learning-based ids in iot sensors cannot be executed, because of resource restrictions. although cloud computing could overcome limitations, the distance between the cloud and the end iot sensors causes high communication costs, security problems and delays. fog computing has been presented to handle these issues and can bring resources to the edge of the network. many studies have been conducted to investigate ids based on iot. our goal is to investigate and classify deep learning-based ids on fog processing. in this paper, researchers can access comprehensive resources in this field. therefore, first, we provide a complete classification of ids in iot. then practical and important proposed idss in the fog environment are discussed in three groups (binary, multi-class, and hybrid), and are examined the advantages and disadvantages of each approach. the results show that most of the studied methods consider hybrid strategies (binary and multi-class). in addition, in the reviewed papers the average accuracy obtained in the binary method is better than the multi-class. finally, we highlight some challenges and future directions for the next research in ids techniques. | [
"the internet",
"things",
"iot",
"various aspects",
"fundamental security issues",
"the internet",
"things",
"an intrusion detection system",
"ids",
"an essential element",
"network security",
"the type",
"attacks",
"the use",
"deep learning",
"dl",
"results",
"the design",
"ids",
"iot",
"dl",
"analytics",
"the dynamic iot domain",
"some deep learning-based ids",
"iot sensors",
"resource restrictions",
"cloud computing",
"limitations",
"the distance",
"the cloud",
"the end iot sensors",
"high communication costs",
"security problems",
"delays",
"fog computing",
"these issues",
"resources",
"the edge",
"the network",
"many studies",
"ids",
"iot",
"our goal",
"deep learning-based ids",
"fog processing",
"this paper",
"researchers",
"comprehensive resources",
"this field",
"we",
"a complete classification",
"ids",
"iot",
"practical and important proposed idss",
"the fog environment",
"three groups",
"the advantages",
"disadvantages",
"each approach",
"the results",
"the studied methods",
"hybrid strategies",
"binary and multi-class",
"addition",
"the reviewed papers",
"the average accuracy",
"the binary method",
"the multi",
"-",
"class",
"we",
"some challenges",
"future directions",
"the next research",
"ids techniques",
"fog computing",
"first",
"three"
] |
Deep learning-based pathway-centric approach to characterize recurrent hepatocellular carcinoma after liver transplantation | [
"Jeffrey To",
"Soumita Ghosh",
"Xun Zhao",
"Elisa Pasini",
"Sandra Fischer",
"Gonzalo Sapisochin",
"Anand Ghanekar",
"Elmar Jaeckel",
"Mamatha Bhat"
] | Background: Liver transplantation (LT) is offered as a cure for Hepatocellular carcinoma (HCC), however 15–20% develop recurrence post-transplant which tends to be aggressive. In this study, we examined the transcriptome profiles of patients with recurrent HCC to identify differentially expressed genes (DEGs), the involved pathways, biological functions, and potential gene signatures of recurrent HCC post-transplant using deep machine learning (ML) methodology. Materials and methods: We analyzed the transcriptomic profiles of primary and recurrent tumor samples from 7 pairs of patients who underwent LT. Following differential gene expression analysis, we performed pathway enrichment, gene ontology (GO) analyses and protein-protein interactions (PPIs) with top 10 hub gene networks. We also predicted the landscape of infiltrating immune cells using Cibersortx. We next develop pathway and GO term-based deep learning models leveraging primary tissue gene expression data from The Cancer Genome Atlas (TCGA) to identify gene signatures in recurrent HCC. Results: The PI3K/Akt signaling pathway and cytokine-mediated signaling pathway were particularly activated in HCC recurrence. The recurrent tumors exhibited upregulation of an immune-escape related gene, CD274, in the top 10 hub gene analysis. Significantly higher infiltration of monocytes and lower M1 macrophages were found in recurrent HCC tumors. Our deep learning approach identified a 20-gene signature in recurrent HCC. Amongst the 20 genes, through multiple analysis, IL6 was found to be significantly associated with HCC recurrence. Conclusion: Our deep learning approach identified PI3K/Akt signaling as potentially regulating cytokine-mediated functions and the expression of immune escape genes, leading to alterations in the pattern of immune cell infiltration. In conclusion, IL6 was identified to play an important role in HCC recurrence. | 10.1186/s40246-024-00624-6 | deep learning-based pathway-centric approach to characterize recurrent hepatocellular carcinoma after liver transplantation | background: liver transplantation (lt) is offered as a cure for hepatocellular carcinoma (hcc), however 15–20% develop recurrence post-transplant which tends to be aggressive. in this study, we examined the transcriptome profiles of patients with recurrent hcc to identify differentially expressed genes (degs), the involved pathways, biological functions, and potential gene signatures of recurrent hcc post-transplant using deep machine learning (ml) methodology. materials and methods: we analyzed the transcriptomic profiles of primary and recurrent tumor samples from 7 pairs of patients who underwent lt. following differential gene expression analysis, we performed pathway enrichment, gene ontology (go) analyses and protein-protein interactions (ppis) with top 10 hub gene networks. we also predicted the landscape of infiltrating immune cells using cibersortx. we next develop pathway and go term-based deep learning models leveraging primary tissue gene expression data from the cancer genome atlas (tcga) to identify gene signatures in recurrent hcc. results: the pi3k/akt signaling pathway and cytokine-mediated signaling pathway were particularly activated in hcc recurrence. the recurrent tumors exhibited upregulation of an immune-escape related gene, cd274, in the top 10 hub gene analysis. significantly higher infiltration of monocytes and lower m1 macrophages were found in recurrent hcc tumors. our deep learning approach identified a 20-gene signature in recurrent hcc.
amongst the 20 genes, through multiple analysis, il6 was found to be significantly associated with hcc recurrence. conclusion: our deep learning approach identified pi3k/akt signaling as potentially regulating cytokine-mediated functions and the expression of immune escape genes, leading to alterations in the pattern of immune cell infiltration. in conclusion, il6 was identified to play an important role in hcc recurrence. | [
"backgroundliver transplantation",
"lt",
"a cure",
"hepatocellular carcinoma",
"hcc",
"15–20%",
"recurrence",
"transplant",
"which",
"this study",
"we",
"the transcriptome profiles",
"patients",
"recurrent hcc",
"differentially expressed genes",
"the involved pathways",
"biological functions",
"potential gene signatures",
"recurrent hcc post",
"-",
"deep machine learning",
"(ml) methodology.materials",
"methodswe",
"the transcriptomic profiles",
"primary and recurrent tumor samples",
"7 pairs",
"patients",
"who",
"lt",
"differential gene expression analysis",
"we",
"pathway enrichment",
"gene ontology",
"analyses",
"protein-protein interactions",
"top 10 hub gene networks",
"we",
"the landscape",
"infiltrating immune cells",
"cibersortx",
"we",
"term-based deep learning models",
"primary tissue gene expression data",
"the cancer genome atlas",
"(tcga",
"gene signatures",
"recurrent hcc.resultsthe pi3k/akt",
"pathway",
"hcc recurrence",
"the recurrent tumors",
"upregulation",
"an immune-escape related gene",
"cd274",
"the top 10 hub gene analysis",
"significantly higher infiltration",
"monocytes",
"lower m1 macrophages",
"recurrent hcc tumors",
"our deep learning approach",
"a 20-gene signature",
"recurrent hcc",
"the 20 genes",
"multiple analysis",
"il6",
"hcc",
"recurrence.conclusionour deep learning approach",
"pi3k/akt",
"cytokine-mediated functions",
"the expression",
"immune escape genes",
"alterations",
"the pattern",
"immune cell infiltration",
"conclusion",
"il6",
"an important role",
"hcc recurrence",
"15–20%",
"7",
"10",
"10",
"20",
"20"
] |
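The pathway-centric idea above can be sketched by collapsing per-sample gene expression into pathway-level scores before classification. Everything below is an illustrative stand-in: random data replaces the TCGA expression matrix, pathway scores are simple mean expression of member genes, and a logistic regression stands in for the deep model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_genes, n_pathways = 14, 5000, 300
expr = rng.normal(size=(n_samples, n_genes))            # log-expression matrix
membership = rng.random((n_genes, n_pathways)) < 0.01   # gene-to-pathway map
y = rng.integers(0, 2, n_samples)                        # 0 = primary, 1 = recurrent

# collapse genes into pathway scores: mean expression of each pathway's members
sizes = membership.sum(axis=0).clip(min=1)
pathway_scores = expr @ membership / sizes

clf = LogisticRegression(max_iter=1000).fit(pathway_scores, y)
print(clf.predict_proba(pathway_scores)[:2])             # recurrence probabilities
```

Working at the pathway level keeps the input dimension small relative to the gene count, which matters with as few as 7 patient pairs.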
Deep learning models for perception of brightness related illusions | [
"Amrita Mukherjee",
"Avijit Paul",
"Kuntal Ghosh"
] | Illusions are like holes in our effortless visual mechanism through which we can peep into the internal mechanisms of the brain. Scientists attempted to explain underlying physiological, physical, and cognitive mechanisms of illusions by the receptive field hierarchical organizations, information sampling, filtering, etc. Some antagonistic illusions cannot be explained by them and for this, deep learning networks were used recently as a model for illusion perception. To further broaden the scope of the perceptual functionality in the brightness contrast genre, handle the background removal effects on some illusions that reduce the illusory effects, and replicate the antagonistic illusions with the same parameter setup, we have used Convolutional Neural Network, Autoencoder, U-Net, and U-Net++ models for replicating the visual illusions. The networks are specialized in low-level vision tasks like De-noising, De-blurring, and a combination of both. A high number of brightness contrast visual illusions are tested on all the networks and most of the outcomes significantly matched human perceptions. Overall, our method will guide the development of neurobiological frameworks which might enrich the computational neuroscience study by distilling some biological principles. On the other hand, the machine learning community will benefit from knowing the inherent flaws of the networks so that the true image of reality can be taken into consideration, especially in imaging situations where experts too can be deceived. | 10.1007/s10489-024-05658-w | deep learning models for perception of brightness related illusions | illusions are like holes in our effortless visual mechanism through which we can peep into the internal mechanisms of the brain. scientists attempted to explain underlying physiological, physical, and cognitive mechanisms of illusions by the receptive field hierarchical organizations, information sampling, filtering, etc. some antagonistic illusions cannot be explained by them and for this, deep learning networks were used recently as a model for illusion perception. to further broaden the scope of the perceptual functionality in the brightness contrast genre, handle the background removal effects on some illusions that reduce the illusory effects, and replicate the antagonistic illusions with the same parameter setup, we have used convolutional neural network, autoencoder, u-net, and u-net++ models for replicating the visual illusions. the networks are specialized in low-level vision tasks like de-noising, de-blurring, and a combination of both. a high number of brightness contrast visual illusions are tested on all the networks and most of the outcomes significantly matched human perceptions. overall, our method will guide the development of neurobiological frameworks which might enrich the computational neuroscience study by distilling some biological principles. on the other hand, the machine learning community will benefit from knowing the inherent flaws of the networks so that the true image of reality can be taken into consideration, especially in imaging situations where experts too can be deceived. | [
"illusions",
"holes",
"our effortless visual mechanism",
"which",
"we",
"the internal mechanisms",
"the brain",
"scientists",
"underlying physiological, physical, and cognitive mechanisms",
"illusions",
"the receptive field hierarchical organizations",
"information sampling",
"filtering",
"some antagonistic illusions",
"them",
"this",
"deep learning networks",
"a model",
"illusion perception",
"the scope",
"the perceptual functionality",
"the brightness contrast genre",
"the background removal effects",
"some illusions",
"that",
"the illusory effects",
"the antagonistic illusions",
"the same parameter setup",
"we",
"convolutional neural network",
"autoencoder",
"-",
"net",
"u-net++ models",
"the visual illusions",
"the networks",
"low-level vision tasks",
"de",
"de",
"-",
"a combination",
"both",
"a high number",
"brightness contrast visual illusions",
"all the networks",
"the outcomes",
"human perceptions",
"our method",
"the development",
"neurobiological frameworks",
"which",
"the computational neuroscience study",
"some biological principles",
"the other hand",
"the machine learning community",
"the inherent flaws",
"the networks",
"the true image",
"reality",
"consideration",
"imaging situations",
"experts"
] |
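A network's susceptibility to a brightness illusion can be probed as sketched below: a simultaneous-contrast stimulus with two physically identical grey patches is passed through a convolutional autoencoder, and the reconstructed patch intensities are compared. The toy network here is untrained; a de-noising or de-blurring model of the kind studied above would take its place.

```python
import numpy as np
import torch
import torch.nn as nn

# simultaneous-contrast stimulus: identical grey patches on dark/light surrounds
img = np.zeros((64, 128), dtype=np.float32)
img[:, 64:] = 1.0                        # right half: light surround
img[24:40, 24:40] = 0.5                  # grey patch on the dark side
img[24:40, 88:104] = 0.5                 # identical grey patch on the light side

autoencoder = nn.Sequential(             # toy conv autoencoder (untrained stand-in)
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)

with torch.no_grad():
    out = autoencoder(torch.from_numpy(img)[None, None])[0, 0].numpy()

# a human-like response makes the patch on the dark side come out lighter
print(out[24:40, 24:40].mean(), out[24:40, 88:104].mean())
```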
Exploiting biochemical data to improve osteosarcoma diagnosis with deep learning | [
"Shidong Wang",
"Yangyang Shen",
"Fanwei Zeng",
"Meng Wang",
"Bohan Li",
"Dian Shen",
"Xiaodong Tang",
"Beilun Wang"
] | Early and accurate diagnosis of osteosarcomas (OS) is of great clinical significance, and machine learning (ML) based methods are increasingly adopted. However, current ML-based methods for osteosarcoma diagnosis consider only X-ray images, usually fail to generalize to new cases, and lack explainability. In this paper, we seek to explore the capability of deep learning models in diagnosing primary OS, with higher accuracy, explainability, and generality. Concretely, we analyze the added value of integrating the biochemical data, i.e., alkaline phosphatase (ALP) and lactate dehydrogenase (LDH), and design a model that incorporates the numerical features of ALP and LDH and the visual features of X-ray imaging through a late fusion approach in the feature space. We evaluate this model on real-world clinic data with 848 patients aged from 4 to 81. The experimental results reveal the effectiveness of incorporating ALP and LDH simultaneously in a late fusion approach, with the accuracy of the considered 2608 cases increased to 97.17%, compared to 94.35% in the baseline. Grad-CAM visualizations consistent with orthopedic specialists further justified the model’s explainability. | 10.1007/s13755-024-00288-5 | exploiting biochemical data to improve osteosarcoma diagnosis with deep learning | early and accurate diagnosis of osteosarcomas (os) is of great clinical significance, and machine learning (ml) based methods are increasingly adopted. however, current ml-based methods for osteosarcoma diagnosis consider only x-ray images, usually fail to generalize to new cases, and lack explainability. in this paper, we seek to explore the capability of deep learning models in diagnosing primary os, with higher accuracy, explainability, and generality. concretely, we analyze the added value of integrating the biochemical data, i.e., alkaline phosphatase (alp) and lactate dehydrogenase (ldh), and design a model that incorporates the numerical features of alp and ldh and the visual features of x-ray imaging through a late fusion approach in the feature space. we evaluate this model on real-world clinic data with 848 patients aged from 4 to 81. the experimental results reveal the effectiveness of incorporating alp and ldh simultaneously in a late fusion approach, with the accuracy of the considered 2608 cases increased to 97.17%, compared to 94.35% in the baseline. grad-cam visualizations consistent with orthopedic specialists further justified the model’s explainability. | [
"early and accurate diagnosis",
"great clinical significance",
"machine learning (ml) based methods",
"current ml-based methods",
"osteosarcoma diagnosis",
"only x-ray images",
"new cases",
"lack explainability",
"this paper",
"we",
"the capability",
"deep learning models",
"higher accuracy",
"explainability",
"generality",
"we",
"the added value",
"the biochemical data",
"i.e., alkaline phosphatase",
"alp",
"lactate dehydrogenase",
"ldh",
"a model",
"that",
"the numerical features",
"alp",
"ldh",
"the visual features",
"x",
"-ray imaging",
"a late fusion approach",
"the feature space",
"we",
"this model",
"real-world clinic data",
"848 patients",
"the experimental results",
"the effectiveness",
"alp",
"ldh",
"a late fusion approach",
"the accuracy",
"the considered 2608 cases",
"97.17%",
"94.35%",
"the baseline",
"grad-cam visualizations",
"orthopedic specialists",
"the model’s explainability",
"848",
"4",
"81",
"2608",
"97.17%",
"94.35%"
] |
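Late fusion in feature space, as described above, concatenates the visual features of the X-ray branch with the two biochemical values before the final decision layer. A minimal sketch; the tiny image branch, feature width, and raw (unnormalized) lab values are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_branch = nn.Sequential(          # visual features from the X-ray
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16 + 2, 1)            # fuse with [ALP, LDH]

    def forward(self, xray, labs):
        feats = torch.cat([self.image_branch(xray), labs], dim=1)  # late fusion
        return torch.sigmoid(self.head(feats))

net = LateFusionNet()
xray = torch.randn(2, 1, 224, 224)
labs = torch.tensor([[120.0, 250.0], [310.0, 480.0]])   # ALP, LDH (toy values)
print(net(xray, labs))                                   # osteosarcoma probability
```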
Deep learning-based predictive classification of functional subpopulations of hematopoietic stem cells and multipotent progenitors | [
"Shen Wang",
"Jianzhong Han",
"Jingru Huang",
"Khayrul Islam",
"Yuheng Shi",
"Yuyuan Zhou",
"Dongwook Kim",
"Jane Zhou",
"Zhaorui Lian",
"Yaling Liu",
"Jian Huang"
] | Background: Hematopoietic stem cells (HSCs) and multipotent progenitors (MPPs) play a pivotal role in maintaining lifelong hematopoiesis. The distinction between stem cells and other progenitors, as well as the assessment of their functions, has long been a central focus in stem cell research. In recent years, deep learning has emerged as a powerful tool for cell image analysis and classification/prediction. Methods: In this study, we explored the feasibility of employing deep learning techniques to differentiate murine HSCs and MPPs based solely on their morphology, as observed through light microscopy (DIC) images. Results: After rigorous training and validation using extensive image datasets, we successfully developed a three-class classifier, referred to as the LSM model, capable of reliably distinguishing long-term HSCs, short-term HSCs, and MPPs. The LSM model extracts intrinsic morphological features unique to different cell types, irrespective of the methods used for cell identification and isolation, such as surface markers or intracellular GFP markers. Furthermore, employing the same deep learning framework, we created a two-class classifier that effectively discriminates between aged HSCs and young HSCs. This discovery is particularly significant as both cell types share identical surface markers yet serve distinct functions. This classifier holds the potential to offer a novel, rapid, and efficient means of assessing the functional states of HSCs, thus obviating the need for time-consuming transplantation experiments. Conclusion: Our study represents the pioneering use of deep learning to differentiate HSCs and MPPs under steady-state conditions. This novel and robust deep learning-based platform will provide a basis for the future development of a new generation stem cell identification and separation system. It may also provide new insight into the molecular mechanisms underlying stem cell self-renewal. | 10.1186/s13287-024-03682-8 | deep learning-based predictive classification of functional subpopulations of hematopoietic stem cells and multipotent progenitors | background: hematopoietic stem cells (hscs) and multipotent progenitors (mpps) play a pivotal role in maintaining lifelong hematopoiesis. the distinction between stem cells and other progenitors, as well as the assessment of their functions, has long been a central focus in stem cell research. in recent years, deep learning has emerged as a powerful tool for cell image analysis and classification/prediction. methods: in this study, we explored the feasibility of employing deep learning techniques to differentiate murine hscs and mpps based solely on their morphology, as observed through light microscopy (dic) images. results: after rigorous training and validation using extensive image datasets, we successfully developed a three-class classifier, referred to as the lsm model, capable of reliably distinguishing long-term hscs, short-term hscs, and mpps. the lsm model extracts intrinsic morphological features unique to different cell types, irrespective of the methods used for cell identification and isolation, such as surface markers or intracellular gfp markers. furthermore, employing the same deep learning framework, we created a two-class classifier that effectively discriminates between aged hscs and young hscs. this discovery is particularly significant as both cell types share identical surface markers yet serve distinct functions.
this classifier holds the potential to offer a novel, rapid, and efficient means of assessing the functional states of hscs, thus obviating the need for time-consuming transplantation experiments. conclusion: our study represents the pioneering use of deep learning to differentiate hscs and mpps under steady-state conditions. this novel and robust deep learning-based platform will provide a basis for the future development of a new generation stem cell identification and separation system. it may also provide new insight into the molecular mechanisms underlying stem cell self-renewal. | [
"backgroundhematopoietic stem cells",
"hscs",
"multipotent progenitors",
"mpps",
"a pivotal role",
"lifelong hematopoiesis",
"the distinction",
"stem cells",
"other progenitors",
"the assessment",
"their functions",
"a central focus",
"stem cell research",
"recent years",
"deep learning",
"a powerful tool",
"cell image analysis",
"classification/prediction.methodsin",
"we",
"the feasibility",
"deep learning techniques",
"murine hscs",
"mpps",
"their morphology",
"light microscopy (dic) images.resultsafter rigorous training",
"validation",
"extensive image datasets",
"we",
"a three-class classifier",
"the lsm model",
"reliably distinguishing long-term hscs",
"short-term hscs",
"mpps",
"the lsm model",
"intrinsic morphological features",
"different cell types",
"the methods",
"cell identification",
"isolation",
"surface markers",
"intracellular gfp markers",
"the same deep learning framework",
"we",
"a two-class classifier",
"that",
"aged hscs",
"young hscs",
"this discovery",
"both cell types",
"identical surface markers",
"distinct functions",
"this classifier",
"the potential",
"a novel, rapid, and efficient means",
"the functional states",
"hscs",
"the need",
"time-consuming transplantation",
"experiments.conclusionour study",
"the pioneering use",
"deep learning",
"hscs",
"mpps",
"steady-state conditions",
"this novel and robust deep learning-based platform",
"a basis",
"the future development",
"a new generation stem cell identification",
"separation system",
"it",
"new insight",
"the molecular mechanisms",
"stem cell self-renewal",
"recent years",
"murine hscs",
"three",
"two"
] |
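A minimal sketch of the three-class prediction step (long-term HSC / short-term HSC / MPP) on a label-free DIC crop; the LSM architecture itself is not given in the abstract, so the toy CNN below is purely an illustrative stand-in.

```python
import torch
import torch.nn as nn

classes = ["LT-HSC", "ST-HSC", "MPP"]
model = nn.Sequential(                       # toy stand-in for the LSM classifier
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, len(classes)),
)

dic_crop = torch.randn(1, 1, 96, 96)         # one DIC image crop of a single cell
probs = torch.softmax(model(dic_crop), dim=1)
print(dict(zip(classes, probs[0].tolist())))
```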
Fully dynamic reorder policies with deep reinforcement learning for multi-echelon inventory management | [
"Patric Hammler",
"Nicolas Riesterer",
"Torsten Braun"
] | The operation of inventory systems plays an important role in the success of manufacturing companies, making it a highly relevant domain for optimization. In particular, the domain lends itself to being approached via Deep Reinforcement Learning (DRL) models due to it requiring sequential reorder decisions based on uncertainty to minimize cost. In this paper, we evaluate state-of-the-art optimization approaches to determine whether Deep Reinforcement Learning can be applied to the multi-echelon inventory optimization (MEIO) framework in a practically feasible manner to generate fully dynamic reorder policies. We investigate how it performs in comparison to an optimized static reorder policy, how robust it is when it comes to structural changes in the environment, and whether the use of DRL is safe in terms of risk in real-world applications. Our results show promising performance for DRL with potential for improvement in terms of minimizing risky behavior. | 10.1007/s00287-023-01556-6 | fully dynamic reorder policies with deep reinforcement learning for multi-echelon inventory management | the operation of inventory systems plays an important role in the success of manufacturing companies, making it a highly relevant domain for optimization. in particular, the domain lends itself to being approached via deep reinforcement learning (drl) models due to it requiring sequential reorder decisions based on uncertainty to minimize cost. in this paper, we evaluate state-of-the-art optimization approaches to determine whether deep reinforcement learning can be applied to the multi-echelon inventory optimization (meio) framework in a practically feasible manner to generate fully dynamic reorder policies. we investigate how it performs in comparison to an optimized static reorder policy, how robust it is when it comes to structural changes in the environment, and whether the use of drl is safe in terms of risk in real-world applications. our results show promising performance for drl with potential for improvement in terms of minimizing risky behavior. | [
"the operation",
"inventory systems",
"an important role",
"the success",
"manufacturing companies",
"it",
"optimization",
"the domain",
"itself",
"deep reinforcement learning (drl) models",
"it",
"sequential reorder decisions",
"uncertainty",
"cost",
"this paper",
"we",
"the-art",
"deep reinforcement learning",
"the multi-echelon inventory optimization",
"meio) framework",
"a practically feasible manner",
"fully dynamic reorder policies",
"we",
"it",
"comparison",
"an optimized static reorder policy",
"it",
"it",
"structural changes",
"the environment",
"the use",
"drl",
"terms",
"risk",
"real-world applications",
"our results",
"promising performance",
"drl",
"potential",
"improvement",
"terms",
"risky behavior"
] |
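The sequential decision problem behind such reorder policies can be made concrete with a toy single-echelon environment: stochastic demand arrives each period, a policy maps the inventory state to an order quantity, and the reward is the negative holding/stockout cost. The cost coefficients and Poisson demand are assumptions; in the multi-echelon setting each stage holds its own state, and a DRL agent would replace the static base-stock rule shown here as the baseline to beat.

```python
import numpy as np

rng = np.random.default_rng(0)
HOLD, STOCKOUT = 1.0, 9.0                 # assumed per-unit holding/stockout costs

def step(inventory, order, demand):
    inventory = inventory + order - demand
    cost = HOLD * max(inventory, 0) + STOCKOUT * max(-inventory, 0)
    return max(inventory, 0), -cost       # lost-sales model; reward = negative cost

def base_stock_policy(inventory, level=12):
    return max(level - inventory, 0)      # static rule a DRL policy competes with

inv, total_reward = 0, 0.0
for t in range(365):
    inv, r = step(inv, base_stock_policy(inv), rng.poisson(8))
    total_reward += r
print(total_reward / 365)                 # average per-period reward of the baseline
```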
Deep learning models for webcam eye tracking in online experiments | [
"Shreshth Saxena",
"Lauren K. Fink",
"Elke B. Lange"
] | Eye tracking is prevalent in scientific and commercial applications. Recent computer vision and deep learning methods enable eye tracking with off-the-shelf webcams and reduce dependence on expensive, restrictive hardware. However, such deep learning methods have not yet been applied and evaluated for remote, online psychological experiments. In this study, we tackle critical challenges faced in remote eye tracking setups and systematically evaluate appearance-based deep learning methods of gaze tracking and blink detection. From their own homes and laptops, 65 participants performed a battery of eye tracking tasks including (i) fixation, (ii) zone classification, (iii) free viewing, (iv) smooth pursuit, and (v) blink detection. Webcam recordings of the participants performing these tasks were processed offline through appearance-based models of gaze and blink detection. The task battery required different eye movements that characterized gaze and blink prediction accuracy over a comprehensive list of measures. We find the best gaze accuracy to be 2.4° and precision of 0.47°, which outperforms previous online eye tracking studies and reduces the gap between laboratory-based and online eye tracking performance. We release the experiment template, recorded data, and analysis code with the motivation to escalate affordable, accessible, and scalable eye tracking that has the potential to accelerate research in the fields of psychological science, cognitive neuroscience, user experience design, and human–computer interfaces. | 10.3758/s13428-023-02190-6 | deep learning models for webcam eye tracking in online experiments | eye tracking is prevalent in scientific and commercial applications. recent computer vision and deep learning methods enable eye tracking with off-the-shelf webcams and reduce dependence on expensive, restrictive hardware. however, such deep learning methods have not yet been applied and evaluated for remote, online psychological experiments. in this study, we tackle critical challenges faced in remote eye tracking setups and systematically evaluate appearance-based deep learning methods of gaze tracking and blink detection. from their own homes and laptops, 65 participants performed a battery of eye tracking tasks including (i) fixation, (ii) zone classification, (iii) free viewing, (iv) smooth pursuit, and (v) blink detection. webcam recordings of the participants performing these tasks were processed offline through appearance-based models of gaze and blink detection. the task battery required different eye movements that characterized gaze and blink prediction accuracy over a comprehensive list of measures. we find the best gaze accuracy to be 2.4° and precision of 0.47°, which outperforms previous online eye tracking studies and reduces the gap between laboratory-based and online eye tracking performance. we release the experiment template, recorded data, and analysis code with the motivation to escalate affordable, accessible, and scalable eye tracking that has the potential to accelerate research in the fields of psychological science, cognitive neuroscience, user experience design, and human–computer interfaces. | [
"eye tracking",
"scientific and commercial applications",
"recent computer vision",
"deep learning methods",
"eye tracking",
"the-shelf",
"dependence",
"expensive, restrictive hardware",
"such deep learning methods",
"remote, online psychological experiments",
"this study",
"we",
"critical challenges",
"remote eye tracking setups",
"appearance-based deep learning methods",
"gaze tracking",
"blink",
"detection",
"their own homes",
"laptops",
"65 participants",
"a battery",
"eye tracking tasks",
"(i) fixation",
"(ii) zone classification",
"iii",
"free viewing",
"smooth pursuit",
"(v) blink detection",
"webcam recordings",
"the participants",
"these tasks",
"appearance-based models",
"gaze",
"detection",
"the task battery",
"different eye movements",
"that",
"gaze",
"prediction accuracy",
"a comprehensive list",
"measures",
"we",
"the best gaze accuracy",
"precision",
"which",
"previous online eye tracking studies",
"the gap",
"laboratory-based and online eye tracking performance",
"we",
"the experiment template",
"recorded data",
"analysis code",
"the motivation",
"affordable, accessible, and scalable eye tracking",
"that",
"the potential",
"research",
"the fields",
"psychological science",
"cognitive neuroscience",
"user experience design",
"human–computer interfaces",
"65",
"2.4",
"0.47"
] |
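Gaze accuracy above is reported in degrees of visual angle; a minimal sketch of that conversion from pixel-space gaze predictions, assuming a known screen geometry and viewing distance (the values below are typical laptop assumptions, not the study's recorded parameters).

```python
import numpy as np

px_per_cm = 1920 / 34.5            # assumed screen: 1920 px across 34.5 cm
viewing_distance_cm = 60.0         # assumed distance from screen to eyes

def angular_error_deg(pred_px, true_px):
    """Mean gaze error in degrees of visual angle from pixel coordinates."""
    err_cm = np.linalg.norm(pred_px - true_px, axis=1) / px_per_cm
    return np.degrees(np.arctan2(err_cm, viewing_distance_cm)).mean()

pred = np.array([[960.0, 540.0], [400.0, 300.0]])   # model's gaze estimates
true = np.array([[930.0, 560.0], [460.0, 320.0]])   # fixation targets
print(angular_error_deg(pred, true))                 # about 0.9 deg for these offsets
```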
Assessment of indoor risk through deep learning -based object recognition in disaster situations | [
"Irshad Khan",
"Ziyi Guo",
"Kihwan Lim",
"Jaeseon Kim",
"Young-Woo Kwon"
] | Disasters can devastate individuals and their properties, highlighting the importance of risk assessment to promote safety. Recently, deep learning techniques have shown the potential in identifying hazardous situations during disasters. Recognizing potentially dangerous objects in indoor environments can be essential for assisting individuals in responding appropriately to emergencies. In this article, we present an indoor-risk analysis framework for disasters based on deep learning. Our framework utilizes modern deep learning techniques to calculate an indoor risk rating based on dangerous objects’ sizes, enabling comprehensive risk assessment of indoor environments during disasters. To that end, we use (Mask R-CNN) to identify hazardous indoor objects in disaster situations with 94% accuracy. By incorporating object size information, our framework offers a more nuanced and detailed risk assessment than previous approaches. Our proposed system provides a valuable tool for promoting ongoing safety improvement and enhancing indoor safety during natural disasters. | 10.1007/s11042-023-16711-0 | assessment of indoor risk through deep learning -based object recognition in disaster situations | disasters can devastate individuals and their properties, highlighting the importance of risk assessment to promote safety. recently, deep learning techniques have shown the potential in identifying hazardous situations during disasters. recognizing potentially dangerous objects in indoor environments can be essential for assisting individuals in responding appropriately to emergencies. in this article, we present an indoor-risk analysis framework for disasters based on deep learning. our framework utilizes modern deep learning techniques to calculate an indoor risk rating based on dangerous objects’ sizes, enabling comprehensive risk assessment of indoor environments during disasters. to that end, we use (mask r-cnn) to identify hazardous indoor objects in disaster situations with 94% accuracy. by incorporating object size information, our framework offers a more nuanced and detailed risk assessment than previous approaches. our proposed system provides a valuable tool for promoting ongoing safety improvement and enhancing indoor safety during natural disasters. | [
"disasters",
"individuals",
"their properties",
"the importance",
"risk assessment",
"safety",
"deep learning techniques",
"the potential",
"hazardous situations",
"disasters",
"potentially dangerous objects",
"indoor environments",
"individuals",
"emergencies",
"this article",
"we",
"an indoor-risk analysis framework",
"disasters",
"deep learning",
"our framework",
"modern deep learning techniques",
"an indoor risk rating",
"dangerous objects’ sizes",
"comprehensive risk assessment",
"indoor environments",
"disasters",
"that end",
"we",
"-cnn",
"hazardous indoor objects",
"disaster situations",
"94% accuracy",
"object size information",
"our framework",
"a more nuanced and detailed risk assessment",
"previous approaches",
"our proposed system",
"a valuable tool",
"ongoing safety improvement",
"indoor safety",
"natural disasters",
"94%"
] |
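A minimal sketch of the size-weighted risk aggregation described above, using torchvision's off-the-shelf Mask R-CNN for instance masks; the per-class hazard weights, the 0.5 thresholds, and the tiny COCO label subset are illustrative assumptions, and the paper's model is trained for disaster-relevant indoor objects rather than generic COCO classes.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

HAZARD_WEIGHT = {"refrigerator": 0.9, "tv": 0.6, "chair": 0.2}  # assumed weights
COCO = {62: "chair", 72: "tv", 82: "refrigerator"}               # label-id subset

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 480, 640)                  # stand-in for an indoor photo

with torch.no_grad():
    out = model([image])[0]

risk = 0.0
for label, score, mask in zip(out["labels"], out["scores"], out["masks"]):
    name = COCO.get(int(label))
    if name and score > 0.5:
        area = (mask[0] > 0.5).float().mean().item()  # object size, image fraction
        risk += HAZARD_WEIGHT[name] * area * float(score)
print(f"indoor risk rating: {risk:.3f}")
```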
A deep learning dataset for sample preparation artefacts detection in multispectral high-content microscopy | [
"Vaibhav Sharma",
"Artur Yakimovich"
] | High-content image-based screening is widely used in Drug Discovery and Systems Biology. However, sample preparation artefacts may significantly deteriorate the quality of image-based screening assays. While detection and circumvention of such artefacts could be addressed using modern-day machine learning and deep learning algorithms, this is widely impeded by the lack of suitable datasets. To address this, here we present a purpose-created open dataset of high-content microscopy sample preparation artefact. It consists of high-content microscopy of laboratory dust titrated on fixed cell culture specimens imaged with fluorescence filters covering the complete spectral range. To ensure this dataset is suitable for supervised machine learning tasks like image classification or segmentation we propose rule-based annotation strategies on categorical and pixel levels. We demonstrate the applicability of our dataset for deep learning by training a convolutional-neural-network-based classifier. | 10.1038/s41597-024-03064-y | a deep learning dataset for sample preparation artefacts detection in multispectral high-content microscopy | high-content image-based screening is widely used in drug discovery and systems biology. however, sample preparation artefacts may significantly deteriorate the quality of image-based screening assays. while detection and circumvention of such artefacts could be addressed using modern-day machine learning and deep learning algorithms, this is widely impeded by the lack of suitable datasets. to address this, here we present a purpose-created open dataset of high-content microscopy sample preparation artefact. it consists of high-content microscopy of laboratory dust titrated on fixed cell culture specimens imaged with fluorescence filters covering the complete spectral range. to ensure this dataset is suitable for supervised machine learning tasks like image classification or segmentation we propose rule-based annotation strategies on categorical and pixel levels. we demonstrate the applicability of our dataset for deep learning by training a convolutional-neural-network-based classifier. | [
"high-content image-based screening",
"drug discovery and systems biology",
"sample preparation artefacts",
"the quality",
"image-based screening assays",
"detection",
"circumvention",
"such artefacts",
"modern-day machine learning",
"deep learning algorithms",
"this",
"the lack",
"suitable datasets",
"this",
"we",
"a purpose-created open dataset",
"high-content microscopy sample preparation artefact",
"it",
"high-content microscopy",
"laboratory dust",
"fixed cell culture specimens",
"fluorescence filters",
"the complete spectral range",
"this dataset",
"supervised machine learning tasks",
"image classification",
"segmentation",
"we",
"rule-based annotation strategies",
"categorical and pixel levels",
"we",
"the applicability",
"our dataset",
"deep learning",
"a convolutional-neural-network-based classifier"
] |
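Rule-based annotation on both the pixel and categorical levels, as proposed above, can be sketched with one simple rule: dust tends to be bright across all spectral channels, whereas genuine fluorescence is channel-specific. The thresholds and the bright-in-every-channel rule below are illustrative assumptions, not the dataset's published annotation protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
stack = rng.random((5, 256, 256)).astype(np.float32)   # 5 spectral channels
stack[:, 100:110, 100:110] += 0.8                       # synthetic dust speck

def annotate(stack, thresh=0.8):
    # pixel level: dust is bright in every spectral channel simultaneously
    pixel_mask = (stack > thresh).all(axis=0)
    # categorical level: flag the image if enough pixels are affected
    label = "artefact" if pixel_mask.mean() > 1e-3 else "clean"
    return pixel_mask, label

mask, label = annotate(stack)
print(label, int(mask.sum()))      # flagged pixel count for the synthetic speck
```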
CT-based deep learning model for predicting hospital discharge outcome in spontaneous intracerebral hemorrhage | [
"Xianjing Zhao",
"Bijing Zhou",
"Yong Luo",
"Lei Chen",
"Lequn Zhu",
"Shixin Chang",
"Xiangming Fang",
"Zhenwei Yao"
] | Objectives: To predict the functional outcome of patients with intracerebral hemorrhage (ICH) using deep learning models based on computed tomography (CT) images. Methods: A retrospective, bi-center study of ICH patients was conducted. Firstly, a custom 3D convolutional model was built for predicting the functional outcome of ICH patients based on CT scans from randomly selected ICH patients in H training dataset collected from H hospital. Secondly, clinical data and radiological features were collected at admission and the Extreme Gradient Boosting (XGBoost) algorithm was used to establish a second model, named the XGBoost model. Finally, the Convolution model and XGBoost model were fused to build the third “Fusion model.” Favorable outcome was defined as modified Rankin Scale score of 0–3 at discharge. The prognostic predictive accuracy of the three models was evaluated using an H test dataset and an external Y dataset, and compared with the performance of ICH score and ICH grading scale (ICH-GS). Results: A total of 604 patients with ICH were included in this study, of which 450 patients were in the H training dataset, 50 patients in the H test dataset, and 104 patients in the Y dataset. In the Y dataset, the areas under the curve (AUCs) of the Convolution model, XGBoost model, and Fusion model were 0.829, 0.871, and 0.905, respectively. The Fusion model prognostic performance exceeded that of ICH score and ICH-GS (p = 0.043 and p = 0.045, respectively). Conclusions: Deep learning models have good accuracy for predicting functional outcome of patients with spontaneous intracerebral hemorrhage. Clinical relevance statement: The proposed deep learning Fusion model may assist clinicians in predicting functional outcome and developing treatment strategies, thereby improving the survival and quality of life of patients with spontaneous intracerebral hemorrhage. Key Points: • Integrating clinical presentations, CT images, and radiological features to establish deep learning model for functional outcome prediction of patients with intracerebral hemorrhage. • Deep learning applied to CT images provides great help in prognosing functional outcome of intracerebral hemorrhage patients. • The developed deep learning model performs better than clinical prognostic scores in predicting functional outcome of patients with intracerebral hemorrhage. | 10.1007/s00330-023-10505-6 | ct-based deep learning model for predicting hospital discharge outcome in spontaneous intracerebral hemorrhage | objectives: to predict the functional outcome of patients with intracerebral hemorrhage (ich) using deep learning models based on computed tomography (ct) images. methods: a retrospective, bi-center study of ich patients was conducted. firstly, a custom 3d convolutional model was built for predicting the functional outcome of ich patients based on ct scans from randomly selected ich patients in h training dataset collected from h hospital. secondly, clinical data and radiological features were collected at admission and the extreme gradient boosting (xgboost) algorithm was used to establish a second model, named the xgboost model. finally, the convolution model and xgboost model were fused to build the third “fusion model.” favorable outcome was defined as modified rankin scale score of 0–3 at discharge.
the prognostic predictive accuracy of the three models was evaluated using an h test dataset and an external y dataset, and compared with the performance of ich score and ich grading scale (ich-gs). results: a total of 604 patients with ich were included in this study, of which 450 patients were in the h training dataset, 50 patients in the h test dataset, and 104 patients in the y dataset. in the y dataset, the areas under the curve (aucs) of the convolution model, xgboost model, and fusion model were 0.829, 0.871, and 0.905, respectively. the fusion model prognostic performance exceeded that of ich score and ich-gs (p = 0.043 and p = 0.045, respectively). conclusions: deep learning models have good accuracy for predicting functional outcome of patients with spontaneous intracerebral hemorrhage. clinical relevance statement: the proposed deep learning fusion model may assist clinicians in predicting functional outcome and developing treatment strategies, thereby improving the survival and quality of life of patients with spontaneous intracerebral hemorrhage. key points: • integrating clinical presentations, ct images, and radiological features to establish deep learning model for functional outcome prediction of patients with intracerebral hemorrhage. • deep learning applied to ct images provides great help in prognosing functional outcome of intracerebral hemorrhage patients. • the developed deep learning model performs better than clinical prognostic scores in predicting functional outcome of patients with intracerebral hemorrhage. | [
"objectivesto",
"the functional outcome",
"patients",
"intracerebral hemorrhage",
"ich",
"deep learning models",
"computed tomography",
"(ct",
"images.methodsa retrospective, bi-center study",
"ich patients",
"a custom 3d convolutional model",
"the functional outcome",
"ich patients",
"ct scans",
"randomly selected ich patients",
"h training dataset",
"h hospital",
"clinical data",
"radiological features",
"admission",
"the extreme gradient",
"algorithm",
"a second model",
"the xgboost model",
"the convolution model",
"xgboost model",
"the third “fusion model",
"favorable outcome",
"modified rankin scale score",
"0–3",
"discharge",
"the prognostic predictive accuracy",
"the three models",
"an h test dataset",
"an external y dataset",
"the performance",
"ich score",
"ich grading scale",
"ich-gs).resultsa total",
"604 patients",
"ich",
"this study",
"which",
"450 patients",
"the h training dataset",
"the h test dataset",
"104 patients",
"the y dataset",
"the y dataset",
"the areas",
"the curve",
"aucs",
"the convolution model",
"xgboost model",
"fusion model",
"the fusion model prognostic performance",
"ich score",
"ich-gs",
"respectively).conclusionsdeep learning models",
"good accuracy",
"functional outcome",
"patients",
"spontaneous intracerebral hemorrhage.clinical relevance statementthe proposed deep learning fusion model",
"clinicians",
"functional outcome",
"treatment strategies",
"the survival",
"quality",
"life",
"patients",
"clinical presentations",
"ct images",
"radiological features",
"deep learning model",
"functional outcome prediction",
"patients",
"intracerebral hemorrhage.• deep learning",
"ct images",
"great help",
"functional outcome",
"intracerebral hemorrhage patients.•",
"the developed deep learning model",
"clinical prognostic scores",
"functional outcome",
"patients",
"intracerebral hemorrhage",
"images.methodsa",
"firstly",
"3d",
"secondly",
"second",
"third",
"three",
"604",
"450",
"50",
"104",
"0.829",
"0.871",
"0.905",
"0.043",
"0.045",
"clinicians"
] |
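The ICH record above describes a three-part design: a 3D CNN on CT volumes, an XGBoost model on tabular clinical and radiological features, and a fused model. Below is a minimal late-fusion sketch assuming synthetic data, a simulated CNN probability, and an equal-weight probability average; the paper does not publish its fusion rule, so the weighting and every variable name here are assumptions.

```python
# Hypothetical late-fusion sketch for the ICH outcome paper above.
# Equal-weight probability averaging is an assumption, not the paper's stated rule.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 604                                    # cohort size reported in the abstract
X_clin = rng.normal(size=(n, 12))          # stand-in clinical/radiological features
y = (X_clin[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # favorable-outcome label

X_tr, X_te, y_tr, y_te = train_test_split(X_clin, y, test_size=0.2, random_state=0)

# "XGBoost model" on the tabular features
xgb = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                    eval_metric="logloss")
xgb.fit(X_tr, y_tr)
p_xgb = xgb.predict_proba(X_te)[:, 1]

# Stand-in for the 3D-CNN probability on CT volumes (a noisy proxy of the label)
p_cnn = np.clip(y_te + rng.normal(scale=0.4, size=len(y_te)), 0, 1)

# "Fusion model": average the two probabilities
p_fusion = 0.5 * p_xgb + 0.5 * p_cnn

for name, p in [("XGBoost", p_xgb), ("CNN (simulated)", p_cnn), ("Fusion", p_fusion)]:
    print(f"{name:16s} AUC = {roc_auc_score(y_te, p):.3f}")
```

In a real pipeline the simulated `p_cnn` would come from a trained `torch.nn.Conv3d` network over CT volumes, and the fusion weight could itself be tuned on a validation split.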
Enhanced variational mode decomposition with deep learning SVM kernels for river streamflow forecasting | [
"Subramaniam Nachimuthu Deepa",
"Narayanan Natarajan",
"Mohanadhas Berlin"
] | The present scenario of global climatic change challenges the sustainability and existence of water bodies around the globe. Because of this, it is always important and necessary to forecast the streamflow of rivers with respect to the natural precipitation process. In this research study, a novel enhanced variational mode decomposition (EVMD) with deep support vector machine (DSVM) kernels is proposed to perform forecasting of river streamflow. The developed computational intelligent machine learning model is a hybrid combination of the new enhanced VMD and the novel deep SVM kernels that is trained suitably to forecast the streamflow with respect to their deep learning layers. Initially, singular spectrum analysis (SSA) is employed for noise removal, and the enhanced VMD, with its features of decomposing and extracting more prominent features from the data, is hybridized with the deep SVM kernel models to predict the streamflow of the considered Cahaba River data sets. A deterministic grey wolf optimizer (DGWO) is modelled in this research paper to fine-tune the parameters of the deep SVM model. Previously modelled prediction techniques had difficulties in respect of local and global minima occurrences, stagnation, delayed and premature convergence, and so on. Hence, in this study, the hybrid deep learning model forecasted the streamflow for the considered data sets and its superiority was validated through comparative analysis with the previously adopted forecasting techniques. The developed forecasting model shall be used by hydrologists for predicting the daily streamflow, with the highest prediction accuracy rate of 97.54% with respect to the training process; in the case of the testing mechanism, the prediction accuracy rate is 96.47%. This 97.54% training prediction accuracy rate confirms the effectiveness of the modelled new deep SVM algorithm for streamflow forecasting. | 10.1007/s12665-023-11222-5 | enhanced variational mode decomposition with deep learning svm kernels for river streamflow forecasting | the present scenario of global climatic change challenges the sustainability and existence of water bodies around the globe. because of this, it is always important and necessary to forecast the streamflow of rivers with respect to the natural precipitation process. in this research study, a novel enhanced variational mode decomposition (evmd) with deep support vector machine (dsvm) kernels is proposed to perform forecasting of river streamflow. the developed computational intelligent machine learning model is a hybrid combination of the new enhanced vmd and the novel deep svm kernels that is trained suitably to forecast the streamflow with respect to their deep learning layers. initially, singular spectrum analysis (ssa) is employed for noise removal, and the enhanced vmd, with its features of decomposing and extracting more prominent features from the data, is hybridized with the deep svm kernel models to predict the streamflow of the considered cahaba river data sets. a deterministic grey wolf optimizer (dgwo) is modelled in this research paper to fine-tune the parameters of the deep svm model. previously modelled prediction techniques had difficulties in respect of local and global minima occurrences, stagnation, delayed and premature convergence, and so on. hence, in this study, the hybrid deep learning model forecasted the streamflow for the considered data sets and its superiority was validated through comparative analysis with the previously adopted forecasting techniques. 
the developed forecasting model shall be used by hydrologists for predicting the daily streamflow, with the highest prediction accuracy rate of 97.54% with respect to the training process; in the case of the testing mechanism, the prediction accuracy rate is 96.47%. this 97.54% training prediction accuracy rate confirms the effectiveness of the modelled new deep svm algorithm for streamflow forecasting. | [
"the present scenario",
"global climatic change",
"the sustainability",
"existence",
"water bodies",
"the globe",
"which",
"it",
"the streamflow",
"rivers",
"respect",
"natural precipitation process",
"this research study",
"novel enhanced variational mode decomposition",
"(evmd",
"deep support vector machine (dsvm) kernels",
"forecasting",
"river streamflow",
"the developed computational intelligent machine learning model",
"a hybrid combination",
"the new enhanced vmd",
"the novel deep svm kernels",
"that",
"the streamflow",
"respect",
"their deep learning layers",
"singular spectrum analysis",
"ssa",
"noise removal",
"the enhanced vmd",
"its features",
"more prominent features",
"the data",
"the deep svm kernel models",
"the streamflow",
"the considered cahaba river data sets",
"deterministic grey wolf optimizer",
"dgwo",
"this research paper",
"fine tune",
"the parameters",
"the deep svm model",
"previous prediction techniques",
"difficulties",
"respect",
"local and global minima occurrences",
"stagnation",
"convergence",
"this study",
"the hybrid deep learning model",
"the streamflow",
"the considered data sets",
"its superiority",
"the comparative analysis",
"the previous forecasting techniques",
"the developed forecasting model",
"the hydrologists",
"the daily streamflow",
"the highest prediction accuracy rate",
"97.54%",
"respect",
"training process",
"case",
"testing mechanism",
"the prediction accuracy rate",
"96.47%",
"this 97.54%",
"training prediction accuracy rate",
"the effectiveness",
"modelled new deep svm algorithm",
"the streamflow forecasting",
"hydrologists",
"kernels",
"grey wolf",
"daily",
"97.54%",
"96.47%",
"97.54%"
] |
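The streamflow paper above layers an enhanced VMD decomposition, deep SVM kernels, and DGWO tuning; none of those components are public here. The sketch below keeps only the generic skeleton, lagged features feeding an RBF-kernel SVR on a synthetic daily series, and the lag count and hyperparameters are assumptions. In the full method, each decomposed mode would be forecast separately and the mode forecasts recombined.

```python
# Hypothetical streamflow-forecasting skeleton: lagged features + RBF-kernel SVR.
# The paper's enhanced VMD preprocessing, deep SVM kernels, and DGWO tuning are
# not reproduced; all hyperparameters below are assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(1)
t = np.arange(2000)
# synthetic daily flow: seasonal cycle plus noise
flow = 50 + 20 * np.sin(2 * np.pi * t / 365) + rng.normal(scale=3, size=t.size)

def make_lagged(series, n_lags=7):
    # predict series[t] from the previous n_lags values
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

X, y = make_lagged(flow)
split = int(0.8 * len(y))
scaler = StandardScaler().fit(X[:split])

model = SVR(kernel="rbf", C=10.0, gamma="scale")
model.fit(scaler.transform(X[:split]), y[:split])
pred = model.predict(scaler.transform(X[split:]))
print(f"test MAPE: {100 * mean_absolute_percentage_error(y[split:], pred):.2f}%")
```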
Sound signal analysis in Japanese speech recognition based on deep learning algorithm | [
"Yang Xiaoxing"
] | As an important carrier of information, since sound can be collected quickly and is not limited by angle and light, it is often used to assist in understanding the environment and creating information. Voice signal recognition technology is a typical speech recognition application. This article focuses on voice signal recognition technology around various deep learning models. By using deep learning neural networks with different structures and different types, information and representations related to the recognition of sound signal samples can be obtained, so as to further improve the detection accuracy of the sound signal recognition detection system. Based on this, this paper proposes an enhanced deep learning model of a multi-scale neural convolutional network and uses it to recognize sound signals. The CCCP layer is used to reduce the dimensionality of the underlying feature map, so that the units captured in the network will eventually have internal features in each layer, thereby retaining the feature information to the maximum extent, which will form a convolutional multi-scale model in network deep learning neurons. Finally, the article discusses the related issues of Japanese speech recognition on this basis. This article first uses graphemes, that is, all the Japanese kana and common Chinese characters, using a total of 2795 units for modeling. There is a big gap between this experiment and the (BiLSTM-HMM) system. In addition, when Japanese speech is known, it is incorporated into the end-to-end recognition system to improve the performance of the Japanese speech recognition system. Based on the above-mentioned deep learning and sound signal analysis experiments and principles, the final effect obtained is better than the main effect of the Japanese speech recognition system of the hidden Markov model and the long short-term memory network, thus promoting its development. | 10.1007/s13198-023-02025-9 | sound signal analysis in japanese speech recognition based on deep learning algorithm | as an important carrier of information, since sound can be collected quickly and is not limited by angle and light, it is often used to assist in understanding the environment and creating information. voice signal recognition technology is a typical speech recognition application. this article focuses on voice signal recognition technology around various deep learning models. by using deep learning neural networks with different structures and different types, information and representations related to the recognition of sound signal samples can be obtained, so as to further improve the detection accuracy of the sound signal recognition detection system. based on this, this paper proposes an enhanced deep learning model of a multi-scale neural convolutional network and uses it to recognize sound signals. the cccp layer is used to reduce the dimensionality of the underlying feature map, so that the units captured in the network will eventually have internal features in each layer, thereby retaining the feature information to the maximum extent, which will form a convolutional multi-scale model in network deep learning neurons. finally, the article discusses the related issues of japanese speech recognition on this basis. this article first uses graphemes, that is, all the japanese kana and common chinese characters, using a total of 2795 units for modeling. there is a big gap between this experiment and the (bilstm-hmm) system. 
in addition, when japanese speech is known, it is incorporated into the end-to-end recognition system to improve the performance of the japanese speech recognition system. based on the above-mentioned deep learning and sound signal analysis experiments and principles, the final effect obtained is better than the main effect of the japanese speech recognition system of the hidden markov model and the long short-term memory network, thus promoting its development. | [
"an important carrier",
"information",
"sound",
"angle",
"light",
"it",
"the environment",
"information",
"voice signal recognition technology",
"a typical speech recognition application",
"this article",
"the voice signal recognition technology",
"various deep learning models",
"deep learning neural networks",
"different structures",
"different types",
"information",
"representations",
"the recognition",
"sound signal samples",
"the detection accuracy",
"the sound signal recognition detection system",
"this",
"this paper",
"an enhanced deep learning model",
"multi-scale neural convolutional network",
"it",
"sound signals",
"the cccp layer",
"the dimensionality",
"the underlying feature map",
"the units",
"the network",
"internal features",
"each layer",
"the feature information",
"the maximum extent",
"which",
"a convolutional multi-scale model",
"network deep learning neurons",
"the article",
"the related issues",
"japanese speech recognition",
"this basis",
"this article",
"the font",
"gra-phonem",
"all these japanese kana",
"common chinese characters",
"a total",
"2795 units",
"modeling",
"a big gap",
"the experiment",
"the (bilstm-hmm) system",
"addition",
"japanese speech",
"it",
"end",
"recognition",
"the performance",
"the japanese speech recognition system",
"the above-mentioned deep learning and sound signal analysis experiments",
"principles",
"the final effect",
"the main effect",
"the japanese speech recognition system",
"the latent markov model",
"the long–short memory network",
"its development",
"japanese",
"japanese",
"chinese",
"2795",
"japanese",
"japanese",
"japanese"
] |
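A CCCP layer (cascaded cross-channel parametric pooling, from the Network-in-Network literature) is ordinarily realized as a 1x1 convolution, and that is how the sketch below reads the abstract above: parallel convolution branches at several kernel sizes, concatenated and reduced by a 1x1 CCCP layer. Kernel sizes, channel widths, and the spectrogram input shape are all assumptions, not the paper's published configuration.

```python
# Hypothetical multi-scale CNN with a CCCP (1x1 convolution) layer, as a PyTorch sketch.
import torch
import torch.nn as nn

class MultiScaleCCCPNet(nn.Module):
    def __init__(self, n_classes=10, in_ch=1):
        super().__init__()
        # parallel branches capture features at several time/frequency scales
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, 32, kernel_size=k, padding=k // 2) for k in (3, 5, 7)
        ])
        # CCCP layer: 1x1 convolution mixing channels and reducing dimensionality
        self.cccp = nn.Conv2d(32 * 3, 64, kernel_size=1)
        self.head = nn.Sequential(
            nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes)
        )

    def forward(self, x):                     # x: (batch, 1, freq, time) spectrogram
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.head(self.cccp(feats))

logits = MultiScaleCCCPNet()(torch.randn(2, 1, 64, 128))
print(logits.shape)  # torch.Size([2, 10])
```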
SEFWaM–deep learning based smart ensembled framework for waste management | [
"Sujal Goel",
"Anannya Mishra",
"Garima Dua",
"Vandana Bhatia"
] | Waste generation has seen a significant surge in the last decade, presenting an urgent need for efficient and sustainable waste management strategies. The mounting piles of landfill waste underscore the criticality of segregating recyclable from non-recyclable waste, a measure that could alleviate numerous global environmental challenges. The increasing necessity to differentiate waste into specific categories such as paper, plastic, and metal is becoming evident; these categories demand distinct disposal and recycling methods. Addressing this automatic detection and segregation issue calls for an automated garbage classification system, achievable through computer vision. This paper introduces a deep learning and computer vision-based Smart Ensembled Framework for Waste Management (SEFWaM) to categorize garbage into various classes for improved waste management. This innovative model employs a fusion of transfer learning—a deep learning technique and boosting to devise an ensembled approach for efficient waste classification. Our proposed approach, trained on the Trashnet 2.0 dataset, has demonstrated superior performance over competing algorithms in terms of Weighted Macro Precision (WMP), Weighted Macro Recall (WMR), Weighted Macro F1-Score (WMF), and test accuracy. The proposed SEFWaM model achieves an accuracy of 94.2% for the considered dataset, proving its superior efficiency over other deep learning-based models. This research thereby contributes to a pressing environmental issue by offering an automated, efficient solution for waste management. | 10.1007/s10668-023-03568-4 | sefwam–deep learning based smart ensembled framework for waste management | waste generation has seen a significant surge in the last decade, presenting an urgent need for efficient and sustainable waste management strategies. the mounting piles of landfill waste underscore the criticality of segregating recyclable from non-recyclable waste, a measure that could alleviate numerous global environmental challenges. the increasing necessity to differentiate waste into specific categories such as paper, plastic, and metal is becoming evident; these categories demand distinct disposal and recycling methods. addressing this automatic detection and segregation issue calls for an automated garbage classification system, achievable through computer vision. this paper introduces a deep learning and computer vision-based smart ensembled framework for waste management (sefwam) to categorize garbage into various classes for improved waste management. this innovative model employs a fusion of transfer learning—a deep learning technique and boosting to devise an ensembled approach for efficient waste classification. our proposed approach, trained on the trashnet 2.0 dataset, has demonstrated superior performance over competing algorithms in terms of weighted macro precision (wmp), weighted macro recall (wmr), weighted macro f1-score (wmf), and test accuracy. the proposed sefwam model achieves an accuracy of 94.2% for the considered dataset, proving its superior efficiency over other deep learning-based models. this research thereby contributes to a pressing environmental issue by offering an automated, efficient solution for waste management. | [
"waste generation",
"a significant surge",
"the last decade",
"an urgent need",
"efficient and sustainable waste management strategies",
"the mounting piles",
"landfill waste",
"the criticality",
"non-recyclable waste",
"a measure",
"that",
"numerous global environmental challenges",
"the increasing necessity",
"waste",
"specific categories",
"paper",
"plastic",
"metal",
"these categories",
"distinct disposal",
"recycling methods",
"this automatic detection and segregation issue",
"an automated garbage classification system",
"computer vision",
"this paper",
"a deep learning and computer vision-based smart ensembled framework",
"waste management",
"sefwam",
"garbage",
"various classes",
"improved waste management",
"this innovative model",
"a fusion",
"transfer learning",
"a deep learning technique",
"an ensembled approach",
"efficient waste classification",
"our proposed approach",
"the trashnet 2.0 dataset",
"superior performance",
"competing algorithms",
"terms",
"weighted macro precision",
"wmp",
"macro recall",
"wmr",
"macro f1-score",
"wmf",
"test accuracy",
"the proposed sefwam model",
"an accuracy",
"94.2%",
"the considered dataset",
"its superior efficiency",
"other deep learning-based models",
"this research",
"a pressing environmental issue",
"an automated, efficient solution",
"waste management",
"the last decade",
"2.0",
"94.2%"
] |
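The SEFWaM abstract above fuses transfer learning with boosting; the boosting stage is not reproduced below. What follows is only a hedged sketch of the ensembling half: soft voting (averaged softmax probabilities) over two torchvision backbones. The backbone choice, the six Trashnet-style classes, and `weights=None` (to keep the sketch offline; ImageNet weights would be loaded in practice) are assumptions.

```python
# Hypothetical soft-voting ensemble of two transfer-learning backbones for waste classes.
import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 6  # assumed Trashnet-style classes: glass, paper, cardboard, plastic, metal, trash

def make_backbone(name):
    if name == "resnet18":
        m = models.resnet18(weights=None)            # load pretrained weights in practice
        m.fc = nn.Linear(m.fc.in_features, N_CLASSES)
    else:
        m = models.mobilenet_v2(weights=None)
        m.classifier[1] = nn.Linear(m.classifier[1].in_features, N_CLASSES)
    return m

ensemble = [make_backbone("resnet18"), make_backbone("mobilenet_v2")]

@torch.no_grad()
def predict(x):
    # soft voting: average the members' softmax probabilities, then take the argmax
    probs = torch.stack([m(x).softmax(dim=1) for m in ensemble]).mean(dim=0)
    return probs.argmax(dim=1)

print(predict(torch.randn(4, 3, 224, 224)))  # 4 predicted class indices
```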
Graph-ensemble fusion for enhanced IoT intrusion detection: leveraging GCN and deep learning | [
"Kajol Mittal",
"Payal Khurana Batra"
] | The proliferation of Internet of Things (IoT) applications has heightened the vulnerability of information security, making it susceptible to attacks that may lead to the compromise of sensitive data. Intrusion Detection System (IDS) is deployed in IoT networks for the detection of attacks and to ensure the security of information. In previous works, the IDS datasets suffer from an imbalanced distribution of data about attacks and, the flow of packets in IDS which hinders the ability of deep learning models for potent and coherent classification. With the emergence of graph convolution neural network (GCN), a new sub-field of deep learning models, the structure of graphs can be leveraged to represent the data effectively. IDS datasets typically consist of flow records of data which can naturally be represented as graph structures capturing both edge features and network topology information for classification of attacks. Hence, in this paper, a novel GCN-Ensemble fusion model is proposed for enhanced IoT IDS. There are three stages in this proposed model: (1) Data processing and attribute graph generation, (2) Feature engineering and (3) Classification. The flow attributes of data packets in IDS datasets are represented as the edges and the corresponding varying attacks as nodes of the attribute graph. Here the GCN model is leveraged for feature engineering of the IDS dataset. Further, a novel Ensemble of Convolution Neural Networks is proposed for the classification task. The evaluation of the proposed model encompasses the utilization of four distinct datasets, namely BoT-IoT, ToN-IoT, CIC-IDS2018, and NF UQ NIDS. In the BoT-IoT dataset, the proposed model demonstrates superior performance compared to state-of-art models like Deep learning and Graph neural network (GNN), achieving accuracy improvements of 3.16 and 0.91%, respectively. The observed superior performance of the model in comparison to the baseline models serves to emphasize its potential to augment IoT network security. | 10.1007/s10586-024-04404-8 | graph-ensemble fusion for enhanced iot intrusion detection: leveraging gcn and deep learning | the proliferation of internet of things (iot) applications has heightened the vulnerability of information security, making it susceptible to attacks that may lead to the compromise of sensitive data. intrusion detection system (ids) is deployed in iot networks for the detection of attacks and to ensure the security of information. in previous works, the ids datasets suffer from an imbalanced distribution of data about attacks and, the flow of packets in ids which hinders the ability of deep learning models for potent and coherent classification. with the emergence of graph convolution neural network (gcn), a new sub-field of deep learning models, the structure of graphs can be leveraged to represent the data effectively. ids datasets typically consist of flow records of data which can naturally be represented as graph structures capturing both edge features and network topology information for classification of attacks. hence, in this paper, a novel gcn-ensemble fusion model is proposed for enhanced iot ids. there are three stages in this proposed model: (1) data processing and attribute graph generation, (2) feature engineering and (3) classification. the flow attributes of data packets in ids datasets are represented as the edges and the corresponding varying attacks as nodes of the attribute graph. here the gcn model is leveraged for feature engineering of the ids dataset. 
further, a novel ensemble of convolution neural networks is proposed for the classification task. the evaluation of the proposed model encompasses the utilization of four distinct datasets, namely bot-iot, ton-iot, cic-ids2018, and nf uq nids. in the bot-iot dataset, the proposed model demonstrates superior performance compared to state-of-the-art models like deep learning and graph neural network (gnn), achieving accuracy improvements of 3.16 and 0.91%, respectively. the observed superior performance of the model in comparison to the baseline models serves to emphasize its potential to augment iot network security. | [
"the proliferation",
"internet",
"things",
"(iot) applications",
"the vulnerability",
"information security",
"it",
"attacks",
"that",
"the compromise",
"sensitive data",
"intrusion detection system",
"ids",
"iot networks",
"the detection",
"attacks",
"the security",
"information",
"previous works",
"the ids datasets",
"an imbalanced distribution",
"data",
"attacks",
"the flow",
"packets",
"ids",
"which",
"the ability",
"deep learning models",
"potent and coherent classification",
"the emergence",
"graph convolution neural network",
"gcn",
"a new sub",
"-",
"field",
"deep learning models",
"the structure",
"graphs",
"the data",
"ids datasets",
"flow records",
"data",
"which",
"graph structures",
"both edge features",
"network topology information",
"classification",
"attacks",
"this paper",
"a novel gcn-ensemble fusion model",
"enhanced iot ids",
"three stages",
"this proposed model",
"graph generation",
"(3) classification",
"the flow attributes",
"data packets",
"ids datasets",
"the edges",
"the corresponding varying attacks",
"nodes",
"the attribute graph",
"the gcn model",
"feature engineering",
"the ids dataset",
"a novel ensemble",
"convolution neural networks",
"the classification task",
"the evaluation",
"the proposed model",
"the utilization",
"four distinct datasets",
"namely bot-iot, ton-iot",
"cic-ids2018",
"uq nids",
"the bot-iot dataset",
"the proposed model",
"superior performance",
"art",
"deep learning",
"graph neural network",
"gnn",
"accuracy improvements",
"0.91%",
"the observed superior performance",
"the model",
"comparison",
"the baseline models",
"its potential",
"iot network security",
"gcn",
"three",
"1",
"2",
"3",
"gcn",
"four",
"gnn",
"3.16",
"0.91%"
] |
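A graph convolutional layer of the kind named above can be written in a few lines without a graph library, using the standard propagation rule H' = D^{-1/2}(A + I)D^{-1/2} H W. The IDS-specific graph construction (flow attributes as edges, attack classes as nodes) and the CNN ensemble are not reproduced; the adjacency and features below are synthetic stand-ins.

```python
# Minimal GCN layer (Kipf & Welling propagation rule), no external graph libraries.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, X, A):
        A_hat = A + torch.eye(A.size(0))                     # add self-loops
        d_inv_sqrt = torch.diag(A_hat.sum(dim=1).pow(-0.5))  # D^(-1/2)
        return d_inv_sqrt @ A_hat @ d_inv_sqrt @ self.lin(X)

n_nodes, in_dim, n_classes = 8, 16, 4
A = (torch.rand(n_nodes, n_nodes) > 0.7).float()
A = torch.maximum(A, A.t())                                  # symmetric adjacency
X = torch.randn(n_nodes, in_dim)                             # node features

layer1, layer2 = GCNLayer(in_dim, 32), GCNLayer(32, n_classes)
logits = layer2(torch.relu(layer1(X, A)), A)                 # two-layer GCN
print(logits.shape)  # torch.Size([8, 4])
```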
Information set supported deep learning architectures for improving noisy image classification | [
"Saurabh Bhardwaj",
"Yizhi Wang",
"Guoqiang Yu",
"Yue Wang"
] | Deep learning models have been widely used in many supervised learning applications. However, these models suffer from overfitting due to various types of uncertainty with deteriorating performance when facing data biases, class imbalance, or noise propagation. The Information-Set Deep learning (ISDL) architectures with four variants are developed by integrating information set theory and deep learning principles to address the critical problem of the absence of robust deep learning models. There is a description of the ISDL architectures, learning algorithms, and analytic workflows. The performance of the ISDL models and standard architectures is evaluated using a noise-corrupted benchmark dataset. The experimental results show that the ISDL models can efficiently handle noise-dominated uncertainty and outperform peer architectures. | 10.1038/s41598-023-31462-6 | information set supported deep learning architectures for improving noisy image classification | deep learning models have been widely used in many supervised learning applications. however, these models suffer from overfitting due to various types of uncertainty with deteriorating performance when facing data biases, class imbalance, or noise propagation. the information-set deep learning (isdl) architectures with four variants are developed by integrating information set theory and deep learning principles to address the critical problem of the absence of robust deep learning models. there is a description of the isdl architectures, learning algorithms, and analytic workflows. the performance of the isdl models and standard architectures is evaluated using a noise-corrupted benchmark dataset. the experimental results show that the isdl models can efficiently handle noise-dominated uncertainty and outperform peer architectures. | [
"deep learning models",
"many supervised learning applications",
"these models",
"various types",
"uncertainty",
"deteriorating performance",
"data biases",
"class imbalance",
"noise propagation",
"the information-set deep learning",
"isdl",
"four variants",
"information set theory",
"deep learning principles",
"the critical problem",
"the absence",
"robust deep learning models",
"a description",
"the isdl architectures",
"algorithms",
"analytic workflows",
"the performance",
"the isdl models",
"standard architectures",
"a noise-corrupted benchmark dataset",
"the experimental results",
"the isdl models",
"noise-dominated uncertainty",
"outperform peer architectures",
"four"
] |
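The ISDL architectures above are not reproduced here; information-set theory is specific to that line of work. The sketch below only illustrates the evaluation protocol the abstract implies, measuring how a stand-in classifier degrades on a noise-corrupted benchmark, with scikit-learn's digits dataset as the assumed benchmark.

```python
# Noise-robustness harness: accuracy of a baseline classifier under increasing
# Gaussian corruption. The ISDL models themselves are not implemented here.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0                                   # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=400, random_state=0)
clf.fit(X_tr, y_tr)

rng = np.random.default_rng(0)
for sigma in (0.0, 0.1, 0.2, 0.4):
    X_noisy = np.clip(X_te + rng.normal(scale=sigma, size=X_te.shape), 0, 1)
    print(f"noise sigma={sigma:.1f}  accuracy={clf.score(X_noisy, y_te):.3f}")
```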
Deep transfer learning-based automated detection of blast disease in paddy crop | [
"Amandeep Singh",
"Jaspreet Kaur",
"Kuldeep Singh",
"Maninder Lal Singh"
] | A major proportion of the loss faced by the agricultural industry originates from the diseases of the crop during cultivation. Paddy crop is one of the dominant crops which provides food to a huge population. In this crop, the losses caused by such diseases vary from 30 to 90% of the yield. Therefore, the automated detection of different diseases in paddy crops seeks the attention of the research community. In this context, the present work proposes a deep transfer learning solution for the automated detection of blast disease of paddy, which is the major cause of its yield reduction. For this purpose, an image dataset of healthy and blast disease-infected leaf images of paddy crop has been developed. These images are fed to five convolutional neural network-based deep transfer learning algorithms, viz., LeNet, AlexNet, VGG 16, Inception v1, and Xception models for binary classification. The performance analysis of the given algorithms reveals that AlexNet provides better results for binary classification with an average accuracy of 98.7%, followed by VGG 16 and LeNet architectures having accuracies of 98.2% and 97.8%. So, this deep transfer learning-based approach may assist in reducing the gap between experts and farmers by providing an automated expert advice platform for the timely detection of diseases in paddy crop. | 10.1007/s11760-023-02735-4 | deep transfer learning-based automated detection of blast disease in paddy crop | a major proportion of the loss faced by the agricultural industry originates from the diseases of the crop during cultivation. paddy crop is one of the dominant crops which provides food to a huge population. in this crop, the losses caused by such diseases vary from 30 to 90% of the yield. therefore, the automated detection of different diseases in paddy crops seeks the attention of the research community. in this context, the present work proposes a deep transfer learning solution for the automated detection of blast disease of paddy, which is the major cause of its yield reduction. for this purpose, an image dataset of healthy and blast disease-infected leaf images of paddy crop has been developed. these images are fed to five convolutional neural network-based deep transfer learning algorithms, viz., lenet, alexnet, vgg 16, inception v1, and xception models for binary classification. the performance analysis of the given algorithms reveals that alexnet provides better results for binary classification with an average accuracy of 98.7%, followed by vgg 16 and lenet architectures having accuracies of 98.2% and 97.8%. so, this deep transfer learning-based approach may assist in reducing the gap between experts and farmers by providing an automated expert advice platform for the timely detection of diseases in paddy crop. | [
"a major proportion",
"the loss",
"the agricultural industry originates",
"the diseases",
"the crop",
"cultivation",
"paddy crop",
"the dominant crops",
"which",
"food",
"a huge population",
"this crop",
"the losses",
"such diseases",
"30 to 90%",
"the yield",
"the automated detection",
"different diseases",
"paddy crops",
"the attention",
"the research community",
"this context",
"the present work",
"a deep transfer learning solution",
"the automated detection",
"blast disease",
"paddy",
"which",
"the major cause",
"its yield reduction",
"this purpose",
"paddy crop",
"these images",
"five convolutional neural network-based deep transfer learning algorithms",
"viz",
"lenet",
"alexnet",
"vgg",
"inception v1",
"xception models",
"binary classification",
"the performance analysis",
"given algorithms",
"alexnet",
"better results",
"binary classification",
"an average accuracy",
"98.7%",
"accuracies",
"98.2%",
"97.8%",
"this deep transfer learning-based approach",
"the gap",
"experts",
"farmers",
"an automated expert advice platform",
"the timely detection",
"diseases",
"paddy crop",
"30 to 90%",
"fed",
"five",
"16",
"98.7%",
"16",
"98.2%",
"97.8%"
] |
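The paddy-blast paper above compares stock backbones with a two-class head. A minimal version of that setup for AlexNet and VGG16 is sketched below; `weights=None` keeps the sketch offline (in practice ImageNet weights would be loaded and the new head trained first), and the 224x224 input size is the usual torchvision assumption.

```python
# Hypothetical transfer-learning heads for binary (healthy vs. blast) leaf classification.
import torch
import torch.nn as nn
from torchvision import models

def binary_head(name):
    if name == "alexnet":
        m = models.alexnet(weights=None)
        m.classifier[6] = nn.Linear(4096, 2)   # replace the 1000-way head with 2 classes
    else:
        m = models.vgg16(weights=None)
        m.classifier[6] = nn.Linear(4096, 2)
    return m

model = binary_head("alexnet")
with torch.no_grad():
    print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 2])
```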
Octorotor flight control system design with stochastic optimal tuning, deep learning and differential morphing | [
"Oguz Kose"
] | In this paper, simultaneous longitudinal and lateral flight control is investigated for an octorotor by using stochastic optimal tuning and deep learning under differential morphing. Octorotor models for differential morphing were drawn in SOLIDWORKS drawing program. Arm lengths are randomly estimated in the algorithm. Moments of inertia changing according to morphing ratios are estimated with deep neural network. In addition, the proportional–integral–derivative controller coefficients required for both longitudinal and lateral flight according to the morphing ratios are estimated by simultaneous perturbation stochastic approximation. Considering the design performance criteria, 49.95% improvement was achieved in the total cost. The estimation of unknown parameters by optimization method and deep learning was tested in simulations, and the octorotor successfully followed the given reference angle. | 10.1007/s40430-024-04972-1 | octorotor flight control system design with stochastic optimal tuning, deep learning and differential morphing | in this paper, simultaneous longitudinal and lateral flight control is investigated for an octorotor by using stochastic optimal tuning and deep learning under differential morphing. octorotor models for differential morphing were drawn in solidworks drawing program. arm lengths are randomly estimated in the algorithm. moments of inertia changing according to morphing ratios are estimated with deep neural network. in addition, the proportional–integral–derivative controller coefficients required for both longitudinal and lateral flight according to the morphing ratios are estimated by simultaneous perturbation stochastic approximation. considering the design performance criteria, 49.95% improvement was achieved in the total cost. the estimation of unknown parameters by optimization method and deep learning was tested in simulations, and the octorotor successfully followed the given reference angle. | [
"this paper",
"simultaneous longitudinal and lateral flight control",
"an octorotor",
"stochastic optimal tuning",
"deep learning",
"differential morphing",
"octorotor models",
"differential morphing",
"solidworks drawing program",
"arm lengths",
"the algorithm",
"moments",
"inertia",
"morphing ratios",
"deep neural network",
"addition",
"the proportional–integral–derivative controller coefficients",
"both longitudinal and lateral flight",
"the morphing ratios",
"simultaneous perturbation stochastic approximation",
"the design performance criteria",
"49.95% improvement",
"the total cost",
"the estimation",
"unknown parameters",
"optimization method",
"deep learning",
"simulations",
"the octorotor",
"the given reference angle",
"49.95%"
] |
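The abstract above names simultaneous perturbation stochastic approximation (SPSA) for tuning the controller gains. Below is a textbook SPSA loop in which a toy quadratic cost stands in for the closed-loop tracking error of the octorotor simulation; the gain-schedule constants follow common SPSA defaults, and the cost function is an assumption.

```python
# Minimal SPSA loop tuning three PID gains against a stand-in cost function.
import numpy as np

rng = np.random.default_rng(0)

def cost(theta):
    # toy stand-in for the tracking error of a closed-loop run with gains (Kp, Ki, Kd)
    target = np.array([4.0, 0.5, 1.2])
    return float(np.sum((theta - target) ** 2))

theta = np.array([1.0, 1.0, 1.0])            # initial PID gains
a, c, alpha, gamma = 0.2, 0.1, 0.602, 0.101  # common SPSA gain schedules

for k in range(1, 201):
    ak, ck = a / k**alpha, c / k**gamma
    delta = rng.choice([-1.0, 1.0], size=theta.size)  # Rademacher perturbation
    # two-sided gradient estimate from only two cost evaluations
    g_hat = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2 * ck * delta)
    theta = theta - ak * g_hat                        # gradient-descent step

print("tuned gains:", np.round(theta, 3), " final cost:", round(cost(theta), 5))
```

SPSA needs only two cost evaluations per iteration regardless of the number of gains, which is why it suits expensive closed-loop simulations.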
Network intrusion detection and mitigation in SDN using deep learning models | [
"Mamatha Maddu",
"Yamarthi Narasimha Rao"
] | Software-Defined Networking (SDN) is a contemporary network strategy utilized instead of a traditional network structure. It provides significantly more administrative efficiency and ease than traditional networks. However, the centralized control used in SDN entails an elevated risk of single-point failure that is more susceptible to different kinds of network assaults like Distributed Denial of Service (DDoS), DoS, spoofing, and API exploitation which are very complex to identify and mitigate. Thus, a powerful intrusion detection system (IDS) based on deep learning is created in this study for the detection and mitigation of network intrusions. This system contains several stages and begins with the data augmentation method named Deep Convolutional Generative Adversarial Networks (DCGAN) to over the data imbalance problem. Then, the features are extracted from the input data using a CenterNet-based approach. After extracting effective characteristics, ResNet152V2 with Slime Mold Algorithm (SMA) based deep learning is implemented to categorize the assaults in InSDN and Edge IIoT datasets. Once the network intrusion is detected, the proposed defense module is activated to restore regular network connectivity quickly. Finally, several experiments are carried out to validate the algorithm's robustness, and the outcomes reveal that the proposed system can successfully detect and mitigate network intrusions. | 10.1007/s10207-023-00771-2 | network intrusion detection and mitigation in sdn using deep learning models | software-defined networking (sdn) is a contemporary network strategy utilized instead of a traditional network structure. it provides significantly more administrative efficiency and ease than traditional networks. however, the centralized control used in sdn entails an elevated risk of single-point failure that is more susceptible to different kinds of network assaults like distributed denial of service (ddos), dos, spoofing, and api exploitation which are very complex to identify and mitigate. thus, a powerful intrusion detection system (ids) based on deep learning is created in this study for the detection and mitigation of network intrusions. this system contains several stages and begins with the data augmentation method named deep convolutional generative adversarial networks (dcgan) to over the data imbalance problem. then, the features are extracted from the input data using a centernet-based approach. after extracting effective characteristics, resnet152v2 with slime mold algorithm (sma) based deep learning is implemented to categorize the assaults in insdn and edge iiot datasets. once the network intrusion is detected, the proposed defense module is activated to restore regular network connectivity quickly. finally, several experiments are carried out to validate the algorithm's robustness, and the outcomes reveal that the proposed system can successfully detect and mitigate network intrusions. | [
"software-defined networking",
"sdn",
"a contemporary network strategy",
"a traditional network structure",
"it",
"significantly more administrative efficiency",
"ease",
"traditional networks",
"the centralized control",
"sdn",
"an elevated risk",
"single-point failure",
"that",
"different kinds",
"network assaults",
"distributed denial",
"service",
"ddos",
"api exploitation",
"which",
"a powerful intrusion detection system",
"ids",
"deep learning",
"this study",
"the detection",
"mitigation",
"network intrusions",
"this system",
"several stages",
"the data augmentation method",
"deep convolutional generative adversarial networks",
"dcgan",
"the data imbalance problem",
"the features",
"the input data",
"a centernet-based approach",
"effective characteristics",
"slime mold",
"algorithm",
"sma",
"based deep learning",
"the assaults",
"insdn",
"iiot datasets",
"the network intrusion",
"the proposed defense module",
"regular network connectivity",
"several experiments",
"the algorithm's robustness",
"the outcomes",
"the proposed system",
"network intrusions"
] |
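ResNet152V2 is named explicitly above, so a plausible classifier head on that backbone is sketched below in Keras. The 64x64x3 input shape, the five attack classes, and `weights=None` are assumptions, and the DCGAN augmentation, CenterNet feature extraction, and SMA tuning stages are not reproduced.

```python
# Hypothetical ResNet152V2-based attack classifier head (Keras sketch).
import tensorflow as tf

N_CLASSES = 5  # assumed number of attack categories
base = tf.keras.applications.ResNet152V2(weights=None, include_top=False,
                                         input_shape=(64, 64, 3))
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```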
Development of a remote music teaching system based on facial recognition and deep learning | [
"Ning Zhang",
"Huizhong Wang"
] | With the continuous progress of computer and network technology, teaching methods and educational models are also constantly evolving and improving. The development of facial recognition technology has brought new opportunities and challenges to the development of educational theory and systems. This article establishes a remote music teaching system based on facial recognition and deep learning technology. The system adopts the Java EE framework structure and deep learning technology. By conducting deep learning and training on a large amount of facial data, we can identify students' facial expressions and emotional states, thereby better understanding their learning status and needs. At the same time, the system also supports multiple teaching modes and interactive methods, providing teachers and students with a more convenient and efficient teaching management and learning experience. Subsequently, this article evaluated and explored the effectiveness of the remote music teaching system through a questionnaire survey. The results show that most students believe that the system can help them better master basic music knowledge and professional skills, improve learning effectiveness and achieve learning goals. The use of the system can also stimulate students' interest in music learning, providing new ways and means for teaching. | 10.1007/s00500-023-09120-w | development of a remote music teaching system based on facial recognition and deep learning | with the continuous progress of computer and network technology, teaching methods and educational models are also constantly evolving and improving. the development of facial recognition technology has brought new opportunities and challenges to the development of educational theory and systems. this article establishes a remote music teaching system based on facial recognition and deep learning technology. the system adopts the java ee framework structure and deep learning technology. by conducting deep learning and training on a large amount of facial data, we can identify students' facial expressions and emotional states, thereby better understanding their learning status and needs. at the same time, the system also supports multiple teaching modes and interactive methods, providing teachers and students with a more convenient and efficient teaching management and learning experience. subsequently, this article evaluated and explored the effectiveness of the remote music teaching system through a questionnaire survey. the results show that most students believe that the system can help them better master basic music knowledge and professional skills, improve learning effectiveness and achieve learning goals. the use of the system can also stimulate students' interest in music learning, providing new ways and means for teaching. | [
"the continuous progress",
"computer and network technology",
"teaching methods",
"educational models",
"the development",
"facial recognition technology",
"new opportunities",
"challenges",
"the development",
"educational theory",
"systems",
"this article",
"a remote music teaching system",
"facial recognition",
"deep learning technology",
"the system",
"the java ee framework structure",
"deep learning technology",
"deep learning",
"training",
"a large amount",
"facial data",
"we",
"students' facial expressions",
"emotional states",
"their learning status",
"needs",
"the same time",
"the system",
"multiple teaching modes",
"interactive methods",
"teachers",
"students",
"a more convenient and efficient teaching management and learning experience",
"this article",
"the effectiveness",
"the remote music teaching system",
"a questionnaire survey",
"the results",
"most students",
"the system",
"them",
"basic music knowledge",
"professional skills",
"effectiveness",
"learning goals",
"the use",
"the system",
"students' interest",
"new ways",
"teaching"
] |
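The deep learning component above, recognizing students' facial expressions, could look like the small CNN below. The 48x48 grayscale input and seven emotion classes follow the common FER-2013 convention, which is an assumption; the paper does not state its input format, and the Java EE service layer is out of scope here.

```python
# Small facial-expression CNN sketch (assumed FER-2013-style inputs and classes).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),   # 7 emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 7)
```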
Advances in Deep Learning Techniques for Short-term Energy Load Forecasting Applications: A Review | [
"Radhika Chandrasekaran",
"Senthil Kumar Paramasivan"
] | Today, the majority of the leading power companies place a significant emphasis on forecasting the electricity load in the balance of power and administration. Meanwhile, since electricity is an integral component of every person’s contemporary life, energy load forecasting is necessary to afford the energy demand required. The expansion of the electrical infrastructure is a key factor in increasing sustainable economic growth, and the planning and control of the utility power system rely on accurate load forecasting. Due to uncertainty in energy utilization, forecasting is turning into a complex task, and it makes an impact on applications that include energy scheduling and management, price forecasting, etc. The statistical methods involving time series for regression analysis and machine learning techniques have been used in energy load forecasting extensively over the last few decades to precisely predict future energy demands. However, they have some drawbacks with limited model flexibility, generalization, and overfitting. Deep learning addresses the issues of handling unstructured and unlabeled data, automatic feature learning, non-linear model flexibility, the ability to handle high-dimensional data, and simultaneous computation using GPUs efficiently. This paper investigates factors influencing energy load forecasting, then discusses the most commonly used deep learning approaches in energy load forecasting, as well as evaluation metrics to evaluate the performance of the model, followed by bio-inspired algorithms to optimize the model, and other advanced technologies for energy load forecasting. This study discusses the research findings, challenges, and opportunities in energy load forecasting. | 10.1007/s11831-024-10155-x | advances in deep learning techniques for short-term energy load forecasting applications: a review | today, the majority of the leading power companies place a significant emphasis on forecasting the electricity load in the balance of power and administration. meanwhile, since electricity is an integral component of every person’s contemporary life, energy load forecasting is necessary to afford the energy demand required. the expansion of the electrical infrastructure is a key factor in increasing sustainable economic growth, and the planning and control of the utility power system rely on accurate load forecasting. due to uncertainty in energy utilization, forecasting is turning into a complex task, and it makes an impact on applications that include energy scheduling and management, price forecasting, etc. the statistical methods involving time series for regression analysis and machine learning techniques have been used in energy load forecasting extensively over the last few decades to precisely predict future energy demands. however, they have some drawbacks with limited model flexibility, generalization, and overfitting. deep learning addresses the issues of handling unstructured and unlabeled data, automatic feature learning, non-linear model flexibility, the ability to handle high-dimensional data, and simultaneous computation using gpus efficiently. this paper investigates factors influencing energy load forecasting, then discusses the most commonly used deep learning approaches in energy load forecasting, as well as evaluation metrics to evaluate the performance of the model, followed by bio-inspired algorithms to optimize the model, and other advanced technologies for energy load forecasting. 
this study discusses the research findings, challenges, and opportunities in energy load forecasting. | [
"the majority",
"the leading power companies",
"a significant emphasis",
"the electricity load",
"the balance",
"power",
"administration",
"electricity",
"an integral component",
"every person’s contemporary life",
"energy load forecasting",
"the energy demand",
"the expansion",
"the electrical infrastructure",
"a key factor",
"sustainable economic growth",
"the planning",
"control",
"the utility power system",
"accurate load forecasting",
"uncertainty",
"energy utilization",
"forecasting",
"a complex task",
"it",
"an impact",
"applications",
"that",
"energy scheduling",
"management",
"price forecasting",
"the statistical methods",
"time series",
"regression analysis",
"machine learning techniques",
"energy load",
"the last few decades",
"future energy demands",
"they",
"some drawbacks",
"limited model flexibility",
"generalization",
"deep learning addresses",
"the issues",
"unstructured and unlabeled data",
"automatic feature learning",
"non-linear model flexibility",
"the ability",
"high-dimensional data",
"simultaneous computation",
"gpus",
"this paper investigates",
"energy load forecasting",
"the most commonly used deep learning approaches",
"energy load forecasting",
"evaluation metrics",
"the performance",
"the model",
"bio-inspired algorithms",
"the model",
"other advanced technologies",
"energy load forecasting",
"this study",
"the research findings",
"challenges",
"opportunities",
"energy load forecasting",
"today",
"the last few decades"
] |
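Reviews in this area compare forecasters mainly through MAE, RMSE, and MAPE. Their standard definitions, applied to toy load numbers, are shown below; the figures are illustrative only, not results from any cited study.

```python
# Standard point-forecast error metrics used in the load-forecasting literature.
import numpy as np

def mae(y, y_hat):  return float(np.mean(np.abs(y - y_hat)))
def rmse(y, y_hat): return float(np.sqrt(np.mean((y - y_hat) ** 2)))
def mape(y, y_hat): return float(100 * np.mean(np.abs((y - y_hat) / y)))  # y must be nonzero

y     = np.array([120.0, 135.0, 150.0, 160.0])   # actual load (MW), toy values
y_hat = np.array([118.0, 140.0, 149.0, 155.0])   # forecast load (MW), toy values
print(f"MAE={mae(y, y_hat):.2f}  RMSE={rmse(y, y_hat):.2f}  MAPE={mape(y, y_hat):.2f}%")
```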
Advancing differential diagnosis: a comprehensive review of deep learning approaches for differentiating tuberculosis, pneumonia, and COVID-19 | [
"Kajal Kansal",
"Tej Bahadur Chandra",
"Akansha Singh"
] | In the realm of medical diagnostics, particularly in differential diagnosis, where differentiating between illnesses or ailments with comparable symptoms is essential, deep learning has gained importance. Recent developments in deep learning have demonstrated considerable promise for revolutionizing medical diagnostics by using the ability of artificial intelligence (AI) to accurately interpret radiological images. We examine the most cutting-edge deep learning techniques currently being utilized for the differential diagnosis of tuberculosis, pneumonia, and COVID-19 in this in-depth review. The study presents an in-depth critical review of several SOTA (state-of-the-art) studies used for differential diagnosis of different respiratory abnormalities like TB, Pneumonia, and COVID-19. In addition, an overview of various approaches, datasets employed in each method, various diagnosis tests, used assessment measures, and obtained performance is summarized and comprehensively compared to assist future research. We suggest a pathway for future research and development of deep learning solutions for differential diagnosis by critically analyzing the current literature and outlining the limitations and potential in this sector. | 10.1007/s11042-024-19350-1 | advancing differential diagnosis: a comprehensive review of deep learning approaches for differentiating tuberculosis, pneumonia, and covid-19 | in the realm of medical diagnostics, particularly in differential diagnosis, where differentiating between illnesses or ailments with comparable symptoms is essential, deep learning has gained importance. recent developments in deep learning have demonstrated considerable promise for revolutionizing medical diagnostics by using the ability of artificial intelligence (ai) to accurately interpret radiological images. we examine the most cutting-edge deep learning techniques currently being utilized for the differential diagnosis of tuberculosis, pneumonia, and covid-19 in this in-depth review. the study presents an in-depth critical review of several sota (state-of-the-art) studies used for differential diagnosis of different respiratory abnormalities like tb, pneumonia, and covid-19. in addition, an overview of various approaches, datasets employed in each method, various diagnosis tests, used assessment measures, and obtained performance is summarized and comprehensively compared to assist future research. we suggest a pathway for future research and development of deep learning solutions for differential diagnosis by critically analyzing the current literature and outlining the limitations and potential in this sector. | [
"the realm",
"medical diagnostics",
"differential diagnosis",
"illnesses",
"ailments",
"comparable symptoms",
"deep learning",
"importance",
"recent developments",
"deep learning",
"considerable promise",
"medical diagnostics",
"the ability",
"artificial intelligence",
"radiological images",
"we",
"the most cutting-edge deep learning techniques",
"the differential diagnosis",
"tuberculosis",
"pneumonia",
"covid-19",
"-depth",
"the study",
"an in-depth critical review",
"the-art",
"differential diagnosis",
"different respiratory abnormalities",
"tb",
"pneumonia",
"covid-19",
"addition",
"an overview",
"various approaches",
"datasets",
"each method",
"various diagnosis tests",
"assessment measures",
"performance",
"future research",
"we",
"a pathway",
"future research",
"development",
"deep learning solutions",
"differential diagnosis",
"the current literature",
"the limitations",
"potential",
"this sector",
"covid-19",
"covid-19"
] |
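The assessment measures such reviews tabulate, per-class precision, recall, and F1, can be produced directly with scikit-learn, as below for a three-way TB / pneumonia / COVID-19 decision. The labels are fabricated toy predictions, not results from any reviewed study.

```python
# Per-class assessment measures for a toy three-way differential diagnosis.
from sklearn.metrics import classification_report, confusion_matrix

classes = ["tb", "pneumonia", "covid19"]
y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2, 0]   # fabricated ground-truth labels
y_pred = [0, 1, 1, 1, 0, 2, 2, 1, 2, 0]   # fabricated model predictions

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=classes))
```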
Molecular design with automated quantum computing-based deep learning and optimization | [
"Akshay Ajagekar",
"Fengqi You"
] | Computer-aided design of novel molecules and compounds is a challenging task that can be addressed with quantum computing (QC) owing to its notable advances in optimization and machine learning. Here, we use QC-assisted learning and optimization techniques implemented with near-term QC devices for molecular property prediction and generation tasks. The proposed probabilistic energy-based deep learning model trained in a generative manner facilitated by QC yields robust latent representations of molecules, while the proposed data-driven QC-based optimization framework performs guided navigation of the target chemical space by exploiting the structure–property relationships captured by the energy-based model. We demonstrate the viability of the proposed molecular design approach by generating several molecular candidates that satisfy specific property target requirements. The proposed QC-based methods exhibit an improved predictive performance while efficiently generating novel molecules that accurately fulfill target conditions and exemplify the potential of QC for automated molecular design, thus accentuating its utility. | 10.1038/s41524-023-01099-0 | molecular design with automated quantum computing-based deep learning and optimization | computer-aided design of novel molecules and compounds is a challenging task that can be addressed with quantum computing (qc) owing to its notable advances in optimization and machine learning. here, we use qc-assisted learning and optimization techniques implemented with near-term qc devices for molecular property prediction and generation tasks. the proposed probabilistic energy-based deep learning model trained in a generative manner facilitated by qc yields robust latent representations of molecules, while the proposed data-driven qc-based optimization framework performs guided navigation of the target chemical space by exploiting the structure–property relationships captured by the energy-based model. we demonstrate the viability of the proposed molecular design approach by generating several molecular candidates that satisfy specific property target requirements. the proposed qc-based methods exhibit an improved predictive performance while efficiently generating novel molecules that accurately fulfill target conditions and exemplify the potential of qc for automated molecular design, thus accentuating its utility. | [
"computer-aided design",
"novel molecules",
"compounds",
"a challenging task",
"that",
"quantum computing",
"qc",
"its notable advances",
"optimization",
"machine learning",
"we",
"qc-assisted learning and optimization techniques",
"near-term qc devices",
"molecular property prediction",
"generation tasks",
"the proposed probabilistic energy-based deep learning model",
"a generative manner",
"qc yields",
"robust latent representations",
"molecules",
"the proposed data-driven qc-based optimization framework",
"guided navigation",
"the target chemical space",
"the structure",
"property relationships",
"the energy-based model",
"we",
"the viability",
"the proposed molecular design approach",
"several molecular candidates",
"that",
"specific property target requirements",
"the proposed qc-based methods",
"an improved predictive performance",
"novel molecules",
"that",
"target conditions",
"the potential",
"qc",
"automated molecular design",
"its utility",
"quantum"
] |
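The QC-assisted energy-based model and optimizer above cannot be reproduced in a few lines, so the sketch below substitutes a purely classical analogue of "guided navigation" of the design space: a surrogate property model over bit-vector fingerprints, then greedy one-bit-flip search toward a property target. Every ingredient (the fingerprints, the MLP surrogate, the hill climbing) is a stand-in assumption, not the paper's method.

```python
# Classical stand-in for property-targeted search over a structure-property surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
D = 32
w_true = rng.normal(size=D)                        # hidden structure-property relationship
X = rng.integers(0, 2, size=(500, D)).astype(float)  # toy bit-vector "fingerprints"
y = X @ w_true + rng.normal(scale=0.1, size=500)

surrogate = MLPRegressor(hidden_layer_sizes=(64,), max_iter=800, random_state=0).fit(X, y)

x = rng.integers(0, 2, size=D).astype(float)       # random starting "molecule"
target = y.max()                                   # property value to steer toward
for _ in range(100):                               # greedy single-bit-flip hill climbing
    flips = np.abs(np.eye(D) - x)                  # row i = x with bit i flipped
    scores = surrogate.predict(flips)
    cur = surrogate.predict(x[None])[0]
    best = int(np.argmin(np.abs(scores - target)))
    if abs(scores[best] - target) < abs(cur - target):
        x = flips[best]
    else:
        break                                      # no neighbour improves on the target

print("predicted property of candidate:", round(float(surrogate.predict(x[None])[0]), 3))
```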
ConvMixer deep learning model for detection of pneumonia disease using chest X-ray images | [
"Ankit Chaudhary",
"Sushil Kumar Saroj"
] | Pneumonia is a common and fatal disease in children nowadays. It infects the lungs resulting in difficulties in breathing. Severe cases of it may lead to death. Therefore, early and accurate detection of pneumonia disease is essential. There are existing various methods for the detection of pneumonia disease today. Deep learning methods are considered more effective for this. We have applied a novel deep learning model i.e. the ConvMixer model for the detection of pneumonia disease. The model replaces traditional convolutional layers with a mixer of channels and spatial dimensions. By mixing channels and spatial dimensions, it reduces the number of parameters and computations required for processing each layer leading to improved efficiency. We have applied the model to the large numbers of chest X-ray images that are publicly available on Kaggle, provided by Guangzhou Women and Children’s Medical Centre in Guangzhou. The model has achieved the highest accuracy of 95.11%. It has also been evaluated for precision, recall, and f1-score parameters. | 10.1007/s10742-024-00334-5 | convmixer deep learning model for detection of pneumonia disease using chest x-ray images | pneumonia is a common and fatal disease in children nowadays. it infects the lungs resulting in difficulties in breathing. severe cases of it may lead to death. therefore, early and accurate detection of pneumonia disease is essential. there are existing various methods for the detection of pneumonia disease today. deep learning methods are considered more effective for this. we have applied a novel deep learning model i.e. the convmixer model for the detection of pneumonia disease. the model replaces traditional convolutional layers with a mixer of channels and spatial dimensions. by mixing channels and spatial dimensions, it reduces the number of parameters and computations required for processing each layer leading to improved efficiency. we have applied the model to the large numbers of chest x-ray images that are publicly available on kaggle, provided by guangzhou women and children’s medical centre in guangzhou. the model has achieved the highest accuracy of 95.11%. it has also been evaluated for precision, recall, and f1-score parameters. | [
"pneumonia",
"a common and fatal disease",
"children",
"it",
"the lungs",
"difficulties",
"breathing",
"severe cases",
"it",
"death",
"early and accurate detection",
"pneumonia disease",
"various methods",
"the detection",
"pneumonia disease",
"deep learning methods",
"this",
"we",
"a novel deep learning model",
"the detection",
"pneumonia disease",
"the model",
"traditional convolutional layers",
"a mixer",
"channels",
"spatial dimensions",
"channels",
"spatial dimensions",
"it",
"the number",
"parameters",
"computations",
"each layer",
"improved efficiency",
"we",
"the model",
"the large numbers",
"chest x-ray images",
"that",
"kaggle",
"guangzhou women",
"children’s medical centre",
"guangzhou",
"the model",
"the highest accuracy",
"95.11%",
"it",
"precision, recall, and f1-score parameters",
"today",
"guangzhou",
"guangzhou",
"95.11%"
] |
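The ConvMixer architecture named in the record above is public (Trockman and Kolter, "Patches Are All You Need?"); the sketch below shows its patch-embedding plus depthwise/pointwise mixing structure. The hyperparameters (dim=256, depth=8, kernel and patch sizes, two output classes for pneumonia vs. normal) are illustrative, not the values used in the paper.

```python
# Sketch of the ConvMixer structure: patch embedding, then repeated blocks of
# a depthwise conv (spatial mixing, with a residual) and a 1x1 conv (channel
# mixing). Sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class Residual(nn.Module):
    """Wraps a module with an identity skip connection: y = fn(x) + x."""
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(dim=256, depth=8, kernel_size=9, patch_size=7, n_classes=2):
    block = lambda: nn.Sequential(
        # depthwise conv mixes spatial locations (residual connection)
        Residual(nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
            nn.GELU(), nn.BatchNorm2d(dim))),
        # pointwise (1x1) conv mixes channels
        nn.Conv2d(dim, dim, kernel_size=1), nn.GELU(), nn.BatchNorm2d(dim))
    return nn.Sequential(
        nn.Conv2d(3, dim, patch_size, stride=patch_size),   # patch embedding
        nn.GELU(), nn.BatchNorm2d(dim),
        *[block() for _ in range(depth)],
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(dim, n_classes))

model = conv_mixer()
logits = model(torch.randn(1, 3, 224, 224))  # e.g. a resized chest X-ray
print(logits.shape)                           # torch.Size([1, 2])
```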
Research on Real-time Detection of Stacked Objects Based on Deep Learning | [
"Kaiguo Geng",
"Jinwei Qiao",
"Na Liu",
"Zhi Yang",
"Rongmin Zhang",
"Huiling Li"
] | Deep Learning has garnered significant attention in the field of object detection and is widely used in both industry and everyday life. The objective of this study is to investigate the applicability and targeted improvements of Deep Learning-based object detection in complex stacked environments. We analyzed the limitations in practical applications under such conditions, pinpointed the specific problems, and proposed corresponding improvement strategies. First, the study provided an overview of recent advancements in mainstream one-stage object detection algorithms, which included Anchor-based, Anchor-free, and Transformer-based architectures. The high real-time performance of these algorithms holds particular significance in practical engineering applications. It then looked at relevant technologies in three emerging research areas: Parts Recognition, Intelligent Driving, and Agricultural Picking. The study summarized existing limitations in real-time object detection within complex stacked environments and provided a comprehensive analysis of prevalent improvement strategies such as multi-level feature fusion, knowledge distillation, and hyperparameter optimization. Finally, after analyzing the performance of recent advanced one-stage algorithms on official datasets, this paper conducted empirical tests on a self-constructed industrial stacked dataset with algorithms of different structure and analyzed the experimental results in detail. A comprehensive analysis shows that Deep Learning-based object detection algorithms offer extensive applicability in complex stacked environments. In addressing diverse target sizes, overlapping occlusions, real-time constraints, and the need for lightweight solutions in complex stacked environments, each improvement strategy has its own advantages and limitations. Selecting and integrating appropriate enhancement strategies is critical and typically requires holistic evaluation, tailored to specific application contexts and challenges. | 10.1007/s10846-023-02009-8 | research on real-time detection of stacked objects based on deep learning | deep learning has garnered significant attention in the field of object detection and is widely used in both industry and everyday life. the objective of this study is to investigate the applicability and targeted improvements of deep learning-based object detection in complex stacked environments. we analyzed the limitations in practical applications under such conditions, pinpointed the specific problems, and proposed corresponding improvement strategies. first, the study provided an overview of recent advancements in mainstream one-stage object detection algorithms, which included anchor-based, anchor-free, and transformer-based architectures. the high real-time performance of these algorithms holds particular significance in practical engineering applications. it then looked at relevant technologies in three emerging research areas: parts recognition, intelligent driving, and agricultural picking. the study summarized existing limitations in real-time object detection within complex stacked environments and provided a comprehensive analysis of prevalent improvement strategies such as multi-level feature fusion, knowledge distillation, and hyperparameter optimization. 
finally, after analyzing the performance of recent advanced one-stage algorithms on official datasets, this paper conducted empirical tests on a self-constructed industrial stacked dataset with algorithms of different structure and analyzed the experimental results in detail. a comprehensive analysis shows that deep learning-based object detection algorithms offer extensive applicability in complex stacked environments. in addressing diverse target sizes, overlapping occlusions, real-time constraints, and the need for lightweight solutions in complex stacked environments, each improvement strategy has its own advantages and limitations. selecting and integrating appropriate enhancement strategies is critical and typically requires holistic evaluation, tailored to specific application contexts and challenges. | [
"deep learning",
"significant attention",
"the field",
"object detection",
"both industry",
"everyday life",
"the objective",
"this study",
"the applicability",
"targeted improvements",
"deep learning-based object detection",
"complex stacked environments",
"we",
"the limitations",
"practical applications",
"such conditions",
"the specific problems",
"corresponding improvement strategies",
"the study",
"an overview",
"recent advancements",
"mainstream one-stage object detection algorithms",
"which",
"anchor-based, anchor-free, and transformer-based architectures",
"the high real-time performance",
"these algorithms",
"particular significance",
"practical engineering applications",
"it",
"relevant technologies",
"three emerging research areas",
"parts recognition",
"intelligent driving",
"agricultural picking",
"the study",
"existing limitations",
"real-time object detection",
"complex stacked environments",
"a comprehensive analysis",
"prevalent improvement strategies",
"multi-level feature fusion",
"knowledge distillation",
"hyperparameter optimization",
"the performance",
"recent advanced one-stage algorithms",
"official datasets",
"this paper",
"empirical tests",
"a self-constructed industrial stacked dataset",
"algorithms",
"different structure",
"the experimental results",
"detail",
"a comprehensive analysis",
"deep learning-based object detection algorithms",
"extensive applicability",
"complex stacked environments",
"diverse target sizes",
"occlusions",
"real-time constraints",
"lightweight solutions",
"complex stacked environments",
"each improvement strategy",
"its own advantages",
"limitations",
"appropriate enhancement strategies",
"holistic evaluation",
"specific application contexts",
"challenges",
"first",
"one",
"three",
"one"
] |
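Among the improvement strategies surveyed in the record above, knowledge distillation has a compact canonical form; a minimal sketch of the standard soft-label distillation loss (after Hinton et al.) follows. The temperature T and mixing weight alpha are illustrative choices, not values from any surveyed paper.

```python
# Classic knowledge-distillation loss: match the teacher's tempered softmax
# (soft targets) while also fitting the ground-truth labels (hard targets).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # soft targets: KL between tempered distributions, scaled by T^2
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    # hard targets: ordinary cross-entropy on ground-truth labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 20)                 # student logits, 20 object classes
t = torch.randn(8, 20)                 # frozen teacher logits
y = torch.randint(0, 20, (8,))
print(distillation_loss(s, t, y))
```

Distillation lets a lightweight real-time detector inherit accuracy from a heavier teacher, which is why it recurs as a strategy for the stacked-object setting the review discusses.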
Characterization of fault-karst reservoirs based on deep learning and attribute fusion | [
"Zhipeng Gui",
"Junhua Zhang",
"Yintao Zhang",
"Chong Sun"
] | The identification of fault-karst reservoirs is crucial for the exploration and development of fault-controlled oil and gas reservoirs. Traditional methods primarily rely on well logging and seismic attribute analysis for karst cave identification. However, these methods often lack the resolution needed to meet practical demands. Deep learning methods offer promising solutions by effectively overcoming the complex response characteristics of seismic wave fields, owing to their high learning capabilities. Therefore, this research proposes a method for fault-karst reservoir identification. Initially, a comparative analysis between the improved U-Net++ network and traditional deep convolutional networks is conducted to select appropriate training parameters for separate training of karst caves and faults. Subsequently, the trained models are applied to actual seismic data to predict karst caves and faults within the research area, followed by attribute fusion to acquire data on fault-karst reservoirs. The results indicate that: (1) The proposed method effectively identifies karst caves and faults, outperforming traditional seismic attribute and coherence methods in terms of identification accuracy, and slightly surpassing U-Net and FCN; (2) The fusion of predicted karst caves and faults yields clear delineation of the relationship between top karst caves and bottom fractures within the research area. In summary, the proposed method for fault-karst reservoir identification and characterization provides valuable insights for the exploration and development of fault-controlled oil and gas reservoirs in the region. | 10.1007/s11600-024-01420-5 | characterization of fault-karst reservoirs based on deep learning and attribute fusion | the identification of fault-karst reservoirs is crucial for the exploration and development of fault-controlled oil and gas reservoirs. traditional methods primarily rely on well logging and seismic attribute analysis for karst cave identification. however, these methods often lack the resolution needed to meet practical demands. deep learning methods offer promising solutions by effectively overcoming the complex response characteristics of seismic wave fields, owing to their high learning capabilities. therefore, this research proposes a method for fault-karst reservoir identification. initially, a comparative analysis between the improved u-net++ network and traditional deep convolutional networks is conducted to select appropriate training parameters for separate training of karst caves and faults. subsequently, the trained models are applied to actual seismic data to predict karst caves and faults within the research area, followed by attribute fusion to acquire data on fault-karst reservoirs. the results indicate that: (1) the proposed method effectively identifies karst caves and faults, outperforming traditional seismic attribute and coherence methods in terms of identification accuracy, and slightly surpassing u-net and fcn; (2) the fusion of predicted karst caves and faults yields clear delineation of the relationship between top karst caves and bottom fractures within the research area. in summary, the proposed method for fault-karst reservoir identification and characterization provides valuable insights for the exploration and development of fault-controlled oil and gas reservoirs in the region. | [
"the identification",
"fault-karst reservoir",
"the exploration",
"development",
"fault-controlled oil and gas reservoirs",
"traditional methods",
"well logging and seismic attribute analysis",
"karst cave identification",
"these methods",
"the resolution",
"practical demands",
"deep learning methods",
"promising solutions",
"the complex response characteristics",
"seismic wave fields",
"their high learning capabilities",
"this research",
"a method",
"fault-karst reservoir identification",
"a comparative analysis",
"the improved u-net++ network",
"traditional deep convolutional networks",
"appropriate training parameters",
"separate training",
"karst caves",
"faults",
"the trained models",
"actual seismic data",
"karst caves",
"faults",
"the research area",
"attribute fusion",
"data",
"fault-karst reservoirs",
"the results",
"the proposed method",
"karst caves",
"faults",
"traditional seismic attribute and coherence methods",
"terms",
"identification accuracy",
"u",
"-",
"net",
"fcn",
"(2) the fusion",
"predicted karst caves",
"faults",
"clear delineation",
"the relationship",
"top karst caves",
"bottom fractures",
"the research area",
"summary",
"the proposed method",
"fault-karst reservoirs identification",
"characterization",
"valuable insights",
"the exploration",
"development",
"fault-controlled oil and gas reservoirs",
"the region",
"1",
"2"
] |
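The abstract above fuses separately predicted karst-cave and fault probability volumes into one fault-karst attribute, but does not state the fusion rule; this sketch assumes a simple per-voxel weighted blend that keeps the stronger evidence, with the weights as illustrative parameters.

```python
# Attribute-fusion sketch: blend two [0, 1] probability cubes (caves, faults)
# into one fault-karst attribute cube. The weighted-max rule is an assumption.
import numpy as np

def fuse_attributes(p_cave, p_fault, w_cave=0.6, w_fault=0.4):
    """Per-voxel fusion of two probability volumes of identical shape."""
    assert p_cave.shape == p_fault.shape
    weighted = np.stack([w_cave * p_cave, w_fault * p_fault])
    return weighted.max(axis=0)       # keep the strongest attribute per voxel

# two toy 3D attribute cubes (inline x crossline x time samples)
rng = np.random.default_rng(0)
p_cave, p_fault = rng.random((2, 32, 32, 64))
fused = fuse_attributes(p_cave, p_fault)
print(fused.shape, fused.min() >= 0.0, fused.max() <= 1.0)
```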
Crop disease detection via ensembled-deep-learning paradigm and ABC Coyote pack optimization algorithm (ABC-CPOA) | [
"M. Chithambarathanu",
"M. K. Jeyakumar"
] | Crop disease is a significant issue that affects the growth and yield of crops, leading to financial loss for farmers. Identification and treatment of crop diseases have become challenging due to the increase in the variety of diseases and the lack of knowledge among farmers. To address this issue, this study uses an ensembled-deep-learning paradigm to propose a deep learning-based model for crop disease identification trained with an ABC-CPOA. Initially, the collected raw images are pre-processed via a bilateral filter and gamma correction. Feature extraction: from the pre-processed images, features are extracted, including texture features (Local Quinary Pattern (LQP), Local Gradient Pattern (LGP), Enriched Local Binary Pattern (E-LBP)), color features (Color Histogram, Color Moments), and shape features (contour-based features, convex hull). Optimal feature selection: among the extracted features, the optimal features are selected by means of a self-improved meta-heuristic optimization model referred to as ABC-CPOA. This ABC-CPOA model is an extended version of the standard Coyote Optimization Algorithm (COA). The crop disease detection phase is modelled with a new ensembled-deep-learning paradigm, which comprises an Attention-based Bi-LSTM, Recurrent Neural Networks (RNNs), and an Optimized Deep Neural Network (O-DNN). The weight function of the O-DNN is fine-tuned using the new ABC-CPOA. Precision, recall, sensitivity, and specificity, in addition to TPR, FPR, FNR, TNR, F1-score, and accuracy, are used to assess the suggested approach. The implementation was performed with the MATLAB tool (version 2022b). | 10.1007/s11042-024-19329-y | crop disease detection via ensembled-deep-learning paradigm and abc coyote pack optimization algorithm (abc-cpoa) | crop disease is a significant issue that affects the growth and yield of crops, leading to financial loss for farmers. identification and treatment of crop diseases have become challenging due to the increase in the variety of diseases and the lack of knowledge among farmers. to address this issue, this study uses an ensembled-deep-learning paradigm to propose a deep learning-based model for crop disease identification trained with an abc-cpoa. initially, the collected raw images are pre-processed via a bilateral filter and gamma correction. feature extraction: from the pre-processed images, features are extracted, including texture features (local quinary pattern (lqp), local gradient pattern (lgp), enriched local binary pattern (e-lbp)), color features (color histogram, color moments), and shape features (contour-based features, convex hull). optimal feature selection: among the extracted features, the optimal features are selected by means of a self-improved meta-heuristic optimization model referred to as abc-cpoa. this abc-cpoa model is an extended version of the standard coyote optimization algorithm (coa). the crop disease detection phase is modelled with a new ensembled-deep-learning paradigm, which comprises an attention-based bi-lstm, recurrent neural networks (rnns), and an optimized deep neural network (o-dnn). the weight function of the o-dnn is fine-tuned using the new abc-cpoa. precision, recall, sensitivity, and specificity, in addition to tpr, fpr, fnr, tnr, f1-score, and accuracy, are used to assess the suggested approach. the implementation was performed with the matlab tool (version 2022b). | [
"crop disease",
"a significant issue",
"that",
"the growth",
"yield",
"crops",
"financial loss",
"farmers",
"identification",
"treatment",
"crop diseases",
"the increase",
"the variety",
"diseases",
"the lack",
"knowledge",
"farmers",
"this issue",
"this investigate",
"an ensembled-deep-learning paradigm",
"a deep learning-based model",
"crop disease identification",
"an abc-cpoa",
"raw images",
"bilateral filter",
"gamma correction",
"feature extraction",
"the pre-processed images",
"texture feature",
"local quinary pattern",
"lqp",
"local gradient pattern",
"lgp",
"enriched local binary pattern",
"e",
"-",
"lbp",
"color features",
"color histogram",
"color moments",
"shape features",
"(contour-based features",
"convex hull",
"optimal feature",
"the extracted features",
"the optimal features",
"means",
"a self-improved meta-heuristic optimization model",
"abc-cpoa",
"this abc-cpoa model",
"an extended version",
"standard coyote optimization algorithm",
"coa",
"crop disease detection phase",
"a new ensembled-deep-learning paradigm",
"ensembled-deep-learning paradigm",
"attention-based bi",
"-",
"lstm",
"recurrent neural networks",
"rnns",
"deep neural network",
"o",
"dnn",
"the weight function",
"o",
"-",
"dnn",
"the new abc-cpoa",
"precision",
"recall",
"sensitivity",
"specificity",
"addition",
"tpr",
"fpr",
"fnr",
"tnr",
"f1-score",
"accuracy",
"the suggested approach",
"the implementation",
"the matlab tool",
"version",
"abc",
"abc",
"abc",
"abc"
] |
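The pipeline above extracts LBP-family texture descriptors (E-LBP, LQP); as a reference point, here is the plain 8-neighbour local binary pattern that those enriched variants build on. The toy image and the 256-bin histogram are illustrative, not the paper's configuration.

```python
# Plain 8-neighbour LBP: each interior pixel gets an 8-bit code comparing it
# to its neighbours; the code histogram is the texture feature vector.
import numpy as np

def lbp_8(img):
    """Basic LBP code for each interior pixel of a 2D grayscale image."""
    c = img[1:-1, 1:-1]
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nbr = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= ((nbr >= c).astype(np.uint8) << bit)
    return code

img = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.uint8)
codes = lbp_8(img)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))  # texture descriptor
print(hist.shape)  # (256,)
```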
An interpretable deep learning framework for genome-informed precision oncology | [
"Shuangxia Ren",
"Gregory F. Cooper",
"Lujia Chen",
"Xinghua Lu"
] | Cancers result from aberrations in cellular signalling systems, typically arising from driver somatic genome alterations (SGAs) in individual tumours. Precision oncology requires understanding the cellular state and selecting medications that induce vulnerability in cancer cells under such conditions. To this end, we developed a computational framework consisting of two components: (1) a representation-learning component, which learns a representation of the cellular signalling systems when perturbed by SGAs and uses a biologically motivated and interpretable deep learning model, and (2) a drug-response prediction component, which predicts drug responses by leveraging the information of the cellular state of the cancer cells derived by the first component. Our cell-state-oriented framework notably improves the accuracy of predictions of drug responses compared to models using SGAs directly in cell lines. Moreover, our model performs well with real patient data. Importantly, our framework enables the prediction of responses to chemotherapy agents based on SGAs, thus expanding genome-informed precision oncology beyond molecularly targeted drugs. | 10.1038/s42256-024-00866-y | an interpretable deep learning framework for genome-informed precision oncology | cancers result from aberrations in cellular signalling systems, typically arising from driver somatic genome alterations (sgas) in individual tumours. precision oncology requires understanding the cellular state and selecting medications that induce vulnerability in cancer cells under such conditions. to this end, we developed a computational framework consisting of two components: (1) a representation-learning component, which learns a representation of the cellular signalling systems when perturbed by sgas and uses a biologically motivated and interpretable deep learning model, and (2) a drug-response prediction component, which predicts drug responses by leveraging the information of the cellular state of the cancer cells derived by the first component. our cell-state-oriented framework notably improves the accuracy of predictions of drug responses compared to models using sgas directly in cell lines. moreover, our model performs well with real patient data. importantly, our framework enables the prediction of responses to chemotherapy agents based on sgas, thus expanding genome-informed precision oncology beyond molecularly targeted drugs. | [
"cancers",
"aberrations",
"cellular signalling systems",
"driver somatic genome alterations",
"sgas",
"individual tumours",
"precision oncology",
"the cellular state and selecting medications",
"that",
"vulnerability",
"cancer cells",
"such conditions",
"this end",
"we",
"a computational framework",
"two components",
"a representation-learning component",
"which",
"a representation",
"the cellular signalling systems",
"sgas",
"a biologically motivated and interpretable deep learning model",
"(2) a drug-response prediction component",
"which",
"drug responses",
"the information",
"the cellular state",
"the cancer cells",
"the first component",
"our cell-state-oriented framework",
"the accuracy",
"predictions",
"drug responses",
"models",
"sgas",
"cell lines",
"our model",
"real patient data",
"our framework",
"the prediction",
"responses",
"chemotherapy agents",
"sgas",
"genome-informed precision oncology",
"molecularly targeted drugs",
"two",
"1",
"2",
"first"
] |
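A minimal sketch of the two-component data flow described above: an encoder maps a binary SGA profile to a latent "cellular state", and a response head predicts per-drug responses from that state. The dense layers and sizes here are stand-ins; the paper's encoder is a biologically structured, interpretable network, not a generic MLP.

```python
# Two-component framework sketch (sizes and dense layers are assumptions):
# component 1 encodes SGAs into a cellular state; component 2 predicts drug
# responses from that state.
import torch
import torch.nn as nn

n_genes, n_latent, n_drugs = 1000, 64, 24

encoder = nn.Sequential(                      # component 1: SGAs -> state
    nn.Linear(n_genes, 256), nn.ReLU(),
    nn.Linear(256, n_latent), nn.ReLU())
response_head = nn.Linear(n_latent, n_drugs)  # component 2: state -> responses

sga = (torch.rand(8, n_genes) < 0.05).float() # 8 tumours, sparse alterations
state = encoder(sga)
drug_logits = response_head(state)            # predicted response per drug
print(drug_logits.shape)                      # torch.Size([8, 24])
```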
Development of a deep learning model for cancer diagnosis by inspecting cell-free DNA end-motifs | [
"Hongru Shen",
"Meng Yang",
"Jilei Liu",
"Kexin Chen",
"Xiangchun Li"
] | Accurate discrimination between patients with and without cancer from cfDNA is crucial for early cancer diagnosis. Herein, we develop and validate a deep-learning-based model entitled end-motif inspection via transformer (EMIT) for discriminating individuals with and without cancer by learning feature representations from cfDNA end-motifs. EMIT is a self-supervised learning approach that models rankings of cfDNA end-motifs. We include 4606 samples subjected to different types of cfDNA sequencing to develop EMIT, and subsequently evaluate classification performance of linear projections of EMIT on six datasets and an additional in-house testing set encompassing whole-genome, whole-genome bisulfite and 5-hydroxymethylcytosine sequencing. The linear projection of representations from EMIT achieved area under the receiver operating curve (AUROC) values ranging from 0.895 (0.835–0.955) to 0.996 (0.994–0.997) across these six datasets, outperforming its baseline by significant margins. Additionally, we showed that linear projection of EMIT representations can achieve an AUROC of 0.962 (0.914–1.0) in identification of lung cancer on an independent testing set subjected to whole-exome sequencing. The findings of this study indicate that a transformer-based deep learning model can learn cancer-discriminative representations from cfDNA end-motifs. The representations of this deep learning model can be exploited for discriminating patients with and without cancer. | 10.1038/s41698-024-00635-5 | development of a deep learning model for cancer diagnosis by inspecting cell-free dna end-motifs | accurate discrimination between patients with and without cancer from cfdna is crucial for early cancer diagnosis. herein, we develop and validate a deep-learning-based model entitled end-motif inspection via transformer (emit) for discriminating individuals with and without cancer by learning feature representations from cfdna end-motifs. emit is a self-supervised learning approach that models rankings of cfdna end-motifs. we include 4606 samples subjected to different types of cfdna sequencing to develop emit, and subsequently evaluate classification performance of linear projections of emit on six datasets and an additional in-house testing set encompassing whole-genome, whole-genome bisulfite and 5-hydroxymethylcytosine sequencing. the linear projection of representations from emit achieved area under the receiver operating curve (auroc) values ranging from 0.895 (0.835–0.955) to 0.996 (0.994–0.997) across these six datasets, outperforming its baseline by significant margins. additionally, we showed that linear projection of emit representations can achieve an auroc of 0.962 (0.914–1.0) in identification of lung cancer on an independent testing set subjected to whole-exome sequencing. the findings of this study indicate that a transformer-based deep learning model can learn cancer-discriminative representations from cfdna end-motifs. the representations of this deep learning model can be exploited for discriminating patients with and without cancer. | [
"accurate discrimination",
"patients",
"cancer",
"cfdna",
"early cancer diagnosis",
"we",
"a deep-learning-based model entitled end-motif inspection",
"transformer",
"individuals",
"cancer",
"feature representations",
"cfdna end-motifs",
"emit",
"a self-supervised learning approach",
"that",
"cfdna end-motifs",
"we",
"4606 samples",
"different types",
"cfdna",
"eimit",
"classification performance",
"linear projections",
"six datasets",
"an additional inhouse testing",
"whole-genome, whole-genome bisulfite",
"5-hydroxymethylcytosine",
"the linear projection",
"representations",
"emit",
"area",
"the receiver operating curve (auroc) values",
"(0.835–0.955",
"0.994–0.997",
"these six datasets",
"its baseline",
"significant margins",
"we",
"linear projection",
"emit representations",
"an auroc",
"identification",
"lung cancer",
"an independent testing",
"the findings",
"this study",
"a transformer-based deep learning model",
"cancer-discrimative representations",
"cfdna end-motifs",
"the representations",
"this deep learning model",
"patients",
"cancer",
"emit",
"linear",
"six",
"5",
"0.895",
"0.996",
"six",
"linear",
"0.962"
] |
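EMIT models rankings of cfDNA end-motifs; the sketch below shows the upstream featurization step of counting 4-mer end-motifs (4^4 = 256 in total) and converting the counts into ranks. Treating end-motifs as the first four bases of each fragment's 5' end is a common convention and an assumption here, as is the toy input.

```python
# Count 4-mer cfDNA end-motifs and turn the counts into a rank vector,
# the kind of ranking signal a model like EMIT would consume.
from itertools import product
import numpy as np

MOTIFS = ["".join(p) for p in product("ACGT", repeat=4)]   # 256 end-motifs
INDEX = {m: i for i, m in enumerate(MOTIFS)}

def end_motif_ranks(fragment_5p_ends):
    """fragment_5p_ends: iterable of the first 4 bases of each fragment."""
    counts = np.zeros(len(MOTIFS))
    for motif in fragment_5p_ends:
        if motif in INDEX:                     # skip Ns / malformed reads
            counts[INDEX[motif]] += 1
    # rank 0 = most frequent motif; ties broken arbitrarily
    return np.argsort(np.argsort(-counts))

ends = ["CCCA", "CCCA", "ACGT", "TTTA", "CCCA", "ACGT"]
ranks = end_motif_ranks(ends)
print(ranks[INDEX["CCCA"]], ranks[INDEX["ACGT"]])          # 0 and 1
```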
VResNet: A Deep Learning Architecture for Image Inpainting of Irregular Damaged Images | [
"Sariva Sharma",
"Rajneesh Rani"
] | In computer vision, image inpainting is a well-known problem of automatically reconstructing the damaged part of an image from its undamaged portion. Inpainting irregular damaged areas in an image is still challenging. Deep learning-based techniques have delivered impressive performance over the last few years. In this paper, we propose VResNet, a deep-learning approach for image inpainting, inspired by the U-Net architecture and the residual framework. Since deeper neural networks are harder to train, the plain convolution block in the U-Net architecture is replaced by a residual learning block in the proposed approach to simplify the training of deeper neural networks. To develop an effective and adaptable model, an extensive series of experiments was conducted using the Paris-Street-View dataset. Our proposed method achieved notable results, including a PSNR of 20.65, an SSIM of 0.65, an L1 loss of 6.90, and a total loss (L_Total) of 0.30 on the Paris-Street-View dataset. These outcomes clearly demonstrate the superior performance of our model when compared to other techniques. The paper presents both qualitative and quantitative comparisons to provide a comprehensive assessment of our approach. | 10.1007/s42979-023-02523-4 | vresnet: a deep learning architecture for image inpainting of irregular damaged images | in computer vision, image inpainting is a well-known problem of automatically reconstructing the damaged part of an image from its undamaged portion. inpainting irregular damaged areas in an image is still challenging. deep learning-based techniques have delivered impressive performance over the last few years. in this paper, we propose vresnet, a deep-learning approach for image inpainting, inspired by the u-net architecture and the residual framework. since deeper neural networks are harder to train, the plain convolution block in the u-net architecture is replaced by a residual learning block in the proposed approach to simplify the training of deeper neural networks. to develop an effective and adaptable model, an extensive series of experiments was conducted using the paris-street-view dataset. our proposed method achieved notable results, including a psnr of 20.65, an ssim of 0.65, an l1 loss of 6.90, and a total loss (l_total) of 0.30 on the paris-street-view dataset. these outcomes clearly demonstrate the superior performance of our model when compared to other techniques. the paper presents both qualitative and quantitative comparisons to provide a comprehensive assessment of our approach. | [
"computer vision",
"a famous problem",
"the damaged part",
"the image",
"the undamaged portion",
"an image",
"irregular damaged areas",
"the image",
"deep learning-based techniques",
"us",
"a fantastic performance",
"the last few years",
"this paper",
"we",
"vresnet",
"a deep-learning approach",
"image",
"u-net architecture",
"the residual framework",
"deeper neural networks",
"the superficial convolution block",
"u-net architecture",
"the residual learning block",
"the proposed approach",
"the training",
"deeper neural networks",
"an effective and adaptable model",
"an extensive series",
"experiments",
"the paris-street-view dataset",
"our proposed method",
"notable results",
"a psnr",
"an ssim",
"an l1 loss",
"a total loss",
"l\\(_{\\hbox {total}}\\",
"the paris-street-view dataset",
"these outcomes",
"the superior performance",
"our model",
"other techniques",
"the paper",
"both qualitative and quantitative comparisons",
"a comprehensive assessment",
"our approach",
"the last few years",
"paris",
"20.65",
"0.65",
"6.90",
"0.30",
"paris"
] |
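The substitution described above, replacing U-Net's plain double-convolution block with a residual block, looks roughly like the sketch below; channel counts and the 1x1 shortcut projection are illustrative, not the paper's exact configuration.

```python
# Residual learning block as a drop-in replacement for U-Net's
# Conv-BN-ReLU x2 block: y = ReLU(F(x) + P(x)), where P projects the skip
# path to the body's channel count when needed.
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch))
        # 1x1 projection so the skip path matches the body's channels
        self.proj = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.proj(x))

x = torch.randn(1, 64, 128, 128)            # a feature map inside the encoder
print(ResidualConvBlock(64, 128)(x).shape)  # torch.Size([1, 128, 128, 128])
```

The skip path is what eases optimization of the deeper network: gradients can flow through the identity/projection branch even when the convolutional body is poorly conditioned early in training.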
Deep Learning-based Pilot Adaptation and Channel Estimation in OFDM Systems | [
"Mohammadamin Shahmohammadi",
"Mohammadali Sebghati",
"Hassan Zareian"
] | Clear communication over wireless channels demands overcoming their disruptive effects. Doubly-selective fading channels, with rapidly changing parameters, pose a particular challenge for accurate channel estimation. Traditional models often falter, lacking solutions or becoming overly complex. This is where deep learning takes center stage. In OFDM systems, pilots assist in estimating the channel response, but they also come at the cost of reduced data throughput. Adaptive adjustment of pilot patterns based on the channel state offers a promising solution. This paper introduces a deep learning-based framework that leverages adaptive pilots for fast-varying channels. We employ two deep neural networks. First, the pilot adaptation network dynamically selects the pilot pattern, reacting to the channel coherence bandwidth. Second, the channel estimation network extracts features from the channel frequency response using a 1D convolutional neural network. It then harnesses the power of long short-term memory layers to learn the channel behavior and estimate the response across all pilots and data subcarriers. Training and testing datasets are generated using WINNER II. The entire communication link, equipped with our proposed method, undergoes rigorous simulations, evaluated by both bit error rate and pilot overhead. The simulation results illustrate that the proposed scheme outperforms the previous methods, because it yields the same or better errors with less pilot overhead. This translates to a substantial data rate boost, paving the way for faster wireless communication. | 10.1007/s11277-024-10937-3 | deep learning-based pilot adaptation and channel estimation in ofdm systems | clear communication over wireless channels demands overcoming their disruptive effects. doubly-selective fading channels, with rapidly changing parameters, pose a particular challenge for accurate channel estimation. traditional models often falter, lacking solutions or becoming overly complex. this is where deep learning takes center stage. in ofdm systems, pilots assist in estimating the channel response, but they also come at the cost of reduced data throughput. adaptive adjustment of pilot patterns based on the channel state offers a promising solution. this paper introduces a deep learning-based framework that leverages adaptive pilots for fast-varying channels. we employ two deep neural networks. first, the pilot adaptation network dynamically selects the pilot pattern, reacting to the channel coherence bandwidth. second, the channel estimation network extracts features from the channel frequency response using a 1d convolutional neural network. it then harnesses the power of long short-term memory layers to learn the channel behavior and estimate the response across all pilots and data subcarriers. training and testing datasets are generated using winner ii. the entire communication link, equipped with our proposed method, undergoes rigorous simulations, evaluated by both bit error rate and pilot overhead. the simulation results illustrate that the proposed scheme outperforms the previous methods, because it yields the same or better errors with less pilot overhead. this translates to a substantial data rate boost, paving the way for faster wireless communication. | [
"clear communication",
"their disruptive effects",
"doubly-selective fading channels",
"rapidly changing parameters",
"a particular challenge",
"accurate channel estimation",
"traditional models",
"solutions",
"this",
"deep learning",
"center stage",
"ofdm systems",
"pilots",
"the channel response",
"they",
"the cost",
"reduced data throughput",
"adaptive adjustment",
"pilot patterns",
"the channel state",
"a promising solution",
"this paper",
"a deep learning-based framework",
"that",
"adaptive pilots",
"fast-varying channels",
"we",
"two deep neural networks",
"the pilot adaptation network",
"the pilot pattern",
"the channel coherence",
"the channel estimation network",
"features",
"the channel frequency response",
"a 1d convolutional neural network",
"it",
"the power",
"long short-term memory layers",
"the channel behavior",
"the response",
"all pilots and data subcarriers",
"training and testing datasets",
"winner ii",
"the entire communication link",
"our proposed method",
"undergoes rigorous simulations",
"both bit error rate",
"the simulation results",
"the proposed scheme",
"the previous methods",
"it",
"the same or better errors",
"this",
"a substantial data rate boost",
"the way",
"faster wireless communication",
"two",
"first",
"second",
"1d"
] |
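A sketch of the estimation network's shape as described above: a 1D CNN extracts features along the subcarrier axis of the pilot response, and LSTM layers model their sequential structure before a per-subcarrier head outputs the channel estimate. Layer sizes, the 64-subcarrier grid, and the use of a bidirectional LSTM are assumptions, not the paper's exact design.

```python
# 1D-CNN + LSTM channel estimator sketch. Input: real/imag parts of the
# pilot-derived response as two channels over subcarriers.
import torch
import torch.nn as nn

class ChannelEstimator(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(             # in: (B, 2, n_subcarriers)
            nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # per-subcarrier (Re, Im)

    def forward(self, pilot_response):
        f = self.cnn(pilot_response)          # (B, 32, n_sub)
        f, _ = self.lstm(f.transpose(1, 2))   # (B, n_sub, 2*hidden)
        return self.head(f)                   # (B, n_sub, 2)

est = ChannelEstimator()
h_hat = est(torch.randn(4, 2, 64))            # 4 OFDM symbols, 64 subcarriers
print(h_hat.shape)                            # torch.Size([4, 64, 2])
```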
Intelligent upper-limb exoskeleton integrated with soft bioelectronics and deep learning for intention-driven augmentation | [
"Jinwoo Lee",
"Kangkyu Kwon",
"Ira Soltis",
"Jared Matthews",
"Yoon Jae Lee",
"Hojoong Kim",
"Lissette Romero",
"Nathan Zavanelli",
"Youngjin Kwon",
"Shinjae Kwon",
"Jimin Lee",
"Yewon Na",
"Sung Hoon Lee",
"Ki Jun Yu",
"Minoru Shinohara",
"Frank L. Hammond",
"Woon-Hong Yeo"
] | The age and stroke-associated decline in musculoskeletal strength degrades the ability to perform daily human tasks using the upper extremities. Here, we introduce an intelligent upper-limb exoskeleton system that utilizes deep learning to predict human intention for strength augmentation. The embedded soft wearable sensors provide sensory feedback by collecting real-time muscle activities, which are simultaneously computed to determine the user’s intended movement. Cloud-based deep learning predicts four upper-limb joint motions with an average accuracy of 96.2% at a 500–550 ms response rate, suggesting that the exoskeleton operates just by human intention. In addition, an array of soft pneumatics assists the intended movements by providing 897 newtons of force while generating a displacement of 87 mm at maximum. The intent-driven exoskeleton can reduce human muscle activities by 3.7 times on average compared to the unassisted exoskeleton. | 10.1038/s41528-024-00297-0 | intelligent upper-limb exoskeleton integrated with soft bioelectronics and deep learning for intention-driven augmentation | the age and stroke-associated decline in musculoskeletal strength degrades the ability to perform daily human tasks using the upper extremities. here, we introduce an intelligent upper-limb exoskeleton system that utilizes deep learning to predict human intention for strength augmentation. the embedded soft wearable sensors provide sensory feedback by collecting real-time muscle activities, which are simultaneously computed to determine the user’s intended movement. cloud-based deep learning predicts four upper-limb joint motions with an average accuracy of 96.2% at a 500–550 ms response rate, suggesting that the exoskeleton operates just by human intention. in addition, an array of soft pneumatics assists the intended movements by providing 897 newtons of force while generating a displacement of 87 mm at maximum. the intent-driven exoskeleton can reduce human muscle activities by 3.7 times on average compared to the unassisted exoskeleton. | [
"the age and stroke-associated decline",
"musculoskeletal strength",
"the ability",
"daily human tasks",
"the upper extremities",
"we",
"an intelligent upper-limb exoskeleton system",
"that",
"deep learning",
"human intention",
"strength augmentation",
"the embedded soft wearable sensors",
"sensory feedback",
"real-time muscle activities",
"which",
"the user’s intended movement",
"cloud-based deep learning",
"four upper-limb joint motions",
"an average accuracy",
"96.2%",
"a 500–550 ms response rate",
"the exoskeleton",
"human intention",
"addition",
"an array",
"soft pneumatics",
"the intended movements",
"897 newtons",
"force",
"a displacement",
"87 mm",
"maximum",
"the intent-driven exoskeleton",
"human muscle activities",
"3.7 times",
"the unassisted exoskeleton",
"daily",
"four",
"96.2%",
"500–550",
"897",
"87 mm",
"3.7"
] |
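The exoskeleton above classifies one of four upper-limb joint motions from muscle activity; below is a generic sketch of that kind of intent pipeline, windowed sEMG RMS features feeding a small classifier. It is not the paper's cloud-based model; the window length, channel count, and dense network are illustrative assumptions.

```python
# Intent-classification sketch: per-window RMS of sEMG channels -> classifier
# over four motion classes.
import torch
import torch.nn as nn

def rms_features(emg, win=100):
    """emg: (B, channels, samples) -> per-window RMS, flattened per trial."""
    b, c, t = emg.shape
    emg = emg[:, :, : (t // win) * win].reshape(b, c, -1, win)
    return emg.pow(2).mean(-1).sqrt().flatten(1)

# 4 channels x 5 windows = 20 features; 4 motion classes (assumed sizes)
clf = nn.Sequential(nn.Linear(4 * 5, 64), nn.ReLU(), nn.Linear(64, 4))

emg = torch.randn(8, 4, 500)           # 8 trials, 4 sEMG channels, 500 samples
probs = clf(rms_features(emg)).softmax(-1)
print(probs.argmax(-1))                # predicted motion class per trial
```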
Deep sampling of gRNA in the human genome and deep-learning-informed prediction of gRNA activities | [
"Heng Zhang",
"Jianfeng Yan",
"Zhike Lu",
"Yangfan Zhou",
"Qingfeng Zhang",
"Tingting Cui",
"Yini Li",
"Hui Chen",
"Lijia Ma"
] | Life science studies involving clustered regularly interspaced short palindromic repeat (CRISPR) editing generally apply the best-performing guide RNA (gRNA) for a gene of interest. Computational models are combined with massive experimental quantification on synthetic gRNA-target libraries to accurately predict gRNA activity and mutational patterns. However, the measurements are inconsistent between studies due to differences in the designs of the gRNA-target pair constructs, and there has not yet been an integrated investigation that concurrently focuses on multiple facets of gRNA capacity. In this study, we analyzed the DNA double-strand break (DSB)-induced repair outcomes and measured SpCas9/gRNA activities at both matched and mismatched locations using 926,476 gRNAs covering 19,111 protein-coding genes and 20,268 non-coding genes. We developed machine learning models to forecast the on-target cleavage efficiency (AIdit_ON), off-target cleavage specificity (AIdit_OFF), and mutational profiles (AIdit_DSB) of SpCas9/gRNA from a uniformly collected and processed dataset by deep sampling and massively quantifying gRNA capabilities in K562 cells. Each of these models exhibited superlative performance in predicting SpCas9/gRNA activities on independent datasets when benchmarked with previous models. A previously unknown parameter was also empirically determined regarding the “sweet spot” in the size of datasets used to establish an effective model to predict gRNA capabilities at a manageable experimental scale. In addition, we observed cell type-specific mutational profiles and were able to link nucleotidylexotransferase as the key factor driving these outcomes. These massive datasets and deep learning algorithms have been implemented into the user-friendly web service http://crispr-aidit.com to evaluate and rank gRNAs for life science studies. | 10.1038/s41421-023-00549-9 | deep sampling of grna in the human genome and deep-learning-informed prediction of grna activities | life science studies involving clustered regularly interspaced short palindromic repeat (crispr) editing generally apply the best-performing guide rna (grna) for a gene of interest. computational models are combined with massive experimental quantification on synthetic grna-target libraries to accurately predict grna activity and mutational patterns. however, the measurements are inconsistent between studies due to differences in the designs of the grna-target pair constructs, and there has not yet been an integrated investigation that concurrently focuses on multiple facets of grna capacity. in this study, we analyzed the dna double-strand break (dsb)-induced repair outcomes and measured spcas9/grna activities at both matched and mismatched locations using 926,476 grnas covering 19,111 protein-coding genes and 20,268 non-coding genes. we developed machine learning models to forecast the on-target cleavage efficiency (aidit_on), off-target cleavage specificity (aidit_off), and mutational profiles (aidit_dsb) of spcas9/grna from a uniformly collected and processed dataset by deep sampling and massively quantifying grna capabilities in k562 cells. each of these models exhibited superlative performance in predicting spcas9/grna activities on independent datasets when benchmarked with previous models. a previously unknown parameter was also empirically determined regarding the “sweet spot” in the size of datasets used to establish an effective model to predict grna capabilities at a manageable experimental scale. 
in addition, we observed cell type-specific mutational profiles and were able to link nucleotidylexotransferase as the key factor driving these outcomes. these massive datasets and deep learning algorithms have been implemented into the user-friendly web service http://crispr-aidit.com to evaluate and rank grnas for life science studies. | [
"life science studies",
"regularly interspaced short palindromic repeat",
"the best-performing guide",
"rna",
"grna",
"a gene",
"interest",
"computational models",
"massive experimental quantification",
"synthetic grna-target libraries",
"grna activity",
"mutational patterns",
"the measurements",
"studies",
"differences",
"the designs",
"the grna-target pair constructs",
"an integrated investigation",
"that",
"multiple facets",
"grna capacity",
"this study",
"we",
"the dna double-strand break",
"dsb)-induced repair outcomes",
"measured spcas9/grna activities",
"both matched and mismatched locations",
"926,476 grnas",
"19,111 protein-coding genes",
"20,268 non-coding genes",
"we",
"machine learning models",
"target",
"aidit_on",
"target",
"aidit_off",
"mutational profiles",
"aidit_dsb",
"spcas9/grna",
"a uniformly collected and processed dataset",
"deep sampling",
"grna capabilities",
"k562 cells",
"each",
"these models",
"superlative performance",
"spcas9/grna activities",
"independent datasets",
"previous models",
"a previous unknown parameter",
"the “sweet spot",
"the size",
"datasets",
"an effective model",
"grna capabilities",
"a manageable experimental scale",
"addition",
"we",
"cell type-specific mutational profiles",
"nucleotidylexotransferase",
"the key factor",
"these outcomes",
"these massive datasets",
"deep learning algorithms",
"the user-friendly web service",
"http://crispr-aidit.com",
"rank grnas",
"life science studies",
"926,476",
"19,111",
"20,268"
] |
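Activity predictors such as AIdit_ON consume encoded gRNA/target sequences; a sketch of the standard one-hot featurization of a 20-nt spacer plus NGG PAM follows. The exact input representation of the AIdit models may differ, and the spacer sequence shown is hypothetical.

```python
# One-hot encode a protospacer + PAM as a (length, 4) matrix, the usual
# input format for sequence-based gRNA activity models.
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """(len(seq), 4) one-hot matrix for an ACGT sequence."""
    m = np.zeros((len(seq), 4), dtype=np.float32)
    for i, b in enumerate(seq.upper()):
        m[i, BASES[b]] = 1.0
    return m

spacer = "GACGTTACCGGTTAACCGGA"        # hypothetical 20-nt protospacer
pam = "TGG"                            # an NGG PAM
x = one_hot(spacer + pam)              # (23, 4) model input
print(x.shape, x.sum())                # (23, 4) 23.0
```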
DEL-Thyroid: deep ensemble learning framework for detection of thyroid cancer progression through genomic mutation | [
"Asghar Ali Shah",
"Ali Daud",
"Amal Bukhari",
"Bader Alshemaimri",
"Muhammad Ahsan",
"Rehmana Younis"
] | Genes, expressed as sequences of nucleotides, are susceptible to mutations, some of which can lead to cancer. Machine learning and deep learning methods have emerged as vital tools in identifying mutations associated with cancer. Thyroid cancer ranks as the 5th most prevalent cancer in the USA, with thousands diagnosed annually. This paper presents an ensemble learning model leveraging deep learning techniques such as Long Short-Term Memory (LSTM), Gated Recurrent Units (GRUs), and Bi-directional LSTM (Bi-LSTM) to detect thyroid cancer mutations early. The model is trained on a dataset sourced from asia.ensembl.org and IntOGen.org, consisting of 633 samples with 969 mutations across 41 genes, collected from individuals of various demographics. Feature extraction encompasses techniques including Hahn moments, central moments, raw moments, and various matrix-based methods. Evaluation employs three testing methods: self-consistency test (SCT), independent set test (IST), and 10-fold cross-validation test (10-FCVT). The proposed ensemble learning model demonstrates promising performance, achieving 96% accuracy in the independent set test (IST). Statistical measures such as training accuracy, testing accuracy, recall, sensitivity, specificity, Mathew's Correlation Coefficient (MCC), loss, training accuracy, F1 Score, and Cohen's kappa are utilized for comprehensive evaluation. | 10.1186/s12911-024-02604-1 | del-thyroid: deep ensemble learning framework for detection of thyroid cancer progression through genomic mutation | genes, expressed as sequences of nucleotides, are susceptible to mutations, some of which can lead to cancer. machine learning and deep learning methods have emerged as vital tools in identifying mutations associated with cancer. thyroid cancer ranks as the 5th most prevalent cancer in the usa, with thousands diagnosed annually. this paper presents an ensemble learning model leveraging deep learning techniques such as long short-term memory (lstm), gated recurrent units (grus), and bi-directional lstm (bi-lstm) to detect thyroid cancer mutations early. the model is trained on a dataset sourced from asia.ensembl.org and intogen.org, consisting of 633 samples with 969 mutations across 41 genes, collected from individuals of various demographics. feature extraction encompasses techniques including hahn moments, central moments, raw moments, and various matrix-based methods. evaluation employs three testing methods: self-consistency test (sct), independent set test (ist), and 10-fold cross-validation test (10-fcvt). the proposed ensemble learning model demonstrates promising performance, achieving 96% accuracy in the independent set test (ist). statistical measures such as training accuracy, testing accuracy, recall, sensitivity, specificity, mathew's correlation coefficient (mcc), loss, training accuracy, f1 score, and cohen's kappa are utilized for comprehensive evaluation. | [
"genes",
"sequences",
"nucleotides",
"mutations",
"some",
"which",
"cancer",
"machine learning",
"deep learning methods",
"vital tools",
"mutations",
"cancer",
"thyroid cancer",
"the 5th most prevalent cancer",
"the usa",
"thousands",
"this paper",
"an ensemble learning model",
"deep learning techniques",
"long short-term memory",
"lstm",
"gated recurrent units",
"grus",
"bi-directional lstm",
"bi",
"-",
"lstm",
"thyroid cancer mutations",
"the model",
"a dataset",
"intogen.org",
"633 samples",
"969 mutations",
"41 genes",
"individuals",
"various demographics",
"feature extraction",
"techniques",
"hahn moments",
"central moments",
"raw moments",
"various matrix-based methods",
"evaluation",
"three testing methods",
"self-consistency test",
"sct",
"independent set test",
"ist",
"10-fold cross-validation test",
"the proposed ensemble learning model",
"performance",
"96% accuracy",
"the independent set test",
"(ist",
"statistical measures",
"training accuracy",
"testing accuracy",
"recall",
"sensitivity",
"specificity",
"mathew's correlation coefficient",
"mcc",
"loss",
"training accuracy",
"f1 score",
"cohen's kappa",
"comprehensive evaluation",
"5th",
"thousands",
"annually",
"633",
"969",
"41",
"three",
"sct",
"10-fold",
"10-fcvt",
"96%",
"cohen"
] |
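A sketch of the ensemble shape described above, with LSTM, GRU, and Bi-LSTM branches over an encoded mutation sequence. The abstract does not state how branch outputs are combined; soft voting by averaging softmax outputs is assumed here, and all layer sizes are illustrative.

```python
# Ensemble of recurrent branches with soft voting (combination rule assumed).
import torch
import torch.nn as nn

class RecurrentBranch(nn.Module):
    def __init__(self, rnn, out_dim, n_classes=2):
        super().__init__()
        self.rnn, self.fc = rnn, nn.Linear(out_dim, n_classes)
    def forward(self, x):
        out, _ = self.rnn(x)
        return self.fc(out[:, -1])     # logits from the last time step

d_in, h = 8, 32
branches = nn.ModuleList([
    RecurrentBranch(nn.LSTM(d_in, h, batch_first=True), h),
    RecurrentBranch(nn.GRU(d_in, h, batch_first=True), h),
    RecurrentBranch(nn.LSTM(d_in, h, batch_first=True, bidirectional=True), 2 * h),
])

x = torch.randn(4, 50, d_in)           # 4 encoded mutation sequences
probs = torch.stack([b(x).softmax(-1) for b in branches]).mean(0)  # soft vote
print(probs.argmax(-1))
```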
Deep learning model for intravascular ultrasound image segmentation with temporal consistency | [
"Hyeonmin Kim",
"June-Goo Lee",
"Gyu-Jun Jeong",
"Geunyoung Lee",
"Hyunseok Min",
"Hyungjoo Cho",
"Daegyu Min",
"Seung-Whan Lee",
"Jun Hwan Cho",
"Sungsoo Cho",
"Soo-Jin Kang"
] | This study was conducted to develop and validate a deep learning model for delineating intravascular ultrasound (IVUS) images of coronary arteries. Using a total of 1240 40-MHz IVUS pullbacks with 191,407 frames, the model for lumen and external elastic membrane (EEM) segmentation was developed. Both frame- and vessel-level performances and clinical impact of the model on 3-year cardiovascular events were evaluated in the independent data sets. In the test set, the Dice similarity coefficients (DSC) were 0.966 ± 0.025 and 0.982 ± 0.017 for the lumen and EEM, respectively. Even at sites of extensive attenuation, the frame-level performance was excellent (DSCs > 0.96 for the lumen and EEM). The model (vs. the expert) showed a better temporal consistency for contouring the EEM. The agreement between the model- vs. the expert-derived cross-sectional and volumetric measurements was excellent in the independent retrospective cohort (all, intra-class coefficients > 0.94). The model-derived percent atheroma volume > 52.5% (area under curve 0.70, sensitivity 71% and specificity 67%) and plaque burden at the minimal lumen area site (area under curve 0.72, sensitivity 72% and specificity 66%) best predicted 3-year cardiac death and nonculprit-related target vessel revascularization, respectively. In the stented segment, the DSCs > 0.96 for contouring lumen and EEM were achieved. Applied to the 60-MHz IVUS images, the DSCs were > 0.97. In the external cohort with 45-MHz IVUS, the DSCs were > 0.96. The deep learning model accurately delineated vascular geometry, which may be cost-saving and support clinical decision-making. | 10.1007/s10554-024-03221-9 | deep learning model for intravascular ultrasound image segmentation with temporal consistency | this study was conducted to develop and validate a deep learning model for delineating intravascular ultrasound (ivus) images of coronary arteries. using a total of 1240 40-mhz ivus pullbacks with 191,407 frames, the model for lumen and external elastic membrane (eem) segmentation was developed. both frame- and vessel-level performances and clinical impact of the model on 3-year cardiovascular events were evaluated in the independent data sets. in the test set, the dice similarity coefficients (dsc) were 0.966 ± 0.025 and 0.982 ± 0.017 for the lumen and eem, respectively. even at sites of extensive attenuation, the frame-level performance was excellent (dscs > 0.96 for the lumen and eem). the model (vs. the expert) showed a better temporal consistency for contouring the eem. the agreement between the model- vs. the expert-derived cross-sectional and volumetric measurements was excellent in the independent retrospective cohort (all, intra-class coefficients > 0.94). the model-derived percent atheroma volume > 52.5% (area under curve 0.70, sensitivity 71% and specificity 67%) and plaque burden at the minimal lumen area site (area under curve 0.72, sensitivity 72% and specificity 66%) best predicted 3-year cardiac death and nonculprit-related target vessel revascularization, respectively. in the stented segment, the dscs > 0.96 for contouring lumen and eem were achieved. applied to the 60-mhz ivus images, the dscs were > 0.97. in the external cohort with 45-mhz ivus, the dscs were > 0.96. the deep learning model accurately delineated vascular geometry, which may be cost-saving and support clinical decision-making. | [
"this study",
"a deep learning model",
"ivus",
"a total",
"1240 40-mhz ivus pullbacks",
"191,407 frames",
"the model",
"lumen",
"external elastic membrane",
"(eem) segmentation",
"both frame- and vessel-level performances",
"clinical impact",
"the model",
"3-year cardiovascular events",
"the independent data sets",
"the test set",
"the dice similarity coefficients",
"0.966 ±",
"0.982 ±",
"the lumen",
"eem",
"sites",
"extensive attenuation",
"the frame-level performance",
"the lumen",
"eem",
"the model",
"the expert",
"a better temporal consistency",
"the eem",
"the agreement",
"the expert-derived cross",
"the independent retrospective cohort",
", intra-class coefficients",
"the model-derived percent atheroma volume",
"52.5%",
"area",
"curve",
"sensitivity",
"specificity",
"plaque burden",
"the minimal lumen area site",
"area",
"curve",
"sensitivity",
"specificity",
"3-year cardiac death",
"nonculprit-related target vessel revascularization",
"the stented segment",
"lumen",
"eem",
"the 60-mhz ivus images",
"the dscs",
"the external cohort",
"45-mhz ivus",
"the dscs",
"the deep learning model",
"vascular geometry",
"which",
"clinical decision-making",
"1240",
"40",
"191,407",
"frame-",
"3-year",
"0.966",
"0.025",
"0.982",
"0.96",
"0.94",
"52.5%",
"0.70",
"71%",
"67%",
"0.72",
"72%",
"66%",
"3-year",
"0.96",
"60",
"0.97",
"45",
"0.96"
] |
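Segmentation quality in the record above is reported as Dice similarity coefficients; for reference, a minimal sketch of the standard DSC computation on binary masks (the toy masks below are illustrative, not IVUS data).

```python
# Dice similarity coefficient for binary segmentation masks:
# DSC = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred, target, eps=1e-7):
    """DSC for two binary masks of identical shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

rng = np.random.default_rng(0)
gt = rng.random((512, 512)) > 0.5      # expert contour as a binary mask
pr = gt.copy()
pr[:8] = ~pr[:8]                       # model mask with a small disagreement
print(round(float(dice(pr, gt)), 4))   # close to, but below, 1.0
```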
CT-based deep learning model for predicting hospital discharge outcome in spontaneous intracerebral hemorrhage | [
"Xianjing Zhao",
"Bijing Zhou",
"Yong Luo",
"Lei Chen",
"Lequn Zhu",
"Shixin Chang",
"Xiangming Fang",
"Zhenwei Yao"
] | Objectives: To predict the functional outcome of patients with intracerebral hemorrhage (ICH) using deep learning models based on computed tomography (CT) images. Methods: A retrospective, bi-center study of ICH patients was conducted. Firstly, a custom 3D convolutional model was built for predicting the functional outcome of ICH patients based on CT scans from randomly selected ICH patients in the H training dataset collected from H hospital. Secondly, clinical data and radiological features were collected at admission and the Extreme Gradient Boosting (XGBoost) algorithm was used to establish a second model, named the XGBoost model. Finally, the Convolution model and XGBoost model were fused to build the third “Fusion model.” Favorable outcome was defined as a modified Rankin Scale score of 0–3 at discharge. The prognostic predictive accuracy of the three models was evaluated using an H test dataset and an external Y dataset, and compared with the performance of ICH score and ICH grading scale (ICH-GS). Results: A total of 604 patients with ICH were included in this study, of which 450 patients were in the H training dataset, 50 patients in the H test dataset, and 104 patients in the Y dataset. In the Y dataset, the areas under the curve (AUCs) of the Convolution model, XGBoost model, and Fusion model were 0.829, 0.871, and 0.905, respectively. The Fusion model prognostic performance exceeded that of ICH score and ICH-GS (p = 0.043 and p = 0.045, respectively). Conclusions: Deep learning models have good accuracy for predicting functional outcome of patients with spontaneous intracerebral hemorrhage. Clinical relevance statement: The proposed deep learning Fusion model may assist clinicians in predicting functional outcome and developing treatment strategies, thereby improving the survival and quality of life of patients with spontaneous intracerebral hemorrhage. Key Points: • Integrating clinical presentations, CT images, and radiological features to establish a deep learning model for functional outcome prediction of patients with intracerebral hemorrhage. • Deep learning applied to CT images provides great help in prognosing functional outcome of intracerebral hemorrhage patients. • The developed deep learning model performs better than clinical prognostic scores in predicting functional outcome of patients with intracerebral hemorrhage. | 10.1007/s00330-023-10505-6 | ct-based deep learning model for predicting hospital discharge outcome in spontaneous intracerebral hemorrhage | objectives: to predict the functional outcome of patients with intracerebral hemorrhage (ich) using deep learning models based on computed tomography (ct) images. methods: a retrospective, bi-center study of ich patients was conducted. firstly, a custom 3d convolutional model was built for predicting the functional outcome of ich patients based on ct scans from randomly selected ich patients in the h training dataset collected from h hospital. secondly, clinical data and radiological features were collected at admission and the extreme gradient boosting (xgboost) algorithm was used to establish a second model, named the xgboost model. finally, the convolution model and xgboost model were fused to build the third “fusion model.” favorable outcome was defined as a modified rankin scale score of 0–3 at discharge. 
the prognostic predictive accuracy of the three models was evaluated using an h test dataset and an external y dataset, and compared with the performance of ich score and ich grading scale (ich-gs). results: a total of 604 patients with ich were included in this study, of which 450 patients were in the h training dataset, 50 patients in the h test dataset, and 104 patients in the y dataset. in the y dataset, the areas under the curve (aucs) of the convolution model, xgboost model, and fusion model were 0.829, 0.871, and 0.905, respectively. the fusion model prognostic performance exceeded that of ich score and ich-gs (p = 0.043 and p = 0.045, respectively). conclusions: deep learning models have good accuracy for predicting functional outcome of patients with spontaneous intracerebral hemorrhage. clinical relevance statement: the proposed deep learning fusion model may assist clinicians in predicting functional outcome and developing treatment strategies, thereby improving the survival and quality of life of patients with spontaneous intracerebral hemorrhage. key points: • integrating clinical presentations, ct images, and radiological features to establish a deep learning model for functional outcome prediction of patients with intracerebral hemorrhage. • deep learning applied to ct images provides great help in prognosing functional outcome of intracerebral hemorrhage patients. • the developed deep learning model performs better than clinical prognostic scores in predicting functional outcome of patients with intracerebral hemorrhage. | [
"objectivesto",
"the functional outcome",
"patients",
"intracerebral hemorrhage",
"ich",
"deep learning models",
"computed tomography",
"(ct",
"images.methodsa retrospective, bi-center study",
"ich patients",
"a custom 3d convolutional model",
"the functional outcome",
"ich patients",
"ct scans",
"randomly selected ich patients",
"h training dataset",
"h hospital",
"clinical data",
"radiological features",
"admission",
"the extreme gradient",
"algorithm",
"a second model",
"the xgboost model",
"the convolution model",
"xgboost model",
"the third “fusion model",
"favorable outcome",
"modified rankin scale score",
"0–3",
"discharge",
"the prognostic predictive accuracy",
"the three models",
"an h test dataset",
"an external y dataset",
"the performance",
"ich score",
"ich grading scale",
"ich-gs).resultsa total",
"604 patients",
"ich",
"this study",
"which",
"450 patients",
"the h training dataset",
"the h test dataset",
"104 patients",
"the y dataset",
"the y dataset",
"the areas",
"the curve",
"aucs",
"the convolution model",
"xgboost model",
"fusion model",
"the fusion model prognostic performance",
"ich score",
"ich-gs",
"respectively).conclusionsdeep learning models",
"good accuracy",
"functional outcome",
"patients",
"spontaneous intracerebral hemorrhage.clinical relevance statementthe proposed deep learning fusion model",
"clinicians",
"functional outcome",
"treatment strategies",
"the survival",
"quality",
"life",
"patients",
"clinical presentations",
"ct images",
"radiological features",
"deep learning model",
"functional outcome prediction",
"patients",
"intracerebral hemorrhage.• deep learning",
"ct images",
"great help",
"functional outcome",
"intracerebral hemorrhage patients.•",
"the developed deep learning model",
"clinical prognostic scores",
"functional outcome",
"patients",
"intracerebral hemorrhage",
"images.methodsa",
"firstly",
"3d",
"secondly",
"second",
"third",
"three",
"604",
"450",
"50",
"104",
"0.829",
"0.871",
"0.905",
"0.043",
"0.045",
"clinicians"
] |
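The record above fuses a 3D convolutional model with an XGBoost model at the probability level and reports AUCs. Below is a minimal sketch of that late-fusion idea, assuming a weighted average; the paper does not state its fusion rule, so `fuse_probabilities`, the weight `w`, and the toy data are all illustrative assumptions.

```python
# Hypothetical late-fusion sketch: combine a 3D-CNN probability with an
# XGBoost probability per patient and score the result with AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def fuse_probabilities(p_cnn, p_xgb, w=0.5):
    """Weighted average of two per-patient favorable-outcome probabilities."""
    return w * np.asarray(p_cnn) + (1.0 - w) * np.asarray(p_xgb)

# Toy stand-in for an external test set (label 1 = favorable outcome, mRS 0-3).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=104)
p_cnn = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, size=104), 0.0, 1.0)
p_xgb = np.clip(y_true * 0.7 + rng.normal(0.2, 0.2, size=104), 0.0, 1.0)

p_fused = fuse_probabilities(p_cnn, p_xgb, w=0.4)
for name, p in [("Convolution", p_cnn), ("XGBoost", p_xgb), ("Fusion", p_fused)]:
    print(f"{name:12s} AUC = {roc_auc_score(y_true, p):.3f}")
```

Probability-level fusion keeps the two models fully decoupled, which is one plausible way to combine an imaging model with a tabular clinical model.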
Deep Learning for Perfusion Cerebral Blood Flow (CBF) and Volume (CBV) Predictions and Diagnostics | [
"Salmonn Talebi",
"Siyu Gai",
"Aaron Sossin",
"Vivian Zhu",
"Elizabeth Tong",
"Mohammad R. K. Mofrad"
] | Dynamic susceptibility contrast magnetic resonance perfusion (DSC-MRP) is a non-invasive imaging technique for hemodynamic measurements. Various perfusion parameters, such as cerebral blood volume (CBV) and cerebral blood flow (CBF), can be derived from DSC-MRP, hence this non-invasive imaging protocol is widely used clinically for the diagnosis and assessment of intracranial pathologies. Currently, most institutions use commercially available software to compute the perfusion parametric maps. However, these conventional methods often have limitations, such as being time-consuming and sensitive to user input, which can lead to inconsistent results; this highlights the need for a more robust and efficient approach like deep learning. Using the relative cerebral blood volume (rCBV) and relative cerebral blood flow (rCBF) perfusion maps generated by FDA-approved software, we trained a multistage deep learning model. The model, featuring a combination of a 1D convolutional neural network (CNN) and a 2D U-Net encoder-decoder network, processes each 4D MRP dataset by integrating temporal and spatial features of the brain for voxel-wise perfusion parameters prediction. An auxiliary model, with similar architecture, but trained with truncated datasets that had fewer time-points, was designed to explore the contribution of temporal features. Both qualitatively and quantitatively evaluated, deep learning-generated rCBV and rCBF maps showcased effective integration of temporal and spatial data, producing comprehensive predictions for the entire brain volume. Our deep learning model provides a robust and efficient approach for calculating perfusion parameters, demonstrating comparable performance to FDA-approved commercial software, and potentially mitigating the challenges inherent to traditional techniques. | 10.1007/s10439-024-03471-7 | deep learning for perfusion cerebral blood flow (cbf) and volume (cbv) predictions and diagnostics | dynamic susceptibility contrast magnetic resonance perfusion (dsc-mrp) is a non-invasive imaging technique for hemodynamic measurements. various perfusion parameters, such as cerebral blood volume (cbv) and cerebral blood flow (cbf), can be derived from dsc-mrp, hence this non-invasive imaging protocol is widely used clinically for the diagnosis and assessment of intracranial pathologies. currently, most institutions use commercially available software to compute the perfusion parametric maps. however, these conventional methods often have limitations, such as being time-consuming and sensitive to user input, which can lead to inconsistent results; this highlights the need for a more robust and efficient approach like deep learning. using the relative cerebral blood volume (rcbv) and relative cerebral blood flow (rcbf) perfusion maps generated by fda-approved software, we trained a multistage deep learning model. the model, featuring a combination of a 1d convolutional neural network (cnn) and a 2d u-net encoder-decoder network, processes each 4d mrp dataset by integrating temporal and spatial features of the brain for voxel-wise perfusion parameters prediction. an auxiliary model, with similar architecture, but trained with truncated datasets that had fewer time-points, was designed to explore the contribution of temporal features. both qualitatively and quantitatively evaluated, deep learning-generated rcbv and rcbf maps showcased effective integration of temporal and spatial data, producing comprehensive predictions for the entire brain volume. 
our deep learning model provides a robust and efficient approach for calculating perfusion parameters, demonstrating comparable performance to fda-approved commercial software, and potentially mitigating the challenges inherent to traditional techniques. | [
"dynamic susceptibility contrast magnetic resonance perfusion",
"dsc-mrp",
"a non-invasive imaging technique",
"hemodynamic measurements",
"various perfusion parameters",
"cerebral blood volume",
"cbv",
"cerebral blood flow",
"cbf",
"dsc-mrp",
"this non-invasive imaging protocol",
"the diagnosis",
"assessment",
"intracranial pathologies",
"most institutions",
"commercially available software",
"the perfusion parametric maps",
"these conventional methods",
"limitations",
"user input",
"which",
"inconsistent results",
"this",
"the need",
"a more robust and efficient approach",
"deep learning",
"the relative cerebral blood volume",
"rcbv",
"relative cerebral blood flow",
"(rcbf) perfusion maps",
"fda-approved software",
"we",
"a multistage deep learning model",
"the model",
"a combination",
"a 1d convolutional neural network",
"cnn",
"a 2d u-net encoder-decoder network",
"each 4d mrp dataset",
"temporal and spatial features",
"the brain",
"voxel-wise perfusion parameters prediction",
"an auxiliary model",
"similar architecture",
"truncated datasets",
"that",
"fewer time-points",
"the contribution",
"temporal features",
"both qualitatively and quantitatively evaluated, deep learning-generated rcbv",
"rcbf maps",
"effective integration",
"temporal and spatial data",
"comprehensive predictions",
"the entire brain volume",
"our deep learning model",
"a robust and efficient approach",
"perfusion parameters",
"comparable performance",
"fda-approved commercial software",
"the challenges",
"traditional techniques",
"fda",
"1d",
"cnn",
"2d",
"4d",
"fda"
] |
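The preceding record combines a 1D temporal CNN with a 2D spatial network to map 4D DSC-MRP data to rCBV/rCBF maps. Here is a minimal PyTorch sketch of that two-stage idea with assumed shapes; the plain convolutional `spatial` stage is a stand-in for the 2D U-Net, and all layer widths are illustrative, not the authors' architecture.

```python
# Sketch: a 1D CNN encodes each voxel's bolus-passage curve over time,
# then a small 2D conv decoder maps per-voxel features to rCBV/rCBF maps.
import torch
import torch.nn as nn

class PerfusionNet(nn.Module):
    def __init__(self, feat=16):
        super().__init__()
        self.temporal = nn.Sequential(            # 1D CNN over the time axis
            nn.Conv1d(1, feat, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # one feature vector per voxel
        )
        self.spatial = nn.Sequential(             # stand-in for the 2D U-Net
            nn.Conv2d(feat, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 1),                  # output channels: [rCBV, rCBF]
        )

    def forward(self, x):                         # x: (B, T, H, W)
        b, t, h, w = x.shape
        curves = x.permute(0, 2, 3, 1).reshape(b * h * w, 1, t)
        f = self.temporal(curves).reshape(b, h, w, -1).permute(0, 3, 1, 2)
        return self.spatial(f)                    # (B, 2, H, W)

maps = PerfusionNet()(torch.randn(1, 60, 64, 64))
print(maps.shape)  # torch.Size([1, 2, 64, 64])
```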
Advancing horizons in remote sensing: a comprehensive survey of deep learning models and applications in image classification and beyond | [
"Sidike Paheding",
"Ashraf Saleem",
"Mohammad Faridul Haque Siddiqui",
"Nathir Rawashdeh",
"Almabrok Essa",
"Abel A. Reyes"
] | In recent years, deep learning has significantly reshaped numerous fields and applications, fundamentally altering how we tackle a variety of challenges. Areas such as natural language processing (NLP), computer vision, healthcare, network security, wide-area surveillance, and precision agriculture have leveraged the merits of the deep learning era. Particularly, deep learning has significantly improved the analysis of remote sensing images, with a continuous increase in the number of researchers and contributions to the field. The high impact of deep learning development is complemented by rapid advancements and the availability of data from a variety of sensors, including high-resolution RGB, thermal, LiDAR, and multi-/hyperspectral cameras, as well as emerging sensing platforms such as satellites and aerial vehicles that can be captured by multi-temporal, multi-sensor, and sensing devices with a wider view. This study aims to present an extensive survey that encapsulates widely used deep learning strategies for tackling image classification challenges in remote sensing. It encompasses an exploration of remote sensing imaging platforms, sensor varieties, practical applications, and prospective developments in the field. | 10.1007/s00521-024-10165-7 | advancing horizons in remote sensing: a comprehensive survey of deep learning models and applications in image classification and beyond | in recent years, deep learning has significantly reshaped numerous fields and applications, fundamentally altering how we tackle a variety of challenges. areas such as natural language processing (nlp), computer vision, healthcare, network security, wide-area surveillance, and precision agriculture have leveraged the merits of the deep learning era. particularly, deep learning has significantly improved the analysis of remote sensing images, with a continuous increase in the number of researchers and contributions to the field. the high impact of deep learning development is complemented by rapid advancements and the availability of data from a variety of sensors, including high-resolution rgb, thermal, lidar, and multi-/hyperspectral cameras, as well as emerging sensing platforms such as satellites and aerial vehicles that can be captured by multi-temporal, multi-sensor, and sensing devices with a wider view. this study aims to present an extensive survey that encapsulates widely used deep learning strategies for tackling image classification challenges in remote sensing. it encompasses an exploration of remote sensing imaging platforms, sensor varieties, practical applications, and prospective developments in the field. | [
"recent years",
"deep learning",
"numerous fields",
"applications",
"we",
"a variety",
"challenges",
"areas",
"natural language processing",
"nlp",
"computer vision",
"healthcare",
"network security",
"wide-area surveillance",
"precision agriculture",
"the merits",
"the deep learning era",
"deep learning",
"the analysis",
"remote sensing images",
"a continuous increase",
"the number",
"researchers",
"contributions",
"the field",
"the high impact",
"deep learning development",
"rapid advancements",
"the availability",
"data",
"a variety",
"sensors",
"high-resolution rgb, thermal, lidar, and multi-/hyperspectral cameras, as well as emerging sensing platforms",
"satellites",
"aerial vehicles",
"that",
"-",
"sensor",
"devices",
"a wider view",
"this study",
"an extensive survey",
"that",
"widely used deep learning strategies",
"image classification challenges",
"remote sensing",
"it",
"an exploration",
"remote sensing imaging platforms",
"sensor varieties",
"practical applications",
"prospective developments",
"the field",
"recent years",
"healthcare"
] |
Comparative study of machine learning and deep learning techniques for fault diagnosis in suspension system | [
"P. Arun Balaji",
"V. Sugumaran"
] | Comfort and stability are the prime reasons to own an automobile (car). Suspension system of an automobile plays a major role in providing comfort, stability and control. Over a period of time, internal components in the suspension system exhibit faults due to fatigue and wear. Hence, it is essential to perform fault diagnosis such that the performance of the suspension components is restored. However, high instrumentation cost, skilled labor requirement and expertise in the particular field of study are certain drawbacks of traditional fault diagnosis techniques. Such challenges have made industrialists and the research communities look for advanced fault diagnosis techniques. Advancements in machine learning and deep learning techniques can be used to fulfill the need of a high degree intelligent fault diagnosis system. In the current study, the performance of machine learning (ML) classifiers are compared with the performance of deep learning (DL) models and the best performing model among them is adopted to detect faults in the automobile suspension system. A total of eight test conditions, namely strut external damage, strut mount failure, ball joint worn out, control arm bush worn out, control arm ball joint worn out, strut worn out, low wheel pressure and good condition, were considered in the study. The vibration measurements were acquired for three load conditions. Among all the techniques considered for classification, the pre-trained VGG16 model outperformed other DL and ML models with an overall classification accuracy of 98.10%. | 10.1007/s40430-023-04145-6 | comparative study of machine learning and deep learning techniques for fault diagnosis in suspension system | comfort and stability are the prime reasons to own an automobile (car). suspension system of an automobile plays a major role in providing comfort, stability and control. over a period of time, internal components in the suspension system exhibit faults due to fatigue and wear. hence, it is essential to perform fault diagnosis such that the performance of the suspension components is restored. however, high instrumentation cost, skilled labor requirement and expertise in the particular field of study are certain drawbacks of traditional fault diagnosis techniques. such challenges have made industrialists and the research communities look for advanced fault diagnosis techniques. advancements in machine learning and deep learning techniques can be used to fulfill the need of a high degree intelligent fault diagnosis system. in the current study, the performance of machine learning (ml) classifiers are compared with the performance of deep learning (dl) models and the best performing model among them is adopted to detect faults in the automobile suspension system. a total of eight test conditions, namely strut external damage, strut mount failure, ball joint worn out, control arm bush worn out, control arm ball joint worn out, strut worn out, low wheel pressure and good condition, were considered in the study. the vibration measurements were acquired for three load conditions. among all the techniques considered for classification, the pre-trained vgg16 model outperformed other dl and ml models with an overall classification accuracy of 98.10%. | [
"comfort",
"stability",
"the prime reasons",
"an automobile (car",
"suspension system",
"an automobile",
"a major role",
"comfort",
"stability",
"control",
"a period",
"time",
"the suspension system",
"exhibit faults",
"fatigue",
"it",
"fault diagnosis",
"the performance",
"the suspension components",
"high instrumentation cost",
"skilled labor requirement",
"expertise",
"the particular field",
"study",
"certain drawbacks",
"traditional fault diagnosis techniques",
"such challenges",
"industrialists",
"the research communities",
"advanced fault diagnosis techniques",
"advancements",
"machine learning",
"deep learning techniques",
"the need",
"a high degree intelligent fault diagnosis system",
"the current study",
"the performance",
"machine learning (ml) classifiers",
"the performance",
"deep learning (dl) models",
"the best performing model",
"them",
"faults",
"the automobile suspension system",
"a total",
"eight test conditions",
"namely strut external damage",
"strut mount failure",
"ball joint",
"control arm bush",
"control arm ball joint",
"strut",
"low wheel pressure",
"good condition",
"the study",
"the vibration measurements",
"three load conditions",
"all the techniques",
"classification",
"the pre-trained vgg16 model",
"other dl and ml models",
"an overall classification accuracy",
"98.10%",
"eight",
"strut mount",
"bush",
"three",
"98.10%"
] |
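The suspension-fault record above names a pre-trained VGG16 as its best model over eight test conditions. The following is a hedged transfer-learning sketch of that setup; freezing the convolutional base, the optimizer, and the input size are assumptions, not the authors' stated configuration.

```python
# Sketch: VGG16 pre-trained on ImageNet, re-headed for the eight
# suspension conditions (seven faults plus good condition).
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                      # freeze the convolutional base
model.classifier[6] = nn.Linear(4096, 8)         # new 8-way classification head

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                  # toy batch of vibration images
loss = criterion(model(x), torch.randint(0, 8, (4,)))
loss.backward(); optimizer.step()
print(loss.item())
```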
Deep representation learning and reinforcement learning for workpiece setup optimization in CNC milling | [
"Vladimir Samsonov",
"Enslin Chrismarie",
"Hans-Georg Köpken",
"Schirin Bär",
"Daniel Lütticke",
"Tobias Meisen"
] | Computer Numerical Control (CNC) milling is a commonly used manufacturing process with a high level of automation. Nevertheless, setting up a new CNC milling process involves multiple development steps relying heavily on human expertise. In this work, we focus on positioning and orientation of the workpiece (WP) in the working space of a CNC milling machine and propose a deep learning approach to speed up this process significantly. The selection of the WP’s setup depends on the chosen milling technological process, the geometry of the WP, and the capabilities of the considered CNC machining. It directly impacts the milling quality, machine wear, and overall energy consumption. Our approach relies on representation learning of the milling technological process with the subsequent use of reinforcement learning (RL) for the WP positioning and orientation. Solutions proposed by the RL agent are used as a warm start for simple hill-climbing heuristics, which boosts overall performance while keeping the overall number of search iterations low. The novelty of the developed approach is the ability to conduct the WP setup optimization covering both WP positioning and orientation while ensuring the axis collision avoidance, minimization of the axis traveled distances and improving the dynamic characteristics of the milling process with no input from human experts. Experiments show the potential of the proposed learning-based approach to generate almost comparably good WP setups order of magnitude faster than common metaheuristics, such as genetic algorithms (GA) and Particle Swarm Optimisation (PSA). | 10.1007/s11740-023-01209-3 | deep representation learning and reinforcement learning for workpiece setup optimization in cnc milling | computer numerical control (cnc) milling is a commonly used manufacturing process with a high level of automation. nevertheless, setting up a new cnc milling process involves multiple development steps relying heavily on human expertise. in this work, we focus on positioning and orientation of the workpiece (wp) in the working space of a cnc milling machine and propose a deep learning approach to speed up this process significantly. the selection of the wp’s setup depends on the chosen milling technological process, the geometry of the wp, and the capabilities of the considered cnc machining. it directly impacts the milling quality, machine wear, and overall energy consumption. our approach relies on representation learning of the milling technological process with the subsequent use of reinforcement learning (rl) for the wp positioning and orientation. solutions proposed by the rl agent are used as a warm start for simple hill-climbing heuristics, which boosts overall performance while keeping the overall number of search iterations low. the novelty of the developed approach is the ability to conduct the wp setup optimization covering both wp positioning and orientation while ensuring the axis collision avoidance, minimization of the axis traveled distances and improving the dynamic characteristics of the milling process with no input from human experts. experiments show the potential of the proposed learning-based approach to generate almost comparably good wp setups order of magnitude faster than common metaheuristics, such as genetic algorithms (ga) and particle swarm optimisation (psa). | [
"cnc",
"a commonly used manufacturing process",
"a high level",
"automation",
"a new cnc milling process",
"multiple development steps",
"human expertise",
"this work",
"we",
"positioning",
"orientation",
"the workpiece",
"wp",
"the working space",
"a cnc milling machine",
"a deep learning approach",
"this process",
"the selection",
"the wp’s setup",
"the chosen milling technological process",
"the geometry",
"the wp",
"the capabilities",
"the considered cnc machining",
"it",
"the milling quality",
"machine wear",
"overall energy consumption",
"our approach",
"representation learning",
"the milling technological process",
"the subsequent use",
"reinforcement learning",
"the wp positioning",
"orientation",
"solutions",
"the rl agent",
"a warm start",
"simple hill-climbing heuristics",
"which",
"overall performance",
"the overall number",
"search iterations",
"the novelty",
"the developed approach",
"the ability",
"the wp setup optimization",
"both wp positioning",
"orientation",
"the axis collision avoidance",
"minimization",
"the axis",
"distances",
"the dynamic characteristics",
"the milling process",
"no input",
"human experts",
"experiments",
"the potential",
"the proposed learning-based approach",
"almost comparably good wp setups order",
"magnitude",
"common metaheuristics",
"genetic algorithms",
"ga",
"particle swarm optimisation",
"psa",
"simple hill",
"ga"
] |
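The CNC record above uses the RL agent's proposal as a warm start for simple hill-climbing heuristics. The toy sketch below illustrates that two-stage search; `milling_cost` is a placeholder for the real objective (axis travel, collision avoidance, dynamics) and `rl_proposal` stands in for the agent's output.

```python
# Sketch: refine an RL-proposed workpiece pose (x, y, rotation) by
# random-restart-free hill climbing around the warm start.
import numpy as np

def milling_cost(pose):                 # placeholder objective: lower is better
    x, y, theta = pose
    return (x - 0.3) ** 2 + (y + 0.1) ** 2 + 0.5 * np.sin(theta) ** 2

def hill_climb(pose, step=0.05, iters=200, rng=np.random.default_rng(1)):
    best, best_c = np.array(pose, float), milling_cost(pose)
    for _ in range(iters):
        cand = best + rng.normal(0.0, step, size=3)   # local perturbation
        c = milling_cost(cand)
        if c < best_c:
            best, best_c = cand, c
    return best, best_c

rl_proposal = np.array([0.25, 0.0, 0.1])   # assumed output of the RL agent
pose, cost = hill_climb(rl_proposal)
print(pose.round(3), round(cost, 5))
```

Seeding the local search near a good region is what keeps the number of search iterations low in this scheme.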
Boosting in-transit entertainment: deep reinforcement learning for intelligent multimedia caching in bus networks | [
"Dan Lan",
"Incheol Shin"
] | Multimedia content delivery in advanced networks faces exponential growth in data volumes, rendering existing solutions obsolete. This research investigates deep reinforcement learning (DRL) for autonomous optimization without extensive datasets. The work analyzes two prominent DRL algorithms, i.e., Dueling Deep Q-Network (DDQN) and Deep Q-Network (DQN) for multimedia delivery in simulated bus networks. DDQN utilizes a novel “dueling” architecture to estimate state value and action advantages, accelerating learning separately. DQN employs deep neural networks to approximate optimal policies. The environment simulates urban buses with passenger file requests and cache sizes modeled on actual data. Comparative analysis evaluates cumulative rewards and losses over 1500 training episodes to analyze learning efficiency, stability, and performance. Results demonstrate DDQN’s superior convergence and 32% higher cumulative rewards than DQN. However, DQN showed potential for gains over successive runs despite inconsistencies. It establishes DRL’s promise for automated decision-making while revealing enhancements to improve DQN. Further research should evaluate generalizability across problem domains, investigate hybrid models, and test physical systems. DDQN emerged as the most efficient algorithm, highlighting DRL’s potential to enable intelligent agents that optimize multimedia delivery. | 10.1007/s00500-023-09354-8 | boosting in-transit entertainment: deep reinforcement learning for intelligent multimedia caching in bus networks | multimedia content delivery in advanced networks faces exponential growth in data volumes, rendering existing solutions obsolete. this research investigates deep reinforcement learning (drl) for autonomous optimization without extensive datasets. the work analyzes two prominent drl algorithms, i.e., dueling deep q-network (ddqn) and deep q-network (dqn) for multimedia delivery in simulated bus networks. ddqn utilizes a novel “dueling” architecture to estimate state value and action advantages, accelerating learning separately. dqn employs deep neural networks to approximate optimal policies. the environment simulates urban buses with passenger file requests and cache sizes modeled on actual data. comparative analysis evaluates cumulative rewards and losses over 1500 training episodes to analyze learning efficiency, stability, and performance. results demonstrate ddqn’s superior convergence and 32% higher cumulative rewards than dqn. however, dqn showed potential for gains over successive runs despite inconsistencies. it establishes drl’s promise for automated decision-making while revealing enhancements to improve dqn. further research should evaluate generalizability across problem domains, investigate hybrid models, and test physical systems. ddqn emerged as the most efficient algorithm, highlighting drl’s potential to enable intelligent agents that optimize multimedia delivery. | [
"multimedia content delivery",
"advanced networks",
"exponential growth",
"data volumes",
"this research investigates",
"deep reinforcement learning",
"drl",
"autonomous optimization",
"extensive datasets",
"the work",
"two prominent drl algorithms",
"deep q-network (ddqn",
"deep q-network",
"dqn",
"multimedia delivery",
"simulated bus networks",
"ddqn",
"a novel “dueling” architecture",
"state value",
"action advantages",
"dqn",
"deep neural networks",
"optimal policies",
"the environment",
"urban buses",
"passenger file requests",
"cache sizes",
"actual data",
"comparative analysis",
"cumulative rewards",
"losses",
"1500 training episodes",
"learning efficiency",
"stability",
"performance",
"results",
"ddqn’s superior convergence",
"32% higher cumulative rewards",
"dqn",
"dqn",
"potential",
"gains",
"successive runs",
"inconsistencies",
"it",
"drl’s promise",
"automated decision-making",
"enhancements",
"dqn",
"further research",
"generalizability",
"problem domains",
"hybrid models",
"physical systems",
"ddqn",
"the most efficient algorithm",
"drl’s potential",
"intelligent agents",
"that",
"multimedia delivery",
"two",
"1500",
"32%"
] |
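The caching record above attributes DDQN's edge to its dueling architecture, which splits the Q-function into a state value and action advantages, recombined as Q(s, a) = V(s) + A(s, a) - mean over actions of A(s, ·). A standard PyTorch sketch of that head follows; the layer sizes are assumptions.

```python
# Sketch of a dueling Q-network head: separate value and advantage streams.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, obs):
        h = self.trunk(obs)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)    # Q(s, a)

q = DuelingQNet(obs_dim=10, n_actions=4)(torch.randn(2, 10))
print(q.shape)  # torch.Size([2, 4])
```

Subtracting the advantage mean keeps V and A identifiable, which is what lets the value stream be learned separately and accelerates training.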
A novel deep facenet framework for real-time face detection based on deep learning model | [
"B Lakshmanan",
"A Vaishnavi",
"R Ananthapriya",
"A K Aananthalakshmi"
] | Real-time face detection has many challenges, such as non-frontal faces, tiny faces, occlusions, and multifarious backgrounds. Real-time face detection can be done by using Convolutional Neural Network (CNN) models, which result in elevated performance but have a huge computation time. It needs to be implemented on high-end computational devices to produce more accurate face detection results for high resolution images. We proposed a light architecture based on CNN for deep learning-based feature extraction and detection of human faces. The challenges faced during real-time face detection, such as occlusions, different scales, different backgrounds, varying positions, lighting, and poses, are resolved, and faces are detected accurately using the proposed framework. The amount of computation required for real-time face detection is reduced. This light architecture consists of two modules: the backbone module is used to contract the input size of the image and extract the features; the detection module transforms the feature map between prediction layers and detects faces at various scales. In our architecture, we use mini-inception blocks that minimize the computation cost and are implemented using available low-end system configurations without the need for external hardware. The proposed model uses anchor boxes to predict bounding boxes using dimensional clusters. The model is trained and tested using images from the WIDER Face dataset, which has images of various challenging conditions. Finally, images with multiple faces detected are displayed as output. The proposed work shows an increased accuracy rate with reduced computation cost over state-of-the-art performance on the benchmark dataset. | 10.1007/s12046-023-02329-3 | a novel deep facenet framework for real-time face detection based on deep learning model | real-time face detection has many challenges, such as non-frontal faces, tiny faces, occlusions, and multifarious backgrounds. real-time face detection can be done by using convolutional neural network (cnn) models, which result in elevated performance but have a huge computation time. it needs to be implemented on high-end computational devices to produce more accurate face detection results for high resolution images. we proposed a light architecture based on cnn for deep learning-based feature extraction and detection of human faces. the challenges faced during real-time face detection, such as occlusions, different scales, different backgrounds, varying positions, lighting, and poses, are resolved, and faces are detected accurately using the proposed framework. the amount of computation required for real-time face detection is reduced. this light architecture consists of two modules: the backbone module is used to contract the input size of the image and extract the features; the detection module transforms the feature map between prediction layers and detects faces at various scales. in our architecture, we use mini-inception blocks that minimize the computation cost and are implemented using available low-end system configurations without the need for external hardware. the proposed model uses anchor boxes to predict bounding boxes using dimensional clusters. the model is trained and tested using images from the wider face dataset, which has images of various challenging conditions. finally, images with multiple faces detected are displayed as output. 
the proposed work shows an increased accuracy rate with reduced computation cost over state-of-the-art performance on the benchmark dataset. | [
"real-time face detection",
"many challenges",
"non-frontal faces",
"tiny faces",
"occlusions",
"multifarious backgrounds",
"real-time face detection",
"convolutional neural network (cnn) models",
"which",
"elevated performance",
"a huge computation time",
"it",
"high-end computational devices",
"more accurate face detection results",
"high resolution images",
"we",
"a light architecture",
"cnn",
"deep learning-based feature extraction",
"detection",
"human faces",
"the challenges",
"real-time face detection",
"occlusions",
"different scales",
"different backgrounds",
"varying positions",
"lighting",
"poses",
"faces",
"the proposed framework",
"the amount",
"computation",
"real-time face detection",
"this light architecture",
"two modules",
"the backbone module",
"the input size",
"the image",
"the features",
"the detection module",
"the feature map",
"prediction layers",
"detects",
"various scales",
"our architecture",
"we",
"mini",
"-inception blocks",
"that",
"the computation cost",
"available low-end system configurations",
"the need",
"external hardware",
"the proposed model",
"anchor boxes",
"bounding boxes",
"dimensional clusters",
"the model",
"images",
"the wider face dataset",
"which",
"images",
"various challenging conditions",
"images",
"multiple faces",
"output",
"the proposed work",
"an increased accuracy rate",
"reduced computation cost",
"the-art",
"the benchmark dataset",
"cnn",
"cnn",
"two modules"
] |
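The face-detection record above keeps computation low by using mini-inception blocks. Below is a hedged sketch of such a block, with parallel 1x1/3x3/5x5 branches concatenated along the channel axis; the branch widths are assumptions, not the paper's exact design.

```python
# Sketch of a light "mini-inception" block for a low-cost face detector.
import torch
import torch.nn as nn

class MiniInception(nn.Module):
    def __init__(self, c_in, c_branch=8):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, c_branch, 1)             # 1x1 branch
        self.b3 = nn.Conv2d(c_in, c_branch, 3, padding=1)  # 3x3 branch
        self.b5 = nn.Conv2d(c_in, c_branch, 5, padding=2)  # 5x5 branch

    def forward(self, x):
        # Concatenate multi-scale responses along the channel dimension.
        return torch.relu(torch.cat([self.b1(x), self.b3(x), self.b5(x)], 1))

y = MiniInception(16)(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 24, 32, 32])
```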
Development of a remote music teaching system based on facial recognition and deep learning | [
"Ning Zhang",
"Huizhong Wang"
] | With the continuous progress of computer and network technology, teaching methods and educational models are also constantly evolving and improving. The development of facial recognition technology has brought new opportunities and challenges to the development of educational theory and systems. This article establishes a remote music teaching system based on facial recognition and deep learning technology. The system adopts the Java EE framework structure and deep learning technology. By conducting deep learning and training on a large amount of facial data, we can identify students' facial expressions and emotional states, thereby better understanding their learning status and needs. At the same time, the system also supports multiple teaching modes and interactive methods, providing teachers and students with a more convenient and efficient teaching management and learning experience. Subsequently, this article evaluated and explored the effectiveness of the remote music teaching system through a questionnaire survey. The results show that most students believe that the system can help them better master basic music knowledge and professional skills, improve learning effectiveness and achieve learning goals. The use of the system can also stimulate students' interest in music learning, providing new ways and means for teaching. | 10.1007/s00500-023-09120-w | development of a remote music teaching system based on facial recognition and deep learning | with the continuous progress of computer and network technology, teaching methods and educational models are also constantly evolving and improving. the development of facial recognition technology has brought new opportunities and challenges to the development of educational theory and systems. this article establishes a remote music teaching system based on facial recognition and deep learning technology. the system adopts the java ee framework structure and deep learning technology. by conducting deep learning and training on a large amount of facial data, we can identify students' facial expressions and emotional states, thereby better understanding their learning status and needs. at the same time, the system also supports multiple teaching modes and interactive methods, providing teachers and students with a more convenient and efficient teaching management and learning experience. subsequently, this article evaluated and explored the effectiveness of the remote music teaching system through a questionnaire survey. the results show that most students believe that the system can help them better master basic music knowledge and professional skills, improve learning effectiveness and achieve learning goals. the use of the system can also stimulate students' interest in music learning, providing new ways and means for teaching. | [
"the continuous progress",
"computer and network technology",
"teaching methods",
"educational models",
"the development",
"facial recognition technology",
"new opportunities",
"challenges",
"the development",
"educational theory",
"systems",
"this article",
"a remote music teaching system",
"facial recognition",
"deep learning technology",
"the system",
"the java ee framework structure",
"deep learning technology",
"deep learning",
"training",
"a large amount",
"facial data",
"we",
"students' facial expressions",
"emotional states",
"their learning status",
"needs",
"the same time",
"the system",
"multiple teaching modes",
"interactive methods",
"teachers",
"students",
"a more convenient and efficient teaching management and learning experience",
"this article",
"the effectiveness",
"the remote music teaching system",
"a questionnaire survey",
"the results",
"most students",
"the system",
"them",
"basic music knowledge",
"professional skills",
"effectiveness",
"learning goals",
"the use",
"the system",
"students' interest",
"new ways",
"teaching"
] |
Parameter-Free Reduction of the Estimation Bias in Deep Reinforcement Learning for Deterministic Policy Gradients | [
"Baturay Saglam",
"Furkan Burak Mutlu",
"Dogan Can Cicek",
"Suleyman Serdar Kozat"
] | Approximation of the value functions in value-based deep reinforcement learning induces overestimation bias, resulting in suboptimal policies. We show that when the reinforcement signals received by the agents have a high variance, deep actor-critic approaches that overcome the overestimation bias lead to a substantial underestimation bias. We first address the detrimental issues in the existing approaches that aim to overcome such underestimation error. Then, through extensive statistical analysis, we introduce a novel, parameter-free Deep Q-learning variant to reduce this underestimation bias in deterministic policy gradients. By sampling the weights of a linear combination of two approximate critics from a highly shrunk estimation bias interval, our Q-value update rule is not affected by the variance of the rewards received by the agents throughout learning. We test the performance of the introduced improvement on a set of MuJoCo and Box2D continuous control tasks and demonstrate that it outperforms the existing approaches and improves the baseline actor-critic algorithm in most of the environments tested. | 10.1007/s11063-024-11461-y | parameter-free reduction of the estimation bias in deep reinforcement learning for deterministic policy gradients | approximation of the value functions in value-based deep reinforcement learning induces overestimation bias, resulting in suboptimal policies. we show that when the reinforcement signals received by the agents have a high variance, deep actor-critic approaches that overcome the overestimation bias lead to a substantial underestimation bias. we first address the detrimental issues in the existing approaches that aim to overcome such underestimation error. then, through extensive statistical analysis, we introduce a novel, parameter-free deep q-learning variant to reduce this underestimation bias in deterministic policy gradients. by sampling the weights of a linear combination of two approximate critics from a highly shrunk estimation bias interval, our q-value update rule is not affected by the variance of the rewards received by the agents throughout learning. we test the performance of the introduced improvement on a set of mujoco and box2d continuous control tasks and demonstrate that it outperforms the existing approaches and improves the baseline actor-critic algorithm in most of the environments tested. | [
"approximation",
"the value functions",
"value-based deep reinforcement learning",
"overestimation bias",
"suboptimal policies",
"we",
"the reinforcement signals",
"the agents",
"a high variance",
"deep actor-critic approaches",
"that",
"the overestimation bias",
"a substantial underestimation bias",
"we",
"the detrimental issues",
"the existing approaches",
"that",
"such underestimation error",
"extensive statistical analysis",
"we",
"a novel, parameter-free deep q-learning variant",
"this underestimation bias",
"deterministic policy gradients",
"the weights",
"a linear combination",
"two approximate critics",
"a highly shrunk estimation bias interval",
"our q-value update rule",
"the variance",
"the rewards",
"the agents",
"we",
"the performance",
"the introduced improvement",
"a set",
"mujoco",
"box2d continuous control tasks",
"it",
"the existing approaches",
"the baseline actor-critic algorithm",
"the environments",
"first",
"two"
] |
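The record above replaces the hard minimum of clipped double Q-learning with a linear combination of two critics whose weight is sampled from a shrunk interval. The sketch below illustrates that target computation; the interval bounds `w_lo`/`w_hi` and the min/max mixing are illustrative assumptions, not the paper's exact rule.

```python
# Sketch: Q-target built from a sampled convex combination of two critics,
# instead of the hard minimum used by clipped double Q-learning.
import torch

def sampled_target(q1, q2, reward, gamma=0.99, w_lo=0.4, w_hi=0.6):
    """q1, q2: critic estimates at the next state-action (tensors)."""
    w = torch.empty_like(q1).uniform_(w_lo, w_hi)   # one weight per sample
    q_mix = w * torch.minimum(q1, q2) + (1 - w) * torch.maximum(q1, q2)
    return reward + gamma * q_mix

q1, q2 = torch.tensor([1.0, 2.0]), torch.tensor([1.5, 1.0])
print(sampled_target(q1, q2, reward=torch.zeros(2)))
```

Because the weight is sampled rather than tuned, the update stays parameter-free with respect to the reward variance, which is the property the abstract emphasizes.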
C-net: a deep learning-based Jujube grading approach | [
"Atif Mahmood",
"Amod Kumar Tiwari",
"Sanjay Kumar Singh"
] | Jujube grading is a crucial process in the jujube-associated industry to ascertain the quality, ripeness, value, and security of the product. Traditionally, jujube grading has been done manually, which may be expensive, time-consuming, and prone to human mistakes. With the expansion of innovation, Machine Learning (ML)/Deep Learning (DL) has turned out to be a potent technique for automating the fruit grading process. Within this work, we deployed and analyzed the Concatenated-Convolutional Neural Network (C-Net) based on the residual network concept and seven cutting-edge CNNs for sorting the Indian jujube into six classes. To train and evaluate the models, we collected and assembled the dataset of jujube images. The performance analysis of the model relies upon two varying hyperparameters, batch size, and epochs as well as some performance metrics like F1-score, precision, and recall. The finding indicates that the proposed C-Net model was able to classify jujube images with a high precision of 98.61%, which surpasses other models but lags slightly behind the EfficientNet-B0 model. Our C-Net model has several advantages over most of the cutting-edge CNN models for jujube grading, including increased accuracy, efficiency, cost-effectiveness, better decision-making, scalability, and real-time grading. The use of a C-Net model for jujube grading has the capability to revolutionize the jujube grading task and improve the fruit’s overall quality. | 10.1007/s11694-024-02765-7 | c-net: a deep learning-based jujube grading approach | jujube grading is a crucial process in the jujube-associated industry to ascertain the quality, ripeness, value, and security of the product. traditionally, jujube grading has been done manually, which may be expensive, time-consuming, and prone to human mistakes. with the expansion of innovation, machine learning (ml)/deep learning (dl) has turned out to be a potent technique for automating the fruit grading process. within this work, we deployed and analyzed the concatenated-convolutional neural network (c-net) based on the residual network concept and seven cutting-edge cnns for sorting the indian jujube into six classes. to train and evaluate the models, we collected and assembled the dataset of jujube images. the performance analysis of the model relies upon two varying hyperparameters, batch size, and epochs as well as some performance metrics like f1-score, precision, and recall. the finding indicates that the proposed c-net model was able to classify jujube images with a high precision of 98.61%, which surpasses other models but lags slightly behind the efficientnet-b0 model. our c-net model has several advantages over most of the cutting-edge cnn models for jujube grading, including increased accuracy, efficiency, cost-effectiveness, better decision-making, scalability, and real-time grading. the use of a c-net model for jujube grading has the capability to revolutionize the jujube grading task and improve the fruit’s overall quality. | [
"a crucial process",
"the jujube-associated industry",
"the quality",
"ripeness",
"value",
"security",
"the product",
"jujube grading",
"which",
"human mistakes",
"the expansion",
"innovation",
"machine learning",
"dl",
"a potent technique",
"the fruits grading process",
"this work",
"we",
"the concatenated-convolutional neural network",
"c-net",
"the residual network concept",
"seven cutting-edge cnns",
"the indian jujube",
"six classes",
"the models",
"we",
"the dataset",
"jujube images",
"the performance analysis",
"the model",
"two varying hyperparameters",
"batch size",
"epochs",
"some performance metrics",
"f1-score",
"precision",
"the finding",
"the proposed c-net model",
"jujube images",
"high precision",
"98.61%",
"which",
"other models",
"lags",
"the efficientnet-b0 model",
"our c-net model",
"several advantages",
"the cutting-edge cnn models",
"increased accuracy",
"efficiency",
"cost-effectiveness",
"better decision-making",
"scalability",
"real-time grading",
"the use",
"a c-net model",
"the capability",
"the jujube",
"task",
"the fruit’s overall quality",
"jujube",
"jujube",
"seven",
"indian",
"six",
"two",
"98.61%",
"cnn"
] |
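The C-Net record above describes a concatenated CNN built on the residual-network concept. The block below is a speculative sketch of one way to realize that, carrying the input forward by channel concatenation; it is an illustration only, not the authors' exact block.

```python
# Sketch: a block whose skip path is channel concatenation rather than
# elementwise addition, in the spirit of a "concatenated" residual design.
import torch
import torch.nn as nn

class ConcatResidualBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(),
        )

    def forward(self, x):
        return torch.cat([x, self.conv(x)], dim=1)  # skip via concatenation

y = ConcatResidualBlock(3, 16)(torch.randn(1, 3, 64, 64))
print(y.shape)  # torch.Size([1, 19, 64, 64])
```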
Hierarchical Goal-Guided Learning for the Evasive Maneuver of Fixed-Wing UAVs based on Deep Reinforcement Learning | [
"Yinlong Yuan",
"Jian Yang",
"Zhu Liang Yu",
"Yun Cheng",
"Pengpeng Jiao",
"Liang Hua"
] | Fixed-wing unmanned aerial vehicles (UAVs) will play a vital role in forthcoming military conflicts. Effectively avoiding threats and improving the survivability of fixed-wing UAV in dynamic hostile environments are the keys to the success of combat missions. Hence, endowing fixed-wing UAVs with the ability to autonomously generate evasive maneuver is the primary problem that should be solved. With considering the threat of air-to-air missile attacks, this paper designs a novel hierarchical goal-guided learning (HGGL) method, which combines with traditional off-policy deep reinforcement learning (DRL) algorithms and endows the agent with the ability to evade a series of air-to-air missiles. The pivotal idea of the proposed algorithm is to use the hierarchical features of the goal, it improves the availability of training data to eliminate the limitation of the convergence rate of traditional DRL algorithms owing to sparse rewards. We demonstrate the performance of our algorithm in several simulation experiments. All experiments are applied on the XSimStudio platform. The results demonstrate that the proposed algorithm improves the convergence speed and outperforms the state-of-the-art traditional algorithms. | 10.1007/s10846-023-01953-9 | hierarchical goal-guided learning for the evasive maneuver of fixed-wing uavs based on deep reinforcement learning | fixed-wing unmanned aerial vehicles (uavs) will play a vital role in forthcoming military conflicts. effectively avoiding threats and improving the survivability of fixed-wing uav in dynamic hostile environments are the keys to the success of combat missions. hence, endowing fixed-wing uavs with the ability to autonomously generate evasive maneuver is the primary problem that should be solved. with considering the threat of air-to-air missile attacks, this paper designs a novel hierarchical goal-guided learning (hggl) method, which combines with traditional off-policy deep reinforcement learning (drl) algorithms and endows the agent with the ability to evade a series of air-to-air missiles. the pivotal idea of the proposed algorithm is to use the hierarchical features of the goal, it improves the availability of training data to eliminate the limitation of the convergence rate of traditional drl algorithms owing to sparse rewards. we demonstrate the performance of our algorithm in several simulation experiments. all experiments are applied on the xsimstudio platform. the results demonstrate that the proposed algorithm improves the convergence speed and outperforms the state-of-the-art traditional algorithms. | [
"fixed-wing unmanned aerial vehicles",
"a vital role",
"forthcoming military conflicts",
"threats",
"the survivability",
"fixed-wing uav",
"dynamic hostile environments",
"the keys",
"the success",
"combat missions",
"fixed-wing uavs",
"the ability",
"evasive maneuver",
"the primary problem",
"that",
"the threat",
"air",
"this paper",
"a novel hierarchical goal-guided learning (hggl) method",
"which",
"drl",
"the agent",
"the ability",
"a series",
"air",
"the pivotal idea",
"the proposed algorithm",
"the hierarchical features",
"the goal",
"it",
"the availability",
"training data",
"the limitation",
"the convergence rate",
"traditional drl algorithms",
"sparse rewards",
"we",
"the performance",
"our algorithm",
"several simulation experiments",
"all experiments",
"the xsimstudio platform",
"the results",
"the proposed algorithm",
"the convergence speed",
"the-art"
] |
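The HGGL record above improves the availability of training data under sparse rewards by exploiting the hierarchical structure of the goal. One common mechanism with that effect is relabeling transitions against easier sub-goals; the sketch below is a hedged illustration of that idea, not the paper's algorithm (`relabel`, the distance test, and the tolerance are assumptions).

```python
# Sketch: recompute rewards of stored transitions with respect to a
# sub-goal, so more of the replay buffer carries learning signal.
import numpy as np

def relabel(transitions, subgoal, tol=1.0):
    """transitions: (state, action, next_state) tuples; returns SARS tuples."""
    out = []
    for s, a, s_next in transitions:
        r = 1.0 if np.linalg.norm(np.asarray(s_next) - subgoal) < tol else 0.0
        out.append((s, a, r, s_next))
    return out

episode = [((0, 0), 1, (1, 0)), ((1, 0), 0, (2, 1))]
print(relabel(episode, subgoal=np.array([2.0, 1.0])))
```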
Advanced deep learning and large language models for suicide ideation detection on social media | [
"Mohammed Qorich",
"Rajae El Ouazzani"
] | Recently, suicide ideations represent a worldwide health concern and pose many anticipation challenges. Actually, the prevalence of expressing self-destructive thoughts especially on forums and social media requires effective monitoring for suicide prevention, and early intervention. Meanwhile, deep learning techniques and Large Language Models (LLMs) have emerged as promising tools in diverse Natural Language Processing (NLP) tasks, including sentiment analysis and text classification. In this paper, we propose a deep learning model incorporating triple models of word embeddings, as well as various fine-tuned LLMs, to identify suicidal thoughts in Reddit posts. In effect, we implemented a Bidirectional Long Short-Term Memory (BiLSTM), and a Convolutional Neural Network (CNN) model to categorize posts associated with non-suicidal and suicidal thoughts. Besides, through the combination of Word2Vec, FastText and GloVe embeddings, our models learn intricate patterns and prevalent nuances in suicide-related language. Furthermore, we employed a merged version of CNN and BiLSTM models, entitled C-BiLSTM, and several LLMs, including pre-trained Bidirectional Encoder Representations from Transformers (BERT) models and a Generative Pre-training Transformer (GPT) model. The analysis of all our proposed models shows that our C-BiLSTM model with triple word embedding and our GPT model got the best performance compared to deep learning and LLMs baseline models, reaching accuracies of 94.5% and 97.69%, respectively. In fact, our best model’s capacity to extract meaningful interdependencies among words significantly promotes its classification performance. This analysis contributes to a deeper understanding of the psychological factors and linguistic markers indicative of suicidal thoughts, thereby informing future research and intervention strategies. | 10.1007/s13748-024-00326-z | advanced deep learning and large language models for suicide ideation detection on social media | recently, suicide ideations represent a worldwide health concern and pose many anticipation challenges. actually, the prevalence of expressing self-destructive thoughts especially on forums and social media requires effective monitoring for suicide prevention, and early intervention. meanwhile, deep learning techniques and large language models (llms) have emerged as promising tools in diverse natural language processing (nlp) tasks, including sentiment analysis and text classification. in this paper, we propose a deep learning model incorporating triple models of word embeddings, as well as various fine-tuned llms, to identify suicidal thoughts in reddit posts. in effect, we implemented a bidirectional long short-term memory (bilstm), and a convolutional neural network (cnn) model to categorize posts associated with non-suicidal and suicidal thoughts. besides, through the combination of word2vec, fasttext and glove embeddings, our models learn intricate patterns and prevalent nuances in suicide-related language. furthermore, we employed a merged version of cnn and bilstm models, entitled c-bilstm, and several llms, including pre-trained bidirectional encoder representations from transformers (bert) models and a generative pre-training transformer (gpt) model. the analysis of all our proposed models shows that our c-bilstm model with triple word embedding and our gpt model got the best performance compared to deep learning and llms baseline models, reaching accuracies of 94.5% and 97.69%, respectively. 
in fact, our best model’s capacity to extract meaningful interdependencies among words significantly promotes its classification performance. this analysis contributes to a deeper understanding of the psychological factors and linguistic markers indicative of suicidal thoughts, thereby informing future research and intervention strategies. | [
"suicide ideations",
"a worldwide health concern",
"many anticipation challenges",
"the prevalence",
"self-destructive thoughts",
"forums",
"social media",
"effective monitoring",
"suicide prevention",
"early intervention",
"deep learning techniques",
"large language models",
"llms",
"promising tools",
"nlp",
"sentiment analysis",
"text classification",
"this paper",
"we",
"a deep learning model",
"triple models",
"word embeddings",
"various fine-tuned llms",
"suicidal thoughts",
"reddit posts",
"effect",
"we",
"a bidirectional long short-term memory",
"bilstm",
"a convolutional neural network (cnn) model",
"posts",
"-suicidal and suicidal thoughts",
"the combination",
"word2vec, fasttext and glove embeddings",
"our models",
"intricate patterns",
"prevalent nuances",
"suicide-related language",
"we",
"a merged version",
"cnn",
"bilstm models",
"c-bilstm",
"several llms",
"pre-trained bidirectional encoder representations",
"transformers",
"(bert) models",
"a generative pre-training transformer",
"(gpt) model",
"the analysis",
"all our proposed models",
"our c-bilstm model",
"triple word",
"our gpt model",
"the best performance",
"deep learning and llms baseline models",
"accuracies",
"94.5%",
"97.69%",
"fact",
"our best model’s capacity",
"meaningful interdependencies",
"words",
"its classification performance",
"this analysis",
"a deeper understanding",
"the psychological factors",
"linguistic markers",
"suicidal thoughts",
"future research and intervention strategies",
"cnn",
"cnn",
"gpt",
"gpt",
"94.5%",
"97.69%"
] |
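The record above merges a CNN with a BiLSTM into C-BiLSTM for binary classification of Reddit posts. Here is a minimal PyTorch sketch of such a merged model; the vocabulary size, embedding width, and stacking order are assumptions, and the embedding layer is where pre-trained Word2Vec/GloVe/FastText vectors would be loaded.

```python
# Sketch: Conv1d over token embeddings feeding a bidirectional LSTM,
# with a 2-way head (suicidal vs non-suicidal).
import torch
import torch.nn as nn

class CBiLSTM(nn.Module):
    def __init__(self, vocab=20000, emb=300, conv=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)    # slot for pre-trained vectors
        self.conv = nn.Conv1d(emb, conv, 3, padding=1)
        self.lstm = nn.LSTM(conv, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, ids):                    # ids: (B, L) token indices
        x = self.emb(ids).transpose(1, 2)      # (B, emb, L) for Conv1d
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, (h, _) = self.lstm(x)               # final fwd/bwd hidden states
        return self.head(torch.cat([h[-2], h[-1]], dim=1))

logits = CBiLSTM()(torch.randint(0, 20000, (4, 50)))
print(logits.shape)  # torch.Size([4, 2])
```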
Study on automatic lithology identification based on convolutional neural network and deep transfer learning | [
"Shiliang Li",
"Yuelong Dong",
"Zhanrong Zhang",
"Chengyuan Lin",
"Huaji Liu",
"Yafei Wang",
"Youyan Bian",
"Feng Xiong",
"Guohua Zhang"
] | Automatic and fast rock classification identification is an important part of geotechnical intelligent survey system. Image based supervised deep learning analysis, especially for convolutional neural networks (CNN), has potential in optimizing lithologic classification and interpretation using borehole core images. However, the accuracy and efficiency of lithology identification models are low at present. In this work, a systematic and enormous rock data framework based on the geological rock classification system is firstly established to provide rock learning datasets. The dataset is composed of approximately 150,000 images of rock samples, which covers igneous rocks, sedimentary rocks, and metamorphic rocks. Secondly, based on CNN-deep transfer learning algorithm, an end-to-end, image-to-label rock lithology identification is established. Finally, the generalization of the proposed model and the field drilling core verification test show that the constructed intelligent rock recognition model has an ability to identify rocks quickly and accurately, and the recognition accuracy of 12 kinds of common engineering rocks is more than 95%. The proposed rock intelligent classification model provides a convenient and fast tool for field geologists and scientific researchers. | 10.1007/s42452-024-06020-y | study on automatic lithology identification based on convolutional neural network and deep transfer learning | automatic and fast rock classification identification is an important part of geotechnical intelligent survey system. image based supervised deep learning analysis, especially for convolutional neural networks (cnn), has potential in optimizing lithologic classification and interpretation using borehole core images. however, the accuracy and efficiency of lithology identification models are low at present. in this work, a systematic and enormous rock data framework based on the geological rock classification system is firstly established to provide rock learning datasets. the dataset is composed of approximately 150,000 images of rock samples, which covers igneous rocks, sedimentary rocks, and metamorphic rocks. secondly, based on cnn-deep transfer learning algorithm, an end-to-end, image-to-label rock lithology identification is established. finally, the generalization of the proposed model and the field drilling core verification test show that the constructed intelligent rock recognition model has an ability to identify rocks quickly and accurately, and the recognition accuracy of 12 kinds of common engineering rocks is more than 95%. the proposed rock intelligent classification model provides a convenient and fast tool for field geologists and scientific researchers. | [
"automatic and fast rock classification identification",
"an important part",
"geotechnical intelligent survey system",
"image",
"deep learning analysis",
"convolutional neural networks",
"cnn",
"potential",
"lithologic classification",
"interpretation",
"borehole core images",
"the accuracy",
"efficiency",
"lithology identification models",
"present",
"this work",
"a systematic and enormous rock data framework",
"the geological rock classification system",
"rock learning datasets",
"the dataset",
"approximately 150,000 images",
"rock samples",
"which",
"igneous rocks",
"sedimentary rocks",
"metamorphic rocks",
"cnn-deep transfer learning algorithm",
"an end",
"end",
"label",
"the generalization",
"the proposed model",
"the field",
"the constructed intelligent rock recognition model",
"an ability",
"rocks",
"the recognition accuracy",
"12 kinds",
"common engineering rocks",
"more than 95%",
"the proposed rock intelligent classification model",
"a convenient and fast tool",
"field geologists",
"scientific researchers",
"cnn",
"approximately 150,000",
"secondly",
"cnn",
"12",
"more than 95%"
] |
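The lithology record above fine-tunes a pre-trained CNN for 12 common engineering rock classes. Below is a hedged transfer-learning sketch using an ImageNet ResNet-18 backbone; the abstract does not restate the exact backbone, so that choice, the optimizer, and the input size are assumptions.

```python
# Sketch: re-head an ImageNet backbone for 12-way rock classification
# and fine-tune end to end on core images.
import torch
import torch.nn as nn
from torchvision import models

net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
net.fc = nn.Linear(net.fc.in_features, 12)       # 12 rock classes

opt = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
loss = nn.CrossEntropyLoss()(net(torch.randn(2, 3, 224, 224)),
                             torch.randint(0, 12, (2,)))
loss.backward(); opt.step()
print(float(loss))
```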
Molecular design with automated quantum computing-based deep learning and optimization | [
"Akshay Ajagekar",
"Fengqi You"
] | Computer-aided design of novel molecules and compounds is a challenging task that can be addressed with quantum computing (QC) owing to its notable advances in optimization and machine learning. Here, we use QC-assisted learning and optimization techniques implemented with near-term QC devices for molecular property prediction and generation tasks. The proposed probabilistic energy-based deep learning model trained in a generative manner facilitated by QC yields robust latent representations of molecules, while the proposed data-driven QC-based optimization framework performs guided navigation of the target chemical space by exploiting the structure–property relationships captured by the energy-based model. We demonstrate the viability of the proposed molecular design approach by generating several molecular candidates that satisfy specific property target requirements. The proposed QC-based methods exhibit an improved predictive performance while efficiently generating novel molecules that accurately fulfill target conditions and exemplify the potential of QC for automated molecular design, thus accentuating its utility. | 10.1038/s41524-023-01099-0 | molecular design with automated quantum computing-based deep learning and optimization | computer-aided design of novel molecules and compounds is a challenging task that can be addressed with quantum computing (qc) owing to its notable advances in optimization and machine learning. here, we use qc-assisted learning and optimization techniques implemented with near-term qc devices for molecular property prediction and generation tasks. the proposed probabilistic energy-based deep learning model trained in a generative manner facilitated by qc yields robust latent representations of molecules, while the proposed data-driven qc-based optimization framework performs guided navigation of the target chemical space by exploiting the structure–property relationships captured by the energy-based model. we demonstrate the viability of the proposed molecular design approach by generating several molecular candidates that satisfy specific property target requirements. the proposed qc-based methods exhibit an improved predictive performance while efficiently generating novel molecules that accurately fulfill target conditions and exemplify the potential of qc for automated molecular design, thus accentuating its utility. | [
"computer-aided design",
"novel molecules",
"compounds",
"a challenging task",
"that",
"quantum computing",
"qc",
"its notable advances",
"optimization",
"machine learning",
"we",
"qc-assisted learning and optimization techniques",
"near-term qc devices",
"molecular property prediction",
"generation tasks",
"the proposed probabilistic energy-based deep learning model",
"a generative manner",
"qc yields",
"robust latent representations",
"molecules",
"the proposed data-driven qc-based optimization framework",
"guided navigation",
"the target chemical space",
"the structure",
"property relationships",
"the energy-based model",
"we",
"the viability",
"the proposed molecular design approach",
"several molecular candidates",
"that",
"specific property target requirements",
"the proposed qc-based methods",
"an improved predictive performance",
"novel molecules",
"that",
"target conditions",
"the potential",
"qc",
"automated molecular design",
"its utility",
"quantum"
] |
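The molecular-design entry trains a probabilistic energy-based model generatively with QC assistance. The quantum-assisted training itself is hardware-specific and cannot be reproduced from the abstract, so the sketch below shows only the classical family it builds on: a small restricted Boltzmann machine fit with one-step contrastive divergence. The fingerprint data and all sizes are illustrative.

```python
# Classical stand-in for a QC-trained energy-based model: a binary RBM
# fit with CD-1 on (hypothetical) molecular fingerprints.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 64, 16, 0.05
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fake binary fingerprint data purely for illustration.
data = (rng.random((500, n_visible)) < 0.3).astype(float)

for epoch in range(10):
    for v0 in data:
        # Positive phase: hidden activations given the data.
        ph0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < ph0).astype(float)
        # Negative phase: one Gibbs step (CD-1 reconstruction).
        pv1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b_h)
        # Contrastive-divergence parameter updates.
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b_v += lr * (v0 - v1)
        b_h += lr * (ph0 - ph1)
```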
Automated detection and recognition system for chewable food items using advanced deep learning models | [
"Yogesh Kumar",
"Apeksha Koul",
"Kamini",
"Marcin Woźniak",
"Jana Shafi",
"Muhammad Fazal Ijaz"
] | Identifying and recognizing the food on the basis of its eating sounds is a challenging task, as it plays an important role in avoiding allergic foods, providing dietary preferences to people who are restricted to a particular diet, showcasing its cultural significance, etc. In this research paper, the aim is to design a novel methodology that helps to identify food items by analyzing their eating sounds using various deep learning models. To achieve this objective, a system has been proposed that extracts meaningful features from food-eating sounds with the help of signal processing techniques and deep learning models for classifying them into their respective food classes. Initially, 1200 audio files for 20 food items labeled have been collected and visualized to find relationships between the sound files of different food items. Later, to extract meaningful features, various techniques such as spectrograms, spectral rolloff, spectral bandwidth, and mel-frequency cepstral coefficients are used for the cleaning of audio files as well as to capture the unique characteristics of different food items. In the next phase, various deep learning models like GRU, LSTM, InceptionResNetV2, and the customized CNN model have been trained to learn spectral and temporal patterns in audio signals. Besides this, the models have also been hybridized i.e. Bidirectional LSTM + GRU and RNN + Bidirectional LSTM, and RNN + Bidirectional GRU to analyze their performance for the same labeled data in order to associate particular patterns of sound with their corresponding class of food item. During evaluation, the highest accuracy, precision,F1 score, and recall have been obtained by GRU with 99.28%, Bidirectional LSTM + GRU with 97.7% as well as 97.3%, and RNN + Bidirectional LSTM with 97.45%, respectively. The results of this study demonstrate that deep learning models have the potential to precisely identify foods on the basis of their sound by computing the best outcomes. | 10.1038/s41598-024-57077-z | automated detection and recognition system for chewable food items using advanced deep learning models | identifying and recognizing the food on the basis of its eating sounds is a challenging task, as it plays an important role in avoiding allergic foods, providing dietary preferences to people who are restricted to a particular diet, showcasing its cultural significance, etc. in this research paper, the aim is to design a novel methodology that helps to identify food items by analyzing their eating sounds using various deep learning models. to achieve this objective, a system has been proposed that extracts meaningful features from food-eating sounds with the help of signal processing techniques and deep learning models for classifying them into their respective food classes. initially, 1200 audio files for 20 food items labeled have been collected and visualized to find relationships between the sound files of different food items. later, to extract meaningful features, various techniques such as spectrograms, spectral rolloff, spectral bandwidth, and mel-frequency cepstral coefficients are used for the cleaning of audio files as well as to capture the unique characteristics of different food items. in the next phase, various deep learning models like gru, lstm, inceptionresnetv2, and the customized cnn model have been trained to learn spectral and temporal patterns in audio signals. besides this, the models have also been hybridized i.e. 
bidirectional lstm + gru and rnn + bidirectional lstm, and rnn + bidirectional gru to analyze their performance for the same labeled data in order to associate particular patterns of sound with their corresponding class of food item. during evaluation, the highest accuracy, precision,f1 score, and recall have been obtained by gru with 99.28%, bidirectional lstm + gru with 97.7% as well as 97.3%, and rnn + bidirectional lstm with 97.45%, respectively. the results of this study demonstrate that deep learning models have the potential to precisely identify foods on the basis of their sound by computing the best outcomes. | [
"the food",
"the basis",
"its eating sounds",
"a challenging task",
"it",
"an important role",
"allergic foods",
"dietary preferences",
"people",
"who",
"a particular diet",
"its cultural significance",
"this research paper",
"the aim",
"a novel methodology",
"that",
"food items",
"their eating sounds",
"various deep learning models",
"this objective",
"a system",
"meaningful features",
"the help",
"signal processing techniques",
"deep learning models",
"them",
"their respective food classes",
"1200 audio files",
"20 food items",
"relationships",
"the sound files",
"different food items",
"meaningful features",
"various techniques",
"spectrograms",
"spectral rolloff",
"spectral",
"mel-frequency cepstral coefficients",
"the cleaning",
"audio files",
"the unique characteristics",
"different food items",
"the next phase",
"various deep learning models",
"gru",
"lstm",
"inceptionresnetv2",
"the customized cnn model",
"spectral and temporal patterns",
"audio signals",
"this",
"the models",
"i.e. bidirectional lstm",
"bidirectional lstm",
"bidirectional gru",
"their performance",
"the same labeled data",
"order",
"particular patterns",
"sound",
"their corresponding class",
"food item",
"evaluation",
"the highest accuracy",
"precision",
"f1 score",
"recall",
"gru",
"99.28%",
"bidirectional lstm",
"97.7%",
"97.3%",
"bidirectional lstm",
"97.45%",
"the results",
"this study",
"deep learning models",
"the potential",
"foods",
"the basis",
"their sound",
"the best outcomes",
"1200",
"20",
"mel",
"inceptionresnetv2",
"cnn",
"99.28%",
"97.7%",
"97.3%",
"+ bidirectional lstm",
"97.45%"
] |
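The food-sound entry extracts MFCC features and trains recurrent models, with GRU reporting the best accuracy. A minimal version of that pipeline follows, assuming single-channel clips and illustrative layer sizes rather than the paper's actual hyperparameters.

```python
# Sketch of an MFCC -> GRU pipeline of the kind the abstract describes.
import librosa
import numpy as np
import tensorflow as tf

NUM_CLASSES = 20  # 20 food items, per the abstract

def mfcc_features(path, n_mfcc=13, max_frames=128):
    """Load an eating-sound clip and return a fixed-size MFCC matrix."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, T)
    mfcc = mfcc[:, :max_frames]
    pad = max_frames - mfcc.shape[1]
    if pad > 0:
        mfcc = np.pad(mfcc, ((0, 0), (0, pad)))
    return mfcc.T  # (time, features) for the recurrent layers

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 13)),
    tf.keras.layers.GRU(64, return_sequences=True),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, ...) once features are stacked into arrays.
```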
Advances in Deep Learning Techniques for Short-term Energy Load Forecasting Applications: A Review | [
"Radhika Chandrasekaran",
"Senthil Kumar Paramasivan"
] | Today, the majority of the leading power companies place a significant emphasis on forecasting the electricity load in the balance of power and administration. Meanwhile, since electricity is an integral component of every person’s contemporary life, energy load forecasting is necessary to afford the energy demand required. The expansion of the electrical infrastructure is a key factor in increasing sustainable economic growth, and the planning and control of the utility power system rely on accurate load forecasting. Due to uncertainty in energy utilization, forecasting is turning into a complex task, and it makes an impact on applications that include energy scheduling and management, price forecasting, etc. The statistical methods involving time series for regression analysis and machine learning techniques have been used in energy load forecasting extensively over the last few decades to precisely predict future energy demands. However, they have some drawbacks with limited model flexibility, generalization, and overfitting. Deep learning addresses the issues of handling unstructured and unlabeled data, automatic feature learning, non-linear model flexibility, the ability to handle high-dimensional data, and simultaneous computation using GPUs efficiently. This paper investigates factors influencing energy load forecasting, then discusses the most commonly used deep learning approaches in energy load forecasting, as well as evaluation metrics to evaluate the performance of the model, followed by bio-inspired algorithms to optimize the model, and other advanced technologies for energy load forecasting. This study discusses the research findings, challenges, and opportunities in energy load forecasting. | 10.1007/s11831-024-10155-x | advances in deep learning techniques for short-term energy load forecasting applications: a review | today, the majority of the leading power companies place a significant emphasis on forecasting the electricity load in the balance of power and administration. meanwhile, since electricity is an integral component of every person’s contemporary life, energy load forecasting is necessary to afford the energy demand required. the expansion of the electrical infrastructure is a key factor in increasing sustainable economic growth, and the planning and control of the utility power system rely on accurate load forecasting. due to uncertainty in energy utilization, forecasting is turning into a complex task, and it makes an impact on applications that include energy scheduling and management, price forecasting, etc. the statistical methods involving time series for regression analysis and machine learning techniques have been used in energy load forecasting extensively over the last few decades to precisely predict future energy demands. however, they have some drawbacks with limited model flexibility, generalization, and overfitting. deep learning addresses the issues of handling unstructured and unlabeled data, automatic feature learning, non-linear model flexibility, the ability to handle high-dimensional data, and simultaneous computation using gpus efficiently. this paper investigates factors influencing energy load forecasting, then discusses the most commonly used deep learning approaches in energy load forecasting, as well as evaluation metrics to evaluate the performance of the model, followed by bio-inspired algorithms to optimize the model, and other advanced technologies for energy load forecasting. 
this study discusses the research findings, challenges, and opportunities in energy load forecasting. | [
"the majority",
"the leading power companies",
"a significant emphasis",
"the electricity load",
"the balance",
"power",
"administration",
"electricity",
"an integral component",
"every person’s contemporary life",
"energy load forecasting",
"the energy demand",
"the expansion",
"the electrical infrastructure",
"a key factor",
"sustainable economic growth",
"the planning",
"control",
"the utility power system",
"accurate load forecasting",
"uncertainty",
"energy utilization",
"forecasting",
"a complex task",
"it",
"an impact",
"applications",
"that",
"energy scheduling",
"management",
"price forecasting",
"the statistical methods",
"time series",
"regression analysis",
"machine learning techniques",
"energy load",
"the last few decades",
"future energy demands",
"they",
"some drawbacks",
"limited model flexibility",
"generalization",
"deep learning addresses",
"the issues",
"unstructured and unlabeled data",
"automatic feature learning",
"non-linear model flexibility",
"the ability",
"high-dimensional data",
"simultaneous computation",
"gpus",
"this paper investigates",
"energy load forecasting",
"the most commonly used deep learning approaches",
"energy load forecasting",
"evaluation metrics",
"the performance",
"the model",
"bio-inspired algorithms",
"the model",
"other advanced technologies",
"energy load forecasting",
"this study",
"the research findings",
"challenges",
"opportunities",
"energy load forecasting",
"today",
"the last few decades"
] |
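The load-forecasting review surveys deep models for short-term demand prediction. A common concrete instance, not tied to any one surveyed paper, is a sliding-window LSTM over an hourly series; the window length, network size, and synthetic series below are all assumptions.

```python
# Generic short-term load-forecasting sketch: turn an hourly load series
# into sliding windows and fit a small LSTM to predict the next hour.
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=24):
    """Past `lookback` hours -> next-hour load."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)  # (N, lookback, 1), (N,)

# Synthetic stand-in for a real demand series (daily sinusoid + noise).
t = np.arange(24 * 365, dtype=float)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + np.random.randn(len(t))

X, y = make_windows(load)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.1)
```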
Optimized COVID-19 detection using sparse deep learning models from multimodal imaging data | [
"MohammadMahdi Moradi",
"Alireza Hassanzadeh",
"Arman Haghanifar",
"Seok Bum Ko"
] | This paper explores the global spread of the COVID-19 virus since 2019, impacting 219 countries worldwide. Despite the absence of a definitive cure, the utilization of artificial intelligence (AI) methods for disease diagnosis has demonstrated commendable effectiveness in promptly diagnosing patients and curbing infection transmission. The study introduces a deep learning-based model tailored for COVID-19 detection, leveraging three prevalent medical imaging modalities: computed tomography (CT), chest X-ray (CXR), and Ultrasound. Various deep Transfer Learning Convolutional Neural Network-based (CNN) models have undergone assessment for each imaging modality. For each imaging modality, this study has selected the two most accurate models based on evaluation metrics such as accuracy and loss. Additionally, efforts have been made to prune unnecessary weights from these models to obtain more efficient and sparse models. By fusing these pruned models, enhanced performance has been achieved. The models have undergone rigorous training and testing using publicly available real-world medical datasets, focusing on classifying these datasets into three distinct categories: Normal, COVID-19 Pneumonia, and non-COVID-19 Pneumonia. The primary objective is to develop an optimized and swift model through strategies like Transfer Learning, Ensemble Learning, and reducing network complexity, making it easier for storage and transfer. The results of the trained network on test data exhibit promising outcomes. The accuracy of these models on the CT scan, X-ray, and ultrasound datasets stands at 99.4%, 98.9%, and 99.3%, respectively. Moreover, these models’ sizes have been substantially reduced and optimized by 51.93%, 38.00%, and 69.07%, respectively. This study proposes a Computer-aided-coronavirus-detection system based on three standard medical imaging techniques. The intention is to assist radiologists in accurately and swiftly diagnosing the disease, especially during the screening process, by providing high accuracy and speed in the identification of COVID-19 cases. | 10.1007/s11042-024-18987-2 | optimized covid-19 detection using sparse deep learning models from multimodal imaging data | this paper explores the global spread of the covid-19 virus since 2019, impacting 219 countries worldwide. despite the absence of a definitive cure, the utilization of artificial intelligence (ai) methods for disease diagnosis has demonstrated commendable effectiveness in promptly diagnosing patients and curbing infection transmission. the study introduces a deep learning-based model tailored for covid-19 detection, leveraging three prevalent medical imaging modalities: computed tomography (ct), chest x-ray (cxr), and ultrasound. various deep transfer learning convolutional neural network-based (cnn) models have undergone assessment for each imaging modality. for each imaging modality, this study has selected the two most accurate models based on evaluation metrics such as accuracy and loss. additionally, efforts have been made to prune unnecessary weights from these models to obtain more efficient and sparse models. by fusing these pruned models, enhanced performance has been achieved. the models have undergone rigorous training and testing using publicly available real-world medical datasets, focusing on classifying these datasets into three distinct categories: normal, covid-19 pneumonia, and non-covid-19 pneumonia. 
the primary objective is to develop an optimized and swift model through strategies like transfer learning, ensemble learning, and reducing network complexity, making it easier for storage and transfer. the results of the trained network on test data exhibit promising outcomes. the accuracy of these models on the ct scan, x-ray, and ultrasound datasets stands at 99.4%, 98.9%, and 99.3%, respectively. moreover, these models’ sizes have been substantially reduced and optimized by 51.93%, 38.00%, and 69.07%, respectively. this study proposes a computer-aided-coronavirus-detection system based on three standard medical imaging techniques. the intention is to assist radiologists in accurately and swiftly diagnosing the disease, especially during the screening process, by providing high accuracy and speed in the identification of covid-19 cases. | [
"this paper",
"the global spread",
"the covid-19 virus",
"219 countries",
"the absence",
"a definitive cure",
"the utilization",
"artificial intelligence (ai) methods",
"disease diagnosis",
"commendable effectiveness",
"patients",
"infection transmission",
"the study",
"a deep learning-based model",
"covid-19 detection",
"three prevalent medical imaging modalities",
"computed tomography",
"ct",
"chest",
"-",
"ray",
"(cxr",
"ultrasound",
"various deep transfer",
"convolutional neural network-based (cnn) models",
"assessment",
"each imaging modality",
"each imaging modality",
"this study",
"the two most accurate models",
"evaluation metrics",
"accuracy",
"loss",
"efforts",
"unnecessary weights",
"these models",
"more efficient and sparse models",
"these pruned models",
"enhanced performance",
"the models",
"rigorous training",
"testing",
"publicly available real-world medical datasets",
"these datasets",
"three distinct categories",
"the primary objective",
"an optimized and swift model",
"strategies",
"transfer learning",
"ensemble learning",
"network complexity",
"it",
"storage",
"the results",
"the trained network",
"test data",
"promising outcomes",
"the accuracy",
"these models",
"the ct scan",
"x",
"-",
"ray",
"ultrasound datasets",
"99.4%",
"98.9%",
"99.3%",
"these models’ sizes",
"51.93%",
"38.00%",
"69.07%",
"this study",
"a computer-aided-coronavirus-detection system",
"three standard medical imaging techniques",
"the intention",
"radiologists",
"the disease",
"the screening process",
"high accuracy",
"speed",
"the identification",
"covid-19 cases",
"covid-19",
"2019",
"219",
"covid-19",
"three",
"cnn",
"two",
"three",
"covid-19",
"non-covid-19",
"99.4%",
"98.9%",
"99.3%",
"51.93%",
"38.00%",
"69.07%",
"three",
"covid-19"
] |
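The COVID-19 entry prunes unnecessary weights to shrink its per-modality models. The paper's own pruning procedure is not specified in the row, so the sketch below uses PyTorch's built-in L1 magnitude pruning as a generic stand-in; the 50% ratio is arbitrary rather than one of the reported reductions.

```python
# Magnitude-pruning illustration: remove the 50% smallest-magnitude
# weights from every conv layer of a ResNet-18, then fold in the masks.
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

model = models.resnet18(weights=None)

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# Make the pruning permanent (folds the mask into the weights).
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.remove(module, "weight")

zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"global sparsity: {zeros / total:.1%}")
```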
Automated classification of polyps using deep learning architectures and few-shot learning | [
"Adrian Krenzer",
"Stefan Heil",
"Daniel Fitting",
"Safa Matti",
"Wolfram G. Zoller",
"Alexander Hann",
"Frank Puppe"
] | BackgroundColorectal cancer is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. However, not all colon polyps have the risk of becoming cancerous. Therefore, polyps are classified using different classification systems. After the classification, further treatment and procedures are based on the classification of the polyp. Nevertheless, classification is not easy. Therefore, we suggest two novel automated classifications system assisting gastroenterologists in classifying polyps based on the NICE and Paris classification.MethodsWe build two classification systems. One is classifying polyps based on their shape (Paris). The other classifies polyps based on their texture and surface patterns (NICE). A two-step process for the Paris classification is introduced: First, detecting and cropping the polyp on the image, and secondly, classifying the polyp based on the cropped area with a transformer network. For the NICE classification, we design a few-shot learning algorithm based on the Deep Metric Learning approach. The algorithm creates an embedding space for polyps, which allows classification from a few examples to account for the data scarcity of NICE annotated images in our database.ResultsFor the Paris classification, we achieve an accuracy of 89.35 %, surpassing all papers in the literature and establishing a new state-of-the-art and baseline accuracy for other publications on a public data set. For the NICE classification, we achieve a competitive accuracy of 81.13 % and demonstrate thereby the viability of the few-shot learning paradigm in polyp classification in data-scarce environments. Additionally, we show different ablations of the algorithms. Finally, we further elaborate on the explainability of the system by showing heat maps of the neural network explaining neural activations.ConclusionOverall we introduce two polyp classification systems to assist gastroenterologists. We achieve state-of-the-art performance in the Paris classification and demonstrate the viability of the few-shot learning paradigm in the NICE classification, addressing the prevalent data scarcity issues faced in medical machine learning. | 10.1186/s12880-023-01007-4 | automated classification of polyps using deep learning architectures and few-shot learning | backgroundcolorectal cancer is a leading cause of cancer-related deaths worldwide. the best method to prevent crc is a colonoscopy. however, not all colon polyps have the risk of becoming cancerous. therefore, polyps are classified using different classification systems. after the classification, further treatment and procedures are based on the classification of the polyp. nevertheless, classification is not easy. therefore, we suggest two novel automated classifications system assisting gastroenterologists in classifying polyps based on the nice and paris classification.methodswe build two classification systems. one is classifying polyps based on their shape (paris). the other classifies polyps based on their texture and surface patterns (nice). a two-step process for the paris classification is introduced: first, detecting and cropping the polyp on the image, and secondly, classifying the polyp based on the cropped area with a transformer network. for the nice classification, we design a few-shot learning algorithm based on the deep metric learning approach. 
the algorithm creates an embedding space for polyps, which allows classification from a few examples to account for the data scarcity of nice annotated images in our database.resultsfor the paris classification, we achieve an accuracy of 89.35 %, surpassing all papers in the literature and establishing a new state-of-the-art and baseline accuracy for other publications on a public data set. for the nice classification, we achieve a competitive accuracy of 81.13 % and demonstrate thereby the viability of the few-shot learning paradigm in polyp classification in data-scarce environments. additionally, we show different ablations of the algorithms. finally, we further elaborate on the explainability of the system by showing heat maps of the neural network explaining neural activations.conclusionoverall we introduce two polyp classification systems to assist gastroenterologists. we achieve state-of-the-art performance in the paris classification and demonstrate the viability of the few-shot learning paradigm in the nice classification, addressing the prevalent data scarcity issues faced in medical machine learning. | [
"backgroundcolorectal cancer",
"a leading cause",
"cancer-related deaths",
"the best method",
"crc",
"a colonoscopy",
"not all colon polyps",
"the risk",
"polyps",
"different classification systems",
"the classification",
"further treatment",
"procedures",
"the classification",
"the polyp",
"classification",
"we",
"two novel automated classifications system",
"gastroenterologists",
"polyps",
"the nice and paris classification.methodswe",
"two classification systems",
"polyps",
"their shape",
"paris",
"the other classifies polyps",
"their texture and surface patterns",
"a two-step process",
"the paris classification",
"the polyp",
"the image",
"the polyp",
"the cropped area",
"a transformer network",
"the nice classification",
"we",
"a few-shot learning algorithm",
"the deep metric learning approach",
"the algorithm",
"an embedding space",
"polyps",
"which",
"classification",
"a few examples",
"the data scarcity",
"nice annotated images",
"our database.resultsfor",
"the paris classification",
"we",
"an accuracy",
"89.35 %",
"all papers",
"the literature",
"the-art",
"baseline",
"other publications",
"a public data set",
"the nice classification",
"we",
"a competitive accuracy",
"81.13 %",
"thereby the viability",
"the few-shot learning paradigm",
"polyp classification",
"data-scarce environments",
"we",
"different ablations",
"the algorithms",
"we",
"the explainability",
"the system",
"heat maps",
"the neural network",
"neural activations.conclusionoverall",
"we",
"two polyp classification systems",
"gastroenterologists",
"we",
"the-art",
"the paris classification",
"the viability",
"the few-shot learning paradigm",
"the nice classification",
"the prevalent data scarcity issues",
"medical machine learning",
"two",
"paris",
"two",
"paris",
"two",
"paris",
"first",
"secondly",
"paris",
"89.35 %",
"81.13 %",
"two",
"paris"
] |
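For the NICE task, the polyp entry classifies from a few examples via deep metric learning. A prototype-style sketch of that idea: average the support embeddings per class and assign each query to the nearest prototype by cosine similarity. The embedding backbone is left abstract, and the random tensors only stand in for real embeddings.

```python
# Few-shot classification by nearest class prototype in embedding space.
import torch
import torch.nn.functional as F

def build_prototypes(support_emb, support_labels, num_classes):
    """support_emb: (N, D) embeddings; returns (num_classes, D) prototypes."""
    protos = torch.stack([
        support_emb[support_labels == c].mean(dim=0)
        for c in range(num_classes)
    ])
    return F.normalize(protos, dim=1)

def classify(query_emb, prototypes):
    """Assign each query to the nearest prototype by cosine similarity."""
    query = F.normalize(query_emb, dim=1)
    sims = query @ prototypes.T          # (M, num_classes)
    return sims.argmax(dim=1)

# Toy usage with random embeddings standing in for a trained backbone.
emb = torch.randn(30, 128)               # 3 classes x 10 shots
labels = torch.arange(3).repeat_interleave(10)
protos = build_prototypes(emb, labels, num_classes=3)
pred = classify(torch.randn(5, 128), protos)
print(pred)
```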
Deep learning prediction of survival in patients with heart failure using chest radiographs | [
"Han Jia",
"Shengen Liao",
"Xiaomei Zhu",
"Wangyan Liu",
"Yi Xu",
"Rongjun Ge",
"Yinsu Zhu"
] | Heart failure (HF) is associated with high rates of morbidity and mortality. The value of deep learning survival prediction models using chest radiographs in patients with heart failure is currently unclear. The aim of our study is to develop and validate a deep learning survival prediction model using chest X-ray (DLSPCXR) in patients with HF. The study retrospectively enrolled a cohort of 353 patients with HF who underwent chest X-ray (CXR) at our institution between March 2012 and March 2017. The dataset was randomly divided into training (n = 247) and validation (n = 106) datasets. Univariate and multivariate Cox analysis were conducted on the training dataset to develop clinical and imaging survival prediction models. The DLSPCXR was trained and the selected clinical parameters were incorporated into DLSPCXR to establish a new model called DLSPinteg. Discrimination performance was evaluated using the time-dependent area under the receiver operating characteristic curves (TD AUC) at 1, 3, and 5-years survival. Delong’s test was employed for the comparison of differences between two AUCs of different models. The risk-discrimination capability of the optimal model was evaluated by the Kaplan–Meier curve. In multivariable Cox analysis, older age, higher N-terminal pro-B-type natriuretic peptide (NT-ProBNP), systolic pulmonary artery pressure (sPAP) > 50 mmHg, New York Heart Association (NYHA) functional class III–IV and cardiothoracic ratio (CTR) ≥ 0.62 in CXR were independent predictors of poor prognosis in patients with HF. Based on the receiver operating characteristic (ROC) curve analysis, DLSPCXR had better performance at predicting 5-year survival than the imaging Cox model in the validation cohort (AUC: 0.757 vs. 0.561, P = 0.01). DLSPinteg as the optimal model outperforms the clinical Cox model (AUC: 0.826 vs. 0.633, P = 0.03), imaging Cox model (AUC: 0.826 vs. 0.555, P < 0.001), and DLSPCXR (AUC: 0.826 vs. 0.767, P = 0.06). Deep learning models using chest radiographs can predict survival in patients with heart failure with acceptable accuracy. | 10.1007/s10554-024-03177-w | deep learning prediction of survival in patients with heart failure using chest radiographs | heart failure (hf) is associated with high rates of morbidity and mortality. the value of deep learning survival prediction models using chest radiographs in patients with heart failure is currently unclear. the aim of our study is to develop and validate a deep learning survival prediction model using chest x-ray (dlspcxr) in patients with hf. the study retrospectively enrolled a cohort of 353 patients with hf who underwent chest x-ray (cxr) at our institution between march 2012 and march 2017. the dataset was randomly divided into training (n = 247) and validation (n = 106) datasets. univariate and multivariate cox analysis were conducted on the training dataset to develop clinical and imaging survival prediction models. the dlspcxr was trained and the selected clinical parameters were incorporated into dlspcxr to establish a new model called dlspinteg. discrimination performance was evaluated using the time-dependent area under the receiver operating characteristic curves (td auc) at 1, 3, and 5-years survival. delong’s test was employed for the comparison of differences between two aucs of different models. the risk-discrimination capability of the optimal model was evaluated by the kaplan–meier curve. 
in multivariable cox analysis, older age, higher n-terminal pro-b-type natriuretic peptide (nt-probnp), systolic pulmonary artery pressure (spap) > 50 mmhg, new york heart association (nyha) functional class iii–iv and cardiothoracic ratio (ctr) ≥ 0.62 in cxr were independent predictors of poor prognosis in patients with hf. based on the receiver operating characteristic (roc) curve analysis, dlspcxr had better performance at predicting 5-year survival than the imaging cox model in the validation cohort (auc: 0.757 vs. 0.561, p = 0.01). dlspinteg as the optimal model outperforms the clinical cox model (auc: 0.826 vs. 0.633, p = 0.03), imaging cox model (auc: 0.826 vs. 0.555, p < 0.001), and dlspcxr (auc: 0.826 vs. 0.767, p = 0.06). deep learning models using chest radiographs can predict survival in patients with heart failure with acceptable accuracy. | [
"heart failure",
"hf",
"high rates",
"morbidity",
"mortality",
"the value",
"deep learning survival prediction models",
"chest radiographs",
"patients",
"heart failure",
"the aim",
"our study",
"a deep learning survival prediction model",
"chest x",
"-",
"ray",
"dlspcxr",
"patients",
"hf",
"the study",
"a cohort",
"353 patients",
"hf",
"who",
"chest",
"x",
"-",
"ray",
"(cxr",
"our institution",
"march",
"march",
"the dataset",
"training",
"validation",
"= 106) datasets",
"univariate and multivariate cox analysis",
"the training dataset",
"clinical and imaging survival prediction models",
"the dlspcxr",
"the selected clinical parameters",
"dlspcxr",
"a new model",
"discrimination performance",
"the time-dependent area",
"the receiver operating characteristic curves",
"td auc",
"5-years survival",
"delong’s test",
"the comparison",
"differences",
"two aucs",
"different models",
"the risk-discrimination capability",
"the optimal model",
"the kaplan–meier curve",
"multivariable cox analysis",
"higher n-terminal pro-b-type natriuretic peptide",
"nt-probnp",
"systolic pulmonary artery pressure",
"spap",
"50 mmhg",
"new york heart association",
"nyha) functional class iii",
"iv",
"cardiothoracic ratio",
"ctr",
"≥",
"cxr",
"independent predictors",
"poor prognosis",
"patients",
"hf",
"the receiver operating characteristic (roc) curve analysis",
"dlspcxr",
"better performance",
"5-year survival",
"the imaging cox model",
"the validation cohort",
"auc",
"the optimal model",
"the clinical cox model",
"auc",
"cox model",
"auc",
"auc",
"deep learning models",
"chest radiographs",
"survival",
"patients",
"heart failure",
"acceptable accuracy",
"353",
"march 2012 and",
"march 2017",
"247",
"106",
"1",
"3",
"5-years",
"delong",
"between two",
"50",
"new york",
"ctr",
"≥ 0.62",
"roc",
"dlspcxr",
"5-year",
"0.757",
"0.561",
"0.01",
"0.826",
"0.633",
"0.03",
"0.826",
"0.555",
"p < 0.001",
"0.826",
"0.767",
"0.06"
] |
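The heart-failure entry benchmarks its deep survival model against Cox regression on predictors such as age, NT-proBNP, sPAP, NYHA class, and CTR. Below is a minimal Cox fit with lifelines, using the library's bundled demo dataset so the snippet runs as-is; in the paper's setting the duration and event columns would be follow-up time and death.

```python
# Cox proportional-hazards baseline of the kind the abstract compares
# against. lifelines' bundled Rossi dataset keeps the example runnable;
# swap in a table with age, NT-proBNP, sPAP, NYHA, and CTR columns for
# the clinical use case.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                      # columns: week, arrest, covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()                    # hazard ratio per covariate
```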
Leveraging distant supervision and deep learning for twitter sentiment and emotion classification | [
"Muhamet Kastrati",
"Zenun Kastrati",
"Ali Shariq Imran",
"Marenglen Biba"
] | Nowadays, various applications across industries, healthcare, and security have begun adopting automatic sentiment analysis and emotion detection in short texts, such as posts from social media. Twitter stands out as one of the most popular online social media platforms due to its easy, unique, and advanced accessibility using the API. On the other hand, supervised learning is the most widely used paradigm for tasks involving sentiment polarity and fine-grained emotion detection in short and informal texts, such as Twitter posts. However, supervised learning models are data-hungry and heavily reliant on abundant labeled data, which remains a challenge. This study aims to address this challenge by creating a large-scale real-world dataset of 17.5 million tweets. A distant supervision approach relying on emojis available in tweets is applied to label tweets corresponding to Ekman’s six basic emotions. Additionally, we conducted a series of experiments using various conventional machine learning models and deep learning, including transformer-based models, on our dataset to establish baseline results. The experimental results and an extensive ablation analysis on the dataset showed that BiLSTM with FastText and an attention mechanism outperforms other models in both classification tasks, achieving an F1-score of 70.92% for sentiment classification and 54.85% for emotion detection. | 10.1007/s10844-024-00845-0 | leveraging distant supervision and deep learning for twitter sentiment and emotion classification | nowadays, various applications across industries, healthcare, and security have begun adopting automatic sentiment analysis and emotion detection in short texts, such as posts from social media. twitter stands out as one of the most popular online social media platforms due to its easy, unique, and advanced accessibility using the api. on the other hand, supervised learning is the most widely used paradigm for tasks involving sentiment polarity and fine-grained emotion detection in short and informal texts, such as twitter posts. however, supervised learning models are data-hungry and heavily reliant on abundant labeled data, which remains a challenge. this study aims to address this challenge by creating a large-scale real-world dataset of 17.5 million tweets. a distant supervision approach relying on emojis available in tweets is applied to label tweets corresponding to ekman’s six basic emotions. additionally, we conducted a series of experiments using various conventional machine learning models and deep learning, including transformer-based models, on our dataset to establish baseline results. the experimental results and an extensive ablation analysis on the dataset showed that bilstm with fasttext and an attention mechanism outperforms other models in both classification tasks, achieving an f1-score of 70.92% for sentiment classification and 54.85% for emotion detection. | [
"various applications",
"industries",
"healthcare",
"security",
"automatic sentiment analysis",
"emotion detection",
"short texts",
"posts",
"social media",
"twitter",
"the most popular online social media platforms",
"its easy, unique, and advanced accessibility",
"the api",
"the other hand",
"supervised learning",
"the most widely used paradigm",
"tasks",
"sentiment polarity",
"fine-grained emotion detection",
"short and informal texts",
"twitter posts",
"supervised learning models",
"abundant labeled data",
"which",
"a challenge",
"this study",
"this challenge",
"a large-scale real-world dataset",
"17.5 million tweets",
"a distant supervision approach",
"emojis",
"tweets",
"label tweets",
"ekman’s six basic emotions",
"we",
"a series",
"experiments",
"various conventional machine learning models",
"deep learning",
"transformer-based models",
"our dataset",
"baseline results",
"the experimental results",
"an extensive ablation analysis",
"the dataset",
"that bilstm",
"fasttext",
"an attention mechanism",
"other models",
"both classification tasks",
"an f1-score",
"70.92%",
"sentiment classification",
"54.85%",
"emotion detection",
"17.5 million",
"six",
"70.92%",
"54.85%"
] |
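The distant-supervision step in the Twitter entry labels tweets from their emojis according to Ekman's six basic emotions. A toy version follows; the emoji-to-emotion map is a small illustrative subset, not the paper's lexicon, and tweets mixing emotions are dropped.

```python
# Toy distant supervision: label a tweet by the emotion of its emojis.
EMOJI_TO_EMOTION = {
    "😀": "joy", "😂": "joy",
    "😢": "sadness", "😭": "sadness",
    "😡": "anger",
    "😱": "fear",
    "😲": "surprise",
    "🤢": "disgust",
}

def distant_label(tweet):
    """Return an emotion label if exactly one emotion's emojis occur."""
    found = {EMOJI_TO_EMOTION[ch] for ch in tweet if ch in EMOJI_TO_EMOTION}
    return found.pop() if len(found) == 1 else None  # ambiguous -> drop

tweets = ["great game last night 😂", "so scared 😱", "😂 or 😢, who knows"]
labeled = [(t, distant_label(t)) for t in tweets]
print(labeled)  # ambiguous or emoji-free tweets come back as None
```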
Octorotor flight control system design with stochastic optimal tuning, deep learning and differential morphing | [
"Oguz Kose"
] | In this paper, simultaneous longitudinal and lateral flight control is investigated for an octorotor by using stochastic optimal tuning and deep learning under differential morphing. Octorotor models for differential morphing were drawn in SOLIDWORKS drawing program. Arm lengths are randomly estimated in the algorithm. Moments of inertia changing according to morphing ratios are estimated with deep neural network. In addition, the proportional–integral–derivative controller coefficients required for both longitudinal and lateral flight according to the morphing ratios are estimated by simultaneous perturbation stochastic approximation. Considering the design performance criteria, 49.95% improvement was achieved in the total cost. The estimation of unknown parameters by optimization method and deep learning was tested in simulations, and the octorotor successfully followed the given reference angle. | 10.1007/s40430-024-04972-1 | octorotor flight control system design with stochastic optimal tuning, deep learning and differential morphing | in this paper, simultaneous longitudinal and lateral flight control is investigated for an octorotor by using stochastic optimal tuning and deep learning under differential morphing. octorotor models for differential morphing were drawn in solidworks drawing program. arm lengths are randomly estimated in the algorithm. moments of inertia changing according to morphing ratios are estimated with deep neural network. in addition, the proportional–integral–derivative controller coefficients required for both longitudinal and lateral flight according to the morphing ratios are estimated by simultaneous perturbation stochastic approximation. considering the design performance criteria, 49.95% improvement was achieved in the total cost. the estimation of unknown parameters by optimization method and deep learning was tested in simulations, and the octorotor successfully followed the given reference angle. | [
"this paper",
"simultaneous longitudinal and lateral flight control",
"an octorotor",
"stochastic optimal tuning",
"deep learning",
"differential morphing",
"octorotor models",
"differential morphing",
"solidworks drawing program",
"arm lengths",
"the algorithm",
"moments",
"inertia",
"morphing ratios",
"deep neural network",
"addition",
"the proportional–integral–derivative controller coefficients",
"both longitudinal and lateral flight",
"the morphing ratios",
"simultaneous perturbation stochastic approximation",
"the design performance criteria",
"49.95% improvement",
"the total cost",
"the estimation",
"unknown parameters",
"optimization method",
"deep learning",
"simulations",
"the octorotor",
"the given reference angle",
"49.95%"
] |
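The octorotor entry tunes its PID gains with simultaneous perturbation stochastic approximation (SPSA), which estimates the gradient from just two cost evaluations per step. A minimal loop follows, with a toy quadratic standing in for the closed-loop flight cost; every numeric value is illustrative.

```python
# Minimal SPSA loop for tuning three PID gains against a stub cost.
import numpy as np

rng = np.random.default_rng(1)

def cost(theta):
    """Stand-in for the closed-loop flight cost J(Kp, Ki, Kd)."""
    target = np.array([2.0, 0.5, 0.1])
    return float(np.sum((theta - target) ** 2))

theta = np.array([1.0, 1.0, 1.0])         # initial [Kp, Ki, Kd]
for k in range(1, 201):
    a_k = 0.1 / k ** 0.602                 # standard SPSA gain schedules
    c_k = 0.1 / k ** 0.101
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    # Two cost evaluations approximate the full gradient.
    g_hat = (cost(theta + c_k * delta) - cost(theta - c_k * delta)) / (2 * c_k * delta)
    theta -= a_k * g_hat

print("tuned gains:", np.round(theta, 3))  # approaches the target gains
```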
Advances in Deep Learning Models for Resolving Medical Image Segmentation Data Scarcity Problem: A Topical Review | [
"Ashwini Kumar Upadhyay",
"Ashish Kumar Bhandari"
] | Deep learning (DL) methods have recently become state-of-the-art in most automated medical image segmentation tasks. Some of the biggest challenges in this field are related to datasets. This paper aims to review the recent developments in deep learning architectures and approaches that aim to resolve dataset-related challenges faced in DL-based medical image segmentation. We have studied architectural developments in deep learning models and their recent applications in medical image segmentation tasks. Popular U-Net-based models are tested for segmentation performance comparison on a Coronavirus disease 2019 (Covid-19) lung infection Computed Tomography segmentation dataset. The comparison results prove the effectiveness of the original U-Net architecture, even in present-day medical image segmentation tasks. To overcome major dataset-related challenges such as labeled data scarcity, high annotation time and cost, distribution shifts, low-quality of images, and generalizability issues; we have studied recent developments in deep learning approaches like active learning, data augmentation, domain adaptation, and self- and semi-supervised learning, that aim to provide innovative solutions for those challenges. With rapid developments in the field, approaches like data augmentation, domain adaptation, and semi-supervised learning have become some of the hot areas of research, aiming for more efficient use of datasets, better segmentation prediction, and model generalizability. | 10.1007/s11831-023-10028-9 | advances in deep learning models for resolving medical image segmentation data scarcity problem: a topical review | deep learning (dl) methods have recently become state-of-the-art in most automated medical image segmentation tasks. some of the biggest challenges in this field are related to datasets. this paper aims to review the recent developments in deep learning architectures and approaches that aim to resolve dataset-related challenges faced in dl-based medical image segmentation. we have studied architectural developments in deep learning models and their recent applications in medical image segmentation tasks. popular u-net-based models are tested for segmentation performance comparison on a coronavirus disease 2019 (covid-19) lung infection computed tomography segmentation dataset. the comparison results prove the effectiveness of the original u-net architecture, even in present-day medical image segmentation tasks. to overcome major dataset-related challenges such as labeled data scarcity, high annotation time and cost, distribution shifts, low-quality of images, and generalizability issues; we have studied recent developments in deep learning approaches like active learning, data augmentation, domain adaptation, and self- and semi-supervised learning, that aim to provide innovative solutions for those challenges. with rapid developments in the field, approaches like data augmentation, domain adaptation, and semi-supervised learning have become some of the hot areas of research, aiming for more efficient use of datasets, better segmentation prediction, and model generalizability. | [
"deep learning (dl) methods",
"state",
"the-art",
"most automated medical image segmentation tasks",
"some",
"the biggest challenges",
"this field",
"datasets",
"this paper",
"the recent developments",
"deep learning architectures",
"approaches",
"that",
"dataset-related challenges",
"dl-based medical image segmentation",
"we",
"architectural developments",
"deep learning models",
"their recent applications",
"medical image segmentation tasks",
"popular u-net-based models",
"segmentation performance comparison",
"a coronavirus disease",
"covid-19) lung infection",
"tomography segmentation",
"the comparison results",
"the effectiveness",
"the original u-net architecture",
"present-day medical image segmentation tasks",
"major dataset-related challenges",
"labeled data scarcity",
"high annotation time",
"cost",
"distribution shifts",
"low-quality",
"images",
"generalizability issues",
"we",
"recent developments",
"deep learning approaches",
"active learning",
"data augmentation",
"domain adaptation",
"self-",
"semi-supervised learning",
"that",
"innovative solutions",
"those challenges",
"rapid developments",
"the field",
"approaches",
"data augmentation",
"domain adaptation",
"semi-supervised learning",
"some",
"the hot areas",
"research",
"more efficient use",
"datasets",
"better segmentation prediction",
"model generalizability",
"2019",
"covid-19",
"present-day"
] |
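The segmentation review finds the original U-Net still competitive on its Covid-19 CT benchmark. For reference, here is a compact two-level U-Net in Keras; depth, filter counts, and input size are illustrative rather than taken from any surveyed model.

```python
# Compact two-level U-Net with the defining skip connections.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = layers.Input(shape=(128, 128, 1))   # e.g. a CT slice

# Encoder
c1 = conv_block(inputs, 16)
p1 = layers.MaxPooling2D()(c1)
c2 = conv_block(p1, 32)
p2 = layers.MaxPooling2D()(c2)

# Bottleneck
b = conv_block(p2, 64)

# Decoder with skip connections from the encoder
u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
c3 = conv_block(layers.Concatenate()([u2, c2]), 32)
u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
c4 = conv_block(layers.Concatenate()([u1, c1]), 16)

outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # binary mask

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```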
Advancing differential diagnosis: a comprehensive review of deep learning approaches for differentiating tuberculosis, pneumonia, and COVID-19 | [
"Kajal Kansal",
"Tej Bahadur Chandra",
"Akansha Singh"
] | In the realm of medical diagnostics, particularly in differential diagnosis, where differentiating between illnesses or ailments with comparable symptoms is essential, deep learning has gained importance. Recent developments in deep learning have demonstrated considerable promise for revolutionizing medical diagnostics by using the ability of artificial intelligence (AI) to accurately interpret radiological images. We examine the most cutting-edge deep learning techniques currently being utilized for the differential diagnosis of tuberculosis, pneumonia, and COVID-19 in this in-depth review. The study presents an in-depth critical review of several SOTA (state-of-the-art) studies used for differential diagnosis of different respiratory abnormalities like TB, Pneumonia, and COVID-19. In addition, an overview of various approaches, datasets employed in each method, various diagnosis tests, used assessment measures, and obtained performance is summarized and comprehensively compared to assist future research. We suggest a pathway for future research and development of deep learning solutions for differential diagnosis by critically analyzing the current literature and outlining the limitations and potential in this sector. | 10.1007/s11042-024-19350-1 | advancing differential diagnosis: a comprehensive review of deep learning approaches for differentiating tuberculosis, pneumonia, and covid-19 | in the realm of medical diagnostics, particularly in differential diagnosis, where differentiating between illnesses or ailments with comparable symptoms is essential, deep learning has gained importance. recent developments in deep learning have demonstrated considerable promise for revolutionizing medical diagnostics by using the ability of artificial intelligence (ai) to accurately interpret radiological images. we examine the most cutting-edge deep learning techniques currently being utilized for the differential diagnosis of tuberculosis, pneumonia, and covid-19 in this in-depth review. the study presents an in-depth critical review of several sota (state-of-the-art) studies used for differential diagnosis of different respiratory abnormalities like tb, pneumonia, and covid-19. in addition, an overview of various approaches, datasets employed in each method, various diagnosis tests, used assessment measures, and obtained performance is summarized and comprehensively compared to assist future research. we suggest a pathway for future research and development of deep learning solutions for differential diagnosis by critically analyzing the current literature and outlining the limitations and potential in this sector. | [
"the realm",
"medical diagnostics",
"differential diagnosis",
"illnesses",
"ailments",
"comparable symptoms",
"deep learning",
"importance",
"recent developments",
"deep learning",
"considerable promise",
"medical diagnostics",
"the ability",
"artificial intelligence",
"radiological images",
"we",
"the most cutting-edge deep learning techniques",
"the differential diagnosis",
"tuberculosis",
"pneumonia",
"covid-19",
"-depth",
"the study",
"an in-depth critical review",
"the-art",
"differential diagnosis",
"different respiratory abnormalities",
"tb",
"pneumonia",
"covid-19",
"addition",
"an overview",
"various approaches",
"datasets",
"each method",
"various diagnosis tests",
"assessment measures",
"performance",
"future research",
"we",
"a pathway",
"future research",
"development",
"deep learning solutions",
"differential diagnosis",
"the current literature",
"the limitations",
"potential",
"this sector",
"covid-19",
"covid-19"
] |
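The differential-diagnosis review compares studies partly by their assessment measures. As a small aid, the snippet below computes the usual per-class metrics for a three-way TB / pneumonia / COVID-19 classifier; labels and predictions are dummies.

```python
# Per-class accuracy-style metrics for a three-way respiratory classifier.
from sklearn.metrics import classification_report, confusion_matrix

CLASSES = ["tb", "pneumonia", "covid19"]
y_true = ["tb", "covid19", "pneumonia", "tb", "covid19", "pneumonia"]
y_pred = ["tb", "covid19", "covid19", "tb", "covid19", "pneumonia"]

print(confusion_matrix(y_true, y_pred, labels=CLASSES))
print(classification_report(y_true, y_pred, labels=CLASSES, digits=3))
```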
An efficient intrusive deep reinforcement learning framework for OpenFOAM | [
"Saeed Salehi"
] | Recent advancements in artificial intelligence and deep learning offer tremendous opportunities to tackle high-dimensional and challenging problems. Particularly, deep reinforcement learning (DRL) has been shown to be able to address optimal decision-making problems and control complex dynamical systems. DRL has received increased attention in the realm of computational fluid dynamics (CFD) due to its demonstrated ability to optimize complex flow control strategies. However, DRL algorithms often suffer from low sampling efficiency and require numerous interactions between the agent and the environment, necessitating frequent data exchanges. One significant bottleneck in coupled DRL–CFD algorithms is the extensive data communication between DRL and CFD codes. Non-intrusive algorithms where the DRL agent treats the CFD environment as a black box may come with the deficiency of increased computational cost due to overhead associated with the information exchange between the two DRL and CFD modules. In this article, a TensorFlow-based intrusive DRL–CFD framework is introduced where the agent model is integrated within the open-source CFD solver OpenFOAM. The integration eliminates the need for any external information exchange during DRL episodes. The framework is parallelized using the message passing interface to manage parallel environments for computationally intensive CFD cases through distributed computing. The performance and effectiveness of the framework are verified by controlling the vortex shedding behind two and three-dimensional cylinders, achieved as a result of minimizing drag and lift forces through an active flow control mechanism. The simulation results indicate that the trained controller can stabilize the flow and effectively mitigate the vortex shedding. | 10.1007/s11012-024-01830-1 | an efficient intrusive deep reinforcement learning framework for openfoam | recent advancements in artificial intelligence and deep learning offer tremendous opportunities to tackle high-dimensional and challenging problems. particularly, deep reinforcement learning (drl) has been shown to be able to address optimal decision-making problems and control complex dynamical systems. drl has received increased attention in the realm of computational fluid dynamics (cfd) due to its demonstrated ability to optimize complex flow control strategies. however, drl algorithms often suffer from low sampling efficiency and require numerous interactions between the agent and the environment, necessitating frequent data exchanges. one significant bottleneck in coupled drl–cfd algorithms is the extensive data communication between drl and cfd codes. non-intrusive algorithms where the drl agent treats the cfd environment as a black box may come with the deficiency of increased computational cost due to overhead associated with the information exchange between the two drl and cfd modules. in this article, a tensorflow-based intrusive drl–cfd framework is introduced where the agent model is integrated within the open-source cfd solver openfoam. the integration eliminates the need for any external information exchange during drl episodes. the framework is parallelized using the message passing interface to manage parallel environments for computationally intensive cfd cases through distributed computing. 
the performance and effectiveness of the framework are verified by controlling the vortex shedding behind two and three-dimensional cylinders, achieved as a result of minimizing drag and lift forces through an active flow control mechanism. the simulation results indicate that the trained controller can stabilize the flow and effectively mitigate the vortex shedding. | [
"recent advancements",
"artificial intelligence",
"deep learning",
"tremendous opportunities",
"high-dimensional and challenging problems",
"deep reinforcement learning",
"drl",
"optimal decision-making problems",
"complex dynamical systems",
"drl",
"increased attention",
"the realm",
"computational fluid dynamics",
"cfd",
"its demonstrated ability",
"complex flow control strategies",
"drl algorithms",
"low sampling efficiency",
"numerous interactions",
"the agent",
"the environment",
"frequent data exchanges",
"one significant bottleneck",
"coupled drl",
"cfd algorithms",
"the extensive data communication",
"drl and cfd codes",
"non-intrusive algorithms",
"the drl agent",
"the cfd environment",
"a black box",
"the deficiency",
"increased computational cost",
"overhead",
"the information exchange",
"the two drl and cfd modules",
"this article",
"a tensorflow-based intrusive drl",
"cfd framework",
"the agent model",
"the open-source cfd",
"solver openfoam",
"the integration",
"the need",
"any external information exchange",
"drl episodes",
"the framework",
"the message",
"interface",
"parallel environments",
"computationally intensive cfd cases",
"distributed computing",
"the performance",
"effectiveness",
"the framework",
"the vortex",
"two and three-dimensional cylinders",
"a result",
"drag",
"forces",
"an active flow control mechanism",
"the simulation results",
"the trained controller",
"the flow",
"the vortex shedding",
"drl",
"one",
"drl",
"two",
"cfd modules",
"two"
] |
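The DRL-CFD entry embeds the agent inside OpenFOAM, a C++-level coupling that cannot be sketched from the abstract alone. What can be shown is the generic agent-environment loop such frameworks internalize: below, a REINFORCE update against a stub flow-control environment, with every component a placeholder.

```python
# Generic DRL interaction loop (REINFORCE) against a toy environment.
import torch
import torch.nn as nn

class StubFlowEnv:
    """Toy stand-in for a CFD flow-control environment."""
    def reset(self):
        self.state = torch.zeros(4)
        return self.state
    def step(self, action):
        # Reward is higher the closer the (fake) drag gets to zero.
        self.state = self.state + 0.1 * (action - 0.5)
        reward = -float(self.state.abs().sum())
        return self.state, reward

policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
env = StubFlowEnv()

for episode in range(50):
    state, log_probs, rewards = env.reset(), [], []
    for _ in range(20):
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward = env.step(action.float())
        rewards.append(reward)
    # REINFORCE: scale log-probs by the (undiscounted) episode return.
    ret = sum(rewards)
    loss = -torch.stack(log_probs).sum() * ret
    opt.zero_grad()
    loss.backward()
    opt.step()
```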
Heart failure classification using deep learning to extract spatiotemporal features from ECG | [
"Chang-Jiang Zhang",
"Yuan-Lu",
"Fu-Qin Tang",
"Hai-Peng Cai",
"Yin-Fen Qian",
"Chao-Wang"
] | BackgroundHeart failure is a syndrome with complex clinical manifestations. Due to increasing population aging, heart failure has become a major medical problem worldwide. In this study, we used the MIMIC-III public database to extract the temporal and spatial characteristics of electrocardiogram (ECG) signals from patients with heart failure.MethodsWe developed a NYHA functional classification model for heart failure based on a deep learning method. We introduced an integrating attention mechanism based on the CNN-LSTM-SE model, segmenting the ECG signal into 2 to 20 s long segments. Ablation experiments showed that the 12 s ECG signal segments could be used with the proposed deep learning model for superior classification of heart failure.ResultsThe accuracy, positive predictive value, sensitivity, and specificity of the NYHA functional classification method were 99.09, 98.9855, 99.033, and 99.649%, respectively.ConclusionsThe comprehensive performance of this model exceeds similar methods and can be used to assist in clinical medical diagnoses. | 10.1186/s12911-024-02415-4 | heart failure classification using deep learning to extract spatiotemporal features from ecg | backgroundheart failure is a syndrome with complex clinical manifestations. due to increasing population aging, heart failure has become a major medical problem worldwide. in this study, we used the mimic-iii public database to extract the temporal and spatial characteristics of electrocardiogram (ecg) signals from patients with heart failure.methodswe developed a nyha functional classification model for heart failure based on a deep learning method. we introduced an integrating attention mechanism based on the cnn-lstm-se model, segmenting the ecg signal into 2 to 20 s long segments. ablation experiments showed that the 12 s ecg signal segments could be used with the proposed deep learning model for superior classification of heart failure.resultsthe accuracy, positive predictive value, sensitivity, and specificity of the nyha functional classification method were 99.09, 98.9855, 99.033, and 99.649%, respectively.conclusionsthe comprehensive performance of this model exceeds similar methods and can be used to assist in clinical medical diagnoses. | [
"backgroundheart failure",
"a syndrome",
"complex clinical manifestations",
"increasing population aging",
"heart failure",
"a major medical problem",
"this study",
"we",
"the mimic-iii public database",
"the temporal and spatial characteristics",
"electrocardiogram",
"ecg",
"patients",
"a nyha functional classification model",
"heart failure",
"a deep learning method",
"we",
"an integrating attention mechanism",
"the cnn-lstm-se model",
"the ecg signal",
"2 to 20 s long segments",
"ablation experiments",
"the 12 s ecg signal segments",
"the proposed deep learning model",
"superior classification",
"heart",
"failure.resultsthe accuracy",
"positive predictive value",
"sensitivity",
"specificity",
"the nyha functional classification method",
"99.649%",
"respectively.conclusionsthe comprehensive performance",
"this model",
"similar methods",
"clinical medical diagnoses",
"cnn",
"2",
"12",
"99.09",
"98.9855",
"99.033",
"99.649%"
] |
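The ECG entry classifies NYHA functional class from 12 s segments with a CNN-LSTM-SE model. A sketch of that architecture family follows, assuming single-lead input at a hypothetical 250 Hz sampling rate and illustrative layer sizes.

```python
# CNN-LSTM with a squeeze-and-excitation (SE) block for 12 s ECG segments.
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x, ratio=8):
    """Squeeze-and-excitation: reweight channels by global context."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)               # squeeze
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)  # excite
    return layers.Multiply()([x, layers.Reshape((1, channels))(s)])

# 12 s of single-lead ECG at an assumed 250 Hz sampling rate.
inputs = layers.Input(shape=(12 * 250, 1))
x = layers.Conv1D(32, 7, strides=2, activation="relu")(inputs)
x = layers.Conv1D(64, 5, strides=2, activation="relu")(x)
x = se_block(x)
x = layers.LSTM(64)(x)
outputs = layers.Dense(4, activation="softmax")(x)   # NYHA classes I-IV

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```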