title (string, 31–206 chars) | authors (sequence, 1–85 items) | abstract (string, 428–3.21k chars) | doi (string, 21–31 chars) | cleaned_title (string, 31–206 chars) | cleaned_abstract (string, 428–3.21k chars) | key_phrases (sequence, 19–150 items)
---|---|---|---|---|---|---
Deep learning for survival analysis: a review | [
"Simon Wiegrebe",
"Philipp Kopper",
"Raphael Sonabend",
"Bernd Bischl",
"Andreas Bender"
] | The influx of deep learning (DL) techniques into the field of survival analysis in recent years has led to substantial methodological progress; for instance, learning from unstructured or high-dimensional data such as images, text or omics data. In this work, we conduct a comprehensive systematic review of DL-based methods for time-to-event analysis, characterizing them according to both survival- and DL-related attributes. In summary, the reviewed methods often address only a small subset of tasks relevant to time-to-event data—e.g., single-risk right-censored data—and neglect to incorporate more complex settings. Our findings are summarized in an editable, open-source, interactive table: https://survival-org.github.io/DL4Survival. As this research area is advancing rapidly, we encourage community contribution in order to keep this database up to date. | 10.1007/s10462-023-10681-3 | deep learning for survival analysis: a review | the influx of deep learning (dl) techniques into the field of survival analysis in recent years has led to substantial methodological progress; for instance, learning from unstructured or high-dimensional data such as images, text or omics data. in this work, we conduct a comprehensive systematic review of dl-based methods for time-to-event analysis, characterizing them according to both survival- and dl-related attributes. in summary, the reviewed methods often address only a small subset of tasks relevant to time-to-event data—e.g., single-risk right-censored data—and neglect to incorporate more complex settings. our findings are summarized in an editable, open-source, interactive table: https://survival-org.github.io/dl4survival. as this research area is advancing rapidly, we encourage community contribution in order to keep this database up to date. | [
"the influx",
"deep learning",
"(dl) techniques",
"the field",
"survival analysis",
"recent years",
"substantial methodological progress",
"instance",
"unstructured or high-dimensional data",
"images",
"text",
"omics",
"data",
"this work",
"we",
"a comprehensive systematic review",
"dl-based methods",
"event",
"them",
"both survival- and dl-related attributes",
"summary",
"the reviewed methods",
"only a small subset",
"tasks",
"event",
"e.g., single-risk right-censored data",
"neglect",
"more complex settings",
"our findings",
"an editable, open-source, interactive table",
"https://survival-org.github.io/dl4survival",
"this research area",
"we",
"community contribution",
"order",
"this database",
"date",
"recent years"
] |
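Many of the surveyed single-risk, right-censored methods reduce to training a network with the negative Cox partial log-likelihood (the DeepSurv family). Below is a minimal PyTorch sketch of that loss, assuming a toy MLP and synthetic data; none of this is code from the reviewed methods.

```python
import torch

def neg_cox_partial_log_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood for single-risk, right-censored
    data (no tie handling). risk: predicted log-hazard per subject."""
    order = torch.argsort(time, descending=True)    # descending time: each prefix is a risk set
    risk, event = risk[order], event[order]
    log_risk_set = torch.logcumsumexp(risk, dim=0)  # log sum exp over subjects still at risk
    # only uncensored subjects (event == 1) contribute likelihood terms
    return -((risk - log_risk_set) * event).sum() / event.sum().clamp(min=1)

net = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
x = torch.randn(64, 10)                  # covariates (synthetic)
time = torch.rand(64)                    # observed follow-up times
event = (torch.rand(64) < 0.7).float()   # 1 = event observed, 0 = right-censored
loss = neg_cox_partial_log_likelihood(net(x).squeeze(-1), time, event)
loss.backward()
```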
A review on emotion detection by using deep learning techniques | [
"Tulika Chutia",
"Nomi Baruah"
] | Along with the growth of the Internet, with its numerous potential applications across diverse fields, artificial intelligence (AI) and sentiment analysis (SA) have become significant and popular research areas. Additionally, AI has been a key technology contributing to the Fourth Industrial Revolution (IR 4.0). The subset of AI known as emotion recognition systems facilitates communication between IR 4.0 and IR 5.0. Nowadays, users of social media, digital marketing, and e-commerce sites are increasing day by day, resulting in massive amounts of unstructured data. Medical, marketing, public safety, education, human resources, business, and other industries also use emotion recognition systems widely. Hence, these sources provide a large amount of textual data from which emotions can be extracted. The paper presents a systematic literature review of the existing literature on text-based emotion detection published between 2013 and 2023. This review scrupulously summarizes 330 research papers from different conferences, journals, workshops, and dissertations. This paper explores different approaches, methods, deep learning models, key aspects, descriptions of datasets, evaluation techniques, future prospects of deep learning, and challenges in existing studies, and presents limitations and practical implications. | 10.1007/s10462-024-10831-1 | a review on emotion detection by using deep learning techniques | along with the growth of the internet, with its numerous potential applications across diverse fields, artificial intelligence (ai) and sentiment analysis (sa) have become significant and popular research areas. additionally, ai has been a key technology contributing to the fourth industrial revolution (ir 4.0). the subset of ai known as emotion recognition systems facilitates communication between ir 4.0 and ir 5.0. nowadays, users of social media, digital marketing, and e-commerce sites are increasing day by day, resulting in massive amounts of unstructured data. medical, marketing, public safety, education, human resources, business, and other industries also use emotion recognition systems widely. hence, these sources provide a large amount of textual data from which emotions can be extracted. the paper presents a systematic literature review of the existing literature on text-based emotion detection published between 2013 and 2023. this review scrupulously summarizes 330 research papers from different conferences, journals, workshops, and dissertations. this paper explores different approaches, methods, deep learning models, key aspects, descriptions of datasets, evaluation techniques, future prospects of deep learning, and challenges in existing studies, and presents limitations and practical implications. | [
"the growth",
"internet",
"its numerous potential applications",
"diverse fields",
"artificial intelligence",
"analysis",
"sa",
"significant and popular research areas",
"it",
"a key technology",
"that",
"the fourth industrial revolution",
"the subset",
"emotion recognition systems",
"communication",
"users",
"social media",
"digital marketing",
"e-commerce sites",
"day",
"massive amounts",
"unstructured data",
"marketing",
"public safety",
"education",
"human resources",
"business",
"other industries",
"the emotion recognition system",
"it",
"a large amount",
"textual data",
"the emotions",
"them",
"the paper",
"a systematic literature review",
"the existing literature",
"text-based emotion detection",
"this review",
"330 research papers",
"different conferences",
"journals",
"workshops",
"dissertations",
"this paper",
"different approaches",
"methods",
"different deep learning models",
"key aspects",
"description",
"datasets",
"evaluation techniques",
"future prospects",
"deep learning",
"challenges",
"existing studies",
"limitations",
"practical implications",
"fourth",
"4.0",
"4.0",
"5.0",
"between 2013 to 2023",
"330"
] |
Theoretical Assessment for Weather Nowcasting Using Deep Learning Methods | [
"Abhay B. Upadhyay",
"Saurin R. Shah",
"Rajesh A. Thakkar"
] | Weather is influenced by various factors such as temperature, pressure, air movement, moisture/water vapor, and the Earth’s rotating motion. Accurate weather forecasting at a high geographical resolution is a complex and computationally expensive task. This study employs a nowcasting approach using meteorological radar images. Building upon the principles of unsupervised representation in deep learning, we delve into the emerging field of next-frame prediction in computer vision. This research focuses on predicting future images based on prior image data, with applications ranging from robot decision-making to autonomous driving. We present the latest advancements in next-frame prediction networks, categorizing them into two approaches: machine learners and deep learners. We discuss the merits and limitations of each approach, comparing them based on various parameters. Finally, we outline potential directions for future research in this field, aiming to make weather forecasting more precise and accessible. | 10.1007/s11831-024-10096-5 | theoretical assessment for weather nowcasting using deep learning methods | weather is influenced by various factors such as temperature, pressure, air movement, moisture/water vapor, and the earth’s rotating motion. accurate weather forecasting at a high geographical resolution is a complex and computationally expensive task. this study employs a nowcasting approach using meteorological radar images. building upon the principles of unsupervised representation in deep learning, we delve into the emerging field of next-frame prediction in computer vision. this research focuses on predicting future images based on prior image data, with applications ranging from robot decision-making to autonomous driving. we present the latest advancements in next-frame prediction networks, categorizing them into two approaches: machine learners and deep learners. we discuss the merits and limitations of each approach, comparing them based on various parameters. finally, we outline potential directions for future research in this field, aiming to make weather forecasting more precise and accessible. | [
"weather",
"various factors",
"temperature",
"pressure",
"air movement",
"moisture/water vapor",
"the earth",
"motion",
"accurate weather forecasting",
"a high geographical resolution",
"a complex and computationally expensive task",
"this study",
"a nowcasting approach",
"meteorological radar images",
"the principles",
"unsupervised representation",
"deep learning",
"we",
"the emerging field",
"next-frame prediction",
"computer vision",
"this research",
"future images",
"prior image data",
"applications",
"robot decision-making",
"autonomous driving",
"we",
"the latest advancements",
"next-frame prediction networks",
"them",
"two approaches",
"machine learners",
"deep learners",
"we",
"the merits",
"limitations",
"each approach",
"them",
"various parameters",
"we",
"potential directions",
"future research",
"this field",
"earth",
"two"
] |
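For readers unfamiliar with the next-frame-prediction setup this abstract describes, here is a deliberately small sketch of the input/output contract: k past radar frames in, one future frame out, trained with a pixel loss. The plain CNN is chosen for brevity; the recurrent deep learners the paper surveys (ConvLSTM-style models) share the same interface. All shapes and data are synthetic assumptions.

```python
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Predict frame t+1 from the k previous radar frames stacked as channels."""
    def __init__(self, k_frames=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k_frames, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # reflectivity scaled to [0, 1]
        )

    def forward(self, frames):            # frames: (batch, k, H, W)
        return self.net(frames)           # (batch, 1, H, W)

model = NextFramePredictor()
past = torch.rand(8, 4, 64, 64)           # synthetic radar sequence
target = torch.rand(8, 1, 64, 64)         # true next frame
loss = nn.functional.mse_loss(model(past), target)
loss.backward()
```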
Deep Learning for Stock Market Prediction Using Sentiment and Technical Analysis | [
"Georgios-Markos Chatziloizos",
"Dimitrios Gunopulos",
"Konstantinos Konstantinou"
] | Machine learning and deep learning techniques are applied by researchers with a background in both economics and computer science to predict stock prices and trends. These techniques are particularly attractive as an alternative to existing models and methodologies because of their ability to extract abstract features from data. Most existing research approaches are based on using either numerical/economical data or textual/sentimental data. In this article, we use cutting-edge deep learning/machine learning approaches on both numerical/economical data and textual/sentimental data in order not only to predict stock market prices and trends based on combined data but also to understand how a stock's Technical Analysis can be strengthened by using Sentiment Analysis. Using the four tickers AAPL, GOOG, NVDA and S&P 500 Information Technology, we collected historical financial data and historical textual data, and we used each type of data individually and in combination to determine in which case the results were more accurate and more profitable. We describe in detail how we analyzed each type of data, and how we used it to come up with our results. | 10.1007/s42979-024-02651-5 | deep learning for stock market prediction using sentiment and technical analysis | machine learning and deep learning techniques are applied by researchers with a background in both economics and computer science to predict stock prices and trends. these techniques are particularly attractive as an alternative to existing models and methodologies because of their ability to extract abstract features from data. most existing research approaches are based on using either numerical/economical data or textual/sentimental data. in this article, we use cutting-edge deep learning/machine learning approaches on both numerical/economical data and textual/sentimental data in order not only to predict stock market prices and trends based on combined data but also to understand how a stock's technical analysis can be strengthened by using sentiment analysis. using the four tickers aapl, goog, nvda and s&p 500 information technology, we collected historical financial data and historical textual data, and we used each type of data individually and in combination to determine in which case the results were more accurate and more profitable. we describe in detail how we analyzed each type of data, and how we used it to come up with our results. | [
"machine learning",
"deep learning techniques",
"researchers",
"a background",
"both economics",
"computer science",
"stock prices",
"trends",
"these techniques",
"an alternative",
"existing models",
"methodologies",
"their ability",
"abstract features",
"data",
"most existing research approaches",
"either numerical/economical data",
"textual/sentimental data",
"this article",
"we",
"approaches",
"both numerical/economical data",
"textual/sentimental data",
"order",
"stock market prices",
"trends",
"combined data",
"a stock's technical analysis",
"sentiment analysis",
"the four tickers",
"aapl",
"goog",
"nvda",
"s&p",
"information technology",
"we",
"historical financial data",
"historical textual data",
"we",
"each type",
"data",
"unison",
"which case",
"the results",
"we",
"detail",
"we",
"each type",
"data",
"we",
"it",
"our results",
"four"
] |
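As a concrete illustration of fusing the two data types this paper combines, the sketch below builds one feature matrix from technical indicators and a daily sentiment score. The specific indicators, window lengths, and synthetic series are our assumptions, not the paper's feature set.

```python
import pandas as pd
import numpy as np

# synthetic stand-ins for the two data sources being combined
prices = pd.Series(np.cumsum(np.random.randn(250)) + 100)   # daily close prices
sentiment = pd.Series(np.random.uniform(-1, 1, 250))        # daily mean headline polarity

features = pd.DataFrame({
    "return_1d": prices.pct_change(),                  # technical: daily return
    "sma_ratio": prices / prices.rolling(20).mean(),   # technical: price vs. 20-day SMA
    "volatility": prices.pct_change().rolling(10).std(),
    "sentiment": sentiment,                            # sentimental feature
    "sentiment_3d": sentiment.rolling(3).mean(),       # smoothed sentiment
}).dropna()

# label: does the price rise over the next day? (last row has no next day and labels False)
labels = (prices.shift(-1) > prices).loc[features.index].astype(int)
```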
IoT-enabled healthcare transformation leveraging deep learning for advanced patient monitoring and diagnosis | [
"Nawaf Alharbe",
"Manal Almalki"
] | Deep learning and the Internet of Things (IoT) are revolutionizing the healthcare industry. This study explores the potential commercial transformation resulting from IoT-enabled healthcare systems that use deep learning for patient monitoring and diagnosis. Wearables, smart sensors, and internet-connected medical devices allow doctors to monitor patients' vital signs, activities, and physiological traits in real time. However, these devices generate vast and complex data, making analysis and diagnosis challenging. Deep learning models are well-suited to analyze this growing volume of medical data. Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks can automatically recognize complex patterns and relationships in sensor data, electronic health records, and patient-reported information. This capability aids clinical professionals in diagnosing illnesses, identifying warning signs, and tailoring treatments. This paper describes a CNN- and LSTM-based IoT-enabled healthcare system that performs feature extraction, classification, prediction, and data preparation. Additionally, it addresses interpretability issues, privacy concerns, and resource limitations of deep learning models in real-time healthcare settings. The study demonstrates the effectiveness of CNN- and LSTM-powered IoT-based healthcare solutions, such as real-time patient monitoring, disease detection, risk prediction, and therapy optimization. These techniques can improve the quality, cost, and outcomes of healthcare. Combining CNNs and LSTMs with IoT can significantly enhance healthcare by improving disease detection, personalized treatment, and patient monitoring through connected devices and powerful analytics. | 10.1007/s11042-024-19919-w | iot-enabled healthcare transformation leveraging deep learning for advanced patient monitoring and diagnosis | deep learning and the internet of things (iot) are revolutionizing the healthcare industry. this study explores the potential commercial transformation resulting from iot-enabled healthcare systems that use deep learning for patient monitoring and diagnosis. wearables, smart sensors, and internet-connected medical devices allow doctors to monitor patients' vital signs, activities, and physiological traits in real time. however, these devices generate vast and complex data, making analysis and diagnosis challenging. deep learning models are well-suited to analyze this growing volume of medical data. convolutional neural networks (cnns) and long short-term memory (lstm) networks can automatically recognize complex patterns and relationships in sensor data, electronic health records, and patient-reported information. this capability aids clinical professionals in diagnosing illnesses, identifying warning signs, and tailoring treatments. this paper describes a cnn- and lstm-based iot-enabled healthcare system that performs feature extraction, classification, prediction, and data preparation. additionally, it addresses interpretability issues, privacy concerns, and resource limitations of deep learning models in real-time healthcare settings. the study demonstrates the effectiveness of cnn- and lstm-powered iot-based healthcare solutions, such as real-time patient monitoring, disease detection, risk prediction, and therapy optimization. these techniques can improve the quality, cost, and outcomes of healthcare. combining cnns and lstms with iot can significantly enhance healthcare by improving disease detection, personalized treatment, and patient monitoring through connected devices and powerful analytics. | [
"deep learning",
"the internet",
"things",
"iot",
"the healthcare industry",
"this study",
"the potential commercial transformation",
"iot-enabled healthcare systems",
"that",
"deep learning",
"patient monitoring",
"diagnosis",
"wearables",
"smart sensors",
"internet-connected medical devices",
"doctors",
"patients' vital signs",
"activities",
"physiological traits",
"real time",
"these devices",
"vast and complex data",
"analysis",
"diagnosis",
"challenging",
"deep learning models",
"this growing volume",
"medical data",
"convolutional neural networks",
"cnns",
"long short-term memory",
"(lstm) networks",
"complex patterns",
"relationships",
"sensor data",
"electronic health records",
"patient-reported information",
"this capability",
"clinical professionals",
"illnesses",
"warning signs",
"tailoring treatments",
"this paper",
"a convolutional neural networks",
"cnns",
"long short-term memory",
"lstm",
"that",
"feature extraction",
"classification",
"prediction",
"data preparation",
"it",
"interpretability issues",
"privacy concerns",
"resource limitations",
"deep learning models",
"real-time healthcare settings",
"the study",
"the effectiveness",
"convolutional neural networks",
"cnns",
"long short-term memory",
"lstm",
"real-time patient monitoring",
"disease detection",
"risk prediction",
"therapy optimization",
"these techniques",
"the quality",
"cost",
"outcomes",
"healthcare",
"convolutional neural networks",
"cnns",
"long short-term memory",
"lstm",
"iot",
"healthcare",
"disease detection",
"personalized treatment",
"patient monitoring",
"connected devices",
"powerful analytics"
] |
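A minimal sketch of the CNN-LSTM pattern this abstract describes for streaming vital signs: a 1-D convolution extracts local waveform features, an LSTM models their temporal evolution, and a linear head scores risk. Channel counts, window length, and the two-class head are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CNNLSTMVitals(nn.Module):
    """1-D CNN over multichannel vital-sign windows, followed by an LSTM
    over the extracted feature sequence, then a classification head."""
    def __init__(self, channels=3, classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.cnn(x)                    # (batch, 32, time/2)
        h = h.transpose(1, 2)              # (batch, time/2, 32) for the LSTM
        _, (hn, _) = self.lstm(h)
        return self.head(hn[-1])           # logits per class

logits = CNNLSTMVitals()(torch.randn(16, 3, 128))  # 16 windows of 128 samples each
```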
An improved ensemble deep learning framework for glaucoma detection | [
"K. J. Subha",
"R. Rajavel",
"B. Paulchamy"
] | In the realm of biomedical engineering, a significant challenge involves detecting physiological changes within the human body. Currently, these irregularities are assessed manually, which proves to be both laborious and time-consuming due to the intricate nature of the methods used for identification. To address the need for early disease detection, there is growing interest in employing computer-assisted diagnostics. The core goal of this proposed work is to develop a computer-aided diagnosis (CAD) system aimed at the prompt identification, screening, and treatment of glaucoma. A fundus camera is used to extract structural characteristics from segmented optic discs and optic cups, which assists in the characterization of glaucoma and the evaluation of its severity. Here, a novel ensemble-based deep learning model is used, which consists of three modified pre-trained convolutional neural networks: ResNet50, VGGNet19, and Inception-V3. To assess the proposed algorithm's performance, a combination of four distinct publicly available datasets, comprising RIM-ONE, DRISHTI-GS, DRIONS-DB, and HRF, was utilized. An accuracy of 98.58%, specificity of 98.17%, sensitivity of 98.80%, precision of 98.86%, F1 score of 98.83%, and AUC of 0.98 were obtained for the combined dataset, and outstanding accuracy rates were also attained across the individual datasets: 98.67%, 97.71%, 97.22%, and 97.50% for RIM-ONE, DRIONS-DB, HRF, and DRISHTI-GS, respectively. The ensemble framework exhibits a significant advantage over state-of-the-art techniques, showcasing a performance improvement of up to 1.58% in glaucoma classification over the existing deep learning techniques. | 10.1007/s11042-024-20088-z | an improved ensemble deep learning framework for glaucoma detection | in the realm of biomedical engineering, a significant challenge involves detecting physiological changes within the human body. currently, these irregularities are assessed manually, which proves to be both laborious and time-consuming due to the intricate nature of the methods used for identification. to address the need for early disease detection, there is growing interest in employing computer-assisted diagnostics. the core goal of this proposed work is to develop a computer-aided diagnosis (cad) system aimed at the prompt identification, screening, and treatment of glaucoma. a fundus camera is used to extract structural characteristics from segmented optic discs and optic cups, which assists in the characterization of glaucoma and the evaluation of its severity. here, a novel ensemble-based deep learning model is used, which consists of three modified pre-trained convolutional neural networks: resnet50, vggnet19, and inception-v3. to assess the proposed algorithm's performance, a combination of four distinct publicly available datasets, comprising rim-one, drishti-gs, drions-db, and hrf, was utilized. an accuracy of 98.58%, specificity of 98.17%, sensitivity of 98.80%, precision of 98.86%, f1 score of 98.83%, and auc of 0.98 were obtained for the combined dataset, and outstanding accuracy rates were also attained across the individual datasets: 98.67%, 97.71%, 97.22%, and 97.50% for rim-one, drions-db, hrf, and drishti-gs, respectively. the ensemble framework exhibits a significant advantage over state-of-the-art techniques, showcasing a performance improvement of up to 1.58% in glaucoma classification over the existing deep learning techniques. | [
"the realm",
"biomedical engineering",
"a significant challenge",
"physiological changes",
"the human body",
"these irregularities",
"which",
"the intricate nature",
"the methods",
"identification",
"the need",
"early disease detection",
"interest",
"computer-assisted diagnostics",
"the core goal",
"this proposed work",
"a computer-aided system",
"cad",
"the prompt identification",
"screening",
"treatment",
"glaucoma",
"a fundus camera",
"structural characteristics",
"segmented optic discs and optic cups",
"which",
"the characterization",
"glaucoma",
"the evaluation",
"its severity",
"a novel ensemble-based deep learning model",
"which",
"three modified pre-trained convolutional neural networks",
"resnet50",
"vggnet19",
"the inception",
"v3",
"the proposed algorithm's performance",
"a combined four distinct publicly available datasets",
"which",
"drishti-gs, drions",
"db",
"an accuracy",
"98.58%",
"specificity",
"98.17%",
"sensitivity",
"98.80%",
"precision",
"98.86%",
"f1 score",
"98.83%",
"auc",
"the combined dataset",
"an outstanding accuracy rates",
"various datasets",
"98.67%",
"97.71%",
"97.22%",
"97.50%",
"drions",
"db",
"drishti-gs",
"the ensemble framework",
"a significant advantage",
"the-art",
"a performance improvement",
"up to 1.58%",
"glaucoma classification",
"the existing deep learning techniques",
"three",
"resnet50",
"vggnet19",
"four",
"98.58%",
"98.17%",
"98.80%",
"98.86%",
"98.83%",
"0.98",
"98.67%",
"97.71%",
"97.22%",
"97.50%",
"up to 1.58%"
] |
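The ensemble this abstract describes can be pictured as soft voting over the three fine-tuned backbones. Below is a sketch using torchvision's model builders under a recent torchvision (weights=None keeps the sketch offline; the paper fine-tunes pretrained, modified networks, and since its exact heads and fusion rule are not given in the abstract, the 2-class head and probability averaging here are assumptions).

```python
import torch
import torch.nn as nn
from torchvision import models

def binary_head(model):
    """Replace a backbone's classifier with a 2-class glaucoma head."""
    if hasattr(model, "fc"):                        # ResNet50, Inception-V3
        model.fc = nn.Linear(model.fc.in_features, 2)
    else:                                           # VGG19
        model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)
    return model

members = [
    binary_head(models.resnet50(weights=None)),
    binary_head(models.vgg19(weights=None)),
    binary_head(models.inception_v3(weights=None, aux_logits=False, init_weights=True)),
]

@torch.no_grad()
def ensemble_predict(x):
    """Soft voting: average the softmax probabilities of the members."""
    for m in members:
        m.eval()
    probs = torch.stack([m(x).softmax(dim=1) for m in members])
    return probs.mean(dim=0)

# 299x299 input satisfies Inception-V3's minimum size and works for all three
pred = ensemble_predict(torch.randn(4, 3, 299, 299)).argmax(dim=1)
```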
Deep palmprint recognition algorithm based on self-supervised learning and uncertainty loss | [
"Rui Fan",
"Xiaohong Han"
] | With the rapid development of deep learning technology, an increasing number of people are adopting palmprint recognition algorithms based on deep learning for identity authentication. However, these algorithms are susceptible to factors such as palm placement, light source, and insufficient data sampling, resulting in poor recognition accuracy. To address these issues, this paper proposes a new end-to-end deep palmprint recognition algorithm (SSLAUL), which introduces self-supervised representation learning based on contextual prediction, utilizing unlabeled palmprint data for pre-training and then transferring the trained parameters to the downstream model for fine-tuning. An uncertainty loss function is introduced into the downstream model, using the homoskedastic uncertainty as a benchmark to dynamically and adaptively adjust the weights of the different loss functions. Channel and spatial attention mechanisms are also introduced to extract highly discriminative local features. In this paper, the algorithm is validated on the publicly available IITD, CASIA, and PolyU palmprint datasets. The method consistently achieves the best recognition performance compared to other state-of-the-art algorithms. | 10.1007/s11760-024-03104-5 | deep palmprint recognition algorithm based on self-supervised learning and uncertainty loss | with the rapid development of deep learning technology, an increasing number of people are adopting palmprint recognition algorithms based on deep learning for identity authentication. however, these algorithms are susceptible to factors such as palm placement, light source, and insufficient data sampling, resulting in poor recognition accuracy. to address these issues, this paper proposes a new end-to-end deep palmprint recognition algorithm (sslaul), which introduces self-supervised representation learning based on contextual prediction, utilizing unlabeled palmprint data for pre-training and then transferring the trained parameters to the downstream model for fine-tuning. an uncertainty loss function is introduced into the downstream model, using the homoskedastic uncertainty as a benchmark to dynamically and adaptively adjust the weights of the different loss functions. channel and spatial attention mechanisms are also introduced to extract highly discriminative local features. in this paper, the algorithm is validated on the publicly available iitd, casia, and polyu palmprint datasets. the method consistently achieves the best recognition performance compared to other state-of-the-art algorithms. | [
"the rapid development",
"deep learning technology",
"an increasing number",
"people",
"palmprint recognition algorithms",
"deep learning",
"identity authentication",
"these algorithms",
"factors",
"palm placement",
"light source",
"insufficient data sampling",
"poor recognition accuracy",
"these issues",
"this paper",
"end",
"recognition algorithm",
"sslaul",
"which",
"self-supervised representation learning",
"contextual prediction",
"unlabeled palmprint data",
"pre",
"-",
"training",
"the trained parameters",
"the downstream model",
"fine-tuning",
"an uncertainty loss function",
"the downstream model",
"the homoskedastic uncertainty",
"a benchmark",
"adaptive weight adjustment",
"different loss functions",
"channel",
"spatial attention mechanisms",
"highly discriminative local features",
"this paper",
"the algorithm",
"publicly available iitd",
"casia",
"polyu",
"datasets",
"the method",
"the best recognition performance",
"the-art",
"casia"
] |
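The "uncertainty loss" mentioned here is, by its description, homoscedastic-uncertainty weighting in the style of Kendall et al.: each loss term gets a learned log-variance s_i, and the total is the sum of exp(-s_i)*L_i + s_i. A minimal sketch under that assumption (the two stand-in loss values are illustrative, not the paper's losses):

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Adaptive multi-loss weighting via learned homoscedastic uncertainty:
    total = sum_i exp(-s_i) * L_i + s_i, with s_i = log(sigma_i^2) learnable."""
    def __init__(self, n_losses=2):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_losses))

    def forward(self, *losses):
        total = 0.0
        for s, loss in zip(self.log_vars, losses):
            # a loss whose learned variance grows is automatically down-weighted;
            # the + s term stops the variances from growing without bound
            total = total + torch.exp(-s) * loss + s
        return total

weighting = UncertaintyWeightedLoss(n_losses=2)
id_loss, feat_loss = torch.tensor(1.3), torch.tensor(0.4)  # stand-ins for the model's losses
total = weighting(id_loss, feat_loss)  # optimize jointly with the network parameters
```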
On the use of deep learning for phase recovery | [
"Kaiqiang Wang",
"Li Song",
"Chutian Wang",
"Zhenbo Ren",
"Guangyuan Zhao",
"Jiazhen Dou",
"Jianglei Di",
"George Barbastathis",
"Renjie Zhou",
"Jianlin Zhao",
"Edmund Y. Lam"
] | Phase recovery (PR) refers to calculating the phase of the light field from its intensity measurements. As exemplified by applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and correcting the aberration of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL provides support for PR at the following three stages, namely, pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource (https://github.com/kqwang/phase-recovery) for readers to learn more about PR. | 10.1038/s41377-023-01340-x | on the use of deep learning for phase recovery | phase recovery (pr) refers to calculating the phase of the light field from its intensity measurements. as exemplified by applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, pr is essential for reconstructing the refractive index distribution or topography of an object and correcting the aberration of an imaging system. in recent years, deep learning (dl), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various pr problems. in this review, we first briefly introduce conventional methods for pr. then, we review how dl provides support for pr at the following three stages, namely, pre-processing, in-processing, and post-processing. we also review how dl is used in phase image processing. finally, we summarize the work in dl for pr and provide an outlook on how to better use dl to improve the reliability and efficiency of pr. furthermore, we present a live-updating resource (https://github.com/kqwang/phase-recovery) for readers to learn more about pr. | [
"phase recovery",
"(pr",
"the phase",
"the light field",
"its intensity measurements",
"quantitative phase imaging",
"coherent diffraction",
"adaptive optics",
"the refractive index distribution",
"topography",
"an object",
"the aberration",
"an imaging system",
"recent years",
"deep learning",
"dl",
"deep neural networks",
"unprecedented support",
"computational imaging",
"more efficient solutions",
"various pr problems",
"this review",
"we",
"conventional methods",
"we",
"dl",
"support",
"pr",
"the following three stages",
"-processing",
"processing",
"we",
"dl",
"phase image processing",
"we",
"the work",
"dl",
"pr",
"an outlook",
"dl",
"the reliability",
"efficiency",
"pr",
"we",
"a live-updating resource",
"https://github.com/kqwang/phase-recovery",
"readers",
"pr",
"recent years",
"first",
"three"
] |
Efficient Generation of Pretraining Samples for Developing a Deep Learning Brain Injury Model via Transfer Learning | [
"Nan Lin",
"Shaoju Wu",
"Zheyang Wu",
"Songbai Ji"
] | The large amount of training samples required to develop a deep learning brain injury model demands enormous computational resources. Here, we study how a transformer neural network (TNN) of high accuracy can be used to efficiently generate pretraining samples for a convolutional neural network (CNN) brain injury model to reduce computational cost. The samples use synthetic impacts emulating real-world events or augmented impacts generated from limited measured impacts. First, we verify that the TNN remains highly accurate for the two impact types (N = 100 each; R² of 0.948–0.967, with root mean squared error (RMSE) of ~0.01, for voxelized peak strains). The TNN-estimated samples (1000–5000 for each data type) are then used to pretrain a CNN, which is further finetuned using directly simulated training samples (250–5000). An independent measured impact dataset, considered a complete capture of the impact events, is used to assess estimation accuracy (N = 191). We find that pretraining can significantly improve CNN accuracy via transfer learning compared to a baseline CNN without pretraining. It is most effective when the finetuning dataset is relatively small (e.g., 2000–4000 pretraining synthetic or augmented samples improve the success rate from 0.72 to 0.81 with 500 finetuning samples). When finetuning samples reach 3000 or more, no obvious improvement occurs from pretraining. These results support using the TNN to rapidly generate pretraining samples to facilitate a more efficient training strategy for future deep learning brain models, by limiting the number of costly direct simulations from an alternative baseline model. This study could contribute to a wider adoption of deep learning brain injury models for large-scale predictive modeling and, ultimately, enhancing safety protocols and protective equipment. | 10.1007/s10439-023-03354-3 | efficient generation of pretraining samples for developing a deep learning brain injury model via transfer learning | the large amount of training samples required to develop a deep learning brain injury model demands enormous computational resources. here, we study how a transformer neural network (tnn) of high accuracy can be used to efficiently generate pretraining samples for a convolutional neural network (cnn) brain injury model to reduce computational cost. the samples use synthetic impacts emulating real-world events or augmented impacts generated from limited measured impacts. first, we verify that the tnn remains highly accurate for the two impact types (n = 100 each; r² of 0.948–0.967, with root mean squared error (rmse) of ~0.01, for voxelized peak strains). the tnn-estimated samples (1000–5000 for each data type) are then used to pretrain a cnn, which is further finetuned using directly simulated training samples (250–5000). an independent measured impact dataset, considered a complete capture of the impact events, is used to assess estimation accuracy (n = 191). we find that pretraining can significantly improve cnn accuracy via transfer learning compared to a baseline cnn without pretraining. it is most effective when the finetuning dataset is relatively small (e.g., 2000–4000 pretraining synthetic or augmented samples improve the success rate from 0.72 to 0.81 with 500 finetuning samples). when finetuning samples reach 3000 or more, no obvious improvement occurs from pretraining. these results support using the tnn to rapidly generate pretraining samples to facilitate a more efficient training strategy for future deep learning brain models, by limiting the number of costly direct simulations from an alternative baseline model. this study could contribute to a wider adoption of deep learning brain injury models for large-scale predictive modeling and, ultimately, enhancing safety protocols and protective equipment. | [
"the large amount",
"training samples",
"a deep learning brain injury model",
"enormous computational resources",
"we",
"a transformer neural network",
"tnn",
"high accuracy",
"pretraining samples",
"a convolutional neural network (cnn) brain injury model",
"computational cost",
"the samples",
"synthetic impacts",
"real-world events",
"augmented impacts",
"limited measured impacts",
"we",
"the tnn",
"the two impact types",
"root mean squared error",
"rmse",
"voxelized peak strains",
"the tnn-estimated samples",
"each data type",
"a cnn",
"which",
"directly simulated training samples",
"an independent measured impact dataset",
"complete capture",
"impact event",
"estimation accuracy",
"we",
"pretraining",
"cnn accuracy",
"transfer learning",
"a baseline cnn",
"it",
"the finetuning dataset",
"2000–4000 pretraining synthetic or augmented samples",
"success rate",
"500 finetuning samples",
"samples",
"no obvious improvement",
"these results",
"the tnn",
"pretraining samples",
"a more efficient training strategy",
"future deep learning brain models",
"the number",
"costly direct simulations",
"an alternative baseline model",
"this study",
"a wider adoption",
"deep learning brain injury models",
"large-scale predictive modeling",
"safety protocols",
"protective equipment",
"cnn",
"first",
"two",
"100",
"0.948–0.967",
"0.01",
"1000–5000",
"cnn",
"191",
"cnn",
"cnn",
"2000–4000",
"0.72",
"0.81",
"500",
"3000"
] |
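The training strategy above is a standard two-stage transfer-learning loop: pretrain on many cheap teacher-labelled samples, then fine-tune on few expensive directly simulated ones. A generic sketch with an MLP stand-in (the paper's model is a CNN over voxelized strains; the dimensions, sample counts, epochs, and learning rates below are assumptions):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            nn.functional.mse_loss(model(x), y).backward()
            opt.step()

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))

# stage 1: pretrain on abundant samples whose targets come from the cheap teacher (TNN estimates)
x_syn, y_tnn = torch.randn(2000, 16), torch.randn(2000, 8)
train(model, DataLoader(TensorDataset(x_syn, y_tnn), batch_size=64, shuffle=True),
      epochs=5, lr=1e-3)

# stage 2: fine-tune on the smaller, costly directly simulated set, at a lower learning rate
x_sim, y_sim = torch.randn(500, 16), torch.randn(500, 8)
train(model, DataLoader(TensorDataset(x_sim, y_sim), batch_size=64, shuffle=True),
      epochs=20, lr=1e-4)
```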
Curriculum learning for ab initio deep learned refractive optics | [
"Xinge Yang",
"Qiang Fu",
"Wolfgang Heidrich"
] | Deep optical optimization has recently emerged as a new paradigm for designing computational imaging systems using only the output image as the objective. However, it has been limited to either simple optical systems consisting of a single element such as a diffractive optical element or metalens, or the fine-tuning of compound lenses from good initial designs. Here we present a DeepLens design method based on curriculum learning, which is able to learn optical designs of compound lenses ab initio from randomly initialized surfaces without human intervention, therefore overcoming the need for a good initial design. We demonstrate the effectiveness of our approach by fully automatically designing both classical imaging lenses and a large field-of-view extended depth-of-field computational lens in a cellphone-style form factor, with highly aspheric surfaces and a short back focal length. | 10.1038/s41467-024-50835-7 | curriculum learning for ab initio deep learned refractive optics | deep optical optimization has recently emerged as a new paradigm for designing computational imaging systems using only the output image as the objective. however, it has been limited to either simple optical systems consisting of a single element such as a diffractive optical element or metalens, or the fine-tuning of compound lenses from good initial designs. here we present a deeplens design method based on curriculum learning, which is able to learn optical designs of compound lenses ab initio from randomly initialized surfaces without human intervention, therefore overcoming the need for a good initial design. we demonstrate the effectiveness of our approach by fully automatically designing both classical imaging lenses and a large field-of-view extended depth-of-field computational lens in a cellphone-style form factor, with highly aspheric surfaces and a short back focal length. | [
"deep optical optimization",
"a new paradigm",
"computational imaging systems",
"only the output image",
"the objective",
"it",
"either simple optical systems",
"a single element",
"a diffractive optical element",
"metalens",
"the fine-tuning",
"compound lenses",
"good initial designs",
"we",
"a deeplens design method",
"curriculum learning",
"which",
"optical designs",
"compound lenses",
"ab initio",
"randomly initialized surfaces",
"human intervention",
"the need",
"a good initial design",
"we",
"the effectiveness",
"our approach",
"both classical imaging lenses",
"view",
"field",
"a cellphone-style form factor",
"highly aspheric surfaces",
"a short back focal length",
"deeplens"
] |
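Curriculum learning here means easing the design task from easy to hard rather than optimizing the final specification from step one. A schematic sketch of that schedule, with a stand-in differentiable loss in place of the paper's ray-traced image-quality objective (the aperture and field-of-view ramp and all constants are assumptions):

```python
import torch

def design_loss(params, aperture, field_of_view):
    """Stand-in for a differentiable ray-tracing image-quality loss;
    the real objective renders images through the lens model."""
    return ((params - aperture * field_of_view) ** 2).mean()

params = torch.randn(12, requires_grad=True)  # surface parameters, randomly initialized
opt = torch.optim.Adam([params], lr=1e-2)

# curriculum: start with an easy task (small aperture, narrow field of view),
# then ramp both toward the final specification as training progresses
for step in range(1000):
    t = min(step / 800, 1.0)            # progress through the curriculum
    aperture = 0.1 + t * (1.0 - 0.1)    # eased from easy to hard
    fov = 5.0 + t * (40.0 - 5.0)        # degrees
    opt.zero_grad()
    design_loss(params, aperture, fov).backward()
    opt.step()
```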
Verifying the Generalization of Deep Learning to Out-of-Distribution Domains | [
"Guy Amir",
"Osher Maayan",
"Tom Zelazny",
"Guy Katz",
"Michael Schapira"
] | Deep neural networks (DNNs) play a crucial role in the field of machine learning, demonstrating state-of-the-art performance across various application domains. However, despite their success, DNN-based models may occasionally exhibit challenges with generalization, i.e., may fail to handle inputs that were not encountered during training. This limitation is a significant challenge when it comes to deploying deep learning for safety-critical tasks, as well as in real-world settings characterized by substantial variability. We introduce a novel approach for harnessing DNN verification technology to identify DNN-driven decision rules that exhibit robust generalization to previously unencountered input domains. Our method assesses generalization within an input domain by measuring the level of agreement between independently trained deep neural networks for inputs in this domain. We also efficiently realize our approach by using off-the-shelf DNN verification engines, and extensively evaluate it on both supervised and unsupervised DNN benchmarks, including a deep reinforcement learning (DRL) system for Internet congestion control—demonstrating the applicability of our approach for real-world settings. Moreover, our research introduces a fresh objective for formal verification, offering the prospect of mitigating the challenges linked to deploying DNN-driven systems in real-world scenarios. | 10.1007/s10817-024-09704-7 | verifying the generalization of deep learning to out-of-distribution domains | deep neural networks (dnns) play a crucial role in the field of machine learning, demonstrating state-of-the-art performance across various application domains. however, despite their success, dnn-based models may occasionally exhibit challenges with generalization, i.e., may fail to handle inputs that were not encountered during training. this limitation is a significant challenge when it comes to deploying deep learning for safety-critical tasks, as well as in real-world settings characterized by substantial variability. we introduce a novel approach for harnessing dnn verification technology to identify dnn-driven decision rules that exhibit robust generalization to previously unencountered input domains. our method assesses generalization within an input domain by measuring the level of agreement between independently trained deep neural networks for inputs in this domain. we also efficiently realize our approach by using off-the-shelf dnn verification engines, and extensively evaluate it on both supervised and unsupervised dnn benchmarks, including a deep reinforcement learning (drl) system for internet congestion control—demonstrating the applicability of our approach for real-world settings. moreover, our research introduces a fresh objective for formal verification, offering the prospect of mitigating the challenges linked to deploying dnn-driven systems in real-world scenarios. | [
"deep neural networks",
"dnns",
"a crucial role",
"the field",
"machine learning",
"the-art",
"various application domains",
"their success",
"dnn-based models",
"challenges",
"generalization",
"inputs",
"that",
"training",
"this limitation",
"a significant challenge",
"it",
"deep learning",
"safety-critical tasks",
"real-world settings",
"substantial variability",
"we",
"a novel approach",
"dnn verification technology",
"dnn-driven decision rules",
"that",
"robust generalization",
"input domains",
"our method",
"generalization",
"an input domain",
"the level",
"agreement",
"independently trained deep neural networks",
"inputs",
"this domain",
"we",
"our approach",
"the-shelf",
"dnn verification engines",
"it",
"dnn benchmarks",
"a deep reinforcement learning (drl) system",
"internet congestion control",
"the applicability",
"our approach",
"real-world settings",
"our research",
"a fresh objective",
"formal verification",
"the prospect",
"the challenges",
"dnn-driven systems",
"real-world scenarios"
] |
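The core idea above (generalization scored as agreement among independently trained networks over an input domain) can be approximated by sampling, even though the paper computes it exhaustively with formal verification engines. A toy sketch of the sampled proxy; the tiny policies, seeds, and input distribution are assumptions:

```python
import torch
import torch.nn as nn

def make_policy(seed):
    """Independently initialized (and, in practice, independently trained) nets."""
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 3))

policies = [make_policy(s) for s in range(5)]

@torch.no_grad()
def agreement_rate(inputs):
    """Fraction of inputs on which all networks choose the same action;
    the paper certifies such agreement with a verifier instead of sampling."""
    choices = torch.stack([p(inputs).argmax(dim=1) for p in policies])
    return (choices == choices[0]).all(dim=0).float().mean().item()

score = agreement_rate(torch.randn(1024, 8))  # sampled proxy for domain-wide agreement
```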
A multimodal deep learning model for predicting severe hemorrhage in placenta previa | [
"Munetoshi Akazawa",
"Kazunori Hashimoto"
] | Placenta previa causes life-threatening bleeding, and accurate prediction of severe hemorrhage leads to risk stratification and optimum allocation of interventions. We aimed to use a multimodal deep learning model to predict severe hemorrhage. Using MRI T2-weighted images of the placenta and tabular data consisting of patient demographics and preoperative blood examination data, a multimodal deep learning model was constructed to predict cases of intraoperative blood loss > 2000 ml. We evaluated the prediction performance of the model by comparing it with that of two machine learning methods using only tabular data and MRI images, as well as with that of two human expert obstetricians. Among the enrolled 48 patients, 26 (54.2%) lost > 2000 ml of blood and 22 (45.8%) lost < 2000 ml of blood. The multimodal deep learning model showed the best accuracy of 0.68 and AUC of 0.74, whereas the machine learning models using only tabular data and only MRI images had classification accuracies of 0.61 and 0.53, respectively. The human experts had median accuracies of 0.61. Multimodal deep learning models could integrate the two types of information and predict severe hemorrhage cases. The model might assist human experts in the prediction of intraoperative hemorrhage in the case of placenta previa. | 10.1038/s41598-023-44634-1 | a multimodal deep learning model for predicting severe hemorrhage in placenta previa | placenta previa causes life-threatening bleeding, and accurate prediction of severe hemorrhage leads to risk stratification and optimum allocation of interventions. we aimed to use a multimodal deep learning model to predict severe hemorrhage. using mri t2-weighted images of the placenta and tabular data consisting of patient demographics and preoperative blood examination data, a multimodal deep learning model was constructed to predict cases of intraoperative blood loss > 2000 ml. we evaluated the prediction performance of the model by comparing it with that of two machine learning methods using only tabular data and mri images, as well as with that of two human expert obstetricians. among the enrolled 48 patients, 26 (54.2%) lost > 2000 ml of blood and 22 (45.8%) lost < 2000 ml of blood. the multimodal deep learning model showed the best accuracy of 0.68 and auc of 0.74, whereas the machine learning models using only tabular data and only mri images had classification accuracies of 0.61 and 0.53, respectively. the human experts had median accuracies of 0.61. multimodal deep learning models could integrate the two types of information and predict severe hemorrhage cases. the model might assist human experts in the prediction of intraoperative hemorrhage in the case of placenta previa. | [
"placenta previa",
"life-threatening bleeding",
"accurate prediction",
"severe hemorrhage",
"risk stratification",
"optimum allocation",
"interventions",
"we",
"a multimodal deep learning model",
"severe hemorrhage",
"mri t2-weighted image",
"the placenta and tabular data",
"patient demographics",
"preoperative blood examination data",
"a multimodal deep learning model",
"cases",
"intraoperative blood loss",
"we",
"the prediction performance",
"the model",
"it",
"that",
"two machine learning methods",
"only tabular data and mri images",
"that",
"two human expert obstetricians",
"the enrolled 48 patients",
"54.2%",
"2000 ml",
"blood",
"22 (45.8%",
"2000 ml",
"blood",
"multimodal deep learning model",
"the best accuracy",
"auc",
"the machine learning model",
"tabular data and mri images",
"a class accuracy",
"the human experts",
"median accuracies",
"multimodal deep learning models",
"the two types",
"information",
"severe hemorrhage cases",
"the model",
"human expert",
"the prediction",
"intraoperative hemorrhage",
"the case",
"placenta previa",
"placenta previa",
"2000 ml",
"two",
"two",
"48",
"26",
"54.2%",
"2000 ml",
"22",
"45.8%",
"2000 ml",
"0.68",
"0.74",
"0.61",
"0.53",
"0.61",
"two"
] |
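A sketch of the image-plus-tabular fusion pattern this abstract implies: one branch embeds the MRI slice, another embeds the demographics/labs vector, and the concatenated embeddings feed a binary classifier. Branch sizes, the 12 tabular features, and the simple concatenation fusion are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultimodalHemorrhageNet(nn.Module):
    """Fuse an image embedding (MRI slice) with tabular features
    (demographics + preoperative labs) for binary severe-bleeding prediction."""
    def __init__(self, n_tabular=12):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 16, 32),
        )
        self.tabular_branch = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
        self.classifier = nn.Linear(32 + 32, 2)  # concatenation fusion

    def forward(self, image, tabular):
        z = torch.cat([self.image_branch(image), self.tabular_branch(tabular)], dim=1)
        return self.classifier(z)

logits = MultimodalHemorrhageNet()(torch.randn(4, 1, 128, 128), torch.randn(4, 12))
```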
Deep Reinforcement Learning for Mineral Prospectivity Mapping | [
"Zixian Shi",
"Renguang Zuo",
"Bao Zhou"
] | Machine learning algorithms, including supervised and unsupervised learning ones, have been widely used in mineral prospectivity mapping. Supervised learning algorithms require the use of numerous known mineral deposits to ensure the reliability of the training results. Unsupervised learning algorithms can be applied to areas with rare or no known deposits. Reinforcement learning (RL) is a type of machine learning algorithm that differs from supervised and unsupervised learning models in that the learning process is performed interactively by the agent and environment. The environment feeds the agent with reward signals and states, and the agent synthetically evaluates the mineralization potential of each state based on these rewards. In this study, a deep RL framework was constructed for mineral prospectivity mapping, and a case study for mapping gold prospectivity in northwest Hubei Province, China, was used to test the framework. The deep RL agent extracted the information of known mineralization by automatically interacting with the environment while simultaneously mining potential mineralization information from the unlabeled dataset. Its comparison with random forest and isolation forest models demonstrates that deep RL performs better regardless of the number of known mineral deposits because of its unique reward and feedback mechanism. The delineated high-potential areas show a strong spatial correlation with known gold deposits and can therefore provide significant clues for future prospecting in the study area. | 10.1007/s11004-023-10059-9 | deep reinforcement learning for mineral prospectivity mapping | machine learning algorithms, including supervised and unsupervised learning ones, have been widely used in mineral prospectivity mapping. supervised learning algorithms require the use of numerous known mineral deposits to ensure the reliability of the training results. unsupervised learning algorithms can be applied to areas with rare or no known deposits. reinforcement learning (rl) is a type of machine learning algorithm that differs from supervised and unsupervised learning models in that the learning process is performed interactively by the agent and environment. the environment feeds the agent with reward signals and states, and the agent synthetically evaluates the mineralization potential of each state based on these rewards. in this study, a deep rl framework was constructed for mineral prospectivity mapping, and a case study for mapping gold prospectivity in northwest hubei province, china, was used to test the framework. the deep rl agent extracted the information of known mineralization by automatically interacting with the environment while simultaneously mining potential mineralization information from the unlabeled dataset. its comparison with random forest and isolation forest models demonstrates that deep rl performs better regardless of the number of known mineral deposits because of its unique reward and feedback mechanism. the delineated high-potential areas show a strong spatial correlation with known gold deposits and can therefore provide significant clues for future prospecting in the study area. | [
"machine learning algorithms",
"supervised and unsupervised learning ones",
"mineral prospectivity mapping",
"supervised learning algorithms",
"the use",
"numerous known mineral deposits",
"the reliability",
"the training results",
"unsupervised learning algorithms",
"areas",
"rare or no known deposits",
"reinforcement learning",
"rl",
"a type",
"machine learning algorithm",
"that",
"supervised and unsupervised learning models",
"the learning process",
"the agent",
"environment",
"the environment",
"the agent",
"reward signals",
"states",
"the agent",
"the mineralization potential",
"each state",
"these rewards",
"this study",
"a deep rl framework",
"mineral prospectivity mapping",
"a case study",
"mapping",
"gold prospectivity",
"northwest hubei province",
"china",
"the framework",
"the deep rl agent",
"the information",
"known mineralization",
"the environment",
"potential mineralization information",
"the unlabeled dataset",
"its comparison",
"random forest and isolation forest models",
"deep rl",
"the number",
"known mineral deposits",
"its unique reward",
"feedback mechanism",
"the delineated high-potential areas",
"a strong spatial correlation",
"known gold deposits",
"significant clues",
"future prospecting",
"the study area",
"china"
] |
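To make the agent-environment loop concrete, here is a toy value-learning sketch: cells of a discretized map act as states, "surveying" a cell returns a reward keyed to known deposits, and the agent's running value estimates become a prospectivity ranking. This is a didactic bandit-style reduction, not the paper's deep RL framework; every constant is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 100                                # map discretized into 100 cells (toy)
deposit = rng.random(n_cells) < 0.05         # hidden: cells hosting known deposits
values = np.zeros(n_cells)                   # agent's mineralization-potential estimates
counts = np.zeros(n_cells)
eps = 0.2                                    # exploration rate

for step in range(5000):
    # epsilon-greedy choice of which cell to survey next
    cell = rng.integers(n_cells) if rng.random() < eps else int(values.argmax())
    reward = 1.0 if deposit[cell] else -0.1  # environment feeds back a reward signal
    counts[cell] += 1
    values[cell] += (reward - values[cell]) / counts[cell]  # incremental mean update

high_potential = np.argsort(values)[-10:]    # cells the agent ranks as most prospective
```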
A deep learning based approach for image retrieval extraction in mobile edge computing | [
"Jamal Alasadi",
"Ghassan F. Bati",
"Ahmed Al Hilli"
] | Deep learning has been widely explored in 5G applications, including computer vision, the Internet of Things (IoT), and intermedia classification. However, applying the deep learning approach in limited-resource mobile devices is one of the most challenging issues. At the same time, users’ experience in terms of Quality of Service (QoS) (e.g., service latency, outcome accuracy, and achievable data rate) is poor when interacting with machine learning applications. Mobile edge computing (MEC) has been introduced as a cooperative approach to bring computation resources in proximity to end-user devices to overcome these limitations. This article aims to design a novel image retrieval extraction algorithm based on convolution neural network (CNN) learning and computational task offloading to support machine learning-based mobile applications in resource-limited and uncertain environments. Accordingly, we leverage the framework of image retrieval extraction and introduce three approaches. The first targets strict privacy preservation to protect personal data; the second, network traffic reduction; the third, minimizing feature matching time. Our simulation results, associated with real-time experiments on a small-scale MEC server, have shown the effectiveness of the proposed deep learning-based approach over existing schemes. The source code is available here: https://github.com/jamalalasadi/CNN_Image_retrieval. | 10.1007/s43995-024-00060-6 | a deep learning based approach for image retrieval extraction in mobile edge computing | deep learning has been widely explored in 5g applications, including computer vision, the internet of things (iot), and intermedia classification. however, applying the deep learning approach in limited-resource mobile devices is one of the most challenging issues. at the same time, users’ experience in terms of quality of service (qos) (e.g., service latency, outcome accuracy, and achievable data rate) is poor when interacting with machine learning applications. mobile edge computing (mec) has been introduced as a cooperative approach to bring computation resources in proximity to end-user devices to overcome these limitations. this article aims to design a novel image retrieval extraction algorithm based on convolution neural network (cnn) learning and computational task offloading to support machine learning-based mobile applications in resource-limited and uncertain environments. accordingly, we leverage the framework of image retrieval extraction and introduce three approaches. the first targets strict privacy preservation to protect personal data; the second, network traffic reduction; the third, minimizing feature matching time. our simulation results, associated with real-time experiments on a small-scale mec server, have shown the effectiveness of the proposed deep learning-based approach over existing schemes. the source code is available here: https://github.com/jamalalasadi/cnn_image_retrieval. | [
"deep learning",
"5g applications",
"computer vision",
"the internet",
"things",
"iot",
"intermedia classification",
"the deep learning approach",
"limited-resource mobile devices",
"the most challenging issues",
"the same time",
"users’ experience",
"terms",
"quality",
"service",
"qos",
"machine learning applications",
"mobile edge computing",
"(mec",
"a cooperative approach",
"computation resources",
"proximity",
"end-user devices",
"these limitations",
"this article",
"a novel image reiterative extraction algorithm",
"convolution neural network",
"(cnn) learning",
"computational task",
"machine learning-based mobile applications",
"resource-limited and uncertain environments",
"we",
"the framework",
"image retrieval extraction",
"three approaches",
"privacy preservation",
"personal data",
"feature matching time",
"our simulation results",
"real-time experiments",
"a small-scale mec server",
"the effectiveness",
"the proposed deep learning-based approach",
"existing schemes",
"the source code",
"https://github.com/jamalalasadi/cnn_image_retrieval",
"5",
"cnn",
"three",
"first",
"second",
"third"
] |
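The retrieval side of such a system can be summarized as: extract a compact CNN descriptor on (or near) the device, then match by cosine similarity on the edge server, which is what makes the privacy, traffic, and matching-time goals plausible. A minimal sketch with an assumed small CNN, not the paper's network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    """Small CNN mapping an image to an L2-normalized descriptor; in an MEC
    deployment the device would transmit this compact vector, not the image."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

extractor = FeatureExtractor().eval()
with torch.no_grad():
    gallery = extractor(torch.randn(100, 3, 64, 64))  # descriptors indexed on the server
    query = extractor(torch.randn(1, 3, 64, 64))
# top-5 matches by cosine similarity (dot product of unit vectors)
ranked = (gallery @ query.T).squeeze(1).argsort(descending=True)[:5]
```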
Deep learning-based automated angle measurement for flatfoot diagnosis in weight-bearing lateral radiographs | [
"Won-Jun Noh",
"Mu Sook Lee",
"Byoung-Dai Lee"
] | This study aimed to develop and evaluate a deep learning-based system for the automatic measurement of angles (specifically, Meary’s angle and calcaneal pitch) in weight-bearing lateral radiographs of the foot for flatfoot diagnosis. We utilized 3960 lateral radiographs, either from the left or right foot, sourced from a pool of 4000 patients to construct and evaluate a deep learning-based model. These radiographs were captured between June and November 2021, and patients who had undergone total ankle replacement surgery or ankle arthrodesis surgery were excluded. Various methods, including correlation analysis, Bland–Altman plots, and paired T-tests, were employed to assess the concordance between the angles automatically measured using the system and those assessed by clinical experts. The evaluation dataset comprised 150 weight-bearing radiographs from 150 patients. In all test cases, the angles automatically computed using the deep learning-based system were in good agreement with the reference standards (Meary’s angle: Pearson correlation coefficient (PCC) = 0.964, intraclass correlation coefficient (ICC) = 0.963, concordance correlation coefficient (CCC) = 0.963, p-value = 0.632, mean absolute error (MAE) = 1.59°; calcaneal pitch: PCC = 0.988, ICC = 0.987, CCC = 0.987, p-value = 0.055, MAE = 0.63°). The average time required for angle measurement using only the CPU to execute the deep learning-based system was 11 ± 1 s. The deep learning-based automatic angle measurement system, a tool for diagnosing flatfoot, demonstrated comparable accuracy and reliability with the results obtained by medical professionals for patients without internal fixation devices. | 10.1038/s41598-024-69549-3 | deep learning-based automated angle measurement for flatfoot diagnosis in weight-bearing lateral radiographs | this study aimed to develop and evaluate a deep learning-based system for the automatic measurement of angles (specifically, meary’s angle and calcaneal pitch) in weight-bearing lateral radiographs of the foot for flatfoot diagnosis. we utilized 3960 lateral radiographs, either from the left or right foot, sourced from a pool of 4000 patients to construct and evaluate a deep learning-based model. these radiographs were captured between june and november 2021, and patients who had undergone total ankle replacement surgery or ankle arthrodesis surgery were excluded. various methods, including correlation analysis, bland–altman plots, and paired t-tests, were employed to assess the concordance between the angles automatically measured using the system and those assessed by clinical experts. the evaluation dataset comprised 150 weight-bearing radiographs from 150 patients. in all test cases, the angles automatically computed using the deep learning-based system were in good agreement with the reference standards (meary’s angle: pearson correlation coefficient (pcc) = 0.964, intraclass correlation coefficient (icc) = 0.963, concordance correlation coefficient (ccc) = 0.963, p-value = 0.632, mean absolute error (mae) = 1.59°; calcaneal pitch: pcc = 0.988, icc = 0.987, ccc = 0.987, p-value = 0.055, mae = 0.63°). the average time required for angle measurement using only the cpu to execute the deep learning-based system was 11 ± 1 s. the deep learning-based automatic angle measurement system, a tool for diagnosing flatfoot, demonstrated comparable accuracy and reliability with the results obtained by medical professionals for patients without internal fixation devices. | [
"this study",
"a deep learning-based system",
"the automatic measurement",
"angles",
"(specifically, meary’s angle",
"calcaneal pitch",
"weight-bearing lateral radiographs",
"the foot",
"flatfoot diagnosis",
"we",
"3960 lateral radiographs",
"the left or right foot",
"a pool",
"4000 patients",
"a deep learning-based model",
"these radiographs",
"june",
"november",
"patients",
"who",
"total ankle replacement surgery",
"ankle arthrodesis surgery",
"various methods",
"correlation analysis",
"bland–altman plots",
"t-tests",
"the concordance",
"the angles",
"the system",
"those",
"clinical experts",
"the evaluation dataset",
"150 weight-bearing",
"150 patients",
"all test cases",
"the angles",
"the deep learning-based system",
"good agreement",
"the reference standards",
"meary’s angle",
"pcc",
"intraclass correlation coefficient",
"icc",
"concordance correlation coefficient",
"ccc",
"absolute error",
"mae",
"1.59°",
"calcaneal pitch",
"the average time",
"angle measurement",
"only the cpu",
"the deep learning-based system",
"11 ±",
"the deep learning-based automatic angle measurement system",
"a tool",
"comparable accuracy",
"reliability",
"the results",
"medical professionals",
"patients",
"internal fixation devices",
"3960",
"4000",
"between june and november 2021",
"150",
"150",
"0.964",
"0.963",
"0.963",
"0.632",
"1.59",
"0.988",
"0.987",
"0.987",
"0.055",
"0.63",
"11",
"1"
] |
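
Editor's aside, not part of the record above: the flatfoot study reports agreement between automatic and expert angle measurements via PCC, CCC, and MAE. Below is a minimal NumPy sketch of those three statistics; the angle readings are invented stand-ins, not data from the paper.

```python
import numpy as np

def agreement_metrics(auto, ref):
    """Agreement statistics between automated and expert angle measurements."""
    auto, ref = np.asarray(auto, float), np.asarray(ref, float)
    pcc = np.corrcoef(auto, ref)[0, 1]                      # Pearson correlation
    cov = np.mean((auto - auto.mean()) * (ref - ref.mean()))
    # Concordance correlation coefficient (Lin's CCC).
    ccc = 2 * cov / (auto.var() + ref.var() + (auto.mean() - ref.mean()) ** 2)
    mae = np.mean(np.abs(auto - ref))                       # mean absolute error, degrees
    return pcc, ccc, mae

# Hypothetical Meary's angle readings (degrees) for a handful of radiographs.
auto = [4.1, 12.8, -2.3, 7.9, 15.2, 0.4]
ref  = [4.5, 11.9, -1.8, 8.3, 14.6, 1.0]
pcc, ccc, mae = agreement_metrics(auto, ref)
print(f"PCC={pcc:.3f}  CCC={ccc:.3f}  MAE={mae:.2f} deg")
```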
Visual sentiment analysis using data-augmented deep transfer learning techniques | [
"Zhiguo Jiang",
"Waneeza Zaheer",
"Aamir Wali",
"S. A. M. Gilani"
] | There has been a growing trend among users of social media platforms to express their emotions using visual content. Visual sentiment analysis is the process of understanding the emotional polarity of images or videos and is still considered a challenging problem in artificial intelligence. Most of the existing models are based on robust machine learning or deep learning techniques. The idea of using deep transfer learning techniques for visual sentiment analysis is fairly new. In this paper, we propose a new approach using a data-augmented transfer learning architecture consisting of a pre-trained VGG16 model that is fine-tuned using SVM with augmented training data. For fine-tuning and evaluation, we initially use two Twitter image datasets. We further validated the proposed model on a third dataset. The commonly used geometric augmentation methods such as rotation, zoom range, width shift, height shift, shear range and horizontal flip are used. We compare our proposed VGG16-SVM model with 3 other state-of-the-art deep models commonly used for transfer learning and 4 machine learning models (besides SVM) used for fine-tuning. The results show that VGG16-SVM produces the overall best accuracy (94%) and recall (96%) among all transfer learning and machine learning pairs. We also show that our proposed model outperforms all previous studies that use the same dataset. | 10.1007/s11042-023-16262-4 | visual sentiment analysis using data-augmented deep transfer learning techniques | there has been a growing trend among users of social media platforms to express their emotions using visual content. visual sentiment analysis is the process of understanding the emotional polarity of images or videos and is still considered a challenging problem in artificial intelligence. most of the existing models are based on robust machine learning or deep learning techniques. the idea of using deep transfer learning techniques for visual sentiment analysis is fairly new. in this paper, we propose a new approach using a data-augmented transfer learning architecture consisting of a pre-trained vgg16 model that is fine-tuned using svm with augmented training data. for fine-tuning and evaluation, we initially use two twitter image datasets. we further validated the proposed model on a third dataset. the commonly used geometric augmentation methods such as rotation, zoom range, width shift, height shift, shear range and horizontal flip are used. we compare our proposed vgg16-svm model with 3 other state-of-the-art deep models commonly used for transfer learning and 4 machine learning models (besides svm) used for fine-tuning. the results show that vgg16-svm produces the overall best accuracy (94%) and recall (96%) among all transfer learning and machine learning pairs. we also show that our proposed model outperforms all previous studies that use the same dataset. | [
"a growing trend",
"users",
"social media platforms",
"their emotions",
"visual content",
"visual sentiment analysis",
"the process",
"the emotional polarity",
"images",
"videos",
"a challenging problem",
"artificial intelligence",
"the existing models",
"robust machine learning",
"deep learning techniques",
"the idea",
"deep transfer",
"techniques",
"visual sentiment analysis",
"this paper",
"we",
"a new approach",
"data-augmented-transfer learning architecture",
"a pre-trained vgg16 model",
"that",
"svm",
"augmented training data",
"fine-tuning",
"evaluation",
"we",
"two twitter image datasets",
"we",
"the proposed model",
"a third dataset",
"the commonly used geometric augmentation methods",
"rotation",
"zoom",
"range",
", width shift",
"height shift",
"shear range",
"horizontal flip",
"we",
"our proposed vgg16-svm model",
"the-art",
"transfer learning",
"4 machine learning models",
"svm",
"fine-tuning",
"the results",
"vgg16-svm",
"the overall best accuracy",
"94%",
"96%",
"all transfer learning and machine learning pairs",
"we",
"our proposed model",
"all previous studies",
"that",
"the same dataset",
"two",
"third",
"3",
"4",
"94%",
"96%"
] |
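
Editor's aside: the record above describes a frozen pre-trained VGG16 backbone whose features are classified by an SVM, with geometric augmentation of the training images. A minimal Keras/scikit-learn sketch of that pipeline follows; the augmentation parameters and random stand-in data are assumptions, not the authors' settings.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.svm import SVC

# Frozen ImageNet backbone; global average pooling yields one 512-d vector per image.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")

# Geometric augmentations of the kind listed in the abstract (parameters assumed).
augmenter = ImageDataGenerator(rotation_range=20, zoom_range=0.15,
                               width_shift_range=0.1, height_shift_range=0.1,
                               shear_range=0.1, horizontal_flip=True)

def extract_features(images):
    return backbone.predict(preprocess_input(images.astype("float32")), verbose=0)

# Stand-in data: 32 random 224x224 RGB images with binary sentiment labels.
x = np.random.randint(0, 256, (32, 224, 224, 3))
y = np.random.randint(0, 2, 32)

# One augmented pass over the training set, then an SVM on the deep features.
x_aug, y_aug = next(augmenter.flow(x, y, batch_size=32, shuffle=False))
features = extract_features(np.concatenate([x, x_aug]))
clf = SVC(kernel="rbf").fit(features, np.concatenate([y, y_aug]))
```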
Deep learning-based electrocardiographic screening for chronic kidney disease | [
"Lauri Holmstrom",
"Matthew Christensen",
"Neal Yuan",
"J. Weston Hughes",
"John Theurer",
"Melvin Jujjavarapu",
"Pedram Fatehi",
"Alan Kwan",
"Roopinder K. Sandhu",
"Joseph Ebinger",
"Susan Cheng",
"James Zou",
"Sumeet S. Chugh",
"David Ouyang"
] | Background: Undiagnosed chronic kidney disease (CKD) is a common and usually asymptomatic disorder that causes a high burden of morbidity and early mortality worldwide. We developed a deep learning model for CKD screening from routinely acquired ECGs. Methods: We collected data from a primary cohort with 111,370 patients which had 247,655 ECGs between 2005 and 2019. Using this data, we developed, trained, validated, and tested a deep learning model to predict whether an ECG was taken within one year of the patient receiving a CKD diagnosis. The model was additionally validated using an external cohort from another healthcare system which had 312,145 patients with 896,620 ECGs between 2005 and 2018. Results: Using 12-lead ECG waveforms, our deep learning algorithm achieves discrimination for CKD of any stage with an AUC of 0.767 (95% CI 0.760–0.773) in a held-out test set and an AUC of 0.709 (0.708–0.710) in the external cohort. Our 12-lead ECG-based model performance is consistent across the severity of CKD, with an AUC of 0.753 (0.735–0.770) for mild CKD, AUC of 0.759 (0.750–0.767) for moderate-severe CKD, and an AUC of 0.783 (0.773–0.793) for ESRD. In patients under 60 years old, our model achieves high performance in detecting any stage CKD with both 12-lead (AUC 0.843 [0.836–0.852]) and 1-lead ECG waveform (0.824 [0.815–0.832]). Conclusions: Our deep learning algorithm is able to detect CKD using ECG waveforms, with stronger performance in younger patients and more severe CKD stages. This ECG algorithm has the potential to augment screening for CKD. | 10.1038/s43856-023-00278-w | deep learning-based electrocardiographic screening for chronic kidney disease | background: undiagnosed chronic kidney disease (ckd) is a common and usually asymptomatic disorder that causes a high burden of morbidity and early mortality worldwide. we developed a deep learning model for ckd screening from routinely acquired ecgs. methods: we collected data from a primary cohort with 111,370 patients which had 247,655 ecgs between 2005 and 2019. using this data, we developed, trained, validated, and tested a deep learning model to predict whether an ecg was taken within one year of the patient receiving a ckd diagnosis. the model was additionally validated using an external cohort from another healthcare system which had 312,145 patients with 896,620 ecgs between 2005 and 2018. results: using 12-lead ecg waveforms, our deep learning algorithm achieves discrimination for ckd of any stage with an auc of 0.767 (95% ci 0.760–0.773) in a held-out test set and an auc of 0.709 (0.708–0.710) in the external cohort. our 12-lead ecg-based model performance is consistent across the severity of ckd, with an auc of 0.753 (0.735–0.770) for mild ckd, auc of 0.759 (0.750–0.767) for moderate-severe ckd, and an auc of 0.783 (0.773–0.793) for esrd. in patients under 60 years old, our model achieves high performance in detecting any stage ckd with both 12-lead (auc 0.843 [0.836–0.852]) and 1-lead ecg waveform (0.824 [0.815–0.832]). conclusions: our deep learning algorithm is able to detect ckd using ecg waveforms, with stronger performance in younger patients and more severe ckd stages. this ecg algorithm has the potential to augment screening for ckd. | [
"backgroundundiagnosed chronic kidney disease",
"ckd",
"a common and usually asymptomatic disorder",
"that",
"a high burden",
"morbidity",
"early mortality",
"we",
"a deep learning model",
"ckd",
"routinely acquired ecgs.methodswe collected data",
"a primary cohort",
"111,370 patients",
"which",
"247,655 ecgs",
"this data",
"we",
"a deep learning model",
"an ecg",
"one year",
"the patient",
"a ckd diagnosis",
"the model",
"an external cohort",
"another healthcare system",
"which",
"312,145 patients",
"896,620 ecgs",
"our deep learning",
"algorithm",
"discrimination",
"ckd",
"any stage",
"an auc",
"95%",
"ci 0.760–0.773",
"an auc",
"(0.708–0.710",
"the external cohort",
"our 12-lead ecg-based model performance",
"the severity",
"ckd",
"an auc",
"(0.735–0.770",
"mild ckd",
"auc",
"0.750–0.767",
"moderate-severe ckd",
"an auc",
"esrd",
"patients",
"our model",
"high performance",
"any stage ckd",
"both 12-lead",
"auc",
"deep learning algorithm",
"ckd",
"ecg waveforms",
"stronger performance",
"younger patients",
"more severe ckd stages",
"this ecg algorithm",
"the potential",
"augment",
"ckd",
"111,370",
"247,655",
"between 2005 and 2019",
"one year",
"312,145",
"896,620",
"between 2005",
"2018.resultsusing",
"12",
"0.767",
"95%",
"0.760–0.773",
"0.709",
"12",
"0.753",
"0.735–0.770",
"0.759",
"0.750–0.767",
"0.783",
"0.773–0.793",
"under 60 years old",
"12",
"0.843",
"1",
"0.824"
] |
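
Editor's aside: the record above classifies 12-lead ECG waveforms with a deep network and reports AUCs. The sketch below is a generic 1D-CNN baseline for that task, assuming 10 s of 500 Hz ECG; the architecture, shapes, and random data are illustrative assumptions, not the authors' model.

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.metrics import roc_auc_score

# 10-second, 500 Hz, 12-lead ECG -> (5000, 12); binary label: CKD within one year.
inputs = layers.Input(shape=(5000, 12))
x = inputs
for filters in (16, 32, 64):
    x = layers.Conv1D(filters, kernel_size=7, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(4)(x)
x = layers.GlobalAveragePooling1D()(x)
output = layers.Dense(1, activation="sigmoid")(x)
model = models.Model(inputs, output)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stand-in waveforms and labels.
ecg = np.random.randn(64, 5000, 12).astype("float32")
ckd = np.random.randint(0, 2, 64)
model.fit(ecg, ckd, epochs=1, batch_size=16, verbose=0)
print("AUC:", roc_auc_score(ckd, model.predict(ecg, verbose=0).ravel()))
```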
Deep-learning based supervisory monitoring of robotized DE-GMAW process through learning from human welders | [
"Rui Yu",
"Yue Cao",
"Jennifer Martin",
"Otto Chiang",
"YuMing Zhang"
] | Double-electrode gas metal arc welding (DE-GMAW) modifies GMAW by adding a second electrode to bypass a portion of the current flowing from the wire. This reduces the current to, and the heat input on, the workpiece. Successful bypassing depends on the relative position of the bypass electrode to the continuously varying wire tip. To ensure proper operation, we propose robotizing the system using a follower robot to carry and adaptively adjust the bypass electrode. The primary information for monitoring this process is the arc image, which directly shows desired and undesired modes. However, developing a robust algorithm for processing the complex arc image is time-consuming and challenging. Employing a deep learning approach requires labeling numerous arc images for the corresponding DE-GMAW modes, which is not practically feasible. To introduce alternative labels, we analyze arc phenomena in various DE-GMAW modes and correlate them with distinct arc systems having varying voltages. These voltages serve as automatically derived labels to train the deep-learning network. The results demonstrated reliable process monitoring. | 10.1007/s40194-023-01635-y | deep-learning based supervisory monitoring of robotized de-gmaw process through learning from human welders | double-electrode gas metal arc welding (de-gmaw) modifies gmaw by adding a second electrode to bypass a portion of the current flowing from the wire. this reduces the current to, and the heat input on, the workpiece. successful bypassing depends on the relative position of the bypass electrode to the continuously varying wire tip. to ensure proper operation, we propose robotizing the system using a follower robot to carry and adaptively adjust the bypass electrode. the primary information for monitoring this process is the arc image, which directly shows desired and undesired modes. however, developing a robust algorithm for processing the complex arc image is time-consuming and challenging. employing a deep learning approach requires labeling numerous arc images for the corresponding de-gmaw modes, which is not practically feasible. to introduce alternative labels, we analyze arc phenomena in various de-gmaw modes and correlate them with distinct arc systems having varying voltages. these voltages serve as automatically derived labels to train the deep-learning network. the results demonstrated reliable process monitoring. | [
"double-electrode gas metal arc welding",
"de",
"-",
"gmaw",
"gmaw",
"a second electrode",
"a portion",
"the current",
"the wire",
"this",
"the heat input",
"the workpiece",
"successful bypassing",
"the relative position",
"the bypass",
"the continuously varying wire tip",
"proper operation",
"we",
"the system",
"a follower robot",
"the bypass",
"the primary information",
"this process",
"the arc image",
"which",
"desired and undesired modes",
"a robust algorithm",
"the complex arc image",
"a deep learning approach",
"numerous arc images",
"the corresponding de-gmaw modes",
"which",
"alternative labels",
"we",
"arc phenomena",
"various de-gmaw modes",
"them",
"distinct arc systems",
"varying voltages",
"these voltages",
"automatically derived labels",
"the deep-learning network",
"the results",
"reliable process monitoring",
"second"
] |
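
Editor's aside: the welding record above derives training labels for arc images from the voltages of the distinct arc systems instead of manual annotation. A minimal sketch of that auto-labeling idea follows; the voltage bands and mode names are hypothetical, not values from the paper.

```python
import numpy as np

# Hypothetical voltage bands (volts) standing in for distinct DE-GMAW modes;
# the real band edges would come from process analysis, not from this sketch.
MODE_BANDS = {0: (0.0, 16.0),    # bypass arc lost / undesired
              1: (16.0, 22.0),   # desired bypassing mode
              2: (22.0, np.inf)} # excessive-voltage mode

def voltage_to_label(v):
    """Derive a training label for an arc image from its synchronized voltage."""
    for label, (lo, hi) in MODE_BANDS.items():
        if lo <= v < hi:
            return label
    raise ValueError(v)

# Each captured frame is paired with the voltage sampled at the same instant,
# producing labels with no manual annotation of the arc images themselves.
voltages = np.array([14.2, 18.7, 19.1, 24.5, 17.3])
labels = np.array([voltage_to_label(v) for v in voltages])
print(labels)  # -> [0 1 1 2 1]
```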
A Pragmatic Privacy-Preserving Deep Learning Framework Satisfying Differential Privacy | [
"Tran Khanh Dang",
"Phat T. Tran-Truong"
] | With the increasing use of technology in our daily lives, data privacy has become a critical issue. It is essential to carefully design technologies to ensure the protection of people’s personal information. In fact, what we need are privacy-enhancing technologies (PETs) rather than solely focusing on technologies themselves. Artificial intelligence (AI) and deep learning technologies, which are considered societal locomotives, are no exception. However, AI practitioners usually design and develop without considering privacy concerns. To address this gap, we propose a pragmatic privacy-preserving deep learning framework that is suitable for AI practitioners. Our proposed framework is designed to satisfy differential privacy, a rigorous standard for preserving privacy. It is based on a setting called Private Aggregation of Teacher Ensembles (PATE), in which we have made several improvements to achieve a better level of accuracy and privacy protection. Specifically, we use a differential private aggregation mechanism called sparse vector technique and combine it with several other improvements such as human-in-the-loop and pre-trained models. Our proposed solution demonstrates the possibility of producing privacy-preserving models that approximate ground-truth models with a fixed privacy budget. These models are capable of handling a large number of training requests, making them suitable for deep learning training processes. Furthermore, our framework can be deployed in both centralized and distributed training settings. We hope that our work will encourage AI practitioners to adopt PETs and build technologies with privacy in mind. | 10.1007/s42979-023-02437-1 | a pragmatic privacy-preserving deep learning framework satisfying differential privacy | with the increasing use of technology in our daily lives, data privacy has become a critical issue. it is essential to carefully design technologies to ensure the protection of people’s personal information. in fact, what we need are privacy-enhancing technologies (pets) rather than solely focusing on technologies themselves. artificial intelligence (ai) and deep learning technologies, which are considered societal locomotives, are no exception. however, ai practitioners usually design and develop without considering privacy concerns. to address this gap, we propose a pragmatic privacy-preserving deep learning framework that is suitable for ai practitioners. our proposed framework is designed to satisfy differential privacy, a rigorous standard for preserving privacy. it is based on a setting called private aggregation of teacher ensembles (pate), in which we have made several improvements to achieve a better level of accuracy and privacy protection. specifically, we use a differential private aggregation mechanism called sparse vector technique and combine it with several other improvements such as human-in-the-loop and pre-trained models. our proposed solution demonstrates the possibility of producing privacy-preserving models that approximate ground-truth models with a fixed privacy budget. these models are capable of handling a large number of training requests, making them suitable for deep learning training processes. furthermore, our framework can be deployed in both centralized and distributed training settings. we hope that our work will encourage ai practitioners to adopt pets and build technologies with privacy in mind. | [
"the increasing use",
"technology",
"our daily lives",
"data privacy",
"a critical issue",
"it",
"technologies",
"the protection",
"people’s personal information",
"fact",
"what",
"we",
"privacy-enhancing technologies",
"pets",
"technologies",
"themselves",
"artificial intelligence",
"deep learning technologies",
"which",
"societal locomotives",
"no exception",
"practitioners",
"privacy concerns",
"this gap",
"we",
"a pragmatic privacy-preserving deep learning framework",
"that",
"ai practitioners",
"our proposed framework",
"differential privacy",
"a rigorous standard",
"privacy",
"it",
"a setting",
"private aggregation",
"teacher ensembles",
"which",
"we",
"several improvements",
"a better level",
"accuracy",
"privacy protection",
"we",
"a differential private aggregation mechanism",
"sparse vector technique",
"it",
"several other improvements",
"the-loop",
"our proposed solution",
"the possibility",
"privacy-preserving models",
"that",
"a fixed privacy budget",
"these models",
"a large number",
"training requests",
"them",
"deep learning training processes",
"our framework",
"training settings",
"we",
"our work",
"practitioners",
"pets",
"technologies",
"privacy",
"mind"
] |
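
Editor's aside: the record above builds on PATE, where teacher models vote and the aggregate answer is released with noise to satisfy differential privacy. The sketch below shows a simplified Laplace noisy-max aggregator in that spirit; the paper's sparse-vector refinement adds a noisy threshold test before answering, which is omitted here, and all values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_aggregate(teacher_votes, n_classes, epsilon):
    """Label one query by Laplace-noised majority vote over teacher predictions.

    A simplified noisy-max aggregator in the spirit of PATE, not the paper's
    exact mechanism.
    """
    counts = np.bincount(teacher_votes, minlength=n_classes).astype(float)
    counts += rng.laplace(scale=2.0 / epsilon, size=n_classes)  # DP noise
    return int(np.argmax(counts))

# 50 hypothetical teachers vote on one unlabeled student example (3 classes).
votes = rng.integers(0, 3, size=50)
print(noisy_aggregate(votes, n_classes=3, epsilon=1.0))
```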
A stacked deep learning approach for multiclass classification of plant diseases | [
"Aman Sharma",
"Raghav Dalmia",
"Aarush Saxena",
"Rajni Mohana"
] | Purpose: Plant diseases are one of the main factors affecting food production; to reduce production losses, they must be swiftly identified and treated. Deep learning algorithms in association with computer vision techniques have recently found usage in the diagnosis of plant diseases, offering a potent tool with highly accurate results. The objective of this study is to identify a stacking ensemble-based solution by using several algorithms in the process of classifying and diagnosing plant diseases, describing trends, and emphasizing gaps. Method: The stacking ensemble is made using the top four performing deep learning algorithms and a multi-layered perceptron as meta classifier. In this regard, we reviewed more than 15 studies from the previous three years that address problems with disease detection, dataset characteristics, researched crops, and pathogens in various ways. Results: The proposed ensemble model achieved a maximum accuracy of 98.13% compared to the conventional architectures. For comparing the results, various performance metrics such as accuracy, loss, and precision are used; the ensemble outperformed the deep learning algorithms run separately on the data, as shown in Table 5. Conclusion: The suggested framework can help identify the presence of disease in a sample of plant leaves as a preventative strategy, as the results were quite promising compared to those of the existing literature. | 10.1007/s11104-024-06719-2 | a stacked deep learning approach for multiclass classification of plant diseases | purpose: plant diseases are one of the main factors affecting food production; to reduce production losses, they must be swiftly identified and treated. deep learning algorithms in association with computer vision techniques have recently found usage in the diagnosis of plant diseases, offering a potent tool with highly accurate results. the objective of this study is to identify a stacking ensemble-based solution by using several algorithms in the process of classifying and diagnosing plant diseases, describing trends, and emphasizing gaps. method: the stacking ensemble is made using the top four performing deep learning algorithms and a multi-layered perceptron as meta classifier. in this regard, we reviewed more than 15 studies from the previous three years that address problems with disease detection, dataset characteristics, researched crops, and pathogens in various ways. results: the proposed ensemble model achieved a maximum accuracy of 98.13% compared to the conventional architectures. for comparing the results, various performance metrics such as accuracy, loss, and precision are used; the ensemble outperformed the deep learning algorithms run separately on the data, as shown in table 5. conclusion: the suggested framework can help identify the presence of disease in a sample of plant leaves as a preventative strategy, as the results were quite promising compared to those of the existing literature. | [
"purposeplant diseases",
"the main factors",
"food production",
"production losses",
"they",
"deep learning algorithms",
"association",
"computer vision techniques",
"usage",
"the diagnosis",
"plant diseases",
"a potent tool",
"highly accurate results",
"the objective",
"this study",
"a stacking ensemble-based solution",
"several algorithms",
"the process",
"plant diseases",
"trends",
"gaps.methodthe stacking ensemble",
"deep learning algorithms",
"multi-layered perceptron",
"meta classifier",
"this regard",
"we",
"more than 15 studies",
"the previous three years",
"that",
"problems",
"disease detection",
"dataset characteristics",
"crops",
"pathogens",
"various ways.resultsthe proposed ensemble model",
"a maximum accuracy",
"98.13%",
"the conventional architectures",
"the results",
"various performance metrics",
"accuracy",
"loss",
"which",
"the results",
"the deep learning algorithms",
"the data",
"table",
"5.conclusionthe suggested framework",
"the presence",
"disease",
"a sample",
"plant leaves",
"a preventative strategy",
"the results",
"the results",
"existing literature",
"one",
"four",
"meta classifier",
"more than 15",
"the previous three years",
"98.13%"
] |
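
Editor's aside: the record above stacks several base learners under an MLP meta-classifier. A minimal scikit-learn sketch of that pattern follows; shallow models stand in for the four fine-tuned deep networks, and all names, sizes, and data are assumptions.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-ins for the four top-performing deep networks: in the real pipeline each
# base estimator would wrap a CNN and expose predict_proba over leaf images.
base = [("m1", LogisticRegression(max_iter=500)),
        ("m2", RandomForestClassifier(n_estimators=100)),
        ("m3", SVC(probability=True)),
        ("m4", LogisticRegression(max_iter=500, C=0.1))]

# Multi-layered perceptron as the meta-classifier, stacked on base probabilities.
stack = StackingClassifier(estimators=base,
                           final_estimator=MLPClassifier(hidden_layer_sizes=(32,),
                                                         max_iter=500),
                           stack_method="predict_proba", cv=5)

X = np.random.randn(300, 64)            # e.g. pooled image features
y = np.random.randint(0, 5, 300)        # five hypothetical disease classes
stack.fit(X, y)
print("train accuracy:", stack.score(X, y))
```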
Forecasting bitcoin volatility: exploring the potential of deep learning | [
"Tiago E. Pratas",
"Filipe R. Ramos",
"Lihki Rubio"
] | This study aims to evaluate forecasting properties of classic methodologies (ARCH and GARCH models) in comparison with deep learning methodologies (MLP, RNN, and LSTM architectures) for predicting Bitcoin's volatility. As a new asset class with unique characteristics, Bitcoin's high volatility and structural breaks make forecasting challenging. Based on 2753 observations from 08-09-2014 to 01-05-2022, this study focuses on Bitcoin logarithmic returns. Results show that deep learning methodologies have advantages in terms of forecast quality, although significant computational costs are required. Although both MLP and RNN models produce smoother forecasts with less fluctuation, they fail to capture large spikes. The LSTM architecture, on the other hand, reacts strongly to such movements and tries to adjust its forecast accordingly. To compare forecasting accuracy at different horizons, MAPE and MAE metrics are used. Diebold–Mariano tests were conducted to compare the forecasts, confirming the superiority of deep learning methodologies. Overall, this study suggests that deep learning methodologies could provide a promising tool for forecasting Bitcoin returns (and therefore volatility), especially for short-term horizons. | 10.1007/s40822-023-00232-0 | forecasting bitcoin volatility: exploring the potential of deep learning | this study aims to evaluate forecasting properties of classic methodologies (arch and garch models) in comparison with deep learning methodologies (mlp, rnn, and lstm architectures) for predicting bitcoin's volatility. as a new asset class with unique characteristics, bitcoin's high volatility and structural breaks make forecasting challenging. based on 2753 observations from 08-09-2014 to 01-05-2022, this study focuses on bitcoin logarithmic returns. results show that deep learning methodologies have advantages in terms of forecast quality, although significant computational costs are required. although both mlp and rnn models produce smoother forecasts with less fluctuation, they fail to capture large spikes. the lstm architecture, on the other hand, reacts strongly to such movements and tries to adjust its forecast accordingly. to compare forecasting accuracy at different horizons, mape and mae metrics are used. diebold–mariano tests were conducted to compare the forecasts, confirming the superiority of deep learning methodologies. overall, this study suggests that deep learning methodologies could provide a promising tool for forecasting bitcoin returns (and therefore volatility), especially for short-term horizons. | [
"this study",
"forecasting properties",
"classic methodologies",
"arch and garch models",
"comparison",
"deep learning methodologies",
"mlp",
"rnn",
"lstm architectures",
"bitcoin's volatility",
"a new asset class",
"unique characteristics",
"bitcoin's high volatility",
"structural breaks",
"forecasting",
"2753 observations",
"this study",
"bitcoin logarithmic returns",
"results",
"deep learning methodologies",
"advantages",
"terms",
"forecast quality",
"significant computational costs",
"both mlp",
"models",
"smoother forecasts",
"less fluctuation",
"they",
"large spikes",
"the lstm architecture",
"the other hand",
"such movements",
"its forecast",
"forecasting accuracy",
"different horizons mape",
"mae metrics",
"–mariano tests",
"the forecast",
"the superiority",
"deep learning methodologies",
"this study",
"deep learning methodologies",
"a promising tool",
"bitcoin returns",
"therefore volatility",
"short-term horizons",
"2753",
"08",
"01",
"mae metrics"
] |
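
Editor's aside: the record above works with Bitcoin logarithmic returns and scores volatility forecasts with MAE and MAPE. The sketch below computes log returns and evaluates a simple EWMA (RiskMetrics-style) variance forecast against squared returns; this classical baseline and the synthetic price path are assumptions for illustration, not the paper's ARCH/GARCH or LSTM models.

```python
import numpy as np

rng = np.random.default_rng(1)
prices = 20000 * np.exp(np.cumsum(rng.normal(0, 0.04, 500)))  # stand-in BTC series

returns = np.diff(np.log(prices))          # logarithmic returns, as in the study

# EWMA variance forecast, one step ahead; squared return is the volatility proxy.
lam, var = 0.94, returns[:30].var()
forecasts, realized = [], []
for r in returns[30:]:
    forecasts.append(var)
    realized.append(r ** 2)
    var = lam * var + (1 - lam) * r ** 2

f, y = np.array(forecasts), np.array(realized)
mae = np.mean(np.abs(f - y))
mape = np.mean(np.abs((f - y) / np.where(y == 0, 1e-12, y))) * 100
print(f"MAE={mae:.6f}  MAPE={mape:.1f}%")
```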
Presenting a three layer stacking ensemble classifier of deep learning and machine learning for skin cancer classification | [
"Bahman Jafari Tabaghsar",
"Reza Tavoli",
"Mohammad Mahdi Alizadeh Toosi"
] | One of the most common types of cancer in the world is skin cancer. Despite the different types of skin diseases with different shapes, the classification of skin diseases is a very difficult task. As a result, considering such a problem, a combination model of deep learning and machine learning algorithms has been proposed for skin disease classification. In this paper, a three-layer architecture based on ensemble learning is presented. In the first layer, the training input is given to a convolutional neural network and EfficientNet. The output of the first layer is given to the classifiers of the second layer, including machine learning classifiers. The output of the best decision of these classifiers is sent to the third-layer classifier and the final prediction is made. The reason for using the three-layer architecture based on group learning was the lack of correct recognition of some classes by simple classifiers. On the other hand, some diseases with different classes are classified in the same class. This model helps to correctly identify input samples with the correct combination of classifiers in different layers. The HAM10000 data set has been used to test and validate the proposed method. The mentioned dataset includes 10,015 images of skin lesions in seven different classes and includes different types of skin diseases. The accuracy is 99.97 on the testing set, which was much better than the previous heavy models. | 10.1007/s11042-024-19195-8 | presenting a three layer stacking ensemble classifier of deep learning and machine learning for skin cancer classification | one of the most common types of cancer in the world is skin cancer. despite the different types of skin diseases with different shapes, the classification of skin diseases is a very difficult task. as a result, considering such a problem, a combination model of deep learning and machine learning algorithms has been proposed for skin disease classification. in this paper, a three-layer architecture based on ensemble learning is presented. in the first layer, the training input is given to a convolutional neural network and efficientnet. the output of the first layer is given to the classifiers of the second layer, including machine learning classifiers. the output of the best decision of these classifiers is sent to the third-layer classifier and the final prediction is made. the reason for using the three-layer architecture based on group learning was the lack of correct recognition of some classes by simple classifiers. on the other hand, some diseases with different classes are classified in the same class. this model helps to correctly identify input samples with the correct combination of classifiers in different layers. the ham10000 data set has been used to test and validate the proposed method. the mentioned dataset includes 10,015 images of skin lesions in seven different classes and includes different types of skin diseases. the accuracy is 99.97 on the testing set, which was much better than the previous heavy models. | [
"the most common types",
"cancer",
"the world",
"skin cancer",
"the different types",
"skin diseases",
"different shapes",
"the classification",
"skin diseases",
"a very difficult task",
"a result",
"such a problem",
"a combination model",
"deep learning algorithms",
"machine",
"a three-layer architecture",
"ensemble learning",
"the first layer",
"the training input",
"convolutional neural network",
"efficientnet",
"the output",
"the first layer",
"the classifiers",
"the second layer",
"machine learning classifiers",
"the output",
"the best decision",
"these classifiers",
"the third layer classifier",
"the final prediction",
"made.the reason",
"the three-layer architecture",
"group learning",
"the lack",
"correct recognition",
"some classes",
"simple classifications",
"the other hand",
"some diseases",
"different classes",
"the same class",
"this model",
"input samples",
"the correct combination",
"classifications",
"different layers.ham10000 data set",
"the proposed method",
"the mentioned dataset",
"10,015 images",
"skin lesions",
"seven different classes",
"different types",
"skin diseases",
"the accuracy",
"the testing set",
"which",
"the previous heavy models",
"one",
"three",
"first",
"first",
"second",
"third",
"three",
"10,015",
"seven",
"99.97"
] |
RRS: Review-Based Recommendation System Using Deep Learning for Vietnamese | [
"Minh Hoang Nguyen",
"Thuat Thien Nguyen",
"Minh Nhat Ta",
"Tien Minh Nguyen",
"Kiet Van Nguyen"
] | Tourism, which includes sightseeing, relaxation, and discovery, is a fundamental aspect of human life. One of the most critical considerations for traveling is accommodation, mainly hotels. To improve the travel experience, we have presented a solution for building a recommendation model using Vietnamese and user data to suggest travelers choose the ideal hotel. Our data was collected from two well-known websites, Traveloka and Ivivu, and includes information about hotels in Vietnam and users’ feedback history, such as comments, ratings, and the names of users and hotels. We then preprocessed and labeled the inter-annotator agreement for various aspects, including service (0.89), infrastructure (0.84), sanitary (0.83), location (0.89), and attitude (0.83). Our recommendation model is built by using Collaborative Filtering and deep learning techniques. Furthermore, we suggest incorporating context vectors from tourists’ Vietnamese comments in the recommendation process. The context model is developed by using deep learning techniques to extract topics and sentiments from the words effectively. The results of our proposed model, as measured by the MSE, were 0.027, which is significantly better than a context-free model using the same parameters, which had an MSE of 0.061. Additionally, our Deep Learning, created using PhoBERT embedding, had an accuracy of 81% for topic classification and 82% for sentiment classification. The FastText based model had 82% and 81% accuracy for topic and sentiment, respectively. Our research demonstrates that our approach improves the accuracy of the recommendation model and has the potential for further development in the future. This idea can introduce a new Recommendation System that can overcome existing limitations and apply to other areas. | 10.1007/s42979-024-02812-6 | rrs: review-based recommendation system using deep learning for vietnamese | tourism, which includes sightseeing, relaxation, and discovery, is a fundamental aspect of human life. one of the most critical considerations for traveling is accommodation, mainly hotels. to improve the travel experience, we have presented a solution for building a recommendation model using vietnamese and user data to suggest travelers choose the ideal hotel. our data was collected from two well-known websites, traveloka and ivivu, and includes information about hotels in vietnam and users’ feedback history, such as comments, ratings, and the names of users and hotels. we then preprocessed and labeled the inter-annotator agreement for various aspects, including service (0.89), infrastructure (0.84), sanitary (0.83), location (0.89), and attitude (0.83). our recommendation model is built by using collaborative filtering and deep learning techniques. furthermore, we suggest incorporating context vectors from tourists’ vietnamese comments in the recommendation process. the context model is developed by using deep learning techniques to extract topics and sentiments from the words effectively. the results of our proposed model, as measured by the mse, were 0.027, which is significantly better than a context-free model using the same parameters, which had an mse of 0.061. additionally, our deep learning, created using phobert embedding, had an accuracy of 81% for topic classification and 82% for sentiment classification. the fasttext based model had 82% and 81% accuracy for topic and sentiment, respectively. 
our research demonstrates that our approach improves the accuracy of the recommendation model and has the potential for further development in the future. this idea can introduce a new recommendation system that can overcome existing limitations and apply to other areas. | [
"tourism",
"which",
"sightseeing",
"relaxation",
"discovery",
"a fundamental aspect",
"human life",
"the most critical considerations",
"accommodation",
", mainly hotels",
"the travel experience",
"we",
"a solution",
"a recommendation model",
"user data",
"travelers",
"the ideal hotel",
"our data",
"two well-known websites",
"traveloka",
"ivivu",
"information",
"hotels",
"vietnam",
"users’ feedback history",
"comments",
"ratings",
"the names",
"users",
"hotels",
"we",
"the inter-annotator agreement",
"various aspects",
"service",
"infrastructure",
"location",
"attitude",
"our recommendation model",
"collaborative filtering",
"deep learning techniques",
"we",
"context vectors",
"tourists’ vietnamese comments",
"the recommendation process",
"the context model",
"deep learning techniques",
"topics",
"sentiments",
"the words",
"the results",
"our proposed model",
"the mse",
"which",
"a context-free model",
"the same parameters",
"which",
"an mse",
"our deep learning",
"phobert",
"an accuracy",
"81%",
"topic classification",
"82%",
"sentiment classification",
"the fasttext based model",
"82%",
"81% accuracy",
"topic",
"sentiment",
"our research",
"our approach",
"the accuracy",
"the recommendation model",
"the potential",
"further development",
"the future",
"this idea",
"a new recommendation system",
"that",
"existing limitations",
"other areas",
"one",
"vietnamese",
"two",
"vietnam",
"0.89",
"0.84",
"0.83",
"0.89",
"0.83",
"vietnamese",
"0.027",
"0.061",
"81%",
"82%",
"82% and",
"81%"
] |
Basal Cell Carcinoma Diagnosis with Fusion of Deep Learning and Telangiectasia Features | [
"Akanksha Maurya",
"R. Joe Stanley",
"Hemanth Y. Aradhyula",
"Norsang Lama",
"Anand K. Nambisan",
"Gehana Patel",
"Daniyal Saeed",
"Samantha Swinfard",
"Colin Smith",
"Sadhika Jagannathan",
"Jason R. Hagerty",
"William V. Stoecker"
] | In recent years, deep learning (DL) has been used extensively and successfully to diagnose different cancers in dermoscopic images. However, most approaches lack clinical inputs supported by dermatologists that could aid in higher accuracy and explainability. To dermatologists, the presence of telangiectasia, or narrow blood vessels that typically appear serpiginous or arborizing, is a critical indicator of basal cell carcinoma (BCC). Exploiting the feature information present in telangiectasia through a combination of DL-based techniques could create a pathway for both, improving DL results as well as aiding dermatologists in BCC diagnosis. This study demonstrates a novel “fusion” technique for BCC vs non-BCC classification using ensemble learning on a combination of (a) handcrafted features from semantically segmented telangiectasia (U-Net-based) and (b) deep learning features generated from whole lesion images (EfficientNet-B5-based). This fusion method achieves a binary classification accuracy of 97.2%, with a 1.3% improvement over the corresponding DL-only model, on a holdout test set of 395 images. An increase of 3.7% in sensitivity, 1.5% in specificity, and 1.5% in precision along with an AUC of 0.99 was also achieved. Metric improvements were demonstrated in three stages: (1) the addition of handcrafted telangiectasia features to deep learning features, (2) including areas near telangiectasia (surround areas), (3) discarding the noisy lower-importance features through feature importance. Another novel approach to feature finding with weak annotations through the examination of the surrounding areas of telangiectasia is offered in this study. The experimental results show state-of-the-art accuracy and precision in the diagnosis of BCC, compared to three benchmark techniques. Further exploration of deep learning techniques for individual dermoscopy feature detection is warranted. | 10.1007/s10278-024-00969-3 | basal cell carcinoma diagnosis with fusion of deep learning and telangiectasia features | in recent years, deep learning (dl) has been used extensively and successfully to diagnose different cancers in dermoscopic images. however, most approaches lack clinical inputs supported by dermatologists that could aid in higher accuracy and explainability. to dermatologists, the presence of telangiectasia, or narrow blood vessels that typically appear serpiginous or arborizing, is a critical indicator of basal cell carcinoma (bcc). exploiting the feature information present in telangiectasia through a combination of dl-based techniques could create a pathway for both, improving dl results as well as aiding dermatologists in bcc diagnosis. this study demonstrates a novel “fusion” technique for bcc vs non-bcc classification using ensemble learning on a combination of (a) handcrafted features from semantically segmented telangiectasia (u-net-based) and (b) deep learning features generated from whole lesion images (efficientnet-b5-based). this fusion method achieves a binary classification accuracy of 97.2%, with a 1.3% improvement over the corresponding dl-only model, on a holdout test set of 395 images. an increase of 3.7% in sensitivity, 1.5% in specificity, and 1.5% in precision along with an auc of 0.99 was also achieved. 
metric improvements were demonstrated in three stages: (1) the addition of handcrafted telangiectasia features to deep learning features, (2) including areas near telangiectasia (surround areas), (3) discarding the noisy lower-importance features through feature importance. another novel approach to feature finding with weak annotations through the examination of the surrounding areas of telangiectasia is offered in this study. the experimental results show state-of-the-art accuracy and precision in the diagnosis of bcc, compared to three benchmark techniques. further exploration of deep learning techniques for individual dermoscopy feature detection is warranted. | [
"recent years",
"deep learning",
"dl",
"different cancers",
"dermoscopic images",
"most approaches",
"clinical inputs",
"dermatologists",
"that",
"higher accuracy",
"explainability",
"dermatologists",
"the presence",
"telangiectasia",
"narrow blood vessels",
"that",
"a critical indicator",
"basal cell carcinoma",
"(bcc",
"the feature information",
"telangiectasia",
"a combination",
"dl-based techniques",
"a pathway",
"both",
"dl results",
"dermatologists",
"bcc diagnosis",
"this study",
"a novel “fusion” technique",
"bcc",
"non-bcc classification",
"ensemble learning",
"a combination",
"(a) handcrafted features",
"semantically segmented telangiectasia",
"(b) deep learning features",
"whole lesion images",
"this fusion method",
"a binary classification accuracy",
"97.2%",
"a 1.3% improvement",
"the corresponding dl-only model",
"a holdout test",
"395 images",
"an increase",
"3.7%",
"sensitivity",
"1.5%",
"specificity",
"1.5%",
"precision",
"an auc",
"metric improvements",
"three stages",
"the addition",
"deep learning features",
"areas",
"telangiectasia",
"surround areas",
"the noisy lower-importance features",
"feature importance",
"another novel approach",
"weak annotations",
"the examination",
"the surrounding areas",
"telangiectasia",
"this study",
"the experimental results",
"the-art",
"precision",
"the diagnosis",
"bcc",
"three benchmark techniques",
"further exploration",
"deep learning techniques",
"individual dermoscopy feature detection",
"recent years",
"telangiectasia",
"bcc",
"telangiectasia",
"telangiectasia",
"97.2%",
"1.3%",
"395",
"3.7%",
"1.5%",
"1.5%",
"0.99",
"three",
"1",
"telangiectasia",
"2",
"telangiectasia",
"3",
"telangiectasia",
"bcc",
"three"
] |
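
Editor's aside: the record above fuses handcrafted telangiectasia features with deep lesion features and then prunes noisy low-importance features. A minimal scikit-learn sketch of that fuse-then-prune pattern follows; the feature dimensions, median-importance threshold, and random data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 400
deep = rng.normal(size=(n, 128))        # e.g. EfficientNet-style lesion embedding
handcrafted = rng.normal(size=(n, 12))  # e.g. vessel stats from a segmentation mask
y = rng.integers(0, 2, n)               # BCC vs non-BCC labels

fused = np.hstack([deep, handcrafted])  # feature-level fusion

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(fused, y)

# Drop low-importance (noisy) features, as in the abstract's third stage, and refit.
keep = clf.feature_importances_ >= np.median(clf.feature_importances_)
clf_pruned = RandomForestClassifier(n_estimators=300, random_state=0)
clf_pruned.fit(fused[:, keep], y)
print(f"kept {keep.sum()} of {fused.shape[1]} fused features")
```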
Semantic speech analysis using machine learning and deep learning techniques: a comprehensive review | [
"Suryakant Tyagi",
"Sándor Szénási"
] | Human cognitive functions such as perception, attention, learning, memory, reasoning, and problem-solving are all significantly influenced by emotion. Emotion has a particularly potent impact on attention, modifying its selectivity in particular and influencing behavior and action motivation. Artificial Emotional Intelligence (AEI) technologies enable computers to understand a user's emotional state and respond appropriately. These systems enable a realistic dialogue between people and machines. The current generation of adaptive user interface technologies is built on techniques from data analytics and machine learning (ML), namely deep learning (DL) artificial neural networks (ANN) from multimodal data, such as videos of facial expressions, stance, and gesture, voice, and bio-physiological data (such as eye movement, ECG, respiration, EEG, FMRT, EMG, eye tracking). In this study, we reviewed existing literature based on ML and data analytics techniques being used to detect emotions in speech. We examine the efficacy of data analytics and ML techniques in this unique area of multimodal data processing and extracting emotions from speech. This study analyzes how emotional chatbots, facial expressions, images, and social media texts can be effective in detecting emotions. PRISMA methodology is used to review the existing survey. Support Vector Machines (SVM), Naïve Bayes (NB), Random Forests (RF), Recurrent Neural Networks (RNN), Logistic Regression (LR), etc., are commonly used ML techniques for emotion extraction purposes. This study provides a new taxonomy about the application of ML in SER. | 10.1007/s11042-023-17769-6 | semantic speech analysis using machine learning and deep learning techniques: a comprehensive review | human cognitive functions such as perception, attention, learning, memory, reasoning, and problem-solving are all significantly influenced by emotion. emotion has a particularly potent impact on attention, modifying its selectivity in particular and influencing behavior and action motivation. artificial emotional intelligence (aei) technologies enable computers to understand a user's emotional state and respond appropriately. these systems enable a realistic dialogue between people and machines. the current generation of adaptive user interface technologies is built on techniques from data analytics and machine learning (ml), namely deep learning (dl) artificial neural networks (ann) from multimodal data, such as videos of facial expressions, stance, and gesture, voice, and bio-physiological data (such as eye movement, ecg, respiration, eeg, fmrt, emg, eye tracking). in this study, we reviewed existing literature based on ml and data analytics techniques being used to detect emotions in speech. we examine the efficacy of data analytics and ml techniques in this unique area of multimodal data processing and extracting emotions from speech. this study analyzes how emotional chatbots, facial expressions, images, and social media texts can be effective in detecting emotions. prisma methodology is used to review the existing survey. support vector machines (svm), naïve bayes (nb), random forests (rf), recurrent neural networks (rnn), logistic regression (lr), etc., are commonly used ml techniques for emotion extraction purposes. this study provides a new taxonomy about the application of ml in ser. 
the result shows that long-short term memory (lstm) and convolutional neural networks (cnn) are found to be the most useful methodology for this purpose. | [
"human cognitive functions",
"perception",
"attention",
"learning",
"memory",
"reasoning",
"problem-solving",
"emotion",
"emotion",
"a particularly potent impact",
"attention",
"its selectivity",
"behavior",
"action motivation",
"artificial emotional intelligence (aei) technologies",
"computers",
"a user's emotional state",
"these systems",
"a realistic dialogue",
"people",
"machines",
"the current generation",
"adaptive user",
"interference technologies",
"techniques",
"data analytics",
"machine learning",
"ml",
"namely deep learning",
"(dl) artificial neural networks",
"ann",
"multimodal data",
"videos",
"facial expressions",
"stance",
"gesture",
"voice",
"bio-physiological data",
"eye movement",
"ecg",
"respiration",
"fmrt",
"emg",
"eye tracking",
"this study",
"we",
"existing literature",
"ml and data analytics techniques",
"emotions",
"speech",
"the efficacy",
"data analytics",
"ml",
"techniques",
"this unique area",
"multimodal data processing",
"emotions",
"speech",
"this study",
"emotional chatbots",
"facial expressions",
"images",
"social media texts",
"emotions",
"prisma methodology",
"the existing survey",
"support vector machines",
"svm",
"naïve bayes",
"random forests",
"neural networks",
"rnn",
"logistic regression",
"lr",
"ml techniques",
"emotion extraction purposes",
"this study",
"a new taxonomy",
"the application",
"ml",
"ser",
"the result",
"long-short term memory",
"lstm",
"convolutional neural networks",
"cnn",
"the most useful methodology",
"this purpose",
"naïve bayes (nb",
"cnn"
] |
Detection of Image Tampering Using Deep Learning, Error Levels and Noise Residuals | [
"Sunen Chakraborty",
"Kingshuk Chatterjee",
"Paramita Dey"
] | Images were once considered a reliable source of information. However, as photo-editing software started to gain attention, it gave rise to an illegal activity called image tampering. These days we can come across innumerable tampered images across the internet. Software such as Photoshop, GNU Image Manipulation Program, etc. is applied to form tampered images from real ones in just a few minutes. For discovering hidden signs of tampering in an image, deep learning models are a more effective tool than any other methods. Models used in deep learning are capable of extracting intricate features from an image automatically. Here we propose a combination of traditional handcrafted features along with a deep learning model to differentiate between authentic and tampered images. We have presented a dual-branch Convolutional Neural Network in conjunction with Error Level Analysis and noise residuals from the Spatial Rich Model. For our experiment, we utilized the freely accessible CASIA dataset. After training the dual-branch network for 16 epochs, it generated an accuracy of 98.55%. We have also provided a comparative analysis with other previously proposed work in the field of image forgery detection. This hybrid approach proves that deep learning models along with some well-known traditional approaches can provide better results for detecting tampered images. | 10.1007/s11063-024-11448-9 | detection of image tampering using deep learning, error levels and noise residuals | images were once considered a reliable source of information. however, as photo-editing software started to gain attention, it gave rise to an illegal activity called image tampering. these days we can come across innumerable tampered images across the internet. software such as photoshop, gnu image manipulation program, etc. is applied to form tampered images from real ones in just a few minutes. for discovering hidden signs of tampering in an image, deep learning models are a more effective tool than any other methods. models used in deep learning are capable of extracting intricate features from an image automatically. here we propose a combination of traditional handcrafted features along with a deep learning model to differentiate between authentic and tampered images. we have presented a dual-branch convolutional neural network in conjunction with error level analysis and noise residuals from the spatial rich model. for our experiment, we utilized the freely accessible casia dataset. after training the dual-branch network for 16 epochs, it generated an accuracy of 98.55%. we have also provided a comparative analysis with other previously proposed work in the field of image forgery detection. this hybrid approach proves that deep learning models along with some well-known traditional approaches can provide better results for detecting tampered images. | [
"images",
"a reliable source",
"information",
"photo-editing software",
"it",
"rise",
"illegal activities",
"which",
"image tampering",
"we",
"innumerable tampered images",
"the internet",
"software",
"photoshop",
"gnu image manipulation program",
"tampered images",
"real ones",
"just a few minutes",
"hidden signs",
"an image",
"deep learning models",
"an effective tool",
"any other methods",
"models",
"deep learning",
"intricate features",
"an image",
"we",
"a combination",
"traditional handcrafted features",
"a deep learning model",
"authentic and tampered images",
"we",
"a dual-branch convolutional neural network",
"conjunction",
"error level analysis",
"noise residuals",
"spatial rich model",
"our experiment",
"we",
"the freely accessible casia dataset",
"the dual-branch network",
"16 epochs",
"it",
"an accuracy",
"98.55%",
"we",
"a comparative analysis",
"other previously proposed work",
"the field",
"image forgery detection",
"this hybrid approach",
"deep learning models",
"some well-known traditional approaches",
"better results",
"tampered images",
"these days",
"just a few minutes",
"16",
"98.55%"
] |
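
Editor's aside: the record above feeds Error Level Analysis (ELA) maps into one branch of its network. ELA is computed by resaving an image as JPEG at a known quality and differencing it against the input; edited regions tend to show a different error level. A minimal PIL/NumPy sketch follows; the quality settings and stand-in image are assumptions, not the paper's preprocessing code.

```python
from io import BytesIO
import numpy as np
from PIL import Image, ImageChops

def error_level_analysis(img, quality=90):
    """Error Level Analysis: difference between an image and its JPEG resave."""
    buf = BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(img.convert("RGB"), resaved)
    arr = np.asarray(diff, dtype=np.float32)
    return arr * (255.0 / max(arr.max(), 1.0))   # rescale for visibility

# Stand-in image: random content saved once as JPEG, as a camera would produce.
original = Image.fromarray(np.random.randint(0, 256, (128, 128, 3), np.uint8))
buf = BytesIO()
original.save(buf, "JPEG", quality=95)
buf.seek(0)
ela_map = error_level_analysis(Image.open(buf))
print(ela_map.shape, ela_map.max())   # one ELA channel per RGB channel
```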
Repetition Dynamics-based Deep Learning Model for Next Basket Recommendation | [
"Kaushlendra Kumar Sinha",
"Somaraju Suvvari"
] | A Next Basket Recommendation system analyzes users’ past interactions to provide personalized recommendations. For a better understanding of the complex relations between users and objects and to handle large datasets, most recommendation system researchers present deep learning-based models. These deep learning-based models consider users’ short-term preferences and long-term preferences for better performance, but these existing deep learning-based models neglect the repetition behavior of the user, even though some researchers have highlighted the importance of repetition behavior in the literature. Towards this objective, we proposed a deep learning model that considers the user’s repetition behavior, and our model also included the correlation dynamics at the embedding level. This means that the generated embeddings represent a combination of a user’s repetition behavior and the correlation-based dynamics of items. Further, to extract the short-term preferences of the user, we fed these embeddings to an LSTM architecture, and finally, it generates suitable personalized recommendations. To evaluate the effectiveness of the proposed model, we tested our model on different real-world datasets by using extensive performance metrics, and the results are compared against state-of-the-art models. There has been considerable improvement in recall, ranging from 42 to 99% over various values of k on the Ta Feng and Dunnhumby datasets, and considering precision there has been an improvement of 221–400%. | 10.1007/s42979-023-02403-x | repetition dynamics-based deep learning model for next basket recommendation | a next basket recommendation system analyzes users’ past interactions to provide personalized recommendations. for a better understanding of the complex relations between users and objects and to handle large datasets, most recommendation system researchers present deep learning-based models. these deep learning-based models consider users’ short-term preferences and long-term preferences for better performance, but these existing deep learning-based models neglect the repetition behavior of the user, even though some researchers have highlighted the importance of repetition behavior in the literature. towards this objective, we proposed a deep learning model that considers the user’s repetition behavior, and our model also included the correlation dynamics at the embedding level. this means that the generated embeddings represent a combination of a user’s repetition behavior and the correlation-based dynamics of items. further, to extract the short-term preferences of the user, we fed these embeddings to an lstm architecture, and finally, it generates suitable personalized recommendations. to evaluate the effectiveness of the proposed model, we tested our model on different real-world datasets by using extensive performance metrics, and the results are compared against state-of-the-art models. there has been considerable improvement in recall, ranging from 42 to 99% over various values of k on the ta feng and dunnhumby datasets, and considering precision there has been an improvement of 221–400%. | [
"next basket recommendation system",
"users",
"interactions",
"personalized recommendations",
"a better understanding",
"the complex relations",
"users",
"objects",
"large datasets",
"most recommendation system researchers",
"deep learning-based models",
"these deep learning based models",
"users",
"better performance",
"these existing deep learning-based models",
"the repetition behavior",
"the user",
"some researchers",
"the importance",
"repetition behavior",
"the literature",
"this objective",
"we",
"a deep learning model",
"that",
"the user’s repetition behavior",
"our model",
"the correlation dynamics",
"the embedding level",
"this",
"the generated embeddings",
"a combination",
"a user’s repetition behavior",
"the correlation-based dynamics",
"items",
"the short-term preferences",
"the user",
"we",
"these embeddings",
"an lstm architecture",
"it",
"suitable personalized recommendations",
"the effectiveness",
"the proposed model",
"we",
"our model",
"different real-world datasets",
"extensive performance metrics",
"the results",
"the-art",
"considerable improvement",
"recall",
"42 to 99%",
"various values",
"k",
"ta feng",
"dunnhumby",
"precision",
"an improvement",
"221–400%",
"42 to 99%",
"feng",
"221–400%"
] |
On the use of deep learning for phase recovery | [
"Kaiqiang Wang",
"Li Song",
"Chutian Wang",
"Zhenbo Ren",
"Guangyuan Zhao",
"Jiazhen Dou",
"Jianglei Di",
"George Barbastathis",
"Renjie Zhou",
"Jianlin Zhao",
"Edmund Y. Lam"
] | Phase recovery (PR) refers to calculating the phase of the light field from its intensity measurements. As exemplified from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and correcting the aberration of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL provides support for PR from the following three stages, namely, pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource (https://github.com/kqwang/phase-recovery) for readers to learn more about PR. | 10.1038/s41377-023-01340-x | on the use of deep learning for phase recovery | phase recovery (pr) refers to calculating the phase of the light field from its intensity measurements. as exemplified from quantitative phase imaging and coherent diffraction imaging to adaptive optics, pr is essential for reconstructing the refractive index distribution or topography of an object and correcting the aberration of an imaging system. in recent years, deep learning (dl), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various pr problems. in this review, we first briefly introduce conventional methods for pr. then, we review how dl provides support for pr from the following three stages, namely, pre-processing, in-processing, and post-processing. we also review how dl is used in phase image processing. finally, we summarize the work in dl for pr and provide an outlook on how to better use dl to improve the reliability and efficiency of pr. furthermore, we present a live-updating resource (https://github.com/kqwang/phase-recovery) for readers to learn more about pr. | [
"phase recovery",
"(pr",
"the phase",
"the light field",
"its intensity measurements",
"quantitative phase imaging",
"coherent diffraction",
"adaptive optics",
"the refractive index distribution",
"topography",
"an object",
"the aberration",
"an imaging system",
"recent years",
"deep learning",
"dl",
"deep neural networks",
"unprecedented support",
"computational imaging",
"more efficient solutions",
"various pr problems",
"this review",
"we",
"conventional methods",
"we",
"dl",
"support",
"pr",
"the following three stages",
"-processing",
"processing",
"we",
"dl",
"phase image processing",
"we",
"the work",
"dl",
"pr",
"an outlook",
"dl",
"the reliability",
"efficiency",
"pr",
"we",
"a live-updating resource",
"https://github.com/kqwang/phase-recovery",
"readers",
"pr",
"recent years",
"first",
"three"
] |
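As context for the conventional phase-recovery methods the review above covers, here is a minimal Gerchberg-Saxton-style iteration in NumPy that alternates between enforcing measured amplitudes in the object and Fourier planes; the array size, iteration count, and synthetic measurements are arbitrary illustrative choices, not anything prescribed by the paper.

```python
# Minimal Gerchberg-Saxton phase retrieval sketch (illustrative only):
# find a phase consistent with two measured amplitude distributions.
import numpy as np

rng = np.random.default_rng(0)
n = 64
true_phase = rng.uniform(-np.pi, np.pi, (n, n))
obj_amp = np.ones((n, n))                      # measured object-plane amplitude
fourier_amp = np.abs(np.fft.fft2(obj_amp * np.exp(1j * true_phase)))

field = obj_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, (n, n)))
for _ in range(200):
    F = np.fft.fft2(field)
    F = fourier_amp * np.exp(1j * np.angle(F))      # enforce Fourier amplitude
    field = np.fft.ifft2(F)
    field = obj_amp * np.exp(1j * np.angle(field))  # enforce object amplitude

# Mismatch between enforced and measured Fourier amplitudes as a convergence check
err = np.linalg.norm(np.abs(np.fft.fft2(field)) - fourier_amp) / np.linalg.norm(fourier_amp)
print(f"relative Fourier amplitude error: {err:.3e}")
```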
Curriculum learning for ab initio deep learned refractive optics | [
"Xinge Yang",
"Qiang Fu",
"Wolfgang Heidrich"
] | Deep optical optimization has recently emerged as a new paradigm for designing computational imaging systems using only the output image as the objective. However, it has been limited to either simple optical systems consisting of a single element such as a diffractive optical element or metalens, or the fine-tuning of compound lenses from good initial designs. Here we present a DeepLens design method based on curriculum learning, which is able to learn optical designs of compound lenses ab initio from randomly initialized surfaces without human intervention, therefore overcoming the need for a good initial design. We demonstrate the effectiveness of our approach by fully automatically designing both classical imaging lenses and a large field-of-view extended depth-of-field computational lens in a cellphone-style form factor, with highly aspheric surfaces and a short back focal length. | 10.1038/s41467-024-50835-7 | curriculum learning for ab initio deep learned refractive optics | deep optical optimization has recently emerged as a new paradigm for designing computational imaging systems using only the output image as the objective. however, it has been limited to either simple optical systems consisting of a single element such as a diffractive optical element or metalens, or the fine-tuning of compound lenses from good initial designs. here we present a deeplens design method based on curriculum learning, which is able to learn optical designs of compound lenses ab initio from randomly initialized surfaces without human intervention, therefore overcoming the need for a good initial design. we demonstrate the effectiveness of our approach by fully automatically designing both classical imaging lenses and a large field-of-view extended depth-of-field computational lens in a cellphone-style form factor, with highly aspheric surfaces and a short back focal length. | [
"deep optical optimization",
"a new paradigm",
"computational imaging systems",
"only the output image",
"the objective",
"it",
"either simple optical systems",
"a single element",
"a diffractive optical element",
"metalens",
"the fine-tuning",
"compound lenses",
"good initial designs",
"we",
"a deeplens design method",
"curriculum learning",
"which",
"optical designs",
"compound lenses",
"ab initio",
"randomly initialized surfaces",
"human intervention",
"the need",
"a good initial design",
"we",
"the effectiveness",
"our approach",
"both classical imaging lenses",
"view",
"field",
"a cellphone-style form factor",
"highly aspheric surfaces",
"a short back focal length",
"deeplens"
] |
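To illustrate the general curriculum idea in the record above (optimize on easy configurations first, then harder ones), below is a toy gradient-descent sketch in which the sampled field-of-view range grows over training; the quadratic objective and the schedule are stand-ins invented for illustration, not the paper's differentiable lens simulator.

```python
# Toy curriculum-learning sketch: optimize parameters on an easy task first,
# then gradually expand the task difficulty (here, the field-of-view range).
# The quadratic 'loss' is a stand-in for a real differentiable lens simulator.
import numpy as np

rng = np.random.default_rng(1)
params = rng.normal(size=3)          # stand-in for lens surface coefficients
target = np.array([0.5, -1.0, 2.0])  # optimum the toy loss is built around

def loss_grad(params, fov):
    # Harder (wider) fields of view weight the error more strongly.
    weight = 1.0 + fov
    return weight * 2.0 * (params - target)

epochs, lr = 300, 0.01
for epoch in range(epochs):
    max_fov = min(1.0, epoch / (0.7 * epochs))   # curriculum: 0 -> full FoV
    fov = rng.uniform(0.0, max_fov)
    params -= lr * loss_grad(params, fov)

print("recovered:", np.round(params, 3), "target:", target)
```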
A technique to forecast Pakistan’s news using deep hybrid learning model | [
"Rukhshanda Ihsan",
"Syed Khaldoon Khurshid",
"Muhammad Shoaib",
"Sadia Ali",
"Sana Mahnoor",
"Syed Muhammad Hamza"
] | Forecasting future events is a challenging task that can have a significant impact on decision-making and policy-making. In this research, we focus on forecasting news related to Pakistan. Despite the importance of accurate predictions in this field, there currently exists no dataset for forecasting Pakistani news, specifically with regard to politics. Unlike numerical time series data, textual data includes information about the event's potential causes in addition to its impact. Better forecasts are thus anticipated as a result of this richer information. In order to address this gap, our research aims to create the first Pakistani news dataset for forecasting Pakistani news, mostly related to the politics of Pakistan. This dataset was collected from various sources, including Pakistani news websites and social media platforms, as well as frequently asked questions about Pakistani politics. We develop a forecasting model using this dataset and evaluate the effectiveness of cutting-edge deep hybrid learning techniques incorporating neural networks, random forest, Word2vec, natural language processing (NLP), and Naive Bayes. To the best of our understanding, no research has been done on the application of a deep hybrid learning model (a blend of deep learning and machine learning) for news forecasting. The accuracy of the forecasting model is 97%. According to our findings, the model's performance is adequate when compared to that of other forecasting models. Our research not only fills the gap in the current literature but also presents a new challenge for large language models and has the potential to bring significant practical advantages in the field of forecasting. The unique contribution of this study lies in the intelligent modeling of the prediction challenge, allowing for the utilization of text rich in content for forecasting objectives. | 10.1007/s41870-024-01781-6 | a technique to forecast pakistan's news using deep hybrid learning model | forecasting future events is a challenging task that can have a significant impact on decision-making and policy-making. in this research, we focus on forecasting news related to pakistan. despite the importance of accurate predictions in this field, there currently exists no dataset for forecasting pakistani news, specifically with regard to politics. unlike numerical time series data, textual data includes information about the event's potential causes in addition to its impact. better forecasts are thus anticipated as a result of this richer information. in order to address this gap, our research aims to create the first pakistani news dataset for forecasting pakistani news, mostly related to the politics of pakistan. this dataset was collected from various sources, including pakistani news websites and social media platforms, as well as frequently asked questions about pakistani politics. we develop a forecasting model using this dataset and evaluate the effectiveness of cutting-edge deep hybrid learning techniques incorporating neural networks, random forest, word2vec, natural language processing (nlp), and naive bayes. to the best of our understanding, no research has been done on the application of a deep hybrid learning model (a blend of deep learning and machine learning) for news forecasting. the accuracy of the forecasting model is 97%. according to our findings, the model's performance is adequate when compared to that of other forecasting models. our research not only fills the gap in the current literature but also presents a new challenge for large language models and has the potential to bring significant practical advantages in the field of forecasting. the unique contribution of this study lies in the intelligent modeling of the prediction challenge, allowing for the utilization of text rich in content for forecasting objectives. | [
"future events",
"a challenging task",
"that",
"a significant impact",
"decision-making",
"policy-making",
"this research",
"we",
"news",
"pakistan",
"the importance",
"accurate predictions",
"this field",
"no dataset",
"pakistani news",
"regards",
"politics",
"numerical time series data",
"textual data",
"information",
"the event's potential causes",
"addition",
"its impact",
"better forecasts",
"a result",
"this greater information",
"order",
"this gap",
"our research",
"a first pakistani news dataset",
"forecasting",
"pakistan news",
"that",
"politics",
"pakistan",
"this dataset",
"various sources",
"pakistani news websites",
"social media platforms",
"questions",
"pakistani politics",
"we",
"a forecasting model",
"this dataset",
"the effectiveness",
"cutting-edge deep hybrid learning techniques",
"neural networks",
"random forest",
"word2vec",
"natural language processing",
"nlp",
"naive bayes",
"our understanding",
"no research",
"the application",
"a deep hybrid learning model",
"a blend",
"deep learning",
"machine learning",
"news forecasting",
"the accuracy",
"forecasting model",
"97%",
"our findings",
"the model's performance",
"that",
"other forecasting models",
"our research",
"the gap",
"the current literature",
"a new challenge",
"large language models",
"the potential",
"significant practical advantages",
"the field",
"forecasting",
"the unique contribution",
"this study",
"the intelligent modeling",
"the prediction challenge",
"the utilization",
"text",
"content",
"forecasting objectives",
"pakistan",
"pakistani",
"first",
"pakistani",
"pakistan",
"pakistan",
"pakistani",
"pakistani",
"97%"
] |
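The record above combines neural networks, random forest, Word2vec, and Naive Bayes for text forecasting. A minimal sketch of such a hybrid text classifier is below, using scikit-learn with TF-IDF features and soft voting; the toy headlines, labels, and estimator choices are placeholders invented for illustration, not the paper's pipeline or dataset.

```python
# Illustrative hybrid text classifier in the spirit of the record above:
# Naive Bayes + random forest + a small neural network, soft-voting over
# TF-IDF features. The toy headlines/labels are placeholders, not the dataset.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

texts = ["election rally announced in lahore",
         "parliament passes new budget bill",
         "cricket team wins series at home",
         "star batsman injured before final"]
labels = ["politics", "politics", "sports", "sports"]

ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[("nb", MultinomialNB()),
                    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                    ("nn", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                         random_state=0))],
        voting="soft"),
)
ensemble.fit(texts, labels)
print(ensemble.predict(["senate debates election reform"]))
```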
A Transfer Learning-Based CNN Deep Learning Model for Unfavorable Driving State Recognition | [
"Jichi Chen",
"Hong Wang",
"Enqiu He"
] | The detection of unfavorable driving states (UDS) of drivers based on electroencephalogram (EEG) measures has received continuous attention from scholars because it directly reflects brain neural activity with high temporal resolution and a low risk of being deceived. However, the existing EEG-based driver UDS detection methods involve limited exploration of the functional connectivity patterns and interaction relationships within the brain network. Therefore, there is still room for improvement in the accuracy of detection. In this project, we propose three pretrained convolutional neural network (CNN)-based automatic detection frameworks for UDS of drivers with 30-channel EEG signals. The frameworks are investigated by adjusting the learning rate, choosing the optimization solver, etc. Driving experiments under two different conditions are performed, collecting EEG signals from sixteen subjects. The acquired 1-dimensional 30-channel EEG signals are converted into 2-dimensional matrices by the Granger causality (GC) method to form the functional connectivity graphs of the brain (FCGB). Then, the FCGB are fed into pretrained deep learning models that employ a transfer learning strategy for feature extraction and judgment of different EEG signal types. Furthermore, we adopt two visualization interpretability techniques, namely activation visualization and gradient-weighted class activation mapping (Grad-CAM), for better visualizing and understanding the predictions of the pretrained models after fine-tuning. The experimental outcomes show that the ResNet-18 model yields the highest average recognition accuracy of 90% using the RMSprop optimizer with a learning rate of 1e−3. The overall outcomes suggest that combining biologically inspired functional connectivity graphs of the brain with pretrained transfer learning algorithms is a promising approach for reducing the rate of major traffic accidents caused by unfavorable driving states. | 10.1007/s12559-023-10196-7 | a transfer learning-based cnn deep learning model for unfavorable driving state recognition | the detection of unfavorable driving states (uds) of drivers based on electroencephalogram (eeg) measures has received continuous attention from scholars because it directly reflects brain neural activity with high temporal resolution and a low risk of being deceived. however, the existing eeg-based driver uds detection methods involve limited exploration of the functional connectivity patterns and interaction relationships within the brain network. therefore, there is still room for improvement in the accuracy of detection. in this project, we propose three pretrained convolutional neural network (cnn)-based automatic detection frameworks for uds of drivers with 30-channel eeg signals. the frameworks are investigated by adjusting the learning rate, choosing the optimization solver, etc. driving experiments under two different conditions are performed, collecting eeg signals from sixteen subjects. the acquired 1-dimensional 30-channel eeg signals are converted into 2-dimensional matrices by the granger causality (gc) method to form the functional connectivity graphs of the brain (fcgb). then, the fcgb are fed into pretrained deep learning models that employ a transfer learning strategy for feature extraction and judgment of different eeg signal types. furthermore, we adopt two visualization interpretability techniques, namely activation visualization and gradient-weighted class activation mapping (grad-cam), for better visualizing and understanding the predictions of the pretrained models after fine-tuning. the experimental outcomes show that the resnet-18 model yields the highest average recognition accuracy of 90% using the rmsprop optimizer with a learning rate of 1e−3. the overall outcomes suggest that combining biologically inspired functional connectivity graphs of the brain with pretrained transfer learning algorithms is a promising approach for reducing the rate of major traffic accidents caused by unfavorable driving states. | [
"the detection",
"unfavorable driving states",
"uds",
"drivers",
"electroencephalogram (eeg) measures",
"continuous attention",
"extensive scholars",
"account",
"brain neural activity",
"high temporal resolution",
"low risk",
"the existing eeg-based driver uds detection methods",
"limited exploration",
"the functional connectivity patterns",
"interaction relationships",
"the brain network",
"room",
"improvement",
"the accuracy",
"detection",
"this project",
"we",
"three pretrained convolutional neural network",
"cnn)-based automatic detection frameworks",
"uds",
"drivers",
"30-channel eeg signals",
"the frameworks",
"the learning rate",
"the optimization",
"two different conditions",
"driving experiments",
"eeg signals",
"sixteen subjects",
"the acquired 1-dimensional 30-channel eeg signals",
"2-dimensional matrices",
"the granger causality",
"(gc) method",
"the functional connectivity graphs",
"the brain",
"fcgb",
"the fcgb",
"pretrained deep learning models",
"that",
"transfer learning strategy",
"feature extraction",
"judgment",
"different eeg signal types",
"we",
"two visualization interpretability techniques",
"grad-cam",
"better visualizing",
"the predictions",
"the pretrained models",
"fine-tuning",
"the experimental outcomes",
"resnet",
"18 model",
"the highest average recognition accuracy",
"90%",
"the rmsprop optimizer",
"a learning rate",
"the overall outcomes",
"biologically inspired functional connectivity graphs",
"the brain",
"pretrained transfer learning algorithms",
"a prospective approach",
"the rate",
"major traffic accidents",
"driver unfavorable driving states",
"three",
"30",
"two",
"1",
"30",
"2",
"fed",
"two",
"18",
"90%",
"1e −",
"3"
] |
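The record above fine-tunes a pretrained CNN on connectivity-graph "images" with RMSprop at a learning rate of 1e−3. A minimal PyTorch sketch of that transfer-learning stage is below; the tensor shapes, two-class head, and random stand-in data are illustrative assumptions, not the paper's actual preprocessing (and loading the pretrained weights requires a one-time download).

```python
# Sketch of the transfer-learning stage described above: a pretrained
# ResNet-18 fine-tuned to classify 2-D functional-connectivity "images".
# Shapes, class count, and the random stand-in data are illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # normal vs. unfavorable state

# Stand-in batch: 30x30 Granger-causality matrices resized and replicated to RGB.
gc_maps = torch.rand(8, 1, 30, 30)
x = torch.nn.functional.interpolate(gc_maps, size=(224, 224)).repeat(1, 3, 1, 1)
y = torch.randint(0, 2, (8,))

optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)  # as in the record
criterion = nn.CrossEntropyLoss()

model.train()
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"one fine-tuning step done, loss = {loss.item():.3f}")
```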
Deep learning reconstruction for lumbar spine MRI acceleration: a prospective study | [
"Hui Tang",
"Ming Hong",
"Lu Yu",
"Yang Song",
"Mengqiu Cao",
"Lei Xiang",
"Yan Zhou",
"Shiteng Suo"
] | Background: We compared magnetic resonance imaging (MRI) turbo spin-echo images reconstructed using a deep learning technique (TSE-DL) with standard turbo spin-echo (TSE-SD) images of the lumbar spine regarding image quality and detection performance of common degenerative pathologies. Methods: This prospective, single-center study included 31 patients (15 males and 16 females; aged 51 ± 16 years (mean ± standard deviation)) who underwent lumbar spine exams with both TSE-SD and TSE-DL acquisitions for degenerative spine diseases. Images were analyzed by two radiologists and assessed for qualitative image quality using a 4-point Likert scale, quantitative signal-to-noise ratio (SNR) of anatomic landmarks, and detection of common pathologies. Paired-sample t, Wilcoxon, and McNemar tests, unweighted/linearly weighted Cohen κ statistics, and intraclass correlation coefficients were used. Results: Scan time for the TSE-DL and TSE-SD protocols was 2:55 and 5:17 min:s, respectively. The overall image quality was either significantly higher for TSE-DL or not significantly different between TSE-SD and TSE-DL. TSE-DL demonstrated higher SNR and subject noise scores than TSE-SD. For pathology detection, the interreader agreement was substantial to almost perfect for TSE-DL, with κ values ranging from 0.61 to 1.00; the interprotocol agreement was almost perfect for both readers, with κ values ranging from 0.84 to 1.00. There was no significant difference in the diagnostic confidence or detection rate of common pathologies between the two sequences (p ≥ 0.081). Conclusions: TSE-DL allowed for a 45% reduction in scan time over TSE-SD in lumbar spine MRI without compromising the overall image quality and showed comparable detection performance of common pathologies in the evaluation of degenerative lumbar spine changes. Relevance statement: A deep learning-reconstructed lumbar spine MRI protocol enabled a 45% reduction in scan time compared with conventional reconstruction, with comparable image quality and detection performance of common degenerative pathologies. Key points: • Lumbar spine MRI with deep learning reconstruction has broad application prospects. • Deep learning reconstruction of lumbar spine MRI saved 45% scan time without compromising overall image quality. • When compared with standard sequences, deep learning reconstruction showed similar detection performance of common degenerative lumbar spine pathologies. | 10.1186/s41747-024-00470-0 | deep learning reconstruction for lumbar spine mri acceleration: a prospective study | background: we compared magnetic resonance imaging (mri) turbo spin-echo images reconstructed using a deep learning technique (tse-dl) with standard turbo spin-echo (tse-sd) images of the lumbar spine regarding image quality and detection performance of common degenerative pathologies. methods: this prospective, single-center study included 31 patients (15 males and 16 females; aged 51 ± 16 years (mean ± standard deviation)) who underwent lumbar spine exams with both tse-sd and tse-dl acquisitions for degenerative spine diseases. images were analyzed by two radiologists and assessed for qualitative image quality using a 4-point likert scale, quantitative signal-to-noise ratio (snr) of anatomic landmarks, and detection of common pathologies. paired-sample t, wilcoxon, and mcnemar tests, unweighted/linearly weighted cohen κ statistics, and intraclass correlation coefficients were used. results: scan time for the tse-dl and tse-sd protocols was 2:55 and 5:17 min:s, respectively. the overall image quality was either significantly higher for tse-dl or not significantly different between tse-sd and tse-dl. tse-dl demonstrated higher snr and subject noise scores than tse-sd. for pathology detection, the interreader agreement was substantial to almost perfect for tse-dl, with κ values ranging from 0.61 to 1.00; the interprotocol agreement was almost perfect for both readers, with κ values ranging from 0.84 to 1.00. there was no significant difference in the diagnostic confidence or detection rate of common pathologies between the two sequences (p ≥ 0.081). conclusions: tse-dl allowed for a 45% reduction in scan time over tse-sd in lumbar spine mri without compromising the overall image quality and showed comparable detection performance of common pathologies in the evaluation of degenerative lumbar spine changes. relevance statement: a deep learning-reconstructed lumbar spine mri protocol enabled a 45% reduction in scan time compared with conventional reconstruction, with comparable image quality and detection performance of common degenerative pathologies. key points: • lumbar spine mri with deep learning reconstruction has broad application prospects. • deep learning reconstruction of lumbar spine mri saved 45% scan time without compromising overall image quality. • when compared with standard sequences, deep learning reconstruction showed similar detection performance of common degenerative lumbar spine pathologies. | [
"backgroundwe",
"mri",
"a deep learning technique",
"tse-dl",
"standard turbo spin-echo (tse-sd) images",
"the lumbar spine",
"image quality and detection performance",
"common degenerative pathologies.methodsthis prospective, single-center study",
"31 patients",
"15 males",
"16 females",
"51 ±",
"± standard deviation",
"who",
"lumbar spine exams",
"both tse-sd and tse-dl acquisitions",
"degenerative spine diseases",
"images",
"two radiologists",
"qualitative image quality",
"a 4-point likert scale",
"noise",
"snr",
"anatomic landmarks",
"detection",
"common pathologies",
"paired-sample t",
"wilcoxon",
"mcnemar tests",
"cohen κ statistics",
"correlation coefficients",
"used.resultsscan time",
"tse-dl and tse-sd protocols",
"min",
":",
"s",
"the overall image quality",
"tse-dl",
"tse-sd and tse-dl. tse-dl",
"higher snr and subject noise scores",
"tse-sd",
"pathology detection",
"the interreader agreement",
"tse-dl",
"κ values",
"the interprotocol agreement",
"both readers",
"κ values",
"no significant difference",
"the diagnostic confidence or detection rate",
"common pathologies",
"the two sequences",
"p ≥",
"0.081).conclusionstse-dl",
"a 45% reduction",
"scan time",
"tse-sd",
"lumbar spine mri",
"the overall image quality",
"comparable detection performance",
"common pathologies",
"the evaluation",
"degenerative lumbar spine changes.relevance",
"learning-reconstructed lumbar spine mri protocol",
"a 45% reduction",
"scan time",
"conventional reconstruction",
"comparable image quality and detection performance",
"lumbar spine mri",
"deep learning reconstruction",
"broad application prospects.•",
"reconstruction",
"lumbar spine mri",
"45% scan time",
"overall image",
"standard sequences",
"deep learning reconstruction",
"similar detection performance",
"common degenerative lumbar spine",
"31",
"15",
"16",
"51 ± 16 years",
"two",
"4",
"mcnemar",
"used.resultsscan",
"2:55",
"5:17",
"0.61",
"1.00",
"0.84",
"1.00",
"two",
"45%",
"45%",
"45%"
] |
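The record above relies on paired agreement statistics between protocols and readers. A minimal sketch of two such analyses, Pearson correlation and Bland-Altman limits of agreement, is below; the paired measurements are made-up placeholders, not the study's data.

```python
# Minimal sketch of two agreement analyses used in the record above:
# Pearson correlation and Bland-Altman limits of agreement.
# The paired measurements below are made-up placeholders.
import numpy as np

a = np.array([3.1, 2.8, 4.0, 3.5, 2.9, 3.7])   # e.g., TSE-SD readings
b = np.array([3.0, 2.9, 3.8, 3.6, 3.0, 3.6])   # e.g., TSE-DL readings

r = np.corrcoef(a, b)[0, 1]                     # Pearson correlation

diff = a - b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                   # 95% limits of agreement
print(f"r = {r:.3f}, bias = {bias:.3f}, LoA = [{bias - loa:.3f}, {bias + loa:.3f}]")
```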
A multimodal deep learning model for predicting severe hemorrhage in placenta previa | [
"Munetoshi Akazawa",
"Kazunori Hashimoto"
] | Placenta previa causes life-threatening bleeding, and accurate prediction of severe hemorrhage leads to risk stratification and optimum allocation of interventions. We aimed to use a multimodal deep learning model to predict severe hemorrhage. Using MRI T2-weighted images of the placenta and tabular data consisting of patient demographics and preoperative blood examination data, a multimodal deep learning model was constructed to predict cases of intraoperative blood loss > 2000 ml. We evaluated the prediction performance of the model by comparing it with that of two machine learning methods using only tabular data and MRI images, as well as with that of two human expert obstetricians. Among the enrolled 48 patients, 26 (54.2%) lost > 2000 ml of blood and 22 (45.8%) lost < 2000 ml of blood. The multimodal deep learning model showed the best accuracy of 0.68 and AUC of 0.74, whereas the machine learning models using tabular data and MRI images had accuracies of 0.61 and 0.53, respectively. The human experts had median accuracies of 0.61. Multimodal deep learning models could integrate the two types of information and predict severe hemorrhage cases. The model might assist human experts in the prediction of intraoperative hemorrhage in cases of placenta previa. | 10.1038/s41598-023-44634-1 | a multimodal deep learning model for predicting severe hemorrhage in placenta previa | placenta previa causes life-threatening bleeding, and accurate prediction of severe hemorrhage leads to risk stratification and optimum allocation of interventions. we aimed to use a multimodal deep learning model to predict severe hemorrhage. using mri t2-weighted images of the placenta and tabular data consisting of patient demographics and preoperative blood examination data, a multimodal deep learning model was constructed to predict cases of intraoperative blood loss > 2000 ml. we evaluated the prediction performance of the model by comparing it with that of two machine learning methods using only tabular data and mri images, as well as with that of two human expert obstetricians. among the enrolled 48 patients, 26 (54.2%) lost > 2000 ml of blood and 22 (45.8%) lost < 2000 ml of blood. the multimodal deep learning model showed the best accuracy of 0.68 and auc of 0.74, whereas the machine learning models using tabular data and mri images had accuracies of 0.61 and 0.53, respectively. the human experts had median accuracies of 0.61. multimodal deep learning models could integrate the two types of information and predict severe hemorrhage cases. the model might assist human experts in the prediction of intraoperative hemorrhage in cases of placenta previa. | [
"placenta previa",
"life-threatening bleeding",
"accurate prediction",
"severe hemorrhage",
"risk stratification",
"optimum allocation",
"interventions",
"we",
"a multimodal deep learning model",
"severe hemorrhage",
"mri t2-weighted image",
"the placenta and tabular data",
"patient demographics",
"preoperative blood examination data",
"a multimodal deep learning model",
"cases",
"intraoperative blood loss",
"we",
"the prediction performance",
"the model",
"it",
"that",
"two machine learning methods",
"only tabular data and mri images",
"that",
"two human expert obstetricians",
"the enrolled 48 patients",
"54.2%",
"2000 ml",
"blood",
"22 (45.8%",
"2000 ml",
"blood",
"multimodal deep learning model",
"the best accuracy",
"auc",
"the machine learning model",
"tabular data and mri images",
"a class accuracy",
"the human experts",
"median accuracies",
"multimodal deep learning models",
"the two types",
"information",
"severe hemorrhage cases",
"the model",
"human expert",
"the prediction",
"intraoperative hemorrhage",
"the case",
"placenta previa",
"placenta previa",
"2000 ml",
"two",
"two",
"48",
"26",
"54.2%",
"2000 ml",
"22",
"45.8%",
"2000 ml",
"0.68",
"0.74",
"0.61",
"0.53",
"0.61",
"two"
] |
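The record above fuses an image modality with tabular data in one network. A minimal PyTorch sketch of such late-fusion-by-concatenation is below; the layer sizes, branch designs, and stand-in tensors are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a multimodal network in the spirit of the record above: a small
# CNN branch for the MRI slice and an MLP branch for tabular data, fused by
# concatenation. Layer sizes and the stand-in tensors are illustrative.
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    def __init__(self, n_tabular):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())          # -> 16 features
        self.tabular_branch = nn.Sequential(
            nn.Linear(n_tabular, 16), nn.ReLU())            # -> 16 features
        self.head = nn.Linear(32, 2)                        # <2000 ml vs >2000 ml

    def forward(self, image, tabular):
        fused = torch.cat([self.image_branch(image),
                           self.tabular_branch(tabular)], dim=1)
        return self.head(fused)

net = MultimodalNet(n_tabular=10)
logits = net(torch.rand(4, 1, 64, 64), torch.rand(4, 10))
print(logits.shape)   # torch.Size([4, 2])
```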
IRADA: integrated reinforcement learning and deep learning algorithm for attack detection in wireless sensor networks | [
"Vandana Shakya",
"Jaytrilok Choudhary",
"Dhirendra Pratap Singh"
] | Wireless Sensor Networks (WSNs) play a vital role in various applications, necessitating robust network security to protect sensitive data. Intrusion Detection Systems (IDSs) are crucial for preserving the integrity, availability, and confidentiality of WSNs by detecting and countering potential attacks. Despite significant research efforts, the existing IDS solutions still suffer from challenges related to detection accuracy and false alarms. To address these challenges, in this paper, we propose a Bayesian optimization-based Deep Learning (DL) model. However, the proposed optimized DL model, while showing promising results in enhancing security, encounters challenges such as data dependency, computational complexity, and the potential for overfitting. In the literature, researchers have employed Reinforcement Learning (RL) to address these issues. However, it also introduces its own concerns, including exploration, reward design, and prolonged training times. Consequently, to address these challenges, this paper proposes an Innovative Integrated RL-based Advanced DL Algorithm (IRADA) for attack detection in WSNs. IRADA leverages the convergence of DL and RL models to achieve superior intrusion detection performance. The performance analysis of IRADA reveals impressive results, including accuracy (99.50%), specificity (99.94%), sensitivity (99.48%), F1-Score (98.26%), Kappa statistics (99.42%), and area under the curve (99.38%). Additionally, we analyze IRADA’s robustness against adversarial attacks, ensuring its applicability in real-world security scenarios. | 10.1007/s11042-024-18289-7 | irada: integrated reinforcement learning and deep learning algorithm for attack detection in wireless sensor networks | wireless sensor networks (wsns) play a vital role in various applications, necessitating robust network security to protect sensitive data. intrusion detection systems (idss) are crucial for preserving the integrity, availability, and confidentiality of wsns by detecting and countering potential attacks. despite significant research efforts, the existing ids solutions still suffer from challenges related to detection accuracy and false alarms. to address these challenges, in this paper, we propose a bayesian optimization-based deep learning (dl) model. however, the proposed optimized dl model, while showing promising results in enhancing security, encounters challenges such as data dependency, computational complexity, and the potential for overfitting. in the literature, researchers have employed reinforcement learning (rl) to address these issues. however, it also introduces its own concerns, including exploration, reward design, and prolonged training times. consequently, to address these challenges, this paper proposes an innovative integrated rl-based advanced dl algorithm (irada) for attack detection in wsns. irada leverages the convergence of dl and rl models to achieve superior intrusion detection performance. the performance analysis of irada reveals impressive results, including accuracy (99.50%), specificity (99.94%), sensitivity (99.48%), f1-score (98.26%), kappa statistics (99.42%), and area under the curve (99.38%). additionally, we analyze irada’s robustness against adversarial attacks, ensuring its applicability in real-world security scenarios. | [
"wireless sensor networks",
"wsns",
"a vital role",
"various applications",
"robust network security",
"sensitive data",
"intrusion detection systems",
"idss",
"the integrity",
"availability",
"confidentiality",
"wsns",
"potential attacks",
"significant research efforts",
"the existing ids solutions",
"challenges",
"detection accuracy",
"false alarms",
"these challenges",
"this paper",
"we",
"a bayesian optimization-based deep learning (dl) model",
"the proposed optimized dl model",
"promising results",
"security",
"challenges",
"data dependency",
"computational complexity",
"the potential",
"the literature",
"researchers",
"reinforcement learning",
"these issues",
"it",
"its own concerns",
"exploration",
"reward design",
"prolonged training times",
"these challenges",
"this paper",
"an innovative integrated rl-based advanced dl algorithm",
"irada",
"attack detection",
"wsns",
"irada",
"the convergence",
"dl and rl models",
"superior intrusion detection performance",
"the performance analysis",
"irada",
"impressive results",
"accuracy",
"99.50%",
"specificity",
"99.94%",
"sensitivity",
"99.48%",
"f1-score",
"98.26%",
"kappa statistics",
"99.42%",
"area",
"the curve",
"99.38%",
"we",
"irada’s robustness",
"adversarial attacks",
"its applicability",
"real-world security scenarios",
"advanced dl algorithm",
"irada",
"irada",
"irada",
"99.50%",
"99.94%",
"99.48%",
"98.26%",
"99.42%",
"99.38%",
"irada"
] |
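The record above reports accuracy, specificity, sensitivity, F1-score, kappa statistics, and AUC. A minimal sketch of computing those detection metrics for a binary intrusion detector is below; the label and score arrays are made-up placeholders, not IRADA's outputs.

```python
# Sketch of computing the detection metrics reported above (accuracy,
# specificity, sensitivity, F1, Cohen's kappa, AUC) for a binary intrusion
# detector. The label/score arrays are made-up placeholders.
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, f1_score, roc_auc_score)

y_true  = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.9, 0.8, 0.2, 0.7, 0.3, 0.6, 0.9, 0.2]
y_pred  = [int(s >= 0.5) for s in y_score]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy   ", accuracy_score(y_true, y_pred))
print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
print("F1         ", f1_score(y_true, y_pred))
print("kappa      ", cohen_kappa_score(y_true, y_pred))
print("AUC        ", roc_auc_score(y_true, y_score))
```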
A deep learning based approach for image retrieval extraction in mobile edge computing | [
"Jamal Alasadi",
"Ghassan F. Bati",
"Ahmed Al Hilli"
] | Deep learning has been widely explored in 5G applications, including computer vision, the Internet of Things (IoT), and intermedia classification. However, applying the deep learning approach in limited-resource mobile devices is one of the most challenging issues. At the same time, users' experience in terms of Quality of Service (QoS) (e.g., service latency, outcome accuracy, and achievable data rate) performs poorly while interacting with machine learning applications. Mobile edge computing (MEC) has been introduced as a cooperative approach to bring computation resources in proximity to end-user devices to overcome these limitations. This article aims to design a novel image retrieval extraction algorithm based on convolutional neural network (CNN) learning and computational task offloading to support machine learning-based mobile applications in resource-limited and uncertain environments. Accordingly, we leverage the framework of image retrieval extraction and introduce three approaches: first, strict privacy preservation to protect personal data; second, network traffic reduction; and third, minimized feature matching time. Our simulation results, together with real-time experiments on a small-scale MEC server, have shown the effectiveness of the proposed deep learning-based approach over existing schemes. The source code is available here: https://github.com/jamalalasadi/CNN_Image_retrieval. | 10.1007/s43995-024-00060-6 | a deep learning based approach for image retrieval extraction in mobile edge computing | deep learning has been widely explored in 5g applications, including computer vision, the internet of things (iot), and intermedia classification. however, applying the deep learning approach in limited-resource mobile devices is one of the most challenging issues. at the same time, users' experience in terms of quality of service (qos) (e.g., service latency, outcome accuracy, and achievable data rate) performs poorly while interacting with machine learning applications. mobile edge computing (mec) has been introduced as a cooperative approach to bring computation resources in proximity to end-user devices to overcome these limitations. this article aims to design a novel image retrieval extraction algorithm based on convolutional neural network (cnn) learning and computational task offloading to support machine learning-based mobile applications in resource-limited and uncertain environments. accordingly, we leverage the framework of image retrieval extraction and introduce three approaches: first, strict privacy preservation to protect personal data; second, network traffic reduction; and third, minimized feature matching time. our simulation results, together with real-time experiments on a small-scale mec server, have shown the effectiveness of the proposed deep learning-based approach over existing schemes. the source code is available here: https://github.com/jamalalasadi/cnn_image_retrieval. | [
"deep learning",
"5g applications",
"computer vision",
"the internet",
"things",
"iot",
"intermedia classification",
"the deep learning approach",
"limited-resource mobile devices",
"the most challenging issues",
"the same time",
"users’ experience",
"terms",
"quality",
"service",
"qos",
"machine learning applications",
"mobile edge computing",
"(mec",
"a cooperative approach",
"computation resources",
"proximity",
"end-user devices",
"these limitations",
"this article",
"a novel image reiterative extraction algorithm",
"convolution neural network",
"(cnn) learning",
"computational task",
"machine learning-based mobile applications",
"resource-limited and uncertain environments",
"we",
"the framework",
"image retrieval extraction",
"three approaches",
"privacy preservation",
"personal data",
"feature matching time",
"our simulation results",
"real-time experiments",
"a small-scale mec server",
"the effectiveness",
"the proposed deep learning-based approach",
"existing schemes",
"the source code",
"https://github.com/jamalalasadi/cnn_image_retrieval",
"5",
"cnn",
"three",
"first",
"second",
"third"
] |
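The record above centers on matching a query image's CNN features against stored features. A minimal sketch of that retrieval step, cosine similarity over feature vectors, is below; the random vectors stand in for CNN embeddings, and the function name is an illustrative assumption.

```python
# Sketch of the retrieval step described above: compare a query feature
# vector against a gallery by cosine similarity and return the top matches.
# Features here are random stand-ins for CNN embeddings.
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 256))          # CNN features of stored images
query = rng.normal(size=256)                    # CNN feature of the query image

def top_k_matches(query, gallery, k=5):
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = g @ q                                # cosine similarities
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]

idx, sims = top_k_matches(query, gallery)
print(list(zip(idx.tolist(), np.round(sims, 3))))
```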
Machine learning vs deep learning in stock market investment: an international evidence | [
"Jing Hao",
"Feng He",
"Feng Ma",
"Shibo Zhang",
"Xiaotao Zhang"
] | Machine learning and deep learning are powerful tools for quantitative investment. To examine the effectiveness of the models in different markets, this paper applies random forest and DNN models to forecast stock prices and construct statistical arbitrage strategies in five stock markets, including mainland China, the United States, the United Kingdom, Canada and Japan. Each model is applied to the prices of the stocks constituting major stock indices in these markets from 2005 to 2020 to construct a long-short portfolio of 20 stocks selected by the model. The results show that a particular model obtains significantly different profits in different markets, among which DNN has the best performance, especially in the Chinese stock market. We find that DNN models generally perform better than other machine learning models in all markets. | 10.1007/s10479-023-05286-6 | machine learning vs deep learning in stock market investment: an international evidence | machine learning and deep learning are powerful tools for quantitative investment. to examine the effectiveness of the models in different markets, this paper applies random forest and dnn models to forecast stock prices and construct statistical arbitrage strategies in five stock markets, including mainland china, the united states, the united kingdom, canada and japan. each model is applied to the prices of the stocks constituting major stock indices in these markets from 2005 to 2020 to construct a long-short portfolio of 20 stocks selected by the model. the results show that a particular model obtains significantly different profits in different markets, among which dnn has the best performance, especially in the chinese stock market. we find that dnn models generally perform better than other machine learning models in all markets. | [
"machine learning",
"deep learning",
"powerful tools",
"quantitative investment",
"the effectiveness",
"the models",
"different markets",
"this paper",
"random forest",
"dnn models",
"stock prices",
"statistical arbitrage strategies",
"five stock markets",
"mainland china",
"the united states",
"the united kingdom",
"canada",
"japan",
"each model",
"the price",
"major stock indices",
"stocks",
"these markets",
"a long-short portfolio",
"20 selected stocks",
"the model",
"the results",
"the a particular model",
"significantly different profits",
"different markets",
"which",
"dnn",
"the best performance",
"the chinese stock market",
"we",
"dnn models",
"other machine learning models",
"all markets",
"five",
"china",
"the united states",
"the united kingdom",
"canada",
"japan",
"2005",
"2020",
"20",
"chinese"
] |
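The record above builds a long-short portfolio from model-predicted returns. A minimal sketch of that construction is below; the equal split into 10 long and 10 short positions, and the random predicted/realized returns, are illustrative assumptions rather than the paper's exact selection rule.

```python
# Sketch of a long-short construction in the spirit of the record above:
# rank model-predicted returns, go long the top-ranked stocks and short the
# bottom-ranked ones, equal-weighted. All numbers are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_stocks = 100
predicted = rng.normal(size=n_stocks)           # model's predicted returns
realized = 0.3 * predicted + rng.normal(scale=0.5, size=n_stocks)

order = np.argsort(-predicted)
long_leg, short_leg = order[:10], order[-10:]   # 20 selected stocks in total

portfolio_return = realized[long_leg].mean() - realized[short_leg].mean()
print(f"one-period long-short return: {portfolio_return:.4f}")
```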
Deep learning-based automated angle measurement for flatfoot diagnosis in weight-bearing lateral radiographs | [
"Won-Jun Noh",
"Mu Sook Lee",
"Byoung-Dai Lee"
] | This study aimed to develop and evaluate a deep learning-based system for the automatic measurement of angles (specifically, Meary’s angle and calcaneal pitch) in weight-bearing lateral radiographs of the foot for flatfoot diagnosis. We utilized 3960 lateral radiographs, either from the left or right foot, sourced from a pool of 4000 patients to construct and evaluate a deep learning-based model. These radiographs were captured between June and November 2021, and patients who had undergone total ankle replacement surgery or ankle arthrodesis surgery were excluded. Various methods, including correlation analysis, Bland–Altman plots, and paired T-tests, were employed to assess the concordance between the angles automatically measured using the system and those assessed by clinical experts. The evaluation dataset comprised 150 weight-bearing radiographs from 150 patients. In all test cases, the angles automatically computed using the deep learning-based system were in good agreement with the reference standards (Meary’s angle: Pearson correlation coefficient (PCC) = 0.964, intraclass correlation coefficient (ICC) = 0.963, concordance correlation coefficient (CCC) = 0.963, p-value = 0.632, mean absolute error (MAE) = 1.59°; calcaneal pitch: PCC = 0.988, ICC = 0.987, CCC = 0.987, p-value = 0.055, MAE = 0.63°). The average time required for angle measurement using only the CPU to execute the deep learning-based system was 11 ± 1 s. The deep learning-based automatic angle measurement system, a tool for diagnosing flatfoot, demonstrated comparable accuracy and reliability with the results obtained by medical professionals for patients without internal fixation devices. | 10.1038/s41598-024-69549-3 | deep learning-based automated angle measurement for flatfoot diagnosis in weight-bearing lateral radiographs | this study aimed to develop and evaluate a deep learning-based system for the automatic measurement of angles (specifically, meary’s angle and calcaneal pitch) in weight-bearing lateral radiographs of the foot for flatfoot diagnosis. we utilized 3960 lateral radiographs, either from the left or right foot, sourced from a pool of 4000 patients to construct and evaluate a deep learning-based model. these radiographs were captured between june and november 2021, and patients who had undergone total ankle replacement surgery or ankle arthrodesis surgery were excluded. various methods, including correlation analysis, bland–altman plots, and paired t-tests, were employed to assess the concordance between the angles automatically measured using the system and those assessed by clinical experts. the evaluation dataset comprised 150 weight-bearing radiographs from 150 patients. in all test cases, the angles automatically computed using the deep learning-based system were in good agreement with the reference standards (meary’s angle: pearson correlation coefficient (pcc) = 0.964, intraclass correlation coefficient (icc) = 0.963, concordance correlation coefficient (ccc) = 0.963, p-value = 0.632, mean absolute error (mae) = 1.59°; calcaneal pitch: pcc = 0.988, icc = 0.987, ccc = 0.987, p-value = 0.055, mae = 0.63°). the average time required for angle measurement using only the cpu to execute the deep learning-based system was 11 ± 1 s. the deep learning-based automatic angle measurement system, a tool for diagnosing flatfoot, demonstrated comparable accuracy and reliability with the results obtained by medical professionals for patients without internal fixation devices. | [
"this study",
"a deep learning-based system",
"the automatic measurement",
"angles",
"(specifically, meary’s angle",
"calcaneal pitch",
"weight-bearing lateral radiographs",
"the foot",
"flatfoot diagnosis",
"we",
"3960 lateral radiographs",
"the left or right foot",
"a pool",
"4000 patients",
"a deep learning-based model",
"these radiographs",
"june",
"november",
"patients",
"who",
"total ankle replacement surgery",
"ankle arthrodesis surgery",
"various methods",
"correlation analysis",
"bland–altman plots",
"t-tests",
"the concordance",
"the angles",
"the system",
"those",
"clinical experts",
"the evaluation dataset",
"150 weight-bearing",
"150 patients",
"all test cases",
"the angles",
"the deep learning-based system",
"good agreement",
"the reference standards",
"meary’s angle",
"pcc",
"intraclass correlation coefficient",
"icc",
"concordance correlation coefficient",
"ccc",
"absolute error",
"mae",
"1.59°",
"calcaneal pitch",
"the average time",
"angle measurement",
"only the cpu",
"the deep learning-based system",
"11 ±",
"the deep learning-based automatic angle measurement system",
"a tool",
"comparable accuracy",
"reliability",
"the results",
"medical professionals",
"patients",
"internal fixation devices",
"3960",
"4000",
"between june and november 2021",
"150",
"150",
"0.964",
"0.963",
"0.963",
"0.632",
"1.59",
"0.988",
"0.987",
"0.987",
"0.055",
"0.63",
"11",
"1"
] |
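The record above ultimately turns predicted landmarks into angles such as Meary's angle and calcaneal pitch. A minimal sketch of the geometric step, the angle between two landmark-defined lines, is below; the coordinates and axis names are made-up placeholders, not the study's landmark definitions.

```python
# Sketch of turning landmark coordinates into an angle, as an automatic
# measurement pipeline ultimately must: the angle between two line segments
# defined by predicted landmarks. Coordinates are made-up placeholders.
import numpy as np

def angle_between(p1, p2, q1, q2):
    """Angle in degrees between lines p1->p2 and q1->q2."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# e.g., talus axis vs. first-metatarsal axis (Meary's angle, schematically)
talus_axis = ((120, 80), (180, 130))
metatarsal_axis = ((180, 130), (260, 170))
print(f"{angle_between(*talus_axis, *metatarsal_axis):.1f} degrees")
```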
Deep-learning based supervisory monitoring of robotized DE-GMAW process through learning from human welders | [
"Rui Yu",
"Yue Cao",
"Jennifer Martin",
"Otto Chiang",
"YuMing Zhang"
] | Double-electrode gas metal arc welding (DE-GMAW) modifies GMAW by adding a second electrode to bypass a portion of the current flowing from the wire. This reduces the current to, and the heat input on, the workpiece. Successful bypassing depends on the relative position of the bypass electrode to the continuously varying wire tip. To ensure proper operation, we propose robotizing the system using a follower robot to carry and adaptively adjust the bypass electrode. The primary information for monitoring this process is the arc image, which directly shows desired and undesired modes. However, developing a robust algorithm for processing the complex arc image is time-consuming and challenging. Employing a deep learning approach requires labeling numerous arc images for the corresponding DE-GMAW modes, which is not practically feasible. To introduce alternative labels, we analyze arc phenomena in various DE-GMAW modes and correlate them with distinct arc systems having varying voltages. These voltages serve as automatically derived labels to train the deep-learning network. The results demonstrated reliable process monitoring. | 10.1007/s40194-023-01635-y | deep-learning based supervisory monitoring of robotized de-gmaw process through learning from human welders | double-electrode gas metal arc welding (de-gmaw) modifies gmaw by adding a second electrode to bypass a portion of the current flowing from the wire. this reduces the current to, and the heat input on, the workpiece. successful bypassing depends on the relative position of the bypass electrode to the continuously varying wire tip. to ensure proper operation, we propose robotizing the system using a follower robot to carry and adaptively adjust the bypass electrode. the primary information for monitoring this process is the arc image, which directly shows desired and undesired modes. however, developing a robust algorithm for processing the complex arc image is time-consuming and challenging. employing a deep learning approach requires labeling numerous arc images for the corresponding de-gmaw modes, which is not practically feasible. to introduce alternative labels, we analyze arc phenomena in various de-gmaw modes and correlate them with distinct arc systems having varying voltages. these voltages serve as automatically derived labels to train the deep-learning network. the results demonstrated reliable process monitoring. | [
"double-electrode gas metal arc welding",
"de",
"-",
"gmaw",
"gmaw",
"a second electrode",
"a portion",
"the current",
"the wire",
"this",
"the heat input",
"the workpiece",
"successful bypassing",
"the relative position",
"the bypass",
"the continuously varying wire tip",
"proper operation",
"we",
"the system",
"a follower robot",
"the bypass",
"the primary information",
"this process",
"the arc image",
"which",
"desired and undesired modes",
"a robust algorithm",
"the complex arc image",
"a deep learning approach",
"numerous arc images",
"the corresponding de-gmaw modes",
"which",
"alternative labels",
"we",
"arc phenomena",
"various de-gmaw modes",
"them",
"distinct arc systems",
"varying voltages",
"these voltages",
"automatically derived labels",
"the deep-learning network",
"the results",
"reliable process monitoring",
"second"
] |
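The record above sidesteps manual labeling by deriving training labels from synchronously recorded arc voltages. A minimal sketch of that automatic-labeling idea is below; the voltage band, array shapes, and stand-in data are illustrative assumptions, not the paper's thresholds.

```python
# Sketch of the automatic-labeling idea described above: instead of hand
# labeling arc images, derive a class label for each frame from the
# synchronously recorded voltage. The threshold and arrays are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_frames = 500
voltages = rng.normal(loc=15.0, scale=4.0, size=n_frames)   # bypass-arc voltage
images = rng.random((n_frames, 64, 64))                     # stand-in arc images

DESIRED_BAND = (12.0, 18.0)   # assumed voltage band for the desired mode

labels = ((voltages >= DESIRED_BAND[0]) &
          (voltages <= DESIRED_BAND[1])).astype(int)        # 1 = desired mode

print(f"{labels.mean():.1%} of frames auto-labeled as desired mode")
# (images, labels) can now train an image classifier with no manual labeling.
```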
Surface wave inversion with unknown number of soil layers based on a hybrid learning procedure of deep learning and genetic algorithm | [
"Zan Zhou",
"Thomas Man-Hoi Lok",
"Wan-Huan Zhou"
] | Surface wave inversion is a key step in the application of surface waves to soil velocity profiling. Currently, a common practice for the process of inversion is that the number of soil layers is assumed to be known before using heuristic search algorithms to compute the shear wave velocity profile or the number of soil layers is considered as an optimization variable. However, an improper selection of the number of layers may lead to an incorrect shear wave velocity profile. In this study, a deep learning and genetic algorithm hybrid learning procedure is proposed to perform the surface wave inversion without the need to assume the number of soil layers. First, a deep neural network is adapted to learn from a large number of synthetic dispersion curves for inferring the layer number. Then, the shear-wave velocity profile is determined by a genetic algorithm with the known layer number. By applying this procedure to both simulated and real-world cases, the results indicate that the proposed method is reliable and efficient for surface wave inversion. | 10.1007/s11803-024-2240-1 | surface wave inversion with unknown number of soil layers based on a hybrid learning procedure of deep learning and genetic algorithm | surface wave inversion is a key step in the application of surface waves to soil velocity profiling. currently, a common practice for the process of inversion is that the number of soil layers is assumed to be known before using heuristic search algorithms to compute the shear wave velocity profile or the number of soil layers is considered as an optimization variable. however, an improper selection of the number of layers may lead to an incorrect shear wave velocity profile. in this study, a deep learning and genetic algorithm hybrid learning procedure is proposed to perform the surface wave inversion without the need to assume the number of soil layers. first, a deep neural network is adapted to learn from a large number of synthetic dispersion curves for inferring the layer number. then, the shear-wave velocity profile is determined by a genetic algorithm with the known layer number. by applying this procedure to both simulated and real-world cases, the results indicate that the proposed method is reliable and efficient for surface wave inversion. | [
"surface wave inversion",
"a key step",
"the application",
"surface waves",
"velocity profiling",
"a common practice",
"the process",
"inversion",
"the number",
"soil layers",
"heuristic search algorithms",
"the shear wave velocity profile",
"the number",
"soil layers",
"an optimization variable",
"an improper selection",
"the number",
"layers",
"an incorrect shear wave velocity profile",
"this study",
"a deep learning and genetic algorithm hybrid learning procedure",
"the surface wave inversion",
"the need",
"the number",
"soil layers",
"a deep neural network",
"a large number",
"synthetic dispersion curves",
"the layer number",
"the shear-wave velocity profile",
"a genetic algorithm",
"the known layer number",
"this procedure",
"both simulated and real-world cases",
"the results",
"the proposed method",
"surface wave inversion",
"first"
] |
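The record above pairs a DNN that infers the layer count with a genetic algorithm that inverts the velocity profile. A toy GA sketch is below, with the layer count fixed as if already inferred; the linear "forward model" is a crude stand-in for a real dispersion solver, and all parameter choices are invented for illustration.

```python
# Toy genetic-algorithm inversion in the spirit of the record above: evolve a
# shear-wave velocity profile (layer count fixed, as if inferred by the DNN)
# to fit a dispersion curve. The "forward model" is a stand-in, not a real
# dispersion solver.
import numpy as np

rng = np.random.default_rng(0)
n_layers, pop_size, n_gens = 3, 60, 150
freqs = np.linspace(5, 40, 20)

def forward(vs):   # stand-in: frequency-dependent mixing of layer velocities
    w = np.exp(-np.outer(freqs, np.arange(1, n_layers + 1)) / 20.0)
    return (w / w.sum(axis=1, keepdims=True)) @ vs

true_vs = np.array([180.0, 320.0, 550.0])
observed = forward(true_vs)

pop = rng.uniform(100, 700, size=(pop_size, n_layers))
for _ in range(n_gens):
    misfit = np.linalg.norm(forward(pop.T).T - observed, axis=1)
    elite = pop[np.argsort(misfit)[: pop_size // 2]]          # selection
    parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
    pop = parents.mean(axis=1)                                # crossover
    pop += rng.normal(scale=10.0, size=pop.shape)             # mutation

best = pop[np.argmin(np.linalg.norm(forward(pop.T).T - observed, axis=1))]
print("recovered profile:", np.round(best, 1), "true:", true_vs)
```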
Presenting a three layer stacking ensemble classifier of deep learning and machine learning for skin cancer classification | [
"Bahman Jafari Tabaghsar",
"Reza Tavoli",
"Mohammad Mahdi Alizadeh Toosi"
] | One of the most common types of cancer in the world is skin cancer. Given the many types of skin diseases with different shapes, the classification of skin diseases is a very difficult task. As a result, a combination model of deep learning and machine learning algorithms has been proposed for skin disease classification. In this paper, a three-layer architecture based on ensemble learning is presented. In the first layer, the training input is given to a convolutional neural network and EfficientNet. The output of the first layer is given to the classifiers of the second layer, which consists of machine learning classifiers. The output of the best decision of these classifiers is sent to the third-layer classifier, and the final prediction is made. The reason for using the three-layer architecture based on ensemble learning was the lack of correct recognition of some classes by simple classifiers; moreover, some diseases from different classes were classified into the same class. This model helps to correctly identify input samples through the correct combination of classifiers in different layers. The HAM10000 data set has been used to test and validate the proposed method. This dataset includes 10,015 images of skin lesions in seven different classes, covering different types of skin diseases. The accuracy is 99.97 on the testing set, which is much better than that of the previous, heavier models. | 10.1007/s11042-024-19195-8 | presenting a three layer stacking ensemble classifier of deep learning and machine learning for skin cancer classification | one of the most common types of cancer in the world is skin cancer. given the many types of skin diseases with different shapes, the classification of skin diseases is a very difficult task. as a result, a combination model of deep learning and machine learning algorithms has been proposed for skin disease classification. in this paper, a three-layer architecture based on ensemble learning is presented. in the first layer, the training input is given to a convolutional neural network and efficientnet. the output of the first layer is given to the classifiers of the second layer, which consists of machine learning classifiers. the output of the best decision of these classifiers is sent to the third-layer classifier, and the final prediction is made. the reason for using the three-layer architecture based on ensemble learning was the lack of correct recognition of some classes by simple classifiers; moreover, some diseases from different classes were classified into the same class. this model helps to correctly identify input samples through the correct combination of classifiers in different layers. the ham10000 data set has been used to test and validate the proposed method. this dataset includes 10,015 images of skin lesions in seven different classes, covering different types of skin diseases. the accuracy is 99.97 on the testing set, which is much better than that of the previous, heavier models. | [
"the most common types",
"cancer",
"the world",
"skin cancer",
"the different types",
"skin diseases",
"different shapes",
"the classification",
"skin diseases",
"a very difficult task",
"a result",
"such a problem",
"a combination model",
"deep learning algorithms",
"machine",
"a three-layer architecture",
"ensemble learning",
"the first layer",
"the training input",
"convolutional neural network",
"efficientnet",
"the output",
"the first layer",
"the classifiers",
"the second layer",
"machine learning classifiers",
"the output",
"the best decision",
"these classifiers",
"the third layer classifier",
"the final prediction",
"made.the reason",
"the three-layer architecture",
"group learning",
"the lack",
"correct recognition",
"some classes",
"simple classifications",
"the other hand",
"some diseases",
"different classes",
"the same class",
"this model",
"input samples",
"the correct combination",
"classifications",
"different layers.ham10000 data set",
"the proposed method",
"the mentioned dataset",
"10,015 images",
"skin lesions",
"seven different classes",
"different types",
"skin diseases",
"the accuracy",
"the testing set",
"which",
"the previous heavy models",
"one",
"three",
"first",
"first",
"second",
"third",
"three",
"10,015",
"seven",
"99.97"
] |
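As a concrete illustration of the three-layer stacking scheme described in the record above, here is a minimal sketch using scikit-learn. The synthetic feature matrix stands in for CNN/EfficientNet embeddings of the seven HAM10000 lesion classes, and every model choice and hyperparameter below is an illustrative assumption rather than the paper's settings.

```python
# Layer 1 (deep feature extraction) is simulated by a synthetic feature matrix;
# layer 2 is a set of classical ML classifiers; layer 3 is a meta-classifier
# trained on their cross-validated probability outputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for concatenated CNN + EfficientNet embeddings (7 lesion classes).
X, y = make_classification(n_samples=2000, n_features=256, n_informative=64,
                           n_classes=7, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",
    cv=5)  # out-of-fold predictions feed the third-layer classifier
stack.fit(X_train, y_train)
print("test accuracy:", stack.score(X_test, y_test))
```

Feeding the meta-classifier out-of-fold predictions (cv=5) is the standard guard against the third layer overfitting the second layer's training outputs.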
RRS: Review-Based Recommendation System Using Deep Learning for Vietnamese | [
"Minh Hoang Nguyen",
"Thuat Thien Nguyen",
"Minh Nhat Ta",
"Tien Minh Nguyen",
"Kiet Van Nguyen"
] | Tourism, which includes sightseeing, relaxation, and discovery, is a fundamental aspect of human life. One of the most critical considerations for traveling is accommodation, mainly hotels. To improve the travel experience, we have presented a solution for building a recommendation model using Vietnamese text and user data to suggest travelers choose the ideal hotel. Our data was collected from two well-known websites, Traveloka and Ivivu, and includes information about hotels in Vietnam and users’ feedback history, such as comments, ratings, and the names of users and hotels. We then preprocessed and labeled the data, with inter-annotator agreement for various aspects including service (0.89), infrastructure (0.84), sanitary (0.83), location (0.89), and attitude (0.83). Our recommendation model is built by using Collaborative Filtering and deep learning techniques. Furthermore, we suggest incorporating context vectors from tourists’ Vietnamese comments in the recommendation process. The context model is developed by using deep learning techniques to extract topics and sentiments from the words effectively. The results of our proposed model, as measured by the MSE, were 0.027, which is significantly better than a context-free model using the same parameters, which had an MSE of 0.061. Additionally, our deep learning model, created using PhoBERT embeddings, had an accuracy of 81% for topic classification and 82% for sentiment classification. The FastText-based model had 82% and 81% accuracy for topic and sentiment, respectively. Our research demonstrates that our approach improves the accuracy of the recommendation model and has the potential for further development in the future. This idea can introduce a new recommendation system that can overcome existing limitations and apply to other areas. | 10.1007/s42979-024-02812-6 | rrs: review-based recommendation system using deep learning for vietnamese | tourism, which includes sightseeing, relaxation, and discovery, is a fundamental aspect of human life. one of the most critical considerations for traveling is accommodation, mainly hotels. to improve the travel experience, we have presented a solution for building a recommendation model using vietnamese text and user data to suggest travelers choose the ideal hotel. our data was collected from two well-known websites, traveloka and ivivu, and includes information about hotels in vietnam and users’ feedback history, such as comments, ratings, and the names of users and hotels. we then preprocessed and labeled the data, with inter-annotator agreement for various aspects including service (0.89), infrastructure (0.84), sanitary (0.83), location (0.89), and attitude (0.83). our recommendation model is built by using collaborative filtering and deep learning techniques. furthermore, we suggest incorporating context vectors from tourists’ vietnamese comments in the recommendation process. the context model is developed by using deep learning techniques to extract topics and sentiments from the words effectively. the results of our proposed model, as measured by the mse, were 0.027, which is significantly better than a context-free model using the same parameters, which had an mse of 0.061. additionally, our deep learning model, created using phobert embeddings, had an accuracy of 81% for topic classification and 82% for sentiment classification. the fasttext-based model had 82% and 81% accuracy for topic and sentiment, respectively. 
our research demonstrates that our approach improves the accuracy of the recommendation model and has the potential for further development in the future. this idea can introduce a new recommendation system that can overcome existing limitations and apply to other areas. | [
"tourism",
"which",
"sightseeing",
"relaxation",
"discovery",
"a fundamental aspect",
"human life",
"the most critical considerations",
"accommodation",
", mainly hotels",
"the travel experience",
"we",
"a solution",
"a recommendation model",
"user data",
"travelers",
"the ideal hotel",
"our data",
"two well-known websites",
"traveloka",
"ivivu",
"information",
"hotels",
"vietnam",
"users’ feedback history",
"comments",
"ratings",
"the names",
"users",
"hotels",
"we",
"the inter-annotator agreement",
"various aspects",
"service",
"infrastructure",
"location",
"attitude",
"our recommendation model",
"collaborative filtering",
"deep learning techniques",
"we",
"context vectors",
"tourists’ vietnamese comments",
"the recommendation process",
"the context model",
"deep learning techniques",
"topics",
"sentiments",
"the words",
"the results",
"our proposed model",
"the mse",
"which",
"a context-free model",
"the same parameters",
"which",
"an mse",
"our deep learning",
"phobert",
"an accuracy",
"81%",
"topic classification",
"82%",
"sentiment classification",
"the fasttext based model",
"82%",
"81% accuracy",
"topic",
"sentiment",
"our research",
"our approach",
"the accuracy",
"the recommendation model",
"the potential",
"further development",
"the future",
"this idea",
"a new recommendation system",
"that",
"existing limitations",
"other areas",
"one",
"vietnamese",
"two",
"vietnam",
"0.89",
"0.84",
"0.83",
"0.89",
"0.83",
"vietnamese",
"0.027",
"0.061",
"81%",
"82%",
"82% and",
"81%"
] |
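The hybrid idea in the record above (collaborative filtering whose score is adjusted by a context vector derived from review text) can be sketched as follows. This is a minimal PyTorch illustration in which the class name, all sizes, and the random context features are assumptions, and the PhoBERT/FastText feature extraction is omitted.

```python
import torch
import torch.nn as nn

class ReviewAwareRecommender(nn.Module):
    def __init__(self, n_users, n_hotels, dim=32, ctx_dim=8):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.hotel = nn.Embedding(n_hotels, dim)
        self.ctx = nn.Linear(ctx_dim, dim)  # projects comment-derived context

    def forward(self, u, h, context):
        # Collaborative-filtering dot product, with the hotel representation
        # shifted by a context vector derived from review text.
        return (self.user(u) * (self.hotel(h) + self.ctx(context))).sum(-1)

model = ReviewAwareRecommender(n_users=100, n_hotels=50)
u = torch.randint(0, 100, (16,))
h = torch.randint(0, 50, (16,))
context = torch.randn(16, 8)           # stand-in for topic/sentiment features
ratings = torch.rand(16) * 5           # stand-in for observed ratings
loss = nn.functional.mse_loss(model(u, h, context), ratings)
loss.backward()
print("training MSE on this batch:", float(loss))
```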
An Extensive Review on Deep Learning and Machine Learning Intervention in Prediction and Classification of Types of Aneurysms | [
"Renugadevi Ammapalayam Sinnaswamy",
"Natesan Palanisamy",
"Kavitha Subramaniam",
"Suresh Muthusamy",
"Ravita Lamba",
"Sreejith Sekaran"
] | An aneurysm (rupture of blood vessels) may happen in the cerebrum, abdominal aorta and thoracic aorta of humans, and has a high fatality rate. Advancements in artificial intelligence technologies, specifically machine learning algorithms and deep learning models, have been applied to attempt to predict aneurysms, which may reduce the death rate. The main objective of this paper is to provide a review of various algorithms and models for the early prediction of the various types of aneurysms. The focused literature review was conducted from the preferred journals from 2007 to 2022 on various parameters such as the way of collecting images, the techniques used, the number of images used in the data set, performance metrics and future work. A summarized overview is provided of advances in the prediction of aneurysms using machine learning algorithms, from the nonlinear kernel support regression algorithm to the 3D Unet architecture of deep learning models, starting from CT scan images to final performance analysis in prediction. The range of sensitivity, specificity and area under the receiver operating characteristic curve was from 0.7 to 1 for abdominal aortic aneurysm detection and intracranial aneurysm detection. The thoracic aortic aneurysm received little attention in the reviewed literature, so the prediction of thoracic aortic aneurysm using machine learning as well as deep learning models is recommended. | 10.1007/s11277-023-10532-y | an extensive review on deep learning and machine learning intervention in prediction and classification of types of aneurysms | an aneurysm (rupture of blood vessels) may happen in the cerebrum, abdominal aorta and thoracic aorta of humans, and has a high fatality rate. advancements in artificial intelligence technologies, specifically machine learning algorithms and deep learning models, have been applied to attempt to predict aneurysms, which may reduce the death rate. the main objective of this paper is to provide a review of various algorithms and models for the early prediction of the various types of aneurysms. the focused literature review was conducted from the preferred journals from 2007 to 2022 on various parameters such as the way of collecting images, the techniques used, the number of images used in the data set, performance metrics and future work. a summarized overview is provided of advances in the prediction of aneurysms using machine learning algorithms, from the nonlinear kernel support regression algorithm to the 3d unet architecture of deep learning models, starting from ct scan images to final performance analysis in prediction. the range of sensitivity, specificity and area under the receiver operating characteristic curve was from 0.7 to 1 for abdominal aortic aneurysm detection and intracranial aneurysm detection. the thoracic aortic aneurysm received little attention in the reviewed literature, so the prediction of thoracic aortic aneurysm using machine learning as well as deep learning models is recommended. | [
"aneurysm",
"rupture",
"blood vessels",
"the cerebrum",
"abdominal aorta",
"aorta",
"humans",
"which",
"a high fatal rate",
"the advancement",
"the artificial technologies",
"specifically machine learning algorithms",
"deep learning models",
"the aneurysm",
"which",
"the death rate",
"the main objective",
"this paper",
"the review",
"various algorithms",
"models",
"the early prediction",
"the various types",
"aneurysms",
"the focused literature review",
"the preferred journals",
"various parameters",
"way",
"images",
"the techniques",
"images",
"data set",
"performance metrics",
"future work",
"the summarized overview",
"advances",
"prediction",
"aneurysms",
"the machine learning algorithms",
"non linear kernel support regression algorithm",
"3d unet architecture",
"deep learning models",
"ct scan images",
"final performance analysis",
"prediction",
"the range",
"sensitivity",
"specificity",
"area",
"operating characteristic",
"the abdominal aortic aneurysm detection",
"intracranial aneurysm detection",
"the thoracic aortic aneurysm",
"the literature review",
"the prediction",
"thoracic aortic aneurysm",
"machine learning",
"deep learning model",
"2007",
"2022",
"3d",
"scan",
"0",
"7 to 1",
"intracranial aneurysm detection",
"the thoracic aortic aneurysm"
] |
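Since the review above compares models by sensitivity, specificity, and area under the receiver operating characteristic curve, a minimal sketch of how those metrics are computed may be useful; the labels and scores below are synthetic stand-ins, not data from any reviewed study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)  # 1 = aneurysm present (synthetic labels)
# Synthetic detector scores: positives tend to score higher than negatives.
y_score = np.where(y_true == 1, rng.beta(4, 2, 500), rng.beta(2, 4, 500))
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))   # true positive rate
print("specificity:", tn / (tn + fp))   # true negative rate
print("AUROC:", roc_auc_score(y_true, y_score))
```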
Machine learning and deep learning techniques for breast cancer diagnosis and classification: a comprehensive review of medical imaging studies | [
"Mehran Radak",
"Haider Yabr Lafta",
"Hossein Fallahi"
] | Background: Breast cancer is a major public health concern, and early diagnosis and classification are critical for effective treatment. Machine learning and deep learning techniques have shown great promise in the classification and diagnosis of breast cancer. Purpose: In this review, we examine studies that have used these techniques for breast cancer classification and diagnosis, focusing on five groups of medical images: mammography, ultrasound, MRI, histology, and thermography. We discuss the use of five popular machine learning techniques, including Nearest Neighbor, SVM, Naive Bayesian Network, DT, and ANN, as well as deep learning architectures and convolutional neural networks. Conclusion: Our review finds that machine learning and deep learning techniques have achieved high accuracy rates in breast cancer classification and diagnosis across various medical imaging modalities. Furthermore, these techniques have the potential to improve clinical decision-making and ultimately lead to better patient outcomes. | 10.1007/s00432-023-04956-z | machine learning and deep learning techniques for breast cancer diagnosis and classification: a comprehensive review of medical imaging studies | background: breast cancer is a major public health concern, and early diagnosis and classification are critical for effective treatment. machine learning and deep learning techniques have shown great promise in the classification and diagnosis of breast cancer. purpose: in this review, we examine studies that have used these techniques for breast cancer classification and diagnosis, focusing on five groups of medical images: mammography, ultrasound, mri, histology, and thermography. we discuss the use of five popular machine learning techniques, including nearest neighbor, svm, naive bayesian network, dt, and ann, as well as deep learning architectures and convolutional neural networks. conclusion: our review finds that machine learning and deep learning techniques have achieved high accuracy rates in breast cancer classification and diagnosis across various medical imaging modalities. furthermore, these techniques have the potential to improve clinical decision-making and ultimately lead to better patient outcomes. | [
"backgroundbreast cancer",
"a major public health concern",
"early diagnosis",
"classification",
"effective treatment",
"machine learning",
"deep learning techniques",
"great promise",
"the classification",
"diagnosis",
"breast cancer.purposein",
"this review",
"we",
"studies",
"that",
"these techniques",
"breast cancer classification",
"diagnosis",
"five groups",
"medical images",
"mammography",
"ultrasound",
"mri",
"histology",
"thermography",
"we",
"the use",
"five popular machine learning techniques",
"neighbor",
"svm",
"naive bayesian network",
"dt",
"ann",
"deep learning architectures",
"convolutional neural networks.conclusionour review",
"machine learning",
"deep learning techniques",
"high accuracy rates",
"breast cancer classification",
"diagnosis",
"various medical imaging modalities",
"these techniques",
"the potential",
"clinical decision-making",
"better patient outcomes",
"five",
"five"
] |
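As a small, self-contained illustration of the five classical techniques named in the review above, the snippet below benchmarks them with 5-fold cross-validation on scikit-learn's built-in tabular breast cancer dataset. It is a toy comparison on pre-extracted features, not a reproduction of the medical-imaging studies reviewed.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "Nearest Neighbor": KNeighborsClassifier(),
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "ANN (MLP)": MLPClassifier(max_iter=2000, random_state=0),
}
for name, clf in models.items():
    # Scaling inside the pipeline keeps each fold leakage-free.
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```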
SAKMR: Industrial control anomaly detection based on semi-supervised hybrid deep learning | [
"Shijie Tang",
"Yong Ding",
"Meng Zhao",
"Huiyong Wang"
] | With the advent of Industry 4.0, industrial control systems (ICS) are increasingly connected with the Internet, leading to a rapid increase in the types and quantities of security threats that arise from ICS. Anomaly detection is an effective defense measure against attacks. At present, the main trend is to use hybrid deep learning methods for ICS anomaly detection. However, we found that many ICS anomaly detection methods based on hybrid deep learning adopt phased learning, in which each phase is optimized separately with optimization goals deviating from the overall goal. In view of this issue, we propose an end-to-end anomaly detection method, SAKMR, based on hybrid deep learning. Our method uses a radial basis function network (RBFN) to realize K-means clustering, and combines it with a stacked auto-encoder (SAE), which makes it possible to combine the reconstruction error and the clustering error into a single objective function, ensuring joint optimization of feature extraction and classification. Experiments were conducted on the commonly used KDDCUP99 and SWAT datasets. The results show that SAKMR is effective in detecting abnormal industrial control data and outperforms the baseline methods on multiple performance indicators such as F1-Measure. | 10.1007/s12083-023-01586-7 | sakmr: industrial control anomaly detection based on semi-supervised hybrid deep learning | with the advent of industry 4.0, industrial control systems (ics) are increasingly connected with the internet, leading to a rapid increase in the types and quantities of security threats that arise from ics. anomaly detection is an effective defense measure against attacks. at present, the main trend is to use hybrid deep learning methods for ics anomaly detection. however, we found that many ics anomaly detection methods based on hybrid deep learning adopt phased learning, in which each phase is optimized separately with optimization goals deviating from the overall goal. in view of this issue, we propose an end-to-end anomaly detection method, sakmr, based on hybrid deep learning. our method uses a radial basis function network (rbfn) to realize k-means clustering, and combines it with a stacked auto-encoder (sae), which makes it possible to combine the reconstruction error and the clustering error into a single objective function, ensuring joint optimization of feature extraction and classification. experiments were conducted on the commonly used kddcup99 and swat datasets. the results show that sakmr is effective in detecting abnormal industrial control data and outperforms the baseline methods on multiple performance indicators such as f1-measure. | [
"the advent",
"industry",
"industrial control systems",
"ics",
"the internet",
"a rapid increase",
"the types",
"quantities",
"security threats",
"that",
"ics",
"anomaly detection",
"an effective defense measure",
"attacks",
"present",
"it",
"the main trend",
"hybrid deep learning methods",
"ics anomaly detection",
"we",
"that many ics anomaly detection methods",
"hybrid deep learning adopt phased learning",
"which",
"each phase",
"optimization goals",
"the overall goal",
"view",
"this issue",
"we",
"end",
"hybrid deep learning",
"our method",
"radial basis function network",
"rbfn",
"k",
"it",
"stacked auto-encoder",
"sae",
"which",
"reconstruction error",
"error",
"an objective function",
"joint optimization",
"feature extraction",
"classification",
"experiments",
"the commonly used kddcup99",
"swat datasets",
"the results",
"sakmr",
"abnormal industrial control data",
"the baseline methods",
"multiple performance indicators",
"f1-measure",
"4.0",
"anomaly detection",
"anomaly detection method",
"kddcup99"
] |
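The end-to-end objective described in the record above (reconstruction error plus clustering error optimized jointly) can be sketched in PyTorch as below. The RBFN realization of K-means is simplified here to learnable centroids with a nearest-centroid penalty, the 41-dimensional input merely echoes the KDDCUP99 feature count, and the loss weighting is an illustrative assumption.

```python
import torch
import torch.nn as nn

class JointSAEKMeans(nn.Module):
    def __init__(self, in_dim=41, latent=8, k=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, in_dim))
        self.centroids = nn.Parameter(torch.randn(k, latent))  # learnable cluster centers

    def forward(self, x):
        z = self.enc(x)
        recon = self.dec(z)
        # Squared distance from each latent code to every centroid.
        d2 = ((z.unsqueeze(1) - self.centroids) ** 2).sum(-1)
        return recon, d2

model = JointSAEKMeans()
x = torch.randn(64, 41)                      # stand-in for ICS traffic features
recon, d2 = model(x)
recon_err = nn.functional.mse_loss(recon, x)
cluster_err = d2.min(dim=1).values.mean()    # distance to the nearest centroid
loss = recon_err + 0.1 * cluster_err         # one joint objective, one backward pass
loss.backward()
print("reconstruction:", float(recon_err), "clustering:", float(cluster_err))
```

Because both terms sit in one loss, the encoder is pushed toward features that are simultaneously reconstructable and clusterable, which is the point of avoiding phased learning.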
Deep learning for the harmonization of structural MRI scans: a survey | [
"Soolmaz Abbasi",
"Haoyu Lan",
"Jeiran Choupan",
"Nasim Sheikh-Bahaei",
"Gaurav Pandey",
"Bino Varghese"
] | Medical imaging datasets for research are frequently collected from multiple imaging centers using different scanners, protocols, and settings. These variations affect data consistency and compatibility across different sources. Image harmonization is a critical step to mitigate the effects of factors like inherent differences between various vendors, hardware upgrades, protocol changes, and scanner calibration drift, as well as to ensure consistent data for medical image processing techniques. Given the critical importance and widespread relevance of this issue, a vast array of image harmonization methodologies have emerged, with deep learning-based approaches driving substantial advancements in recent times. The goal of this review paper is to examine the latest deep learning techniques employed for image harmonization by analyzing cutting-edge architectural approaches in the field of medical image harmonization, evaluating both their strengths and limitations. This paper begins by providing a comprehensive fundamental overview of image harmonization strategies, covering three critical aspects: established imaging datasets, commonly used evaluation metrics, and characteristics of different scanners. Subsequently, this paper analyzes recent structural MRI (Magnetic Resonance Imaging) harmonization techniques based on network architecture, network learning algorithm, network supervision strategy, and network output. The underlying architectures include U-Net, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), flow-based generative models, transformer-based approaches, as well as custom-designed network architectures. This paper investigates the effectiveness of Disentangled Representation Learning (DRL) as a pivotal learning algorithm in harmonization. Lastly, the review highlights the primary limitations in harmonization techniques, specifically the lack of comprehensive quantitative comparisons across different methods. The overall aim of this review is to serve as a guide for researchers and practitioners to select appropriate architectures based on their specific conditions and requirements. It also aims to foster discussions around ongoing challenges in the field and shed light on promising future research directions with the potential for significant advancements. | 10.1186/s12938-024-01280-6 | deep learning for the harmonization of structural mri scans: a survey | medical imaging datasets for research are frequently collected from multiple imaging centers using different scanners, protocols, and settings. these variations affect data consistency and compatibility across different sources. image harmonization is a critical step to mitigate the effects of factors like inherent differences between various vendors, hardware upgrades, protocol changes, and scanner calibration drift, as well as to ensure consistent data for medical image processing techniques. given the critical importance and widespread relevance of this issue, a vast array of image harmonization methodologies have emerged, with deep learning-based approaches driving substantial advancements in recent times. the goal of this review paper is to examine the latest deep learning techniques employed for image harmonization by analyzing cutting-edge architectural approaches in the field of medical image harmonization, evaluating both their strengths and limitations. 
this paper begins by providing a comprehensive fundamental overview of image harmonization strategies, covering three critical aspects: established imaging datasets, commonly used evaluation metrics, and characteristics of different scanners. subsequently, this paper analyzes recent structural mri (magnetic resonance imaging) harmonization techniques based on network architecture, network learning algorithm, network supervision strategy, and network output. the underlying architectures include u-net, generative adversarial networks (gans), variational autoencoders (vaes), flow-based generative models, transformer-based approaches, as well as custom-designed network architectures. this paper investigates the effectiveness of disentangled representation learning (drl) as a pivotal learning algorithm in harmonization. lastly, the review highlights the primary limitations in harmonization techniques, specifically the lack of comprehensive quantitative comparisons across different methods. the overall aim of this review is to serve as a guide for researchers and practitioners to select appropriate architectures based on their specific conditions and requirements. it also aims to foster discussions around ongoing challenges in the field and shed light on promising future research directions with the potential for significant advancements. | [
"medical imaging datasets",
"research",
"multiple imaging centers",
"different scanners",
"protocols",
"settings",
"these variations",
"data consistency",
"compatibility",
"different sources",
"image harmonization",
"a critical step",
"the effects",
"factors",
"inherent differences",
"various vendors",
"hardware upgrades",
"protocol changes",
"scanner calibration drift",
"consistent data",
"medical image processing techniques",
"the critical importance",
"widespread relevance",
"this issue",
"a vast array",
"image harmonization methodologies",
"deep learning-based approaches",
"substantial advancements",
"recent times",
"the goal",
"this review paper",
"the latest deep learning techniques",
"image harmonization",
"cutting-edge architectural approaches",
"the field",
"medical image harmonization",
"both their strengths",
"limitations",
"this paper",
"a comprehensive fundamental overview",
"image harmonization strategies",
"three critical aspects",
"imaging datasets",
"commonly used evaluation metrics",
"characteristics",
"different scanners",
"this paper",
"recent structural mri",
"magnetic resonance imaging",
"harmonization techniques",
"network architecture",
"network learning algorithm",
"network supervision strategy",
"network output",
"the underlying architectures",
"u",
"-",
"net",
"generative adversarial networks",
"gans",
"variational autoencoders",
"flow-based generative models",
"transformer-based approaches",
"custom-designed network architectures",
"this paper",
"the effectiveness",
"drl",
"a pivotal learning algorithm",
"harmonization",
"the review",
"the primary limitations",
"harmonization techniques",
"specifically the lack",
"comprehensive quantitative comparisons",
"different methods",
"the overall aim",
"this review",
"a guide",
"researchers",
"practitioners",
"appropriate architectures",
"their specific conditions",
"requirements",
"it",
"discussions",
"ongoing challenges",
"the field",
"light",
"future research directions",
"the potential",
"significant advancements",
"three"
] |
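For context on what "harmonization" operates on, a classical pre-deep-learning baseline is histogram matching of intensity distributions across scanners; the survey's DL methods replace exactly this kind of hand-crafted mapping with learned ones. A minimal sketch on synthetic 2D "scans" follows (scikit-image assumed installed; all values illustrative).

```python
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(0)
scan_site_a = rng.normal(100, 15, (64, 64))   # stand-in for a scan from site A
scan_site_b = rng.normal(140, 25, (64, 64))   # same anatomy, different scanner

# Map site B's intensity distribution onto site A's.
harmonized = match_histograms(scan_site_b, scan_site_a)
print("site B mean/std before:", scan_site_b.mean().round(1), scan_site_b.std().round(1))
print("site B mean/std after: ", harmonized.mean().round(1), harmonized.std().round(1))
```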
PolySeg Plus: Polyp Segmentation Using Deep Learning with Cost Effective Active Learning | [
"Abdelrahman I. Saad",
"Fahima A. Maghraby",
"Osama Badawy"
] | A deep convolutional neural network image segmentation model based on a cost-effective active learning mechanism is proposed and named PolySeg Plus. It is intended to address polyp segmentation with a lack of labeled data and a high false-positive rate of polyp discovery. In addition to applying active learning, which assisted in labeling more image samples, a comprehensive polyp dataset formed of five benchmark datasets was generated to increase the number of images. To enhance the captured image features, the locally shared feature method is used, which exploits the joint use of neighboring features to improve the quality of image features and overcome the drawbacks of the Conditional Random Features method. Medical image segmentation was performed using ResUNet++, ResUNet, UNet++, and UNet models. Gaussian noise was removed from the images using a Gaussian filter, and the images were then augmented before being fed into the models. In addition to optimizing model performance through hyperparameter tuning, grid search is used to select the optimum parameters to maximize model performance. The results demonstrated a significant improvement and applicability of the proposed method in polyp segmentation when compared to state-of-the-art methods on the datasets CVC-ClinicDB, CVC-ColonDB, ETIS Larib Polyp DB, KVASIR-SEG, and Kvasir-Sessile, with Dice coefficients of 0.9558, 0.8947, 0.7547, 0.9476, and 0.6023, respectively. Not only did the suggested method improve the Dice coefficients on the individual datasets, but it also produced better results on the comprehensive dataset, which will contribute to the development of computer-aided diagnosis systems. | 10.1007/s44196-023-00330-6 | polyseg plus: polyp segmentation using deep learning with cost effective active learning | a deep convolutional neural network image segmentation model based on a cost-effective active learning mechanism is proposed and named polyseg plus. it is intended to address polyp segmentation with a lack of labeled data and a high false-positive rate of polyp discovery. in addition to applying active learning, which assisted in labeling more image samples, a comprehensive polyp dataset formed of five benchmark datasets was generated to increase the number of images. to enhance the captured image features, the locally shared feature method is used, which exploits the joint use of neighboring features to improve the quality of image features and overcome the drawbacks of the conditional random features method. medical image segmentation was performed using resunet++, resunet, unet++, and unet models. gaussian noise was removed from the images using a gaussian filter, and the images were then augmented before being fed into the models. in addition to optimizing model performance through hyperparameter tuning, grid search is used to select the optimum parameters to maximize model performance. the results demonstrated a significant improvement and applicability of the proposed method in polyp segmentation when compared to state-of-the-art methods on the datasets cvc-clinicdb, cvc-colondb, etis larib polyp db, kvasir-seg, and kvasir-sessile, with dice coefficients of 0.9558, 0.8947, 0.7547, 0.9476, and 0.6023, respectively. not only did the suggested method improve the dice coefficients on the individual datasets, but it also produced better results on the comprehensive dataset, which will contribute to the development of computer-aided diagnosis systems. | [
"a deep convolution neural network image segmentation model",
"a cost-effective active learning mechanism",
"polyseg",
"it",
"polyp segmentation",
"a lack",
"labeled data",
"a high false-positive rate",
"polyp discovery",
"addition",
"active learning",
"which",
"more image samples",
"a comprehensive polyp dataset",
"five benchmark datasets",
"the number",
"images",
"the captured image features",
"the locally shared feature method",
"which",
"the power",
"neighboring features",
"the quality",
"image features",
"the drawbacks",
"the conditional random features method",
"medical image segmentation",
"resunet++",
"resunet",
"unet++",
"unet models",
"gaussian noise",
"the images",
"a gaussian filter",
"the images",
"the models",
"addition",
"model performance",
"hyperparameter tuning",
"grid search",
"the optimum parameters",
"model performance",
"the results",
"a significant improvement",
"applicability",
"the proposed method",
"polyp segmentation",
"the-art",
"the datasets cvc-clinicdb",
"cvc-colondb",
"etis larib polyp",
"kvasir-seg",
"kvasir-sessile",
"dice coefficients",
"the suggested method",
"the dice coefficients",
"the individual datasets",
"it",
"better results",
"the comprehensive dataset",
"which",
"the development",
"computer-aided diagnosis systems",
"polyseg",
"five",
"gaussian noise",
"0.9558",
"0.8947",
"0.7547",
"0.9476",
"0.6023"
] |
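Because the record above reports its results as Dice coefficients, here is a minimal sketch of that metric for binary polyp masks; the masks are synthetic stand-ins for model predictions rather than outputs of the proposed network.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

rng = np.random.default_rng(0)
target = rng.random((128, 128)) < 0.3     # ground-truth polyp mask
flips = rng.random((128, 128)) < 0.05     # 5% of pixels disagree
pred = np.logical_xor(target, flips)      # a near-perfect prediction
print("dice:", dice(pred, target))
```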
Deep learning large-scale drug discovery and repurposing | [
"Min Yu",
"Weiming Li",
"Yunru Yu",
"Yu Zhao",
"Lizhi Xiao",
"Volker M. Lauschke",
"Yiyu Cheng",
"Xingcai Zhang",
"Yi Wang"
] | Large-scale drug discovery and repurposing is challenging. Identifying the mechanism of action (MOA) is crucial, yet current approaches are costly and low-throughput. Here we present an approach for MOA identification by profiling changes in mitochondrial phenotypes. By temporally imaging mitochondrial morphology and membrane potential, we established a pipeline for monitoring time-resolved mitochondrial images, resulting in a dataset comprising 570,096 single-cell images of cells exposed to 1,068 United States Food and Drug Administration-approved drugs. A deep learning model named MitoReID, using a re-identification (ReID) framework and an Inflated 3D ResNet backbone, was developed. It achieved 76.32% Rank-1 and 65.92% mean average precision on the testing set and successfully identified the MOAs for six untrained drugs on the basis of mitochondrial phenotype. Furthermore, MitoReID identified cyclooxygenase-2 inhibition as the MOA of the natural compound epicatechin in tea, which was successfully validated in vitro. Our approach thus provides an automated and cost-effective alternative for target identification that could accelerate large-scale drug discovery and repurposing. | 10.1038/s43588-024-00679-4 | deep learning large-scale drug discovery and repurposing | large-scale drug discovery and repurposing is challenging. identifying the mechanism of action (moa) is crucial, yet current approaches are costly and low-throughput. here we present an approach for moa identification by profiling changes in mitochondrial phenotypes. by temporally imaging mitochondrial morphology and membrane potential, we established a pipeline for monitoring time-resolved mitochondrial images, resulting in a dataset comprising 570,096 single-cell images of cells exposed to 1,068 united states food and drug administration-approved drugs. a deep learning model named mitoreid, using a re-identification (reid) framework and an inflated 3d resnet backbone, was developed. it achieved 76.32% rank-1 and 65.92% mean average precision on the testing set and successfully identified the moas for six untrained drugs on the basis of mitochondrial phenotype. furthermore, mitoreid identified cyclooxygenase-2 inhibition as the moa of the natural compound epicatechin in tea, which was successfully validated in vitro. our approach thus provides an automated and cost-effective alternative for target identification that could accelerate large-scale drug discovery and repurposing. | [
"large-scale drug discovery",
"repurposing",
"the mechanism",
"action",
"(moa",
"current approaches",
"we",
"an approach",
"moa identification",
"changes",
"mitochondrial phenotypes",
"mitochondrial morphology",
"membrane potential",
"we",
"a pipeline",
"time-resolved mitochondrial images",
"a dataset",
"570,096 single-cell images",
"cells",
"1,068 united states food and drug administration-approved drugs",
"a deep learning model",
"mitoreid",
"a re",
"-",
"identification (reid) framework",
"an inflated 3d resnet backbone",
"it",
"76.32%",
"rank-1",
"65.92%",
"the moas",
"six untrained drugs",
"the basis",
"mitochondrial phenotype",
"mitoreid",
"inhibition",
"the moa",
"the natural compound epicatechin",
"tea",
"which",
"our approach",
"an automated and cost-effective alternative",
"target identification",
"that",
"large-scale drug discovery",
"repurposing",
"570,096",
"1,068",
"united states",
"food and drug administration",
"mitoreid",
"3d",
"76.32%",
"65.92%",
"six",
"mitoreid"
] |
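MitoReID is evaluated with retrieval metrics (Rank-1 and mean average precision). The sketch below computes Rank-1 accuracy on synthetic embeddings, where a query counts as a hit if its nearest gallery embedding carries the same identity label; the data, sizes, and noise level are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ids, per_id, dim = 20, 5, 64                 # 20 identities, 5 samples each
centers = rng.normal(size=(n_ids, dim))
labels = np.repeat(np.arange(n_ids), per_id)
embeds = centers[labels] + 0.3 * rng.normal(size=(labels.size, dim))

is_query = np.arange(labels.size) % per_id == 0   # one query per identity
query, q_lab = embeds[is_query], labels[is_query]
gallery, g_lab = embeds[~is_query], labels[~is_query]

# Rank-1: the nearest gallery embedding must carry the query's label.
rank1 = np.mean([g_lab[np.argmin(np.linalg.norm(gallery - q, axis=1))] == l
                 for q, l in zip(query, q_lab)])
print("Rank-1 accuracy:", rank1)
```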
CoVSeverity-Net: an efficient deep learning model for COVID-19 severity estimation from Chest X-Ray images | [
"Sagar Deep Deb",
"Rajib Kumar Jha",
"Rajnish Kumar",
"Prem S. Tripathi",
"Yash Talera",
"Manish Kumar"
] | Purpose: COVID-19 is not going anywhere and is slowly becoming a part of our life. The World Health Organization declared it a pandemic in 2020, and it has affected all of us in many ways. Several deep learning techniques have been developed to detect COVID-19 from Chest X-Ray images. COVID-19 infection severity scoring can aid in establishing the optimum course of treatment and care for a positive patient, as not all COVID-19 positive patients require special medical attention. Still, very few works are reported to estimate the severity of the disease from the Chest X-Ray images. The unavailability of a large-scale dataset might be a reason. Methods: We aim to propose CoVSeverity-Net, a deep learning-based architecture for predicting the severity of COVID-19 from Chest X-ray images. CoVSeverity-Net is trained on a public COVID-19 dataset, curated by experienced radiologists for severity estimation. For that, a large publicly available dataset is collected and divided into three levels of severity, namely Mild, Moderate, and Severe. Results: An accuracy of 85.71% is reported. Conducting 5-fold cross-validation, we have obtained an accuracy of 87.82 ± 6.25%. Similarly, conducting 10-fold cross-validation, we obtained an accuracy of 91.26 ± 3.42. The results were better when compared with other state-of-the-art architectures. Conclusion: We strongly believe that this study has a high chance of reducing the workload of overworked front-line radiologists, speeding up patient diagnosis and treatment, and easing pandemic control. Future work would be to train a novel deep learning-based architecture on a larger dataset for severity estimation. | 10.1007/s42600-022-00254-8 | covseverity-net: an efficient deep learning model for covid-19 severity estimation from chest x-ray images | purpose: covid-19 is not going anywhere and is slowly becoming a part of our life. the world health organization declared it a pandemic in 2020, and it has affected all of us in many ways. several deep learning techniques have been developed to detect covid-19 from chest x-ray images. covid-19 infection severity scoring can aid in establishing the optimum course of treatment and care for a positive patient, as not all covid-19 positive patients require special medical attention. still, very few works are reported to estimate the severity of the disease from the chest x-ray images. the unavailability of a large-scale dataset might be a reason. methods: we aim to propose covseverity-net, a deep learning-based architecture for predicting the severity of covid-19 from chest x-ray images. covseverity-net is trained on a public covid-19 dataset, curated by experienced radiologists for severity estimation. for that, a large publicly available dataset is collected and divided into three levels of severity, namely mild, moderate, and severe. results: an accuracy of 85.71% is reported. conducting 5-fold cross-validation, we have obtained an accuracy of 87.82 ± 6.25%. similarly, conducting 10-fold cross-validation, we obtained an accuracy of 91.26 ± 3.42. the results were better when compared with other state-of-the-art architectures. conclusion: we strongly believe that this study has a high chance of reducing the workload of overworked front-line radiologists, speeding up patient diagnosis and treatment, and easing pandemic control. future work would be to train a novel deep learning-based architecture on a larger dataset for severity estimation. | [
"purposecovid-19",
"a part",
"our life",
"the world health organization",
"it",
"it",
"all",
"us",
"many ways",
"several deep learning techniques",
"covid-19",
"chest x-ray images",
"covid-19 infection severity scoring",
"the optimum course",
"treatment",
"care",
"a positive patient",
"all covid-19 positive patients",
"special medical attention",
"very few works",
"the severity",
"the disease",
"the chest x-ray images",
"the unavailability",
"the large-scale dataset",
"a reason.methodswe aim",
"covseverity-net",
"a deep learning-based architecture",
"the severity",
"covid-19",
"chest x-ray images",
"covseverity-net",
"a public covid-19 dataset",
"experienced radiologists",
"severity estimation",
"that",
"a large publicly available dataset",
"three levels",
"severity",
"namely mild, moderate, and severe.resultsan accuracy",
"85.71%",
"5-fold cross",
"-",
"validation",
"we",
"an accuracy",
"87.82 ±",
"6.25%",
"cross",
"validation",
"we",
"accuracy",
"91.26 ±",
"the results",
"the-art",
"this study",
"a high chance",
"the workload",
"overworked front-line radiologists",
"patient diagnosis",
"treatment",
"pandemic control",
"future work",
"a novel deep learning-based architecture",
"a larger dataset",
"severity estimation",
"purposecovid-19",
"the world health organization",
"2020",
"covid-19",
"covid-19",
"covid-19",
"covid-19",
"covid-19",
"three",
"85.71%",
"5-fold",
"87.82",
"6.25%",
"10-fold",
"91.26",
"3.42"
] |
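The record above reports k-fold cross-validated accuracy as mean ± standard deviation. A minimal sketch of that protocol with scikit-learn follows, with a logistic regression and synthetic three-class features standing in for CoVSeverity-Net and chest X-rays.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for severity features (3 classes: mild/moderate/severe).
X, y = make_classification(n_samples=600, n_features=50, n_classes=3,
                           n_informative=10, random_state=0)
for k in (5, 10):
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                             cv=StratifiedKFold(k, shuffle=True, random_state=0))
    print(f"{k}-fold accuracy: {scores.mean()*100:.2f} ± {scores.std()*100:.2f}%")
```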
A comparison between machine and deep learning models on high stationarity data | [
"Domenico Santoro",
"Tiziana Ciano",
"Massimiliano Ferrara"
] | Advances in sensor, computing, and communication technologies are enabling big data analytics by providing time series data. However, conventional models struggle to identify sequence features and to forecast accurately. This paper investigates time series features and shows that some machine learning algorithms can outperform deep learning models. In particular, the problem analyzed concerned predicting the number of vehicles passing through an Italian tollbooth in 2021. The dataset, composed of 8766 rows and 6 columns relating to additional tollbooths, proved to have high stationarity and was treated through machine learning methods such as support vector machine, random forest, and eXtreme gradient boosting (XGBoost), as well as deep learning through recurrent neural networks with long short-term memory (RNN-LSTM) cells. From the comparison of these models, the prediction through the XGBoost algorithm outperforms competing algorithms, particularly in terms of MAE and MSE. The result highlights how an algorithm shallower than a neural network is, in this case, able to adapt better to the time series than a much deeper model, which tends to produce a smoother prediction. | 10.1038/s41598-024-70341-6 | a comparison between machine and deep learning models on high stationarity data | advances in sensor, computing, and communication technologies are enabling big data analytics by providing time series data. however, conventional models struggle to identify sequence features and to forecast accurately. this paper investigates time series features and shows that some machine learning algorithms can outperform deep learning models. in particular, the problem analyzed concerned predicting the number of vehicles passing through an italian tollbooth in 2021. the dataset, composed of 8766 rows and 6 columns relating to additional tollbooths, proved to have high stationarity and was treated through machine learning methods such as support vector machine, random forest, and extreme gradient boosting (xgboost), as well as deep learning through recurrent neural networks with long short-term memory (rnn-lstm) cells. from the comparison of these models, the prediction through the xgboost algorithm outperforms competing algorithms, particularly in terms of mae and mse. the result highlights how an algorithm shallower than a neural network is, in this case, able to adapt better to the time series than a much deeper model, which tends to produce a smoother prediction. | [
"advances",
"sensor",
"computing",
"communication technologies",
"big data analytics",
"time series data",
"conventional models",
"sequence features",
"forecast accuracy",
"this paper investigates",
"some machine learning algorithms",
"deep learning models",
"the problem",
"the number",
"vehicles",
"an italian tollbooth",
"the dataset",
"8766 rows",
"6 columns",
"additional tollbooths",
"high stationarity",
"machine learning methods",
"support vector machine",
"random forest",
"deep learning",
"recurrent neural networks",
"long short-term memory",
"rnn-lstm) cells",
"the comparison",
"these models",
"the prediction",
"the xgboost algorithm",
"competing algorithms",
"terms",
"mae",
"mse",
"the result",
"a shallower algorithm",
"a neural network",
"this case",
"a better adaptation",
"the time series",
"a much deeper model",
"that",
"a smoother prediction",
"italian",
"2021",
"8766",
"6",
"mae"
] |
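The tabular route favoured by the comparison above can be sketched by turning an hourly count series into lagged features for gradient-boosted trees and scoring MAE/MSE on a held-out tail. The sinusoidal series below is synthetic (the real study used tollbooth traffic), the hyperparameters are illustrative, and the xgboost package is assumed to be installed.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
t = np.arange(8766)  # one value per hour of a year, like the 8766-row dataset
series = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

lags = 24  # predict the next hour from the previous day of counts
X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
split = int(0.8 * len(y))

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAE:", mean_absolute_error(y[split:], pred))
print("MSE:", mean_squared_error(y[split:], pred))
```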
Efficient deep learning-based automated diagnosis from echocardiography with contrastive self-supervised learning | [
"Gregory Holste",
"Evangelos K. Oikonomou",
"Bobak J. Mortazavi",
"Zhangyang Wang",
"Rohan Khera"
] | Background: Advances in self-supervised learning (SSL) have enabled state-of-the-art automated medical image diagnosis from small, labeled datasets. This label efficiency is often desirable, given the difficulty of obtaining expert labels for medical image recognition tasks. However, most efforts toward SSL in medical imaging are not adapted to video-based modalities, such as echocardiography. Methods: We developed a self-supervised contrastive learning approach, EchoCLR, for echocardiogram videos with the goal of learning strong representations for efficient fine-tuning on downstream cardiac disease diagnosis. EchoCLR pretraining involves (i) contrastive learning, where the model is trained to identify distinct videos of the same patient, and (ii) frame reordering, where the model is trained to predict the correct order of video frames after being randomly shuffled. Results: When fine-tuned on small portions of labeled data, EchoCLR pretraining significantly improves classification performance for left ventricular hypertrophy (LVH) and aortic stenosis (AS) over other transfer learning and SSL approaches across internal and external test sets. When fine-tuning on 10% of available training data (519 studies), an EchoCLR-pretrained model achieves 0.72 AUROC (95% CI: [0.69, 0.75]) on LVH classification, compared to 0.61 AUROC (95% CI: [0.57, 0.64]) with a standard transfer learning approach. Similarly, using 1% of available training data (53 studies), EchoCLR pretraining achieves 0.82 AUROC (95% CI: [0.79, 0.84]) on severe AS classification, compared to 0.61 AUROC (95% CI: [0.58, 0.65]) with transfer learning. Conclusions: EchoCLR is unique in its ability to learn representations of echocardiogram videos and demonstrates that SSL can enable label-efficient disease classification from small amounts of labeled data. | 10.1038/s43856-024-00538-3 | efficient deep learning-based automated diagnosis from echocardiography with contrastive self-supervised learning | background: advances in self-supervised learning (ssl) have enabled state-of-the-art automated medical image diagnosis from small, labeled datasets. this label efficiency is often desirable, given the difficulty of obtaining expert labels for medical image recognition tasks. however, most efforts toward ssl in medical imaging are not adapted to video-based modalities, such as echocardiography. methods: we developed a self-supervised contrastive learning approach, echoclr, for echocardiogram videos with the goal of learning strong representations for efficient fine-tuning on downstream cardiac disease diagnosis. echoclr pretraining involves (i) contrastive learning, where the model is trained to identify distinct videos of the same patient, and (ii) frame reordering, where the model is trained to predict the correct order of video frames after being randomly shuffled. results: when fine-tuned on small portions of labeled data, echoclr pretraining significantly improves classification performance for left ventricular hypertrophy (lvh) and aortic stenosis (as) over other transfer learning and ssl approaches across internal and external test sets. when fine-tuning on 10% of available training data (519 studies), an echoclr-pretrained model achieves 0.72 auroc (95% ci: [0.69, 0.75]) on lvh classification, compared to 0.61 auroc (95% ci: [0.57, 0.64]) with a standard transfer learning approach. 
similarly, using 1% of available training data (53 studies), echoclr pretraining achieves 0.82 auroc (95% ci: [0.79, 0.84]) on severe as classification, compared to 0.61 auroc (95% ci: [0.58, 0.65]) with transfer learning. conclusions: echoclr is unique in its ability to learn representations of echocardiogram videos and demonstrates that ssl can enable label-efficient disease classification from small amounts of labeled data. | [
"backgroundadvances",
"self-supervised learning",
"ssl",
"the-art",
"small, labeled datasets",
"this label efficiency",
"the difficulty",
"expert labels",
"medical image recognition tasks",
"most efforts",
"ssl",
"medical imaging",
"video-based modalities",
"echocardiography.methodswe",
"a self-supervised contrastive learning approach",
"echocardiogram videos",
"the goal",
"strong representations",
"efficient fine-tuning",
"downstream cardiac disease diagnosis",
"echoclr pretraining",
"(i) contrastive learning",
"the model",
"distinct videos",
"the same patient",
"(ii) frame",
"the model",
"the correct",
"video frames",
"small portions",
"labeled data",
"echoclr pretraining",
"classification performance",
"left ventricular hypertrophy",
"lvh",
"aortic stenosis",
"other transfer learning",
"approaches",
"internal and external test sets",
"10%",
"available training data",
"519 studies",
"an echoclr-pretrained model",
"0.72 auroc",
"95% ci",
"lvh classification",
"0.61 auroc",
"95% ci",
"a standard transfer learning approach",
"1%",
"available training data",
"53 studies",
"0.82 auroc",
"95% ci",
"classification",
"0.61 auroc",
"95% ci",
"transfer learning.conclusionsechoclr",
"its ability",
"representations",
"echocardiogram videos",
"ssl",
"label-efficient disease classification",
"small amounts",
"labeled data",
"10%",
"519",
"0.72",
"95%",
"0.69",
"0.75",
"0.61",
"95%",
"0.57",
"0.64",
"1%",
"53",
"0.82",
"95%",
"0.79",
"0.84",
"0.61",
"95%",
"0.58",
"0.65"
] |
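A minimal sketch of the kind of contrastive objective EchoCLR builds on: an InfoNCE-style loss in which two videos of the same patient form the positive pair and the other videos in the batch serve as negatives. The embeddings are random stand-ins for video-encoder outputs, and the frame-reordering pretext head is omitted.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """Contrastive loss: row i of z1 should match row i of z2 only."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                  # cosine similarities, scaled
    targets = torch.arange(z1.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

b, d = 32, 128
z1 = torch.randn(b, d)                        # encoder output, video 1 of patient i
z2 = z1 + 0.1 * torch.randn(b, d)             # video 2 of the same patient
print("InfoNCE loss:", float(info_nce(z1, z2)))
```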
Detecting defects in PCB manufacturing: an exploration using Yolov8 deep learning | [
"Weifeng LI"
] | Detecting defects in automated inspection systems for Printed Circuit Board (PCB) manufacturing stands as a critical endeavor for ensuring product quality. Despite numerous methodologies explored in the literature, recent advancements highlight the superiority of deep learning techniques in defect identification, driven by their remarkable accuracy surpassing conventional methods. However, the persistent challenge of achieving higher accuracy rates and real-time processing capabilities persists, motivating the need for innovative solutions. In response, this study introduces a novel approach utilizing the Yolov8 architecture for PCB defect detection, addressing the aforementioned research challenge. Leveraging a custom dataset tailored explicitly for this purpose, our method showcases remarkable performance, with Yolov8x emerging as the top-performing model, achieving a notable F1 Score of 98% and a high mean Average Precision (mAP) of 98.9%. This study’s novelty lies in its comprehensive evaluation of deep learning models specifically tailored for PCB defect detection, offering promising prospects for enhancing precision, non-destructiveness, and real-time capabilities in manufacturing processes. | 10.1007/s12008-024-01986-w | detecting defects in pcb manufacturing: an exploration using yolov8 deep learning | detecting defects in automated inspection systems for printed circuit board (pcb) manufacturing stands as a critical endeavor for ensuring product quality. despite numerous methodologies explored in the literature, recent advancements highlight the superiority of deep learning techniques in defect identification, driven by their remarkable accuracy surpassing conventional methods. however, the persistent challenge of achieving higher accuracy rates and real-time processing capabilities persists, motivating the need for innovative solutions. in response, this study introduces a novel approach utilizing the yolov8 architecture for pcb defect detection, addressing the aforementioned research challenge. leveraging a custom dataset tailored explicitly for this purpose, our method showcases remarkable performance, with yolov8x emerging as the top-performing model, achieving a notable f1 score of 98% and a high mean average precision (map) of 98.9%. this study’s novelty lies in its comprehensive evaluation of deep learning models specifically tailored for pcb defect detection, offering promising prospects for enhancing precision, non-destructiveness, and real-time capabilities in manufacturing processes. | [
"defects",
"automated inspection systems",
"printed circuit board (pcb) manufacturing",
"a critical endeavor",
"product quality",
"numerous methodologies",
"the literature",
"recent advancements",
"the superiority",
"deep learning techniques",
"defect identification",
"their remarkable accuracy",
"conventional methods",
"however, the persistent challenge",
"higher accuracy rates",
"real-time processing capabilities persists",
"the need",
"innovative solutions",
"response",
"this study",
"a novel approach",
"the yolov8 architecture",
"pcb defect detection",
"the aforementioned research challenge",
"a custom dataset",
"this purpose",
"our method",
"remarkable performance",
"yolov8x",
"the top-performing model",
"a notable f1 score",
"98%",
"a high mean average precision",
"(map",
"98.9%",
"this study’s novelty",
"its comprehensive evaluation",
"deep learning models",
"pcb defect detection",
"promising prospects",
"precision",
"real-time capabilities",
"manufacturing processes",
"yolov8",
"98%",
"98.9%"
] |
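For orientation, training and validating a YOLOv8 detector with the ultralytics package follows the pattern sketched below. Here 'pcb_defects.yaml' is a hypothetical dataset config listing image paths and defect class names, and the epochs and image size are illustrative assumptions rather than the paper's settings.

```python
from ultralytics import YOLO

model = YOLO("yolov8x.pt")                 # pretrained YOLOv8x checkpoint
model.train(data="pcb_defects.yaml",       # hypothetical dataset config
            epochs=100, imgsz=640)
metrics = model.val()                      # precision/recall/mAP on the val split
print("mAP@0.5:", metrics.box.map50)       # cf. the 98.9% mAP reported above
```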
Deep learning in medical image super resolution: a review | [
"Hujun Yang",
"Zhongyang Wang",
"Xinyao Liu",
"Chuangang Li",
"Junchang Xin",
"Zhiqiong Wang"
] | Super-resolution (SR) reconstruction is a hot topic in medical image processing. SR implies reconstructing corresponding high-resolution (HR) images from observed low-resolution (LR) images or image sequences. In recent years, significant breakthroughs in SR based on deep learning have been made, and many advanced results have been achieved. However, there is a lack of review literature that summarizes the field’s current state and provides an outlook on future developments. Therefore, we provide a comprehensive summary of the literature on medical image SR (MedSR) based on deep learning since 2018 in five aspects: (1) The SR problem of medical images is described, and the methods of image degradation are summarized. (2) We divide the existing studies into three categories: two-dimensional image SR (2DISR), three-dimensional image SR (3DISR), and video SR (VSR). Each category is subdivided. We analyze the network structure and method characteristics of typical methods. (3) Existing SR reconstruction quality evaluation metrics are presented in detail. (4) The application of MedSR methods based on deep learning is discussed. (5) We discuss the challenges of this phase and point out valuable research directions. | 10.1007/s10489-023-04566-9 | deep learning in medical image super resolution: a review | super-resolution (sr) reconstruction is a hot topic in medical image processing. sr implies reconstructing corresponding high-resolution (hr) images from observed low-resolution (lr) images or image sequences. in recent years, significant breakthroughs in sr based on deep learning have been made, and many advanced results have been achieved. however, there is a lack of review literature that summarizes the field’s current state and provides an outlook on future developments. therefore, we provide a comprehensive summary of the literature on medical image sr (medsr) based on deep learning since 2018 in five aspects: (1) the sr problem of medical images is described, and the methods of image degradation are summarized. (2) we divide the existing studies into three categories: two-dimensional image sr (2disr), three-dimensional image sr (3disr), and video sr (vsr). each category is subdivided. we analyze the network structure and method characteristics of typical methods. (3) existing sr reconstruction quality evaluation metrics are presented in detail. (4) the application of medsr methods based on deep learning is discussed. (5) we discuss the challenges of this phase and point out valuable research directions. | [
"-",
"resolution (sr) reconstruction",
"a hot topic",
"medical image processing",
"sr",
"hr",
"observed low-resolution (lr) images",
"image sequences",
"recent years",
"significant breakthroughs",
"sr",
"deep learning",
"many advanced results",
"a lack",
"review literature",
"that",
"the field’s current state",
"an outlook",
"future developments",
"we",
"a comprehensive summary",
"the literature",
"medical image sr",
"medsr",
"deep learning",
"five aspects",
"the sr problem",
"medical images",
"the methods",
"image degradation",
"we",
"the existing studies",
"three categories",
"two-dimensional image sr",
"three-dimensional image sr",
"video sr",
"vsr",
"each category",
"we",
"the network structure",
"method",
"characteristics",
"typical methods",
"existing sr reconstruction quality evaluation metrics",
"detail",
"the application",
"medsr methods",
"deep learning",
"we",
"the challenges",
"this phase",
"valuable research directions",
"recent years",
"2018",
"five",
"1",
"2",
"three",
"two",
"2disr",
"three",
"vsr",
"3",
"4",
"5"
] |
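Point (1) of the review above concerns image degradation. The SR literature commonly assumes the low-resolution image is the high-resolution image blurred, downsampled, and corrupted with noise; a tiny sketch of that degradation follows, where the box-blur kernel, scale factor, and noise level are all illustrative choices.

```python
import numpy as np

def degrade(hr, scale=4, sigma=2.0, seed=0):
    """LR = blur(HR) downsampled by `scale`, plus additive noise."""
    h, w = hr.shape
    h, w = h - h % scale, w - w % scale          # crop to a multiple of scale
    # A box blur followed by stride-`scale` subsampling collapses into
    # block averaging over non-overlapping scale x scale patches.
    blocks = hr[:h, :w].reshape(h // scale, scale, w // scale, scale)
    lr = blocks.mean(axis=(1, 3))
    rng = np.random.default_rng(seed)
    return lr + rng.normal(0.0, sigma, lr.shape)

hr = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))  # synthetic 64x64 "HR" image
lr = degrade(hr)
print(hr.shape, "->", lr.shape)                     # (64, 64) -> (16, 16)
```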
Aspect-oriented extraction and sentiment analysis using optimized hybrid deep learning approaches | [
"Srividya Kotagiri",
"A. Mary Sowjanya",
"B. Anilkumar",
"N Lakshmi Devi"
] | Aspect-oriented extraction involves the identification and extraction of specific aspects, features, or entities within a piece of text. Traditional methods often struggled with the complexity and variability of language, leading to the exploration of advanced deep learning approaches. In the realm of sentiment analysis, the conventional approaches often fall short when it comes to providing a nuanced understanding of sentiments expressed in textual data. Traditional sentiment analysis models often overlook the specific aspects or entities within the text that contribute to the overall sentiment. This limitation poses a significant challenge for businesses and organizations aiming to gain detailed insights into customer opinions, product reviews, and other forms of user-generated content.In this research, we propose an innovative approach for aspect-oriented extraction and sentiment analysis leveraging optimized hybrid deep learning techniques. Our methodology integrates the powerful capabilities of deep learning models with the efficiency of Reptile Search Optimization. Furthermore, we introduce an advanced sentiment analysis framework employing the state-of-the-art Extreme Gradient Boosting Algorithm. The fusion of these techniques aims to enhance the precision and interpretability of aspect-oriented sentiment analysis. The proposed approach first utilizes deep learning architectures to extract and comprehend diverse aspects within textual data. Through the incorporation of Reptile Search Optimization, we optimize the learning process, ensuring adaptability and improved model generalization across various datasets. Subsequently, the sentiment analysis phase employs the robust Extreme Gradient Boosting Algorithm, known for its effectiveness in handling complex relationships and patterns within data. Our experiments, conducted on diverse datasets, demonstrate the superior performance of the proposed methodology in comparison to traditional approaches. The optimized hybrid deep learning approach, coupled with the Reptile Search Optimization and Extreme Gradient Boosting Algorithm, showcases promising results in accurately capturing nuanced sentiments associated with different aspects. This research contributes to the advancement of aspect-oriented sentiment analysis techniques, offering a comprehensive and efficient solution for understanding sentiment nuances in textual data across various domains. The ResNet 50 and EfficientNet B7 architecture of the modified pre-trained model is proposed for the aspect extraction function. The Reptile Search Optimization based Extreme Gradient Boosting Algorithm (RSO-EGBA) is proposed to analyze and predict customer sentiments. The execution of this study is carried out using python software. It has been observed that the overall accuracy of our proposed method is 99.8%, while that of the other state-of-the-art. The overall accuracy of our proposed method shows an increment of 9–16% from that of the state-of-the-art methods. | 10.1007/s11042-024-18964-9 | aspect-oriented extraction and sentiment analysis using optimized hybrid deep learning approaches | aspect-oriented extraction involves the identification and extraction of specific aspects, features, or entities within a piece of text. traditional methods often struggled with the complexity and variability of language, leading to the exploration of advanced deep learning approaches. 
in the realm of sentiment analysis, the conventional approaches often fall short when it comes to providing a nuanced understanding of sentiments expressed in textual data. traditional sentiment analysis models often overlook the specific aspects or entities within the text that contribute to the overall sentiment. this limitation poses a significant challenge for businesses and organizations aiming to gain detailed insights into customer opinions, product reviews, and other forms of user-generated content. in this research, we propose an innovative approach for aspect-oriented extraction and sentiment analysis leveraging optimized hybrid deep learning techniques. our methodology integrates the powerful capabilities of deep learning models with the efficiency of reptile search optimization. furthermore, we introduce an advanced sentiment analysis framework employing the state-of-the-art extreme gradient boosting algorithm. the fusion of these techniques aims to enhance the precision and interpretability of aspect-oriented sentiment analysis. the proposed approach first utilizes deep learning architectures to extract and comprehend diverse aspects within textual data. through the incorporation of reptile search optimization, we optimize the learning process, ensuring adaptability and improved model generalization across various datasets. subsequently, the sentiment analysis phase employs the robust extreme gradient boosting algorithm, known for its effectiveness in handling complex relationships and patterns within data. our experiments, conducted on diverse datasets, demonstrate the superior performance of the proposed methodology in comparison to traditional approaches. the optimized hybrid deep learning approach, coupled with the reptile search optimization and extreme gradient boosting algorithm, showcases promising results in accurately capturing nuanced sentiments associated with different aspects. this research contributes to the advancement of aspect-oriented sentiment analysis techniques, offering a comprehensive and efficient solution for understanding sentiment nuances in textual data across various domains. modified pre-trained resnet 50 and efficientnet b7 architectures are proposed for the aspect extraction function. the reptile search optimization based extreme gradient boosting algorithm (rso-egba) is proposed to analyze and predict customer sentiments. the study is implemented in python. the overall accuracy of our proposed method is 99.8%, an improvement of 9–16% over the state-of-the-art methods. | [
"aspect-oriented extraction",
"the identification",
"extraction",
"specific aspects",
"features",
"entities",
"a piece",
"text",
"traditional methods",
"the complexity",
"variability",
"language",
"the exploration",
"advanced deep learning approaches",
"the realm",
"sentiment analysis",
"the conventional approaches",
"it",
"a nuanced understanding",
"sentiments",
"textual data",
"traditional sentiment analysis models",
"the specific aspects",
"entities",
"the text",
"that",
"the overall sentiment",
"this limitation",
"a significant challenge",
"businesses",
"organizations",
"detailed insights",
"customer opinions",
"product reviews",
"other forms",
"we",
"an innovative approach",
"aspect-oriented extraction",
"sentiment analysis",
"optimized hybrid deep learning techniques",
"our methodology",
"the powerful capabilities",
"deep learning models",
"the efficiency",
"reptile search optimization",
"we",
"an advanced sentiment analysis framework",
"the-art",
"algorithm",
"the fusion",
"these techniques",
"the precision",
"interpretability",
"aspect-oriented sentiment analysis",
"the proposed approach",
"deep learning architectures",
"diverse aspects",
"textual data",
"the incorporation",
"reptile search optimization",
"we",
"the learning process",
"adaptability",
"improved model generalization",
"various datasets",
"the sentiment analysis phase",
"the robust extreme gradient",
"algorithm",
"its effectiveness",
"complex relationships",
"patterns",
"data",
"our experiments",
"diverse datasets",
"the superior performance",
"the proposed methodology",
"comparison",
"traditional approaches",
"the optimized hybrid deep learning approach",
"the reptile search optimization",
"extreme gradient",
"algorithm",
"promising results",
"nuanced sentiments",
"different aspects",
"this research",
"the advancement",
"aspect-oriented sentiment analysis techniques",
"a comprehensive and efficient solution",
"sentiment nuances",
"textual data",
"various domains",
"the resnet 50 and efficientnet b7 architecture",
"the modified pre-trained model",
"the aspect extraction function",
"the reptile search optimization",
"based extreme gradient",
"algorithm",
"rso",
"egba",
"customer sentiments",
"the execution",
"this study",
"python software",
"it",
"the overall accuracy",
"our proposed method",
"99.8%",
"the other state",
"the-art",
"the overall accuracy",
"our proposed method",
"an increment",
"9–16%",
"that",
"the-art",
"first",
"50",
"99.8%",
"9–16%"
] |
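The record above pairs deep feature extraction with an RSO-tuned gradient boosting classifier. As a rough illustration of that two-stage idea only, the sketch below substitutes scikit-learn's GradientBoostingClassifier for XGBoost, a plain random search for Reptile Search Optimization, and synthetic features for the CNN embeddings; none of it reproduces the paper's actual pipeline.

```python
# Sketch of a two-stage pipeline: deep features -> optimizer-tuned gradient boosting.
# Assumptions: random search stands in for Reptile Search Optimization (RSO), and
# synthetic features stand in for ResNet-50/EfficientNet-B7 aspect embeddings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for aspect embeddings extracted by a pre-trained CNN.
X, y = make_classification(n_samples=600, n_features=64, n_informative=20, random_state=0)

def sample_params():
    """Draw one candidate hyperparameter set (the role the metaheuristic plays)."""
    return {
        "n_estimators": int(rng.integers(50, 300)),
        "learning_rate": float(rng.uniform(0.01, 0.3)),
        "max_depth": int(rng.integers(2, 6)),
    }

best_score, best_params = -np.inf, None
for _ in range(10):  # small search budget, purely for illustration
    params = sample_params()
    score = cross_val_score(GradientBoostingClassifier(**params), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_params = score, params

print("best CV accuracy:", round(best_score, 3), "with", best_params)
```

Swapping the random-search loop for a population-based metaheuristic such as RSO or PSO changes only how `sample_params` candidates are proposed; the fitness evaluation stays the same.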
A review of research on micro-expression recognition algorithms based on deep learning | [
"Fan Zhang",
"Lin Chai"
] | Micro-expression is a special kind of human emotion. Due to its characteristics of short time, low intensity, and local region, micro-expression recognition is a difficult task. At the same time, it is a natural, spontaneous, and unconcealable emotion that can well convey a person's actual psychological state and, therefore, has certain research value and practical significance. This paper focuses on micro-expression recognition in the field of deep learning, surveying existing micro-expression recognition research and identifying current research trends. Because the previous literature reviewing micro-expressions ignored handcrafted features as an important part of the micro-expression recognition framework and lacked an analysis of the various enhancement-processing options, a new micro-expression recognition framework based on deep learning is proposed. The model is designed from the perspective of modularity and streaming data. On the other hand, unlike the previous practice of feeding the data directly into the network for training and recognition, handcrafted features are used as the initial encoding of the micro-expression recognition data; the deep model is then trained on this encoding, a feature enhancement module is incorporated through modular embedding, and classification and recognition are performed last. The article provides a detailed summary and analysis of each part of the whole framework and a comprehensive introduction to the current problems, experimental protocols, evaluation metrics, and application areas. Finally, it summarizes and gives possible future research directions. Therefore, this paper provides a comprehensive summary and analysis of micro-expression recognition in deep learning so that researchers in the field can gain a new understanding of its development. On the other hand, it proposes a new recognition framework that also provides a reference for future research. | 10.1007/s00521-024-10262-7 | a review of research on micro-expression recognition algorithms based on deep learning | micro-expression is a special kind of human emotion. due to its characteristics of short time, low intensity, and local region, micro-expression recognition is a difficult task. at the same time, it is a natural, spontaneous, and unconcealable emotion that can well convey a person's actual psychological state and, therefore, has certain research value and practical significance. this paper focuses on micro-expression recognition in the field of deep learning, surveying existing micro-expression recognition research and identifying current research trends. because the previous literature reviewing micro-expressions ignored handcrafted features as an important part of the micro-expression recognition framework and lacked an analysis of the various enhancement-processing options, a new micro-expression recognition framework based on deep learning is proposed. the model is designed from the perspective of modularity and streaming data.
on the other hand, unlike the previous practice of feeding the data directly into the network for training and recognition, handcrafted features are used as the initial encoding of the micro-expression recognition data; the deep model is then trained on this encoding, a feature enhancement module is incorporated through modular embedding, and classification and recognition are performed last. the article provides a detailed summary and analysis of each part of the whole framework and a comprehensive introduction to the current problems, experimental protocols, evaluation metrics, and application areas. finally, it summarizes and gives possible future research directions. therefore, this paper provides a comprehensive summary and analysis of micro-expression recognition in deep learning so that researchers in the field can gain a new understanding of its development. on the other hand, it proposes a new recognition framework that also provides a reference for future research. | [
"micro",
"-",
"expression",
"a special kind",
"human emotion",
"its characteristics",
"short time",
"low intensity",
"local region",
"micro-expression recognition",
"a difficult task",
"the same time",
"it",
"a natural, spontaneous, and unconcealable emotion",
"that",
"a person's actual psychological state",
"certain research value",
"practical significance",
"this paper",
"micro-expression recognition",
"the field",
"deep learning",
"the survey",
"understanding",
"existing micro-expression recognition research",
"the research trend",
"the previous literature",
"micro-expression review",
"the handcrafted features",
"an important part",
"the micro-expression recognition framework",
"the same time",
"the analysis",
"the various enhancement processing",
"a new micro-expression recognition framework",
"deep learning",
"the model",
"the perspective",
"modularity and streaming data",
"the other hand",
"the previous process",
"the data",
"the network",
"training",
"recognition",
"the handcrafted features",
"the initial encoding",
"the micro-expression recognition data",
"the training",
"learning",
"the deep model",
"the same time",
"the modular embedding approach",
"the feature enhancement module",
"finally the classification",
"recognition",
"the article",
"a detailed summary",
"analysis",
"each part",
"the whole framework",
"a comprehensive introduction",
"the current problems",
"experimental protocols",
"evaluation metrics",
"application areas",
"it",
"possible future research directions",
"this paper",
"a comprehensive summary",
"analysis",
"micro-expression recognition",
"deep learning",
"the related personnel",
"a new understanding",
"the development",
"this field",
"the other hand",
"it",
"a new recognition framework",
"that",
"a reference",
"the researchers' later research"
] |
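A minimal PyTorch sketch of the framework flow described in the record above (handcrafted initial encoding, deep backbone, pluggable enhancement module, classifier). The frame-difference encoding and the squeeze-and-excitation block are illustrative assumptions, not the review's prescribed choices.

```python
# Pipeline sketch: handcrafted initial encoding -> CNN backbone -> modular
# enhancement block -> classifier. All sizes are toy values.
import torch
import torch.nn as nn

class SEBlock(nn.Module):  # assumed enhancement module: squeeze-and-excitation gating
    def __init__(self, ch, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(),
                                nn.Linear(ch // r, ch), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # squeeze: global average pool per channel
        return x * w[:, :, None, None]       # excite: channel re-weighting

class MicroExprNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.enhance = SEBlock(32)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))
    def forward(self, onset, apex):
        x = apex - onset                      # "handcrafted" encoding: frame difference
        return self.head(self.enhance(self.backbone(x)))

onset, apex = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)  # toy frames
print(MicroExprNet()(onset, apex).shape)      # -> torch.Size([2, 5])
```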
On the use of deep learning for phase recovery | [
"Kaiqiang Wang",
"Li Song",
"Chutian Wang",
"Zhenbo Ren",
"Guangyuan Zhao",
"Jiazhen Dou",
"Jianglei Di",
"George Barbastathis",
"Renjie Zhou",
"Jianlin Zhao",
"Edmund Y. Lam"
] | Phase recovery (PR) refers to calculating the phase of the light field from its intensity measurements. As exemplified by applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and correcting the aberration of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL provides support for PR at the following three stages, namely, pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource (https://github.com/kqwang/phase-recovery) for readers to learn more about PR. | 10.1038/s41377-023-01340-x | on the use of deep learning for phase recovery | phase recovery (pr) refers to calculating the phase of the light field from its intensity measurements. as exemplified by applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, pr is essential for reconstructing the refractive index distribution or topography of an object and correcting the aberration of an imaging system. in recent years, deep learning (dl), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various pr problems. in this review, we first briefly introduce conventional methods for pr. then, we review how dl provides support for pr at the following three stages, namely, pre-processing, in-processing, and post-processing. we also review how dl is used in phase image processing. finally, we summarize the work in dl for pr and provide an outlook on how to better use dl to improve the reliability and efficiency of pr. furthermore, we present a live-updating resource (https://github.com/kqwang/phase-recovery) for readers to learn more about pr. | [
"phase recovery",
"(pr",
"the phase",
"the light field",
"its intensity measurements",
"quantitative phase imaging",
"coherent diffraction",
"adaptive optics",
"the refractive index distribution",
"topography",
"an object",
"the aberration",
"an imaging system",
"recent years",
"deep learning",
"dl",
"deep neural networks",
"unprecedented support",
"computational imaging",
"more efficient solutions",
"various pr problems",
"this review",
"we",
"conventional methods",
"we",
"dl",
"support",
"pr",
"the following three stages",
"-processing",
"processing",
"we",
"dl",
"phase image processing",
"we",
"the work",
"dl",
"pr",
"an outlook",
"dl",
"the reliability",
"efficiency",
"pr",
"we",
"a live-updating resource",
"https://github.com/kqwang/phase-recovery",
"readers",
"pr",
"recent years",
"first",
"three"
] |
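To make the PR problem concrete, here is a toy supervised setup: a far-field |FFT|^2 forward model generates intensity measurements from random phases, and a small CNN learns the inverse map. Both the forward model and the network are simplifications chosen for brevity, not the reviewed paper's method.

```python
# Toy learning-based phase recovery: train a CNN to map a simulated intensity
# measurement back to the phase that produced it.
import torch
import torch.nn as nn

def intensity_from_phase(phi):
    """Simulate a measurement: unit-amplitude field with phase phi, propagated by FFT."""
    field = torch.exp(1j * phi)
    return torch.fft.fft2(field).abs() ** 2

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(50):                            # tiny loop on random phase objects
    phi = torch.rand(8, 1, 32, 32) * 2 * torch.pi
    I = intensity_from_phase(phi)
    I = I / I.amax(dim=(-2, -1), keepdim=True)    # normalize each measurement
    loss = nn.functional.mse_loss(net(I), phi)
    opt.zero_grad(); loss.backward(); opt.step()
print("final training loss:", loss.item())
```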
A multimodal deep learning model for predicting severe hemorrhage in placenta previa | [
"Munetoshi Akazawa",
"Kazunori Hashimoto"
] | Placenta previa causes life-threatening bleeding, and accurate prediction of severe hemorrhage leads to risk stratification and optimum allocation of interventions. We aimed to use a multimodal deep learning model to predict severe hemorrhage. Using MRI T2-weighted images of the placenta and tabular data consisting of patient demographics and preoperative blood examination data, a multimodal deep learning model was constructed to predict cases of intraoperative blood loss > 2000 ml. We evaluated the prediction performance of the model by comparing it with that of two machine learning methods using only tabular data or only MRI images, as well as with that of two human expert obstetricians. Among the enrolled 48 patients, 26 (54.2%) lost > 2000 ml of blood and 22 (45.8%) lost < 2000 ml of blood. The multimodal deep learning model showed the best accuracy of 0.68 and AUC of 0.74, whereas the machine learning models using only tabular data and only MRI images had classification accuracies of 0.61 and 0.53, respectively. The human experts had median accuracies of 0.61. Multimodal deep learning models could integrate the two types of information and predict severe hemorrhage cases. The model might assist human experts in the prediction of intraoperative hemorrhage in the case of placenta previa. | 10.1038/s41598-023-44634-1 | a multimodal deep learning model for predicting severe hemorrhage in placenta previa | placenta previa causes life-threatening bleeding, and accurate prediction of severe hemorrhage leads to risk stratification and optimum allocation of interventions. we aimed to use a multimodal deep learning model to predict severe hemorrhage. using mri t2-weighted images of the placenta and tabular data consisting of patient demographics and preoperative blood examination data, a multimodal deep learning model was constructed to predict cases of intraoperative blood loss > 2000 ml. we evaluated the prediction performance of the model by comparing it with that of two machine learning methods using only tabular data or only mri images, as well as with that of two human expert obstetricians. among the enrolled 48 patients, 26 (54.2%) lost > 2000 ml of blood and 22 (45.8%) lost < 2000 ml of blood. the multimodal deep learning model showed the best accuracy of 0.68 and auc of 0.74, whereas the machine learning models using only tabular data and only mri images had classification accuracies of 0.61 and 0.53, respectively. the human experts had median accuracies of 0.61. multimodal deep learning models could integrate the two types of information and predict severe hemorrhage cases. the model might assist human experts in the prediction of intraoperative hemorrhage in the case of placenta previa. | [
"placenta previa",
"life-threatening bleeding",
"accurate prediction",
"severe hemorrhage",
"risk stratification",
"optimum allocation",
"interventions",
"we",
"a multimodal deep learning model",
"severe hemorrhage",
"mri t2-weighted image",
"the placenta and tabular data",
"patient demographics",
"preoperative blood examination data",
"a multimodal deep learning model",
"cases",
"intraoperative blood loss",
"we",
"the prediction performance",
"the model",
"it",
"that",
"two machine learning methods",
"only tabular data and mri images",
"that",
"two human expert obstetricians",
"the enrolled 48 patients",
"54.2%",
"2000 ml",
"blood",
"22 (45.8%",
"2000 ml",
"blood",
"multimodal deep learning model",
"the best accuracy",
"auc",
"the machine learning model",
"tabular data and mri images",
"a class accuracy",
"the human experts",
"median accuracies",
"multimodal deep learning models",
"the two types",
"information",
"severe hemorrhage cases",
"the model",
"human expert",
"the prediction",
"intraoperative hemorrhage",
"the case",
"placenta previa",
"placenta previa",
"2000 ml",
"two",
"two",
"48",
"26",
"54.2%",
"2000 ml",
"22",
"45.8%",
"2000 ml",
"0.68",
"0.74",
"0.61",
"0.53",
"0.61",
"two"
] |
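A minimal sketch of the two-branch fusion architecture this record describes: a CNN encodes the image, an MLP encodes the tabular features, and the concatenated embedding drives a binary logit. Layer sizes, the 12 tabular features, and the loss setup are illustrative assumptions, not the paper's specification.

```python
# Two-branch multimodal classifier: image CNN + tabular MLP -> fused binary head
# (blood loss > 2000 ml vs not). All dimensions are toy values.
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    def __init__(self, n_tabular=12):
        super().__init__()
        self.img = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())             # -> 16-dim image embedding
        self.tab = nn.Sequential(nn.Linear(n_tabular, 16), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))
    def forward(self, image, tabular):
        z = torch.cat([self.img(image), self.tab(tabular)], dim=1)  # fuse modalities
        return self.head(z).squeeze(1)                              # logit for BCE loss

model = MultimodalNet()
logits = model(torch.rand(4, 1, 128, 128), torch.rand(4, 12))
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.tensor([1.0, 0.0, 1.0, 0.0]))
print(logits.shape, loss.item())
```

The design choice illustrated here, late fusion by concatenation, is the simplest way to let gradients from one modality regularize the other; attention-based fusion is a common alternative.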
Curriculum learning for ab initio deep learned refractive optics | [
"Xinge Yang",
"Qiang Fu",
"Wolfgang Heidrich"
] | Deep optical optimization has recently emerged as a new paradigm for designing computational imaging systems using only the output image as the objective. However, it has been limited to either simple optical systems consisting of a single element such as a diffractive optical element or metalens, or the fine-tuning of compound lenses from good initial designs. Here we present a DeepLens design method based on curriculum learning, which is able to learn optical designs of compound lenses ab initio from randomly initialized surfaces without human intervention, therefore overcoming the need for a good initial design. We demonstrate the effectiveness of our approach by fully automatically designing both classical imaging lenses and a large field-of-view extended depth-of-field computational lens in a cellphone-style form factor, with highly aspheric surfaces and a short back focal length. | 10.1038/s41467-024-50835-7 | curriculum learning for ab initio deep learned refractive optics | deep optical optimization has recently emerged as a new paradigm for designing computational imaging systems using only the output image as the objective. however, it has been limited to either simple optical systems consisting of a single element such as a diffractive optical element or metalens, or the fine-tuning of compound lenses from good initial designs. here we present a deeplens design method based on curriculum learning, which is able to learn optical designs of compound lenses ab initio from randomly initialized surfaces without human intervention, therefore overcoming the need for a good initial design. we demonstrate the effectiveness of our approach by fully automatically designing both classical imaging lenses and a large field-of-view extended depth-of-field computational lens in a cellphone-style form factor, with highly aspheric surfaces and a short back focal length. | [
"deep optical optimization",
"a new paradigm",
"computational imaging systems",
"only the output image",
"the objective",
"it",
"either simple optical systems",
"a single element",
"a diffractive optical element",
"metalens",
"the fine-tuning",
"compound lenses",
"good initial designs",
"we",
"a deeplens design method",
"curriculum learning",
"which",
"optical designs",
"compound lenses",
"ab initio",
"randomly initialized surfaces",
"human intervention",
"the need",
"a good initial design",
"we",
"the effectiveness",
"our approach",
"both classical imaging lenses",
"view",
"field",
"a cellphone-style form factor",
"highly aspheric surfaces",
"a short back focal length",
"deeplens"
] |
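A generic toy of the curriculum idea from the record above, under the assumption that "difficulty" can be staged by widening the sampled aperture: a placeholder model is fitted to a placeholder objective over progressively larger domains. The paper's differentiable ray-tracing objective is not reproduced here.

```python
# Curriculum sketch: optimize on an easy version of the task first, then widen it,
# instead of attacking the full design problem from a random initialization.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
target = lambda r: torch.sin(3 * r)            # placeholder for the true design objective

for stage, aperture in enumerate([0.2, 0.5, 1.0]):   # curriculum: easy -> hard
    for _ in range(200):
        r = (torch.rand(64, 1) * 2 - 1) * aperture    # sample inputs in current aperture
        loss = ((model(r) - target(r)) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"stage {stage} (aperture {aperture}): loss {loss.item():.4f}")
```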
HDLP: air quality modeling with hybrid deep learning approaches and particle swam optimization | [
"Elmenawy Osman",
"C. Banerjee",
"Ajeet Singh Poonia"
] | Predicting air pollution in cities has become an important tool for preventing its negative impacts. Therefore, citizens should be aware of the air quality level, especially individuals suffering from diseases caused by air pollutants. Collective efforts from researchers, environmental institutions, governments, industrial companies, and policy makers are shaping the future of the Air Quality Index (AQI) to effectively address severe air pollution in urban areas. Many air quality prediction models have been introduced in the literature; modern advances in deep learning techniques are promising more precise prediction results and data integration. The aim of this paper is to review methods, contributions, findings, limitations, and gaps in predicting the air quality index and PM2.5 concentrations using a hybrid deep learning approach. A literature review led researchers to propose a Hybrid Deep Learning model with Particle Swarm Optimization (HDLP), which combines CNN, LSTM, and PSO. In this algorithm, the Discrete Wavelet Transform (DWT) is first used to decompose the air pollution signal, which is then fed to a CNN-LSTM neural network tuned by PSO to obtain the final prediction result. Models with optimized input parameters are trained on the original data and assessed in order to outperform current models. | 10.1007/s11334-024-00559-0 | hdlp: air quality modeling with hybrid deep learning approaches and particle swam optimization | predicting air pollution in cities has become an important tool for preventing its negative impacts. therefore, citizens should be aware of the air quality level, especially individuals suffering from diseases caused by air pollutants. collective efforts from researchers, environmental institutions, governments, industrial companies, and policy makers are shaping the future of the air quality index (aqi) to effectively address severe air pollution in urban areas. many air quality prediction models have been introduced in the literature; modern advances in deep learning techniques are promising more precise prediction results and data integration. the aim of this paper is to review methods, contributions, findings, limitations, and gaps in predicting the air quality index and pm2.5 concentrations using a hybrid deep learning approach. a literature review led researchers to propose a hybrid deep learning model with particle swarm optimization (hdlp), which combines cnn, lstm, and pso. in this algorithm, the discrete wavelet transform (dwt) is first used to decompose the air pollution signal, which is then fed to a cnn-lstm neural network tuned by pso to obtain the final prediction result. models with optimized input parameters are trained on the original data and assessed in order to outperform current models. | [
"air pollution",
"cities",
"an important tool",
"its negative impacts",
"citizens",
"air quality level",
"individuals",
"diseases",
"air pollutants",
"collective efforts",
"researchers",
"environmental institutions",
"governments",
"industrial companies",
"policy makers",
"the future",
"the air quality index",
"aqi",
"severe air pollution",
"urban areas",
"many air quality prediction models",
"the literature",
"modern advances",
"deep learning techniques",
"more precise prediction results",
"data integration",
"the aim",
"this paper",
"methods",
"contributions",
"findings",
"limitations",
"gaps",
"air quality index",
"pm2.5 concentrations",
"a hybrid deep learning approach",
"a literature review",
"researchers",
"a hybrid deep learning model",
"particle swarm optimization",
"hdlp",
"which",
"cnn",
"lstm",
"algorithm",
"first discrete wavelet transform",
"dwt",
"the air pollution signal",
"it",
"cnn-lstm neural network",
"pso optimization",
"the final prediction result",
"optimized parameters input models",
"the original data",
"addition",
"the beneficial assessment cycle",
"current models",
"cnn",
"first",
"fed",
"cnn"
] |
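A sketch of the HDLP signal path described above, with simplifications stated up front: a hand-coded one-level Haar transform stands in for the DWT, the PSO tuning loop is omitted, and all layer sizes are arbitrary.

```python
# Wavelet-decompose the pollution series, then feed approximation and detail
# coefficients to a CNN-LSTM regressor for one-step prediction.
import torch
import torch.nn as nn

def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail), each half the length of x."""
    even, odd = x[..., ::2], x[..., 1::2]
    return (even + odd) / 2 ** 0.5, (even - odd) / 2 ** 0.5

class CNNLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(2, 16, 3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.out = nn.Linear(32, 1)
    def forward(self, series):
        approx, detail = haar_dwt(series)            # each: (batch, length/2)
        x = torch.stack([approx, detail], dim=1)     # -> (batch, 2 channels, length/2)
        x = self.conv(x).transpose(1, 2)             # -> (batch, time, features)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])                    # predict the next AQI/PM2.5 value

series = torch.rand(8, 48)                           # e.g., 48 hourly readings per sample
print(CNNLSTM()(series).shape)                       # -> torch.Size([8, 1])
```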
Research progress in water quality prediction based on deep learning technology: a review | [
"Wenhao Li",
"Yin Zhao",
"Yining Zhu",
"Zhongtian Dong",
"Fenghe Wang",
"Fengliang Huang"
] | Water, an invaluable and non-renewable resource, plays an indispensable role in human survival and societal development. Accurate forecasting of water quality involves early identification of future pollutant concentrations and water quality indices, enabling evidence-based decision-making and targeted environmental interventions. The emergence of advanced computational technologies, particularly deep learning, has garnered considerable interest among researchers for applications in water quality prediction because of its robust data analytics capabilities. This article comprehensively reviews the deployment of deep learning methodologies in water quality forecasting, encompassing single-model and mixed-model approaches. Additionally, we delineate optimization strategies, data fusion techniques, and other factors influencing the efficacy of deep learning-based water quality prediction models, because understanding and mastering these factors are crucial for accurate water quality prediction. Although challenges such as data scarcity, limited long-term prediction accuracy, and limited deployments of large-scale models persist, future research aims to address these limitations by refining prediction algorithms, leveraging high-dimensional datasets, evaluating model performance, and broadening large-scale model application. These efforts contribute to precise water resource management and environmental conservation. | 10.1007/s11356-024-33058-7 | research progress in water quality prediction based on deep learning technology: a review | water, an invaluable and non-renewable resource, plays an indispensable role in human survival and societal development. accurate forecasting of water quality involves early identification of future pollutant concentrations and water quality indices, enabling evidence-based decision-making and targeted environmental interventions. the emergence of advanced computational technologies, particularly deep learning, has garnered considerable interest among researchers for applications in water quality prediction because of its robust data analytics capabilities. this article comprehensively reviews the deployment of deep learning methodologies in water quality forecasting, encompassing single-model and mixed-model approaches. additionally, we delineate optimization strategies, data fusion techniques, and other factors influencing the efficacy of deep learning-based water quality prediction models, because understanding and mastering these factors are crucial for accurate water quality prediction. although challenges such as data scarcity, limited long-term prediction accuracy, and limited deployments of large-scale models persist, future research aims to address these limitations by refining prediction algorithms, leveraging high-dimensional datasets, evaluating model performance, and broadening large-scale model application. these efforts contribute to precise water resource management and environmental conservation. | [
"water",
"an invaluable and non-renewable resource",
"an indispensable role",
"human survival",
"societal development",
"accurate forecasting",
"water quality",
"early identification",
"future pollutant concentrations",
"water quality indices",
"evidence-based decision-making and targeted environmental interventions",
"the emergence",
"advanced computational technologies",
"particularly deep learning",
"considerable interest",
"researchers",
"applications",
"water quality prediction",
"its robust data analytics capabilities",
"this article",
"the deployment",
"deep learning methodologies",
"water quality forecasting",
"single-model and mixed-model approaches",
"we",
"optimization strategies",
"data fusion techniques",
"other factors",
"the efficacy",
"deep learning-based water quality prediction models",
"understanding",
"these factors",
"accurate water quality prediction",
"challenges",
"data scarcity",
"long-term prediction accuracy",
"limited deployments",
"large-scale models",
"future research",
"these limitations",
"refining prediction algorithms",
"high-dimensional datasets",
"model performance",
"large-scale model application",
"these efforts",
"precise water resource management",
"environmental conservation"
] |
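For concreteness, here is the single-model setup this literature uses most often, sketched on synthetic data: sliding windows over a series feed an LSTM one-step-ahead forecaster. The window length, network size, and sine-wave series are placeholders for a real water-quality signal.

```python
# Frame a 1-D water-quality series as (window, next-value) pairs, then fit an LSTM.
import torch
import torch.nn as nn

def make_windows(series, width=24):
    """Turn a 1-D series into supervised (window, next-value) training pairs."""
    xs = torch.stack([series[i:i + width] for i in range(len(series) - width)])
    ys = series[width:]
    return xs.unsqueeze(-1), ys.unsqueeze(-1)        # (N, width, 1), (N, 1)

t = torch.linspace(0, 20, 500)
series = torch.sin(t) + 0.1 * torch.randn(500)       # stand-in for a DO/pH/turbidity signal

X, y = make_windows(series)
lstm, head = nn.LSTM(1, 32, batch_first=True), nn.Linear(32, 1)
opt = torch.optim.Adam([*lstm.parameters(), *head.parameters()], lr=1e-2)
for _ in range(100):
    out, _ = lstm(X)
    loss = nn.functional.mse_loss(head(out[:, -1]), y)
    opt.zero_grad(); loss.backward(); opt.step()
print("train MSE:", loss.item())
```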
Implementing a Hierarchical Deep Learning Approach for Simulating Multilevel Auction Data | [
"Igor Sadoune",
"Marcelin Joanis",
"Andrea Lodi"
] | We present a deep learning solution to address the challenges of simulating realistic synthetic first-price sealed-bid auction data. The complexities encountered in this type of auction data include high-cardinality discrete feature spaces and a multilevel structure arising from multiple bids associated with a single auction instance. Our methodology combines deep generative modeling (DGM) with an artificial learner that predicts the conditional bid distribution based on auction characteristics, contributing to advancements in simulation-based research. This approach lays the groundwork for creating realistic auction environments suitable for agent-based learning and modeling applications. Our contribution is twofold: we introduce a comprehensive methodology for simulating multilevel discrete auction data, and we underscore the potential of DGM as a powerful instrument for refining simulation techniques and fostering the development of economic models grounded in generative AI. | 10.1007/s10614-024-10622-4 | implementing a hierarchical deep learning approach for simulating multilevel auction data | we present a deep learning solution to address the challenges of simulating realistic synthetic first-price sealed-bid auction data. the complexities encountered in this type of auction data include high-cardinality discrete feature spaces and a multilevel structure arising from multiple bids associated with a single auction instance. our methodology combines deep generative modeling (dgm) with an artificial learner that predicts the conditional bid distribution based on auction characteristics, contributing to advancements in simulation-based research. this approach lays the groundwork for creating realistic auction environments suitable for agent-based learning and modeling applications. our contribution is twofold: we introduce a comprehensive methodology for simulating multilevel discrete auction data, and we underscore the potential of dgm as a powerful instrument for refining simulation techniques and fostering the development of economic models grounded in generative ai. | [
"we",
"a deep learning solution",
"the challenges",
"realistic synthetic first-price sealed-bid auction data",
"the complexities",
"this type",
"auction data",
"high-cardinality discrete feature spaces",
"a multilevel structure",
"multiple bids",
"a single auction instance",
"our methodology",
"deep generative modeling",
"(dgm",
"an artificial learner",
"that",
"the conditional bid distribution",
"auction characteristics",
"advancements",
"simulation-based research",
"this approach",
"the groundwork",
"realistic auction environments",
"agent-based learning and modeling applications",
"our contribution",
"we",
"a comprehensive methodology",
"multilevel discrete auction data",
"we",
"the potential",
"dgm",
"a powerful instrument",
"simulation techniques",
"the development",
"economic models",
"generative ai",
"first"
] |
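A sketch of the conditional-generation mechanism the record above describes, with assumptions flagged: an untrained MLP generator maps auction features plus bid-level noise to bids, which reproduces the multilevel shape (several bids per auction) but omits the paper's DGM training objective (e.g., an adversarial or likelihood loss).

```python
# Conditional bid generator: one auction context row can be expanded into many
# synthetic bids, giving the multilevel structure of first-price sealed-bid data.
import torch
import torch.nn as nn

class BidGenerator(nn.Module):
    def __init__(self, n_ctx=6, n_noise=4):
        super().__init__()
        self.n_noise = n_noise
        self.net = nn.Sequential(nn.Linear(n_ctx + n_noise, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def sample(self, ctx, n_bids):
        """Draw n_bids synthetic bids for each auction context row in ctx."""
        ctx = ctx.repeat_interleave(n_bids, dim=0)    # one row per (auction, bid) pair
        z = torch.randn(ctx.size(0), self.n_noise)    # bid-level noise source
        return self.net(torch.cat([ctx, z], dim=1)).view(-1, n_bids)

gen = BidGenerator()
auction_features = torch.rand(3, 6)      # 3 auctions described by 6 features each
bids = gen.sample(auction_features, n_bids=5)
print(bids.shape)                         # -> torch.Size([3, 5]): 5 bids per auction
```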
A deep learning based approach for image retrieval extraction in mobile edge computing | [
"Jamal Alasadi",
"Ghassan F. Bati",
"Ahmed Al Hilli"
] | Deep learning has been widely explored in 5G applications, including computer vision, the Internet of Things (IoT), and intermedia classification. However, applying the deep learning approach in limited-resource mobile devices is one of the most challenging issues. At the same time, users experience poor Quality of Service (QoS) (e.g., service latency, outcome accuracy, and achievable data rate) while interacting with machine learning applications. Mobile edge computing (MEC) has been introduced as a cooperative approach to bring computation resources in proximity to end-user devices to overcome these limitations. This article aims to design a novel image retrieval extraction algorithm based on convolution neural network (CNN) learning and computational task offloading to support machine learning-based mobile applications in resource-limited and uncertain environments. Accordingly, we leverage the framework of image retrieval extraction and introduce three approaches: first, strict privacy preservation to protect personal data; second, network traffic reduction; and third, minimized feature matching time. Our simulation results, associated with real-time experiments on a small-scale MEC server, have shown the effectiveness of the proposed deep learning-based approach over existing schemes. The source code is available here: https://github.com/jamalalasadi/CNN_Image_retrieval. | 10.1007/s43995-024-00060-6 | a deep learning based approach for image retrieval extraction in mobile edge computing | deep learning has been widely explored in 5g applications, including computer vision, the internet of things (iot), and intermedia classification. however, applying the deep learning approach in limited-resource mobile devices is one of the most challenging issues. at the same time, users experience poor quality of service (qos) (e.g., service latency, outcome accuracy, and achievable data rate) while interacting with machine learning applications. mobile edge computing (mec) has been introduced as a cooperative approach to bring computation resources in proximity to end-user devices to overcome these limitations. this article aims to design a novel image retrieval extraction algorithm based on convolution neural network (cnn) learning and computational task offloading to support machine learning-based mobile applications in resource-limited and uncertain environments. accordingly, we leverage the framework of image retrieval extraction and introduce three approaches: first, strict privacy preservation to protect personal data; second, network traffic reduction; and third, minimized feature matching time. our simulation results, associated with real-time experiments on a small-scale mec server, have shown the effectiveness of the proposed deep learning-based approach over existing schemes. the source code is available here: https://github.com/jamalalasadi/cnn_image_retrieval. | [
"deep learning",
"5g applications",
"computer vision",
"the internet",
"things",
"iot",
"intermedia classification",
"the deep learning approach",
"limited-resource mobile devices",
"the most challenging issues",
"the same time",
"users’ experience",
"terms",
"quality",
"service",
"qos",
"machine learning applications",
"mobile edge computing",
"(mec",
"a cooperative approach",
"computation resources",
"proximity",
"end-user devices",
"these limitations",
"this article",
"a novel image reiterative extraction algorithm",
"convolution neural network",
"(cnn) learning",
"computational task",
"machine learning-based mobile applications",
"resource-limited and uncertain environments",
"we",
"the framework",
"image retrieval extraction",
"three approaches",
"privacy preservation",
"personal data",
"feature matching time",
"our simulation results",
"real-time experiments",
"a small-scale mec server",
"the effectiveness",
"the proposed deep learning-based approach",
"existing schemes",
"the source code",
"https://github.com/jamalalasadi/cnn_image_retrieval",
"5",
"cnn",
"three",
"first",
"second",
"third"
] |
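A sketch of the retrieval step such an edge system would run, assuming a CNN embedding plus cosine ranking; the randomly initialized torchvision ResNet-18 merely stands in for the paper's trained extractor, and the offloading and privacy mechanisms are not modeled here.

```python
# CNN-based retrieval: embed images, then rank the gallery by cosine similarity.
import torch
import torchvision.models as models

resnet = models.resnet18()
resnet.fc = torch.nn.Identity()               # drop the classifier: keep 512-d embeddings
resnet.eval()

@torch.no_grad()
def embed(images):
    feats = resnet(images)
    return torch.nn.functional.normalize(feats, dim=1)   # unit norm -> cosine via dot

gallery = embed(torch.rand(20, 3, 224, 224))  # pre-computed database embeddings
query = embed(torch.rand(1, 3, 224, 224))
scores = query @ gallery.T                    # cosine similarities to every gallery item
print("top-5 matches:", scores.topk(5).indices.tolist())
```

In an MEC deployment, the gallery embeddings would live on the edge server, so only the compact query embedding crosses the network, which is what drives the traffic-reduction claim.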
Deep learning-based automated angle measurement for flatfoot diagnosis in weight-bearing lateral radiographs | [
"Won-Jun Noh",
"Mu Sook Lee",
"Byoung-Dai Lee"
] | This study aimed to develop and evaluate a deep learning-based system for the automatic measurement of angles (specifically, Meary’s angle and calcaneal pitch) in weight-bearing lateral radiographs of the foot for flatfoot diagnosis. We utilized 3960 lateral radiographs, from either the left or right foot, sourced from a pool of 4000 patients to construct and evaluate a deep learning-based model. These radiographs were captured between June and November 2021, and patients who had undergone total ankle replacement surgery or ankle arthrodesis surgery were excluded. Various methods, including correlation analysis, Bland–Altman plots, and paired t-tests, were employed to assess the concordance between the angles automatically measured using the system and those assessed by clinical experts. The evaluation dataset comprised 150 weight-bearing radiographs from 150 patients. In all test cases, the angles automatically computed using the deep learning-based system were in good agreement with the reference standards (Meary’s angle: Pearson correlation coefficient (PCC) = 0.964, intraclass correlation coefficient (ICC) = 0.963, concordance correlation coefficient (CCC) = 0.963, p-value = 0.632, mean absolute error (MAE) = 1.59°; calcaneal pitch: PCC = 0.988, ICC = 0.987, CCC = 0.987, p-value = 0.055, MAE = 0.63°). The average time required for angle measurement using only the CPU to execute the deep learning-based system was 11 ± 1 s. The deep learning-based automatic angle measurement system, a tool for diagnosing flatfoot, demonstrated accuracy and reliability comparable to the results obtained by medical professionals for patients without internal fixation devices. | 10.1038/s41598-024-69549-3 | deep learning-based automated angle measurement for flatfoot diagnosis in weight-bearing lateral radiographs | this study aimed to develop and evaluate a deep learning-based system for the automatic measurement of angles (specifically, meary’s angle and calcaneal pitch) in weight-bearing lateral radiographs of the foot for flatfoot diagnosis. we utilized 3960 lateral radiographs, from either the left or right foot, sourced from a pool of 4000 patients to construct and evaluate a deep learning-based model. these radiographs were captured between june and november 2021, and patients who had undergone total ankle replacement surgery or ankle arthrodesis surgery were excluded. various methods, including correlation analysis, bland–altman plots, and paired t-tests, were employed to assess the concordance between the angles automatically measured using the system and those assessed by clinical experts. the evaluation dataset comprised 150 weight-bearing radiographs from 150 patients. in all test cases, the angles automatically computed using the deep learning-based system were in good agreement with the reference standards (meary’s angle: pearson correlation coefficient (pcc) = 0.964, intraclass correlation coefficient (icc) = 0.963, concordance correlation coefficient (ccc) = 0.963, p-value = 0.632, mean absolute error (mae) = 1.59°; calcaneal pitch: pcc = 0.988, icc = 0.987, ccc = 0.987, p-value = 0.055, mae = 0.63°). the average time required for angle measurement using only the cpu to execute the deep learning-based system was 11 ± 1 s. the deep learning-based automatic angle measurement system, a tool for diagnosing flatfoot, demonstrated accuracy and reliability comparable to the results obtained by medical professionals for patients without internal fixation devices. | [
"this study",
"a deep learning-based system",
"the automatic measurement",
"angles",
"(specifically, meary’s angle",
"calcaneal pitch",
"weight-bearing lateral radiographs",
"the foot",
"flatfoot diagnosis",
"we",
"3960 lateral radiographs",
"the left or right foot",
"a pool",
"4000 patients",
"a deep learning-based model",
"these radiographs",
"june",
"november",
"patients",
"who",
"total ankle replacement surgery",
"ankle arthrodesis surgery",
"various methods",
"correlation analysis",
"bland–altman plots",
"t-tests",
"the concordance",
"the angles",
"the system",
"those",
"clinical experts",
"the evaluation dataset",
"150 weight-bearing",
"150 patients",
"all test cases",
"the angles",
"the deep learning-based system",
"good agreement",
"the reference standards",
"meary’s angle",
"pcc",
"intraclass correlation coefficient",
"icc",
"concordance correlation coefficient",
"ccc",
"absolute error",
"mae",
"1.59°",
"calcaneal pitch",
"the average time",
"angle measurement",
"only the cpu",
"the deep learning-based system",
"11 ±",
"the deep learning-based automatic angle measurement system",
"a tool",
"comparable accuracy",
"reliability",
"the results",
"medical professionals",
"patients",
"internal fixation devices",
"3960",
"4000",
"between june and november 2021",
"150",
"150",
"0.964",
"0.963",
"0.963",
"0.632",
"1.59",
"0.988",
"0.987",
"0.987",
"0.055",
"0.63",
"11",
"1"
] |
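Downstream of landmark detection, the two reported measurements reduce to plain geometry: each is the angle between two lines. The helper below computes such an angle from landmark coordinates; the landmark points and the exact anatomical line definitions are hypothetical placeholders, not values from the paper.

```python
# Angle between two anatomical lines given predicted landmark coordinates (pixels).
import numpy as np

def angle_between(p1, p2, q1, q2):
    """Angle in degrees between line p1->p2 and line q1->q2."""
    u, v = np.subtract(p2, p1), np.subtract(q2, q1)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical landmark predictions from a detection model (image x, y convention):
talus_axis = [(120, 200), (180, 230)]          # long axis of the talus
first_meta_axis = [(180, 230), (300, 250)]     # long axis of the 1st metatarsal
calcaneus_line = [(80, 260), (160, 240)]       # inferior border of the calcaneus
floor_line = [(80, 262), (300, 262)]           # supporting surface

meary = angle_between(*talus_axis, *first_meta_axis)
pitch = angle_between(*calcaneus_line, *floor_line)
print(f"Meary's angle: {meary:.1f} deg, calcaneal pitch: {pitch:.1f} deg")
```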
Presenting a three layer stacking ensemble classifier of deep learning and machine learning for skin cancer classification | [
"Bahman Jafari Tabaghsar",
"Reza Tavoli",
"Mohammad Mahdi Alizadeh Toosi"
] | One of the most common types of cancer in the world is skin cancer. Due to the different types of skin diseases with different shapes, the classification of skin diseases is a very difficult task. As a result, considering such a problem, a combination model of deep learning and machine learning algorithms has been proposed for skin disease classification. In this paper, a three-layer architecture based on ensemble learning is presented. In the first layer, the training input is given to a convolutional neural network and EfficientNet. The output of the first layer is given to the classifiers of the second layer, including machine learning classifiers. The output of the best decision of these classifiers is sent to the third-layer classifier, and the final prediction is made. The reason for using the three-layer architecture based on ensemble learning was the lack of correct recognition of some classes by simple classifiers. On the other hand, some diseases with different classes are classified in the same class. This model helps to correctly identify input samples with the correct combination of classifiers in different layers. The HAM10000 data set has been used to test and validate the proposed method. The mentioned dataset includes 10,015 images of skin lesions in seven different classes and includes different types of skin diseases. The accuracy is 99.97% on the testing set, which was much better than the previous heavy models. | 10.1007/s11042-024-19195-8 | presenting a three layer stacking ensemble classifier of deep learning and machine learning for skin cancer classification | one of the most common types of cancer in the world is skin cancer. due to the different types of skin diseases with different shapes, the classification of skin diseases is a very difficult task. as a result, considering such a problem, a combination model of deep learning and machine learning algorithms has been proposed for skin disease classification. in this paper, a three-layer architecture based on ensemble learning is presented. in the first layer, the training input is given to a convolutional neural network and efficientnet. the output of the first layer is given to the classifiers of the second layer, including machine learning classifiers. the output of the best decision of these classifiers is sent to the third-layer classifier, and the final prediction is made. the reason for using the three-layer architecture based on ensemble learning was the lack of correct recognition of some classes by simple classifiers. on the other hand, some diseases with different classes are classified in the same class. this model helps to correctly identify input samples with the correct combination of classifiers in different layers. the ham10000 data set has been used to test and validate the proposed method. the mentioned dataset includes 10,015 images of skin lesions in seven different classes and includes different types of skin diseases. the accuracy is 99.97% on the testing set, which was much better than the previous heavy models. | [
"the most common types",
"cancer",
"the world",
"skin cancer",
"the different types",
"skin diseases",
"different shapes",
"the classification",
"skin diseases",
"a very difficult task",
"a result",
"such a problem",
"a combination model",
"deep learning algorithms",
"machine",
"a three-layer architecture",
"ensemble learning",
"the first layer",
"the training input",
"convolutional neural network",
"efficientnet",
"the output",
"the first layer",
"the classifiers",
"the second layer",
"machine learning classifiers",
"the output",
"the best decision",
"these classifiers",
"the third layer classifier",
"the final prediction",
"made.the reason",
"the three-layer architecture",
"group learning",
"the lack",
"correct recognition",
"some classes",
"simple classifications",
"the other hand",
"some diseases",
"different classes",
"the same class",
"this model",
"input samples",
"the correct combination",
"classifications",
"different layers.ham10000 data set",
"the proposed method",
"the mentioned dataset",
"10,015 images",
"skin lesions",
"seven different classes",
"different types",
"skin diseases",
"the accuracy",
"the testing set",
"which",
"the previous heavy models",
"one",
"three",
"first",
"first",
"second",
"third",
"three",
"10,015",
"seven",
"99.97"
] |
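A compact scikit-learn rendering of the layered idea in the record above, with caveats: random synthetic features stand in for the deep layer-1 embeddings, and the particular base and meta classifiers are illustrative choices, not the paper's exact roster.

```python
# Stacking sketch: simulated deep features (layer 1) -> base learners (layer 2)
# -> meta-classifier (layer 3) via StackingClassifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for layer 1: features a CNN/EfficientNet would produce for skin images.
X, y = make_classification(n_samples=800, n_features=128, n_classes=7,
                           n_informative=30, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],   # layer 2
    final_estimator=LogisticRegression(max_iter=1000))             # layer 3 meta-learner
stack.fit(Xtr, ytr)
print("held-out accuracy:", round(stack.score(Xte, yte), 3))
```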
Beschleunigte muskuloskeletale Magnetresonanztomographie mit Deep-Learning-gestützter Bildrekonstruktion bei 0,55 T–3 T | [
"Jan Vosshenrich",
"Jan Fritz"
] | Clinical/methodological problem: Magnetic resonance imaging (MRI) is a central component of musculoskeletal diagnostics. However, long acquisition times can lead to limitations in clinical practice. Standard radiological methods: Owing to its high spatial resolution, high signal-to-noise ratio (SNR), and excellent soft-tissue contrast, MRI has established itself as the modality of choice for diagnosing injuries and diseases of the musculoskeletal system. Methodological innovations: Continuous advances in hardware and software technology have enabled a fourfold acceleration of 2D turbo spin echo (TSE) sequences that is neutral with respect to image quality and accuracy. Recently introduced image reconstruction algorithms based on deep learning (DL) help to further minimize the interdependency between SNR, spatial resolution, and acquisition time, and permit the use of higher acceleration factors. Performance: The combined use of advanced acceleration techniques and DL-based image reconstruction holds enormous potential for maximizing the efficiency, patient comfort, and accessibility of musculoskeletal MRI while maintaining consistently high diagnostic accuracy. Evaluation: DL-reconstructed accelerated MRI examinations have proven their practical maturity and added value within a very short time. Current scientific evidence suggests that the potential of this technology has not yet been fully exploited. Recommendation for practice: Accelerated MRI examinations with DL-supported image reconstruction can be used reliably for primary diagnosis and follow-up of musculoskeletal conditions. | 10.1007/s00117-024-01325-w | beschleunigte muskuloskeletale magnetresonanztomographie mit deep-learning-gestützter bildrekonstruktion bei 0,55 t–3 t | clinical/methodological problem: magnetic resonance imaging (mri) is a central component of musculoskeletal diagnostics. however, long acquisition times can lead to limitations in clinical practice. standard radiological methods: owing to its high spatial resolution, high signal-to-noise ratio (snr), and excellent soft-tissue contrast, mri has established itself as the modality of choice for diagnosing injuries and diseases of the musculoskeletal system. methodological innovations: continuous advances in hardware and software technology have enabled a fourfold acceleration of 2d turbo spin echo (tse) sequences that is neutral with respect to image quality and accuracy. recently introduced image reconstruction algorithms based on deep learning (dl) help to further minimize the interdependency between snr, spatial resolution, and acquisition time, and permit the use of higher acceleration factors. performance: the combined use of advanced acceleration techniques and dl-based image reconstruction holds enormous potential for maximizing the efficiency, patient comfort, and accessibility of musculoskeletal mri while maintaining consistently high diagnostic accuracy. evaluation: dl-reconstructed accelerated mri examinations have proven their practical maturity and added value within a very short time.
current scientific evidence suggests that the potential of this technology has not yet been fully exploited. recommendation for practice: accelerated mri examinations with dl-supported image reconstruction can be used reliably for primary diagnosis and follow-up of musculoskeletal conditions. | [
"klinisches/methodisches problemdie magnetresonanztomographie",
"(mrt",
"lange akquisitionszeiten können jedoch zu einschränkungen",
"der",
"standardverfahrendie mrt hat",
"des hohen auflösungsvermögens",
"signal-zu-rausch-verhältnisses",
"snr",
"als modalität der",
"der hard- und softwaretechnologie",
"haben eine",
"und genauigkeitsneutrale beschleunigung von 2d-turbo-spin-echo(tse)-sequenzen um den faktor 4 ermöglicht",
"kürzlich vorgestellte",
"auf",
"deep learning",
"dl",
"bildrekonstruktionsalgorithmen",
"abhängigkeit zwischen snr",
"räumlicher auflösung und akquisitionszeit weiter zu minimieren und erlauben die anwendung höherer beschleunigungsfaktoren.leistungsfähigkeitdie kombinierte anwendung fortschrittlicher",
"und dl-basierter bildrekonstruktion birgt",
"die effizienz, den patientenkomfort und die zugänglichkeit der",
"mrt bei",
"hoher diagnostischer genauigkeit",
"zu maximieren.bewertungdl-rekonstruierte beschleunigte mrt-untersuchungen",
"ihre praxisreife",
"ihren mehrwert",
"kürzester zeit unter beweis gestellt",
"dass das potenzial dieser technologie noch nicht",
"praxisbeschleunigte mrt-untersuchungen mit dl-gestützter bildrekonstruktion können zuverlässig",
"der primärdiagnostik und verlaufskontrolle muskuloskeletaler fragestellungen eingesetzt werden",
"klinisches",
"problemdie magnetresonanztomographie",
"ein zentraler",
"bestandteil der muskuloskeletalen diagnostik",
"führen.radiologische standardverfahrendie mrt",
"von verletzungen und erkrankungen des muskuloskeletalen",
"beschleunigung von",
"2d",
"4",
"kürzlich vorgestellte",
"bildrekonstruktionsalgorithmen helfen",
"zwischen snr",
"akquisitionszeit",
"zu minimieren",
"anwendung höherer",
"anwendung fortschrittlicher beschleunigungstechniken",
"zugänglichkeit der muskuloskeletalen mrt",
"ihren mehrwert",
"kürzester zeit",
"aktuelle wissenschaftliche erkenntnisse legen",
"das",
"verlaufskontrolle",
"fragestellungen"
] |
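To illustrate the DL-reconstruction principle this record discusses (not the specific vendor implementation it reviews): retrospectively undersample k-space, form a zero-filled reconstruction, and train a small residual CNN to suppress the resulting aliasing. The mask pattern, random "anatomy," and network size are toy assumptions.

```python
# Toy accelerated-MRI reconstruction: undersampled k-space -> zero-filled image
# -> CNN residual correction trained against the fully sampled ground truth.
import torch
import torch.nn as nn

def undersample(image, factor=4):
    """Keep every factor-th k-space line; return the zero-filled reconstruction."""
    k = torch.fft.fft2(image)
    mask = torch.zeros_like(k.real)
    mask[..., ::factor, :] = 1.0                 # sample every 4th phase-encode line
    return torch.fft.ifft2(k * mask).abs()

denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for _ in range(50):                              # train on random stand-in "anatomy"
    gt = torch.rand(8, 1, 32, 32)
    zf = undersample(gt)
    loss = nn.functional.mse_loss(denoiser(zf) + zf, gt)   # residual correction
    opt.zero_grad(); loss.backward(); opt.step()
print("training loss:", loss.item())
```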
Deep-learning based supervisory monitoring of robotized DE-GMAW process through learning from human welders | [
"Rui Yu",
"Yue Cao",
"Jennifer Martin",
"Otto Chiang",
"YuMing Zhang"
] | Double-electrode gas metal arc welding (DE-GMAW) modifies GMAW by adding a second electrode to bypass a portion of the current flowing from the wire. This reduces the current to, and the heat input on, the workpiece. Successful bypassing depends on the relative position of the bypass electrode to the continuously varying wire tip. To ensure proper operation, we propose robotizing the system using a follower robot to carry and adaptively adjust the bypass electrode. The primary information for monitoring this process is the arc image, which directly shows desired and undesired modes. However, developing a robust algorithm for processing the complex arc image is time-consuming and challenging. Employing a deep learning approach requires labeling numerous arc images for the corresponding DE-GMAW modes, which is not practically feasible. To introduce alternative labels, we analyze arc phenomena in various DE-GMAW modes and correlate them with distinct arc systems having varying voltages. These voltages serve as automatically derived labels to train the deep-learning network. The results demonstrated reliable process monitoring. | 10.1007/s40194-023-01635-y | deep-learning based supervisory monitoring of robotized de-gmaw process through learning from human welders | double-electrode gas metal arc welding (de-gmaw) modifies gmaw by adding a second electrode to bypass a portion of the current flowing from the wire. this reduces the current to, and the heat input on, the workpiece. successful bypassing depends on the relative position of the bypass electrode to the continuously varying wire tip. to ensure proper operation, we propose robotizing the system using a follower robot to carry and adaptively adjust the bypass electrode. the primary information for monitoring this process is the arc image, which directly shows desired and undesired modes. however, developing a robust algorithm for processing the complex arc image is time-consuming and challenging. employing a deep learning approach requires labeling numerous arc images for the corresponding de-gmaw modes, which is not practically feasible. to introduce alternative labels, we analyze arc phenomena in various de-gmaw modes and correlate them with distinct arc systems having varying voltages. these voltages serve as automatically derived labels to train the deep-learning network. the results demonstrated reliable process monitoring. | [
"double-electrode gas metal arc welding",
"de",
"-",
"gmaw",
"gmaw",
"a second electrode",
"a portion",
"the current",
"the wire",
"this",
"the heat input",
"the workpiece",
"successful bypassing",
"the relative position",
"the bypass",
"the continuously varying wire tip",
"proper operation",
"we",
"the system",
"a follower robot",
"the bypass",
"the primary information",
"this process",
"the arc image",
"which",
"desired and undesired modes",
"a robust algorithm",
"the complex arc image",
"a deep learning approach",
"numerous arc images",
"the corresponding de-gmaw modes",
"which",
"alternative labels",
"we",
"arc phenomena",
"various de-gmaw modes",
"them",
"distinct arc systems",
"varying voltages",
"these voltages",
"automatically derived labels",
"the deep-learning network",
"the results",
"reliable process monitoring",
"second"
] |
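A sketch of the automatic-labeling idea from the record above: threshold the synchronized voltage signal into operating-mode classes and train an image classifier on those labels. The voltage thresholds, the three-mode split, and the synthetic frames are assumptions, not the authors' calibration.

```python
# Derive mode labels from a voltage signal, then supervise a CNN on arc images.
import torch
import torch.nn as nn

def mode_from_voltage(v, low=15.0, high=25.0):
    """Map a voltage reading to a class id: 0=below low, 1=in range, 2=above high."""
    return torch.bucketize(v, torch.tensor([low, high]))

cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 3))
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)

for _ in range(20):                         # toy loop over synthetic arc frames
    frames = torch.rand(16, 1, 64, 64)      # stand-in for camera images of the arc
    volts = torch.rand(16) * 40             # stand-in for the synchronized voltage signal
    labels = mode_from_voltage(volts)       # automatically derived supervision
    loss = nn.functional.cross_entropy(cnn(frames), labels)
    opt.zero_grad(); loss.backward(); opt.step()
print("loss:", loss.item())
```

The appeal of this scheme is that no human labeling of images is required: once trained, the CNN can flag undesired modes from the camera alone, even when the voltage channel is unavailable.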
Fluid dynamic control and optimization using deep reinforcement learning | [
"Innyoung Kim",
"Donghyun You"
] | This paper presents a review of recent research on applying deep reinforcement learning in fluid dynamics. Reinforcement learning is a technique in which the agent autonomously learns optimal action strategies while interacting with the environment, mimicking human learning mechanisms. Combined with artificial intelligence technology, it is providing a new direction in fluid dynamic control and optimization, which were challenging due to the nonlinear and high-dimensional characteristics of the fluid. In the section on fluid dynamic control, control strategies for drag reduction and research on controlling biological motion are reviewed. The optimization section focuses on shape optimization and automation of computational fluid dynamics. Current challenges and possible future developments are also described. | 10.1007/s42791-024-00067-z | fluid dynamic control and optimization using deep reinforcement learning | this paper presents a review of recent research on applying deep reinforcement learning in fluid dynamics. reinforcement learning is a technique in which the agent autonomously learns optimal action strategies while interacting with the environment, mimicking human learning mechanisms. combined with artificial intelligence technology, it is providing a new direction in fluid dynamic control and optimization, which were challenging due to the nonlinear and high-dimensional characteristics of the fluid. in the section on fluid dynamic control, control strategies for drag reduction and research on controlling biological motion are reviewed. the optimization section focuses on shape optimization and automation of computational fluid dynamics. current challenges and possible future developments are also described. | [
"this paper",
"a review",
"recent research",
"deep reinforcement learning",
"fluid dynamics",
"reinforcement learning",
"a technique",
"which",
"the agent",
"optimal action strategies",
"the environment",
"human learning mechanisms",
"artificial intelligence technology",
"it",
"a new direction",
"fluid dynamic control",
"optimization",
"which",
"the nonlinear and high-dimensional characteristics",
"the fluid",
"the section",
"fluid dynamic control",
"control strategies",
"drag reduction",
"research",
"biological motion",
"the optimization section",
"shape optimization",
"automation",
"computational fluid dynamics",
"current challenges",
"possible future developments"
] |
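Every method this review surveys shares the same agent-environment interaction loop. A minimal sketch of that loop using Gymnasium, with an off-the-shelf toy task standing in for a CFD solver wrapped as an environment (purely illustrative):

```python
# The agent-environment loop at the core of reinforcement learning;
# "Pendulum-v1" is a stand-in for a flow-control environment.
import gymnasium as gym

env = gym.make("Pendulum-v1")
obs, info = env.reset(seed=0)
for _ in range(200):
    action = env.action_space.sample()      # a trained policy would act here
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```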
Deep-learning based artificial intelligence tool for melt pools and defect segmentation | [
"Amra Peles",
"Vincent C. Paquit",
"Ryan R. Dehoff"
] | Accelerating fabrication of additively manufactured components with precise microstructures is important for quality and qualification of built parts, as well as for a fundamental understanding of process improvement. Accomplishing this requires fast and robust characterization of melt pool geometries and structural defects in images. This paper proposes a pragmatic approach based on implementation of deep learning models and self-consistent workflow that enable systematic segmentation of defects and melt pools in optical images. Deep learning is based on an image-to-image translation–conditional generative adversarial neural network architecture. An artificial intelligence (AI) tool based on this deep learning model enables fast and incrementally more accurate predictions of the prevalent geometric features, including melt pool boundaries and printing-induced structural defects. We present statistical analysis of geometric features that is enabled by the AI tool, showing strong spatial correlation of defects and the melt pool boundaries. The correlations of widths and heights of melt pools with dataset processing parameters show the highest sensitivity to thermal influences resulting from laser passes in adjacent and subsequent layer passes. The presented models and tools are demonstrated on the aluminum alloy and datasets produced with different sets of processing parameters. However, they have universal quality and could easily be adapted to different material compositions. The method can be easily generalized to microstructural characterizations other than optical microscopy. | 10.1007/s10845-024-02457-5 | deep-learning based artificial intelligence tool for melt pools and defect segmentation | accelerating fabrication of additively manufactured components with precise microstructures is important for quality and qualification of built parts, as well as for a fundamental understanding of process improvement. accomplishing this requires fast and robust characterization of melt pool geometries and structural defects in images. this paper proposes a pragmatic approach based on implementation of deep learning models and self-consistent workflow that enable systematic segmentation of defects and melt pools in optical images. deep learning is based on an image-to-image translation–conditional generative adversarial neural network architecture. an artificial intelligence (ai) tool based on this deep learning model enables fast and incrementally more accurate predictions of the prevalent geometric features, including melt pool boundaries and printing-induced structural defects. we present statistical analysis of geometric features that is enabled by the ai tool, showing strong spatial correlation of defects and the melt pool boundaries. the correlations of widths and heights of melt pools with dataset processing parameters show the highest sensitivity to thermal influences resulting from laser passes in adjacent and subsequent layer passes. the presented models and tools are demonstrated on the aluminum alloy and datasets produced with different sets of processing parameters. however, they have universal quality and could easily be adapted to different material compositions. the method can be easily generalized to microstructural characterizations other than optical microscopy. | [
"accelerating fabrication",
"additively manufactured components",
"precise microstructures",
"quality",
"qualification",
"built parts",
"a fundamental understanding",
"process improvement",
"this",
"fast and robust characterization",
"melt pool geometries",
"structural defects",
"images",
"this paper",
"a pragmatic approach",
"implementation",
"deep learning models",
"self-consistent workflow",
"that",
"systematic segmentation",
"defects",
"pools",
"optical images",
"deep learning",
"image",
"conditional generative adversarial neural network architecture",
"an artificial intelligence",
"(ai) tool",
"this deep learning model",
"fast and incrementally more accurate predictions",
"the prevalent geometric features",
"pool boundaries",
"printing-induced structural defects",
"we",
"statistical analysis",
"geometric features",
"that",
"the ai tool",
"strong spatial correlation",
"defects",
"the melt pool boundaries",
"the correlations",
"widths",
"heights",
"melt pools",
"dataset processing parameters",
"the highest sensitivity",
"thermal influences",
"laser passes",
"adjacent and subsequent layer passes",
"the presented models",
"tools",
"the aluminum alloy",
"datasets",
"different sets",
"processing parameters",
"they",
"universal quality",
"different material compositions",
"the method",
"microstructural characterizations",
"optical microscopy"
] |
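The melt-pool paper above frames segmentation as image-to-image translation trained with a conditional GAN. A generic pix2pix-style training step under that framing, with toy networks and random tensors in place of the paper's architecture and data:

```python
# Generic pix2pix-style conditional-GAN step: G maps a micrograph to a
# melt-pool/defect mask, D judges (image, mask) pairs. Toy shapes throughout.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

img = torch.rand(4, 1, 64, 64)    # stand-in optical micrographs
mask = torch.rand(4, 1, 64, 64)   # stand-in ground-truth masks

# Discriminator step: real pairs vs. detached fakes.
fake = G(img)
d_real = D(torch.cat([img, mask], dim=1))
d_fake = D(torch.cat([img, fake.detach()], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D and stay close to the target mask (L1 term).
d_out = D(torch.cat([img, fake], dim=1))
loss_g = bce(d_out, torch.ones_like(d_out)) + nn.functional.l1_loss(fake, mask)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```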
Multi-echelon inventory optimization using deep reinforcement learning | [
"Kevin Geevers",
"Lotte van Hezewijk",
"Martijn R. K. Mes"
] | This paper studies the applicability of a deep reinforcement learning approach to three different multi-echelon inventory systems, with the objective of minimizing the holding and backorder costs. First, we conduct an extensive literature review to map the current applications of reinforcement learning in multi-echelon inventory systems. Next, we apply our deep reinforcement learning method to three cases with different network structures (linear, divergent, and general structures). The linear and divergent cases are derived from literature, whereas the general case is based on a real-life manufacturer. We apply the proximal policy optimization (PPO) algorithm, with a continuous action space, and show that it consistently outperforms the benchmark solution. It achieves an average improvement of 16.4% for the linear case, 11.3% for the divergent case, and 6.6% for the general case. We explain the limitations of our approach and propose avenues for future research. | 10.1007/s10100-023-00872-2 | multi-echelon inventory optimization using deep reinforcement learning | this paper studies the applicability of a deep reinforcement learning approach to three different multi-echelon inventory systems, with the objective of minimizing the holding and backorder costs. first, we conduct an extensive literature review to map the current applications of reinforcement learning in multi-echelon inventory systems. next, we apply our deep reinforcement learning method to three cases with different network structures (linear, divergent, and general structures). the linear and divergent cases are derived from literature, whereas the general case is based on a real-life manufacturer. we apply the proximal policy optimization (ppo) algorithm, with a continuous action space, and show that it consistently outperforms the benchmark solution. it achieves an average improvement of 16.4% for the linear case, 11.3% for the divergent case, and 6.6% for the general case. we explain the limitations of our approach and propose avenues for future research. | [
"this paper",
"the applicability",
"a deep reinforcement learning approach",
"three different multi-echelon inventory systems",
"the objective",
"the holding and backorder costs",
"we",
"an extensive literature review",
"the current applications",
"reinforcement learning",
"multi-echelon inventory systems",
"we",
"our deep reinforcement learning method",
"three cases",
"different network structures",
"general structures",
"the linear and divergent cases",
"literature",
"the general case",
"a real-life manufacturer",
"we",
"the proximal policy optimization",
"ppo",
"algorithm",
"a continuous action space",
"it",
"the benchmark solution",
"it",
"an average improvement",
"16.4%",
"the linear case",
"11.3%",
"the divergent case",
"6.6%",
"the general case",
"we",
"the limitations",
"our approach",
"avenues",
"future research",
"three",
"first",
"three",
"linear",
"linear",
"16.4%",
"11.3%",
"6.6%"
] |
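The inventory paper above applies PPO with a continuous action space to minimize holding and backorder costs. A single-echelon sketch of that setup, assuming Gymnasium and stable-baselines3 >= 2.0; the cost coefficients, demand distribution, and order bounds below are invented placeholders, and a multi-echelon version would stack several such stages:

```python
# Sketch: a continuous-action inventory environment trained with PPO
# (simplified stand-in dynamics, not the paper's three case studies).
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO

class EchelonEnv(gym.Env):
    """One echelon with holding and backorder costs."""
    def __init__(self):
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(1,))
        self.action_space = gym.spaces.Box(0.0, 10.0, shape=(1,))  # order qty
    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.inv = np.zeros(1, dtype=np.float32)
        return self.inv.copy(), {}
    def step(self, action):
        demand = self.np_random.poisson(3.0)          # stochastic demand
        self.inv += action.astype(np.float32) - demand
        cost = 1.0 * max(self.inv[0], 0) + 5.0 * max(-self.inv[0], 0)
        return self.inv.copy(), -cost, False, False, {}

model = PPO("MlpPolicy", EchelonEnv(), verbose=0)
model.learn(total_timesteps=2_000)
```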
An Analysis of Plant Diseases on Detection and Classification: From Machine Learning to Deep Learning Techniques | [
"P. K. Midhunraj",
"K. S. Thivya",
"M. Anand"
] | Plants are acknowledged as being crucial because they are the main source of human energy generation due to their nutritional, therapeutic, and other benefits. Therefore, it is necessary to increase crop productivity. One of these significant factors contributing to reduced agricultural yields is the prevalence of bacterial, fungal, and viral illnesses. Applying techniques for plant disease identification can stop and treat these diseases. So, numerous machine learning (ML) and deep learning (DL) methods were created and tested by researchers to identify plant diseases. Therefore, this study gives a detailed discussion of the various research studies conducted in plant disease detection utilizing ML and DL-based techniques. This review offers research advancements in plant disease recognition from ML to DL techniques. Additionally, many datasets about plant diseases are thoroughly examined. It also addresses the difficulties and issues with the current systems. | 10.1007/s11042-023-17600-2 | an analysis of plant diseases on detection and classification: from machine learning to deep learning techniques | plants are acknowledged as being crucial because they are the main source of human energy generation due to their nutritional, therapeutic, and other benefits. therefore, it is necessary to increase crop productivity. one of these significant factors contributing to reduced agricultural yields is the prevalence of bacterial, fungal, and viral illnesses. applying techniques for plant disease identification can stop and treat these diseases. so, numerous machine learning (ml) and deep learning (dl) methods were created and tested by researchers to identify plant diseases. therefore, this study gives a detailed discussion of the various research studies conducted in plant disease detection utilizing ml and dl-based techniques. this review offers research advancements in plant disease recognition from ml to dl techniques. additionally, many datasets about plant diseases are thoroughly examined. it also addresses the difficulties and issues with the current systems. | [
"plants",
"they",
"the main source",
"human energy generation",
"their nutritional, therapeutic, and other benefits",
"it",
"crop productivity",
"these significant factors",
"reduced agricultural yields",
"the prevalence",
"bacterial, fungal, and viral illnesses",
"techniques",
"plant disease identification",
"these diseases",
"numerous machine learning",
"ml",
"deep learning (dl) methods",
"researchers",
"plant diseases",
"this study",
"a detailed discussion",
"the various research studies",
"plant disease detection",
"ml",
"dl-based techniques",
"this review",
"research advancements",
"plant disease recognition",
"ml",
"dl techniques",
"many datasets",
"plant diseases",
"it",
"the difficulties",
"issues",
"the current systems",
"one"
] |
The Mori–Zwanzig formulation of deep learning | [
"Daniele Venturi",
"Xiantao Li"
] | We develop a new formulation of deep learning based on the Mori–Zwanzig (MZ) formalism of irreversible statistical mechanics. The new formulation is built upon the well-known duality between deep neural networks and discrete dynamical systems, and it allows us to directly propagate quantities of interest (conditional expectations and probability density functions) forward and backward through the network by means of exact linear operator equations. Such new equations can be used as a starting point to develop new effective parameterizations of deep neural networks and provide a new framework to study deep learning via operator-theoretic methods. The proposed MZ formulation of deep learning naturally introduces a new concept, i.e., the memory of the neural network, which plays a fundamental role in low-dimensional modeling and parameterization. By using the theory of contraction mappings, we develop sufficient conditions for the memory of the neural network to decay with the number of layers. This allows us to rigorously transform deep networks into shallow ones, e.g., by reducing the number of neurons per layer (using projection operators), or by reducing the total number of layers (using the decay property of the memory operator). | 10.1007/s40687-023-00390-2 | the mori–zwanzig formulation of deep learning | we develop a new formulation of deep learning based on the mori–zwanzig (mz) formalism of irreversible statistical mechanics. the new formulation is built upon the well-known duality between deep neural networks and discrete dynamical systems, and it allows us to directly propagate quantities of interest (conditional expectations and probability density functions) forward and backward through the network by means of exact linear operator equations. such new equations can be used as a starting point to develop new effective parameterizations of deep neural networks and provide a new framework to study deep learning via operator-theoretic methods. the proposed mz formulation of deep learning naturally introduces a new concept, i.e., the memory of the neural network, which plays a fundamental role in low-dimensional modeling and parameterization. by using the theory of contraction mappings, we develop sufficient conditions for the memory of the neural network to decay with the number of layers. this allows us to rigorously transform deep networks into shallow ones, e.g., by reducing the number of neurons per layer (using projection operators), or by reducing the total number of layers (using the decay property of the memory operator). | [
"we",
"a new formulation",
"deep learning",
"the mori",
"zwanzig (mz) formalism",
"irreversible statistical mechanics",
"the new formulation",
"the well-known duality",
"deep neural networks",
"discrete dynamical systems",
"it",
"us",
"quantities",
"interest",
"conditional expectations",
"probability density functions",
"the network",
"means",
"exact linear operator equations",
"such new equations",
"a starting point",
"new effective parameterizations",
"deep neural networks",
"a new framework",
"deep learning",
"operator-theoretic methods",
"the proposed mz formulation",
"deep learning",
"a new concept",
"the neural network",
"which",
"a fundamental role",
"low-dimensional modeling",
"parameterization",
"the theory",
"contraction mappings",
"we",
"sufficient conditions",
"the memory",
"the neural network",
"the number",
"layers",
"this",
"us",
"deep networks",
"shallow ones",
"the number",
"neurons",
"layer",
"projection operators",
"the total number",
"layers",
"the decay property",
"the memory operator",
"linear"
] |
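The Mori–Zwanzig paper above builds on the classical MZ operator identity, which splits the evolution into a Markovian term, a memory integral, and an orthogonal "noise" term; the memory term is the object whose decay the authors exploit to compress networks. In its standard textbook form (the paper's own notation may differ):

```latex
% Classical Mori--Zwanzig identity (standard form). P projects onto the
% resolved variables, Q = I - P, and L is the Liouville-type generator.
\[
\frac{d}{dt}\, e^{t\mathcal{L}}
  = \underbrace{e^{t\mathcal{L}} P \mathcal{L}}_{\text{Markovian}}
  + \underbrace{\int_0^t e^{s\mathcal{L}} P \mathcal{L}\,
      e^{(t-s) Q \mathcal{L}}\, Q \mathcal{L}\, \mathrm{d}s}_{\text{memory}}
  + \underbrace{e^{t Q \mathcal{L}}\, Q \mathcal{L}}_{\text{noise}}
\]
```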
XDeMo: a novel deep learning framework for DNA motif mining using transformer models | [
"Rajashree Chaurasia",
"Udayan Ghose"
] | Motivation: Recognizing and studying DNA patterns is crucial for improving knowledge of illnesses, cell function, and gene control. Motifs determine which transcription factor a protein may bind to, leading to a better unraveling of gene expression. Advancements in the fields of deep learning and high-throughput sequencing have made possible the exploration of motif discovery anew, with greater accuracy and performance. Methodology: In this paper, a novel deep learning framework (XDeMo – Transformer-based Deep Motifs) for DNA motif mining using Transformer models is proposed. Furthermore, a hybrid encoding scheme is also introduced, called ‘blended’ encoding specifically designed for use with deep learning transformer models that are trained using DNA sequences. Results: Our proposed transformer-based framework for DNA motif discovery augmented by blended encoding outperforms many state-of-the-art deep learning models on many baseline performance metrics when trained on the standard datasets. Our models demonstrated robust performance in predicting motifs with high discriminative power, precision, recall, and F1 score. Conclusion: The model’s ability to capture intricate sequence patterns and long-range dependencies led to the discovery of biologically meaningful motifs that were verified from known transcription factor binding motif databases. This shows that our novel framework can be effectively used to find DNA motifs and therefore, aid in further downstream analyses for biomedical and biotechnological applications. | 10.1007/s13721-024-00463-4 | xdemo: a novel deep learning framework for dna motif mining using transformer models | motivation: recognizing and studying dna patterns is crucial for improving knowledge of illnesses, cell function, and gene control. motifs determine which transcription factor a protein may bind to, leading to a better unraveling of gene expression. advancements in the fields of deep learning and high-throughput sequencing have made possible the exploration of motif discovery anew, with greater accuracy and performance. methodology: in this paper, a novel deep learning framework (xdemo – transformer-based deep motifs) for dna motif mining using transformer models is proposed. furthermore, a hybrid encoding scheme is also introduced, called ‘blended’ encoding specifically designed for use with deep learning transformer models that are trained using dna sequences. results: our proposed transformer-based framework for dna motif discovery augmented by blended encoding outperforms many state-of-the-art deep learning models on many baseline performance metrics when trained on the standard datasets. our models demonstrated robust performance in predicting motifs with high discriminative power, precision, recall, and f1 score. conclusion: the model’s ability to capture intricate sequence patterns and long-range dependencies led to the discovery of biologically meaningful motifs that were verified from known transcription factor binding motif databases. this shows that our novel framework can be effectively used to find dna motifs and therefore, aid in further downstream analyses for biomedical and biotechnological applications. | [
"dna patterns",
"knowledge",
"illnesses",
"cell function",
"gene control",
"motifs",
"which transcription factor",
"a protein",
"a better unraveling",
"gene expression",
"advancements",
"the fields",
"deep learning",
"high-throughput sequencing",
"the exploration",
"motif discovery",
"greater accuracy",
"performance",
"methodology",
"this paper",
"a novel deep learning framework",
"xdemo – transformer-based deep motifs",
"dna motif mining",
"transformer models",
"a hybrid encoding scheme",
"‘blended’ encoding",
"use",
"deep learning transformer models",
"that",
"dna sequences",
"results",
"our proposed transformer-based framework",
"dna motif discovery",
"blended encoding outperforms",
"the-art",
"many baseline performance metrics",
"the standard datasets",
"our models",
"robust performance",
"motifs",
"high discriminative power",
"precision",
"recall",
"f1 score",
"conclusion",
"the model’s ability",
"intricate sequence patterns",
"long-range dependencies",
"the discovery",
"biologically meaningful motifs",
"that",
"known transcription factor binding motif databases",
"this",
"our novel framework",
"dna motifs",
", aid",
"further downstream analyses",
"biomedical and biotechnological applications"
] |
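The "blended" encoding is XDeMo's own contribution and is not reproduced here; as a generic illustration of feeding DNA to a transformer, the common alternative is overlapping k-mer tokens through an embedding and a small encoder stack (a minimal sketch, all sizes arbitrary):

```python
# Generic DNA-to-transformer pipeline: overlapping k-mer tokens fed to a
# small transformer encoder with a binary motif-presence head.
import torch
import torch.nn as nn
from itertools import product

K = 3
vocab = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

def kmer_tokens(seq, k=K):
    """Overlapping k-mers of a DNA string -> integer token ids."""
    return torch.tensor([vocab[seq[i:i + k]] for i in range(len(seq) - k + 1)])

emb = nn.Embedding(len(vocab), 32)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(32, 2)                             # motif present / absent

tokens = kmer_tokens("ACGTACGTGGCA").unsqueeze(0)   # (1, seq_len)
logits = head(encoder(emb(tokens)).mean(dim=1))     # mean-pooled classification
```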
DeepDOF-SE: affordable deep-learning microscopy platform for slide-free histology | [
"Lingbo Jin",
"Yubo Tang",
"Jackson B. Coole",
"Melody T. Tan",
"Xuan Zhao",
"Hawraa Badaoui",
"Jacob T. Robinson",
"Michelle D. Williams",
"Nadarajah Vigneswaran",
"Ann M. Gillenwater",
"Rebecca R. Richards-Kortum",
"Ashok Veeraraghavan"
] | Histopathology plays a critical role in the diagnosis and surgical management of cancer. However, access to histopathology services, especially frozen section pathology during surgery, is limited in resource-constrained settings because preparing slides from resected tissue is time-consuming, labor-intensive, and requires expensive infrastructure. Here, we report a deep-learning-enabled microscope, named DeepDOF-SE, to rapidly scan intact tissue at cellular resolution without the need for physical sectioning. Three key features jointly make DeepDOF-SE practical. First, tissue specimens are stained directly with inexpensive vital fluorescent dyes and optically sectioned with ultra-violet excitation that localizes fluorescent emission to a thin surface layer. Second, a deep-learning algorithm extends the depth-of-field, allowing rapid acquisition of in-focus images from large areas of tissue even when the tissue surface is highly irregular. Finally, a semi-supervised generative adversarial network virtually stains DeepDOF-SE fluorescence images with hematoxylin-and-eosin appearance, facilitating image interpretation by pathologists without significant additional training. We developed the DeepDOF-SE platform using a data-driven approach and validated its performance by imaging surgical resections of suspected oral tumors. Our results show that DeepDOF-SE provides histological information of diagnostic importance, offering a rapid and affordable slide-free histology platform for intraoperative tumor margin assessment and in low-resource settings. | 10.1038/s41467-024-47065-2 | deepdof-se: affordable deep-learning microscopy platform for slide-free histology | histopathology plays a critical role in the diagnosis and surgical management of cancer. however, access to histopathology services, especially frozen section pathology during surgery, is limited in resource-constrained settings because preparing slides from resected tissue is time-consuming, labor-intensive, and requires expensive infrastructure. here, we report a deep-learning-enabled microscope, named deepdof-se, to rapidly scan intact tissue at cellular resolution without the need for physical sectioning. three key features jointly make deepdof-se practical. first, tissue specimens are stained directly with inexpensive vital fluorescent dyes and optically sectioned with ultra-violet excitation that localizes fluorescent emission to a thin surface layer. second, a deep-learning algorithm extends the depth-of-field, allowing rapid acquisition of in-focus images from large areas of tissue even when the tissue surface is highly irregular. finally, a semi-supervised generative adversarial network virtually stains deepdof-se fluorescence images with hematoxylin-and-eosin appearance, facilitating image interpretation by pathologists without significant additional training. we developed the deepdof-se platform using a data-driven approach and validated its performance by imaging surgical resections of suspected oral tumors. our results show that deepdof-se provides histological information of diagnostic importance, offering a rapid and affordable slide-free histology platform for intraoperative tumor margin assessment and in low-resource settings. | [
"histopathology",
"a critical role",
"the diagnosis",
"surgical management",
"cancer",
"access",
"histopathology services",
"especially frozen section pathology",
"surgery",
"resource-constrained settings",
"slides",
"resected tissue",
"expensive infrastructure",
"we",
"a deep-learning-enabled microscope",
"deepdof-se",
"intact tissue",
"cellular resolution",
"the need",
"physical sectioning",
"three key features",
"deepdof-se",
"tissue specimens",
"inexpensive vital fluorescent dyes",
"ultra-violet excitation",
"that",
"fluorescent emission",
"a thin surface layer",
"a deep-learning algorithm",
"the depth",
"field",
"rapid acquisition",
"focus",
"large areas",
"tissue",
"the tissue surface",
"a semi-supervised generative adversarial network",
"deepdof-se fluorescence images",
"hematoxylin-and-eosin appearance",
"image interpretation",
"pathologists",
"significant additional training",
"we",
"the deepdof-se platform",
"a data-driven approach",
"its performance",
"surgical resections",
"suspected oral tumors",
"our results",
"deepdof-se",
"histological information",
"diagnostic importance",
"a rapid and affordable slide-free histology platform",
"intraoperative tumor margin assessment",
"low-resource settings",
"three",
"first",
"second",
"hematoxylin"
] |
Identification of rural courtyards’ utilization status using deep learning and machine learning methods on unmanned aerial vehicle images in north China | [
"Maojun Wang",
"Wenyu Xu",
"Guangzhong Cao",
"Tao Liu"
] | The issue of unoccupied or abandoned homesteads (courtyards) in China emerges given the increasing aging population, rapid urbanization and massive rural-urban migration. From the aspect of rural vitalization, land-use planning, and policy making, determining the number of unoccupied courtyards is important. Field and questionnaire-based surveys were currently the main approaches, but these traditional methods were often expensive and laborious. A new workflow is explored using deep learning and machine learning algorithms on unmanned aerial vehicle (UAV) images. Initially, features of the built environment were extracted using deep learning to evaluate the courtyard management, including extracting complete or collapsed farmhouses by Alexnet, detecting solar water heaters by YOLOv5s, calculating green looking ratio (GLR) by FCN. Their precisions exceeded 98%. Then, seven machine learning algorithms (Adaboost, binomial logistic regression, neural network, random forest, support vector machine, decision trees, and XGBoost algorithms) were applied to identify the rural courtyards’ utilization status. The Adaboost algorithm showed the best performance with the comprehensive consideration of most metrics (Accuracy: 0.933, Precision: 0.932, Recall: 0.984, F1-score: 0.957). Results showed that identifying the courtyards’ utilization statuses based on the courtyard built environment is feasible. It is transferable and cost-effective for large-scale village surveys, and may contribute to the intensive and sustainable approach to rural land use. | 10.1007/s12273-023-1099-9 | identification of rural courtyards’ utilization status using deep learning and machine learning methods on unmanned aerial vehicle images in north china | the issue of unoccupied or abandoned homesteads (courtyards) in china emerges given the increasing aging population, rapid urbanization and massive rural-urban migration. from the aspect of rural vitalization, land-use planning, and policy making, determining the number of unoccupied courtyards is important. field and questionnaire-based surveys were currently the main approaches, but these traditional methods were often expensive and laborious. a new workflow is explored using deep learning and machine learning algorithms on unmanned aerial vehicle (uav) images. initially, features of the built environment were extracted using deep learning to evaluate the courtyard management, including extracting complete or collapsed farmhouses by alexnet, detecting solar water heaters by yolov5s, calculating green looking ratio (glr) by fcn. their precisions exceeded 98%. then, seven machine learning algorithms (adaboost, binomial logistic regression, neural network, random forest, support vector machine, decision trees, and xgboost algorithms) were applied to identify the rural courtyards’ utilization status. the adaboost algorithm showed the best performance with the comprehensive consideration of most metrics (accuracy: 0.933, precision: 0.932, recall: 0.984, f1-score: 0.957). results showed that identifying the courtyards’ utilization statuses based on the courtyard built environment is feasible. it is transferable and cost-effective for large-scale village surveys, and may contribute to the intensive and sustainable approach to rural land use. | [
"the issue",
"unoccupied or abandoned homesteads",
"courtyards",
"china",
"the increasing aging population",
"rapid urbanization",
"massive rural-urban migration",
"the aspect",
"rural vitalization",
"land-use planning",
"policy making",
"the number",
"unoccupied courtyards",
"field",
"questionnaire-based surveys",
"the main approaches",
"these traditional methods",
"a new workflow",
"deep learning and machine learning algorithms",
"unmanned aerial vehicle",
"(uav) images",
"features",
"the built environment",
"deep learning",
"the courtyard management",
"complete or collapsed farmhouses",
"alexnet",
"solar water heaters",
"yolov5s",
"green looking ratio",
"glr",
"fcn",
"their precisions",
"seven machine learning algorithms",
"adaboost, binomial logistic regression",
"neural network",
"random forest",
"support vector machine",
"decision trees",
"xgboost algorithms",
"the rural courtyards’ utilization status",
"the adaboost algorithm",
"the best performance",
"the comprehensive consideration",
"most metrics",
"accuracy",
"precision",
"recall",
"f1-score",
"results",
"the courtyards’ utilization statuses",
"the courtyard built environment",
"it",
"large-scale village surveys",
"the intensive and sustainable approach",
"rural land use",
"china",
"98%",
"seven",
"0.933",
"0.932",
"0.984",
"0.957"
] |
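The second stage of the workflow described above is a conventional classifier over the features the deep networks extract. A minimal sketch of that stage with AdaBoost, the best performer in the study; the feature names and data below are invented placeholders:

```python
# Stage 2 sketch: tabular built-environment features -> AdaBoost classifier.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(0, 2, 500),   # farmhouse collapsed? (from the AlexNet stage)
    rng.integers(0, 2, 500),   # solar water heater present? (from YOLOv5s)
    rng.random(500),           # green looking ratio (from the FCN stage)
])
y = rng.integers(0, 2, 500)    # courtyard occupied vs. unoccupied (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```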
A Transfer Learning-Based CNN Deep Learning Model for Unfavorable Driving State Recognition | [
"Jichi Chen",
"Hong Wang",
"Enqiu He"
] | The detection of unfavorable driving states (UDS) of drivers based on electroencephalogram (EEG) measures has received continuous attention from extensive scholars on account of directly reflecting brain neural activity with high temporal resolution and low risk of being deceived. However, the existing EEG-based driver UDS detection methods involve limited exploration of the functional connectivity patterns and interaction relationships within the brain network. Therefore, there is still room for improvement in the accuracy of detection. In this project, we propose three pretrained convolutional neural network (CNN)-based automatic detection frameworks for UDS of drivers with 30-channel EEG signals. The frameworks are investigated by adjusting the learning rate and choosing the optimization solver, etc. Two different conditions of driving experiments are performed, collecting EEG signals from sixteen subjects. The acquired 1-dimensional 30-channel EEG signals are converted into 2-dimensional matrices by the Granger causality (GC) method to form the functional connectivity graphs of the brain (FCGB). Then, the FCGB are fed into pretrained deep learning models that employed transfer learning strategy for feature extraction and judgment of different EEG signal types. Furthermore, we adopt two visualization interpretability techniques, named, activation visualization and gradient-weighted class activation mapping (Grad-CAM) for better visualizing and understanding the predictions of the pretrained models after fine-tuning. The experimental outcomes show that Resnet 18 model yields the highest average recognition accuracy of 90% using the rmsprop optimizer with a learning rate of 1e − 3. The overall outcomes suggest that cooperating of biologically inspired functional connectivity graphs of the brain and pretrained transfer learning algorithms is a prospective approach in reducing the rate of major traffic accidents caused by driver unfavorable driving states. | 10.1007/s12559-023-10196-7 | a transfer learning-based cnn deep learning model for unfavorable driving state recognition | the detection of unfavorable driving states (uds) of drivers based on electroencephalogram (eeg) measures has received continuous attention from extensive scholars on account of directly reflecting brain neural activity with high temporal resolution and low risk of being deceived. however, the existing eeg-based driver uds detection methods involve limited exploration of the functional connectivity patterns and interaction relationships within the brain network. therefore, there is still room for improvement in the accuracy of detection. in this project, we propose three pretrained convolutional neural network (cnn)-based automatic detection frameworks for uds of drivers with 30-channel eeg signals. the frameworks are investigated by adjusting the learning rate and choosing the optimization solver, etc. two different conditions of driving experiments are performed, collecting eeg signals from sixteen subjects. the acquired 1-dimensional 30-channel eeg signals are converted into 2-dimensional matrices by the granger causality (gc) method to form the functional connectivity graphs of the brain (fcgb). then, the fcgb are fed into pretrained deep learning models that employed transfer learning strategy for feature extraction and judgment of different eeg signal types. 
furthermore, we adopt two visualization interpretability techniques, named, activation visualization and gradient-weighted class activation mapping (grad-cam) for better visualizing and understanding the predictions of the pretrained models after fine-tuning. the experimental outcomes show that resnet 18 model yields the highest average recognition accuracy of 90% using the rmsprop optimizer with a learning rate of 1e − 3. the overall outcomes suggest that cooperating of biologically inspired functional connectivity graphs of the brain and pretrained transfer learning algorithms is a prospective approach in reducing the rate of major traffic accidents caused by driver unfavorable driving states. | [
"the detection",
"unfavorable driving states",
"uds",
"drivers",
"electroencephalogram (eeg) measures",
"continuous attention",
"extensive scholars",
"account",
"brain neural activity",
"high temporal resolution",
"low risk",
"the existing eeg-based driver uds detection methods",
"limited exploration",
"the functional connectivity patterns",
"interaction relationships",
"the brain network",
"room",
"improvement",
"the accuracy",
"detection",
"this project",
"we",
"three pretrained convolutional neural network",
"cnn)-based automatic detection frameworks",
"uds",
"drivers",
"30-channel eeg signals",
"the frameworks",
"the learning rate",
"the optimization",
"two different conditions",
"driving experiments",
"eeg signals",
"sixteen subjects",
"the acquired 1-dimensional 30-channel eeg signals",
"2-dimensional matrices",
"the granger causality",
"(gc) method",
"the functional connectivity graphs",
"the brain",
"fcgb",
"the fcgb",
"pretrained deep learning models",
"that",
"transfer learning strategy",
"feature extraction",
"judgment",
"different eeg signal types",
"we",
"two visualization interpretability techniques",
"grad-cam",
"better visualizing",
"the predictions",
"the pretrained models",
"fine-tuning",
"the experimental outcomes",
"resnet",
"18 model",
"the highest average recognition accuracy",
"90%",
"the rmsprop optimizer",
"a learning rate",
"the overall outcomes",
"biologically inspired functional connectivity graphs",
"the brain",
"pretrained transfer learning algorithms",
"a prospective approach",
"the rate",
"major traffic accidents",
"driver unfavorable driving states",
"three",
"30",
"two",
"1",
"30",
"2",
"fed",
"two",
"18",
"90%",
"1e −",
"3"
] |
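The pipeline above turns multichannel EEG into a Granger-causality connectivity matrix and classifies it with a pretrained CNN. A sketch of both steps, assuming statsmodels and torchvision >= 0.13 (pretrained weights download on first use); the channel count, lag order, and two-class head are placeholders, whereas the study itself uses 30-channel EEG:

```python
# Sketch: pairwise Granger-causality (GC) matrix from EEG, resized and fed
# to a pretrained ResNet-18 for transfer learning (illustrative only).
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.stattools import grangercausalitytests
from torchvision.models import resnet18

def gc_matrix(eeg, maxlag=2):
    """eeg: (channels, samples); entry [i, j] = F-statistic for 'j causes i'."""
    n = eeg.shape[0]
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                res = grangercausalitytests(
                    np.column_stack([eeg[i], eeg[j]]), maxlag=maxlag, verbose=False)
                M[i, j] = res[maxlag][0]["ssr_ftest"][0]
    return M

eeg = np.random.randn(8, 256)                      # toy stand-in signals
fcgb = torch.tensor(gc_matrix(eeg), dtype=torch.float32)
img = nn.functional.interpolate(fcgb[None, None], size=(224, 224))

net = resnet18(weights="IMAGENET1K_V1")            # transfer-learning backbone
net.fc = nn.Linear(net.fc.in_features, 2)          # normal vs. unfavorable state
logits = net(img.expand(-1, 3, -1, -1))            # replicate to 3 channels
```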
A hybrid deep model with cumulative learning for few-shot learning | [
"Jiehao Liu",
"Zhao Yang",
"Liufei Luo",
"Mingkai Luo",
"Luyu Hu",
"Jiahao Li"
] | Few-shot learning (FSL) aims to recognize unseen classes with only a few samples for each class. This challenging research endeavors to narrow the gap between the computer vision technology and the human visual system. Recently, mainstream approaches for FSL can be grouped into meta-learning and classification learning. These two methods train the FSL model from local and global classification viewpoints respectively. In our work, we find the former method can effectively learn transferable knowledge (generalization capacity) with an episodic training paradigm but encounters the problem of slow convergence. The latter method can build an essential classification ability quickly (classification capacity) with a mini-batch training paradigm but easily causes an over-fitting problem. In light of this issue, we propose a hybrid deep model with cumulative learning to tackle the FSL problem by absorbing the advantages of the both methods. The proposed hybrid deep model innovatively integrates meta-learning and classification learning (IMC) in a unified two-branch network framework in which a meta-learning branch and a classification learning branch can work simultaneously. Besides, by considering the different characteristics of the two branches, we propose a cumulative learning strategy to take care of both generalization capacity learning and classification capacity learning in our IMC model training. With the proposed method, the model can quickly build the basic classification capability at the initial stage and continually mine discriminative class information during the remaining training for better generalization. Extensive experiments on CIFAR-FS, FC100, mini-ImageNet and tiered-ImageNet datasets are implemented to demonstrate the promising performance of our method. | 10.1007/s11042-022-14218-8 | a hybrid deep model with cumulative learning for few-shot learning | few-shot learning (fsl) aims to recognize unseen classes with only a few samples for each class. this challenging research endeavors to narrow the gap between the computer vision technology and the human visual system. recently, mainstream approaches for fsl can be grouped into meta-learning and classification learning. these two methods train the fsl model from local and global classification viewpoints respectively. in our work, we find the former method can effectively learn transferable knowledge (generalization capacity) with an episodic training paradigm but encounters the problem of slow convergence. the latter method can build an essential classification ability quickly (classification capacity) with a mini-batch training paradigm but easily causes an over-fitting problem. in light of this issue, we propose a hybrid deep model with cumulative learning to tackle the fsl problem by absorbing the advantages of the both methods. the proposed hybrid deep model innovatively integrates meta-learning and classification learning (imc) in a unified two-branch network framework in which a meta-learning branch and a classification learning branch can work simultaneously. besides, by considering the different characteristics of the two branches, we propose a cumulative learning strategy to take care of both generalization capacity learning and classification capacity learning in our imc model training. with the proposed method, the model can quickly build the basic classification capability at the initial stage and continually mine discriminative class information during the remaining training for better generalization. 
extensive experiments on cifar-fs, fc100, mini-imagenet and tiered-imagenet datasets are implemented to demonstrate the promising performance of our method. | [
"few-shot learning",
"unseen classes",
"only a few samples",
"each class",
"this challenging research endeavors",
"the gap",
"the computer vision technology",
"the human visual system",
"mainstream approaches",
"fsl",
"meta-learning and classification learning",
"these two methods",
"the fsl model",
"local and global classification viewpoints",
"our work",
"we",
"the former method",
"transferable knowledge",
"generalization capacity",
"an episodic training paradigm",
"the problem",
"slow convergence",
"the latter method",
"an essential classification ability",
"classification capacity",
"a mini-batch training paradigm",
"an over-fitting problem",
"light",
"this issue",
"we",
"a hybrid deep model",
"cumulative learning",
"the fsl problem",
"the advantages",
"the both methods",
"the proposed hybrid deep model",
"meta-learning and classification learning",
"imc",
"a unified two-branch network framework",
"which",
"a meta-learning branch",
"a classification learning branch",
"the different characteristics",
"the two branches",
"we",
"a cumulative learning strategy",
"care",
"both generalization capacity learning and classification capacity",
"our imc model training",
"the proposed method",
"the model",
"the basic classification capability",
"the initial stage",
"continually mine discriminative class information",
"the remaining training",
"better generalization",
"extensive experiments",
"cifar-fs, fc100, mini",
"-",
"imagenet",
"tiered-imagenet datasets",
"the promising performance",
"our method",
"two",
"two",
"two"
] |
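One way to realize the cumulative-learning idea above is a schedule that lets the classification branch dominate early while the meta (episodic) branch is weighted in later. The parabolic schedule in this sketch is an assumption for illustration, not the paper's exact rule:

```python
# Cumulative weighting of two branch losses over training epochs.
import torch

def alpha(epoch, total):
    """Weight on the classification branch, decaying from 1 toward 0."""
    return 1.0 - (epoch / total) ** 2

loss_cls = torch.tensor(0.9)    # stand-in classification-branch loss
loss_meta = torch.tensor(1.4)   # stand-in meta-branch loss
for epoch in (0, 25, 50, 75, 99):
    a = alpha(epoch, 100)
    total_loss = a * loss_cls + (1.0 - a) * loss_meta
    print(f"epoch {epoch:2d}: alpha={a:.2f}, loss={total_loss.item():.3f}")
```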
Distributed source DOA estimation based on deep learning networks | [
"Quan Tian",
"Ruiyan Cai",
"Gongrun Qiu",
"Yang Luo"
] | With space electromagnetic environments becoming increasingly complex, the direction of arrival (DOA) estimation based on the point source model can no longer meet the requirements of spatial target location. Based on the characteristics of the distributed source, a new DOA estimation algorithm based on deep learning is proposed. The algorithm first maps the distributed source model into the point source model via a generative adversarial network (GAN) and further combines the subspace-based method to achieve central DOA estimation. Second, by constructing a deep neural network (DNN), the covariance matrix of the received signals is used as the input to estimate the angular spread of the distributed source. The experimental results show that the proposed algorithm can achieve better performance than the existing methods for a distributed source. | 10.1007/s11760-024-03402-y | distributed source doa estimation based on deep learning networks | with space electromagnetic environments becoming increasingly complex, the direction of arrival (doa) estimation based on the point source model can no longer meet the requirements of spatial target location. based on the characteristics of the distributed source, a new doa estimation algorithm based on deep learning is proposed. the algorithm first maps the distributed source model into the point source model via a generative adversarial network (gan) and further combines the subspace-based method to achieve central doa estimation. second, by constructing a deep neural network (dnn), the covariance matrix of the received signals is used as the input to estimate the angular spread of the distributed source. the experimental results show that the proposed algorithm can achieve better performance than the existing methods for a distributed source. | [
"space electromagnetic environments",
"the direction",
"arrival",
"(doa) estimation",
"the point source model",
"the requirements",
"spatial target location",
"the characteristics",
"the distributed source",
"a new doa estimation algorithm",
"deep learning",
"algorithm",
"the distributed source model",
"the point source model",
"a generative adversarial network",
"gan",
"the subspace-based method",
"central doa estimation",
"a deep neural network",
"dnn",
"the covariance matrix",
"the received signals",
"the input",
"the angular spread",
"the distributed source",
"the experimental results",
"the proposed algorithm",
"better performance",
"the existing methods",
"a distributed source",
"first",
"second"
] |
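The subspace stage mentioned above is classical: once the GAN has mapped the distributed source to a point-source model, a method such as MUSIC recovers the central DOA from the covariance eigenstructure. A NumPy sketch for a uniform linear array (the GAN mapping itself is not reproduced):

```python
# MUSIC pseudo-spectrum for one source on a uniform linear array.
import numpy as np

M, d = 8, 0.5                  # sensors, spacing in wavelengths
theta_true = np.deg2rad(20.0)
n = np.arange(M)[:, None]
A = np.exp(-2j * np.pi * d * n * np.sin(theta_true))       # steering vector
X = A @ (np.random.randn(1, 200) + 1j * np.random.randn(1, 200))
X += 0.1 * (np.random.randn(M, 200) + 1j * np.random.randn(M, 200))

R = X @ X.conj().T / X.shape[1]                            # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, :-1]                  # noise subspace (one source assumed)

grid = np.deg2rad(np.linspace(-90, 90, 361))
a = np.exp(-2j * np.pi * d * n * np.sin(grid))             # (M, grid)
P = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2     # MUSIC spectrum
print("estimated DOA (deg):", np.rad2deg(grid[np.argmax(P)]))
```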
Deep learning reconstruction for lumbar spine MRI acceleration: a prospective study | [
"Hui Tang",
"Ming Hong",
"Lu Yu",
"Yang Song",
"Mengqiu Cao",
"Lei Xiang",
"Yan Zhou",
"Shiteng Suo"
] | Background: We compared magnetic resonance imaging (MRI) turbo spin-echo images reconstructed using a deep learning technique (TSE-DL) with standard turbo spin-echo (TSE-SD) images of the lumbar spine regarding image quality and detection performance of common degenerative pathologies. Methods: This prospective, single-center study included 31 patients (15 males and 16 females; aged 51 ± 16 years (mean ± standard deviation)) who underwent lumbar spine exams with both TSE-SD and TSE-DL acquisitions for degenerative spine diseases. Images were analyzed by two radiologists and assessed for qualitative image quality using a 4-point Likert scale, quantitative signal-to-noise ratio (SNR) of anatomic landmarks, and detection of common pathologies. Paired-sample t, Wilcoxon, and McNemar tests, unweighted/linearly weighted Cohen κ statistics, and intraclass correlation coefficients were used. Results: Scan time for TSE-DL and TSE-SD protocols was 2:55 and 5:17 min:s, respectively. The overall image quality was either significantly higher for TSE-DL or not significantly different between TSE-SD and TSE-DL. TSE-DL demonstrated higher SNR and subject noise scores than TSE-SD. For pathology detection, the interreader agreement was substantial to almost perfect for TSE-DL, with κ values ranging from 0.61 to 1.00; the interprotocol agreement was almost perfect for both readers, with κ values ranging from 0.84 to 1.00. There was no significant difference in the diagnostic confidence or detection rate of common pathologies between the two sequences (p ≥ 0.081). Conclusions: TSE-DL allowed for a 45% reduction in scan time over TSE-SD in lumbar spine MRI without compromising the overall image quality and showed comparable detection performance of common pathologies in the evaluation of degenerative lumbar spine changes. Relevance statement: Deep learning-reconstructed lumbar spine MRI protocol enabled a 45% reduction in scan time compared with conventional reconstruction, with comparable image quality and detection performance of common degenerative pathologies. Key points: • Lumbar spine MRI with deep learning reconstruction has broad application prospects. • Deep learning reconstruction of lumbar spine MRI saved 45% scan time without compromising overall image quality. • When compared with standard sequences, deep learning reconstruction showed similar detection performance of common degenerative lumbar spine pathologies. Graphical Abstract | 10.1186/s41747-024-00470-0 | deep learning reconstruction for lumbar spine mri acceleration: a prospective study | background: we compared magnetic resonance imaging (mri) turbo spin-echo images reconstructed using a deep learning technique (tse-dl) with standard turbo spin-echo (tse-sd) images of the lumbar spine regarding image quality and detection performance of common degenerative pathologies. methods: this prospective, single-center study included 31 patients (15 males and 16 females; aged 51 ± 16 years (mean ± standard deviation)) who underwent lumbar spine exams with both tse-sd and tse-dl acquisitions for degenerative spine diseases. images were analyzed by two radiologists and assessed for qualitative image quality using a 4-point likert scale, quantitative signal-to-noise ratio (snr) of anatomic landmarks, and detection of common pathologies. paired-sample t, wilcoxon, and mcnemar tests, unweighted/linearly weighted cohen κ statistics, and intraclass correlation coefficients were used. results: scan time for tse-dl and tse-sd protocols was 2:55 and 5:17 min:s, respectively. 
the overall image quality was either significantly higher for tse-dl or not significantly different between tse-sd and tse-dl. tse-dl demonstrated higher snr and subject noise scores than tse-sd. for pathology detection, the interreader agreement was substantial to almost perfect for tse-dl, with κ values ranging from 0.61 to 1.00; the interprotocol agreement was almost perfect for both readers, with κ values ranging from 0.84 to 1.00. there was no significant difference in the diagnostic confidence or detection rate of common pathologies between the two sequences (p ≥ 0.081). conclusions: tse-dl allowed for a 45% reduction in scan time over tse-sd in lumbar spine mri without compromising the overall image quality and showed comparable detection performance of common pathologies in the evaluation of degenerative lumbar spine changes. relevance statement: deep learning-reconstructed lumbar spine mri protocol enabled a 45% reduction in scan time compared with conventional reconstruction, with comparable image quality and detection performance of common degenerative pathologies. key points: • lumbar spine mri with deep learning reconstruction has broad application prospects. • deep learning reconstruction of lumbar spine mri saved 45% scan time without compromising overall image quality. • when compared with standard sequences, deep learning reconstruction showed similar detection performance of common degenerative lumbar spine pathologies. graphical abstract | [
"backgroundwe",
"mri",
"a deep learning technique",
"tse-dl",
"standard turbo spin-echo (tse-sd) images",
"the lumbar spine",
"image quality and detection performance",
"common degenerative pathologies.methodsthis prospective, single-center study",
"31 patients",
"15 males",
"16 females",
"51 ±",
"± standard deviation",
"who",
"lumbar spine exams",
"both tse-sd and tse-dl acquisitions",
"degenerative spine diseases",
"images",
"two radiologists",
"qualitative image quality",
"a 4-point likert scale",
"noise",
"snr",
"anatomic landmarks",
"detection",
"common pathologies",
"paired-sample t",
"wilcoxon",
"mcnemar tests",
"cohen κ statistics",
"correlation coefficients",
"used.resultsscan time",
"tse-dl and tse-sd protocols",
"min",
":",
"s",
"the overall image quality",
"tse-dl",
"tse-sd and tse-dl. tse-dl",
"higher snr and subject noise scores",
"tse-sd",
"pathology detection",
"the interreader agreement",
"tse-dl",
"κ values",
"the interprotocol agreement",
"both readers",
"κ values",
"no significant difference",
"the diagnostic confidence or detection rate",
"common pathologies",
"the two sequences",
"p ≥",
"0.081).conclusionstse-dl",
"a 45% reduction",
"scan time",
"tse-sd",
"lumbar spine mri",
"the overall image quality",
"comparable detection performance",
"common pathologies",
"the evaluation",
"degenerative lumbar spine changes.relevance",
"learning-reconstructed lumbar spine mri protocol",
"a 45% reduction",
"scan time",
"conventional reconstruction",
"comparable image quality and detection performance",
"lumbar spine mri",
"deep learning reconstruction",
"broad application prospects.•",
"reconstruction",
"lumbar spine mri",
"45% scan time",
"overall image",
"standard sequences",
"deep learning reconstruction",
"similar detection performance",
"common degenerative lumbar spine",
"31",
"15",
"16",
"51 ± 16 years",
"two",
"4",
"mcnemar",
"used.resultsscan",
"2:55",
"5:17",
"0.61",
"1.00",
"0.84",
"1.00",
"two",
"45%",
"45%",
"45%"
] |
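The agreement statistic this study leans on, linearly weighted Cohen κ between two readers' 4-point Likert scores, is available directly in scikit-learn. A minimal sketch with invented scores:

```python
# Linearly weighted Cohen kappa between two readers (scores are made up).
from sklearn.metrics import cohen_kappa_score

reader1 = [4, 3, 4, 2, 3, 4, 4, 3]
reader2 = [4, 3, 3, 2, 3, 4, 4, 4]
print(cohen_kappa_score(reader1, reader2, weights="linear"))
```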
Machine learning vs deep learning in stock market investment: an international evidence | [
"Jing Hao",
"Feng He",
"Feng Ma",
"Shibo Zhang",
"Xiaotao Zhang"
] | Machine learning and deep learning are powerful tools for quantitative investment. To examine the effectiveness of the models in different markets, this paper applies random forest and DNN models to forecast stock prices and construct statistical arbitrage strategies in five stock markets, including mainland China, the United States, the United Kingdom, Canada and Japan. Each model is applied to the price of major stock indices constituting stocks in these markets from 2005 to 2020 to construct a long-short portfolio with 20 selected stocks by the model. The results show that a particular model obtains significantly different profits in different markets, among which DNN has the best performance, especially in the Chinese stock market. We find that DNN models generally perform better than other machine learning models in all markets. | 10.1007/s10479-023-05286-6 | machine learning vs deep learning in stock market investment: an international evidence | machine learning and deep learning are powerful tools for quantitative investment. to examine the effectiveness of the models in different markets, this paper applies random forest and dnn models to forecast stock prices and construct statistical arbitrage strategies in five stock markets, including mainland china, the united states, the united kingdom, canada and japan. each model is applied to the price of major stock indices constituting stocks in these markets from 2005 to 2020 to construct a long-short portfolio with 20 selected stocks by the model. the results show that a particular model obtains significantly different profits in different markets, among which dnn has the best performance, especially in the chinese stock market. we find that dnn models generally perform better than other machine learning models in all markets. | [
"machine learning",
"deep learning",
"powerful tools",
"quantitative investment",
"the effectiveness",
"the models",
"different markets",
"this paper",
"random forest",
"dnn models",
"stock prices",
"statistical arbitrage strategies",
"five stock markets",
"mainland china",
"the united states",
"the united kingdom",
"canada",
"japan",
"each model",
"the price",
"major stock indices",
"stocks",
"these markets",
"a long-short portfolio",
"20 selected stocks",
"the model",
"the results",
"the a particular model",
"significantly different profits",
"different markets",
"which",
"dnn",
"the best performance",
"the chinese stock market",
"we",
"dnn models",
"other machine learning models",
"all markets",
"five",
"china",
"the united states",
"the united kingdom",
"canada",
"japan",
"2005",
"2020",
"20",
"chinese"
] |
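The comparison this paper runs, random forest against a neural network on the same tabular prediction task, has a compact generic form. A sketch on synthetic data (nothing here reflects the paper's indices, features, or portfolio construction):

```python
# Side-by-side fit of a random forest and a small neural net on one task.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 10))          # lagged features (stand-in)
y = 0.3 * X[:, 0] + np.tanh(X[:, 1]) + 0.1 * rng.standard_normal(1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (RandomForestRegressor(random_state=0),
              MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                           random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, r2_score(y_te, model.predict(X_te)))
```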
Deep learning implementations in mining applications: a compact critical review | [
"Faris Azhari",
"Charlotte C. Sennersten",
"Craig A. Lindley",
"Ewan Sellers"
] | Deep learning is a sub-field of artificial intelligence that combines feature engineering and classification in one method. It is a data-driven technique that optimises a predictive model via learning from a large dataset. Digitisation in industry has included acquisition and storage of a variety of large datasets for interpretation and decision making. This has led to the adoption of deep learning in different industries, such as transportation, manufacturing, medicine and agriculture. However, in the mining industry, the adoption and development of new technologies, including deep learning methods, have not progressed at the same rate as in other industries. Nevertheless, in the past 5 years, applications of deep learning have been increasing in the mining research space. Deep learning has been implemented to solve a variety of problems related to mine exploration, ore and metal extraction and reclamation processes. The increased automation adoption in mining provides an avenue for wider application of deep learning as an element within a mine automation framework. This work provides a compact, comprehensive review of deep learning implementations in mining-related applications. The trends of these implementations in terms of years, venues, deep learning network types, tasks and general implementation, categorised by the value chain operations of exploration, extraction and reclamation, are outlined. The review highlights shortcomings in research progress, such as the proprietary nature of data; small datasets (tens to thousands of data points) limited to single operations with unique geology, mine design and equipment; the lack of large-scale, publicly available mining-related datasets; and limited sensor types, which lead to the majority of applications being image-based analysis. Gaps identified for future research and application include the usage of a wider range of sensor data, improved understanding of the outputs by mining practitioners, adversarial testing of the deep learning models, and the development of public datasets covering the extensive range of conditions experienced in mines. | 10.1007/s10462-023-10500-9 | deep learning implementations in mining applications: a compact critical review | deep learning is a sub-field of artificial intelligence that combines feature engineering and classification in one method. it is a data-driven technique that optimises a predictive model via learning from a large dataset. digitisation in industry has included acquisition and storage of a variety of large datasets for interpretation and decision making. this has led to the adoption of deep learning in different industries, such as transportation, manufacturing, medicine and agriculture. however, in the mining industry, the adoption and development of new technologies, including deep learning methods, have not progressed at the same rate as in other industries. nevertheless, in the past 5 years, applications of deep learning have been increasing in the mining research space. deep learning has been implemented to solve a variety of problems related to mine exploration, ore and metal extraction and reclamation processes. the increased automation adoption in mining provides an avenue for wider application of deep learning as an element within a mine automation framework. this work provides a compact, comprehensive review of deep learning implementations in mining-related applications. the trends of these implementations in terms of years, venues, deep learning network types, tasks and general implementation, categorised by the value chain operations of exploration, extraction and reclamation, are outlined. the review highlights shortcomings in research progress, such as the proprietary nature of data; small datasets (tens to thousands of data points) limited to single operations with unique geology, mine design and equipment; the lack of large-scale, publicly available mining-related datasets; and limited sensor types, which lead to the majority of applications being image-based analysis. gaps identified for future research and application include the usage of a wider range of sensor data, improved understanding of the outputs by mining practitioners, adversarial testing of the deep learning models, and the development of public datasets covering the extensive range of conditions experienced in mines. | [
"deep learning",
"a sub",
"-",
"field",
"artificial intelligence",
"that",
"feature engineering",
"classification",
"one method",
"it",
"a data-driven technique",
"that",
"a predictive model",
"a large dataset",
"digitisation",
"industry",
"acquisition",
"storage",
"a variety",
"large datasets",
"interpretation",
"decision making",
"this",
"the adoption",
"deep learning",
"different industries",
"transportation",
"manufacturing",
"medicine",
"agriculture",
"the mining industry",
"new technologies",
"deep learning methods",
"the same rate",
"other industries",
"the past 5 years",
"applications",
"deep learning",
"the mining research space",
"deep learning",
"a variety",
"problems",
"mine exploration, ore and metal extraction and reclamation processes",
"the increased automation adoption",
"mining",
"an avenue",
"wider application",
"deep learning",
"an element",
"a mine automation framework",
"this work",
"a compact, comprehensive review",
"deep learning implementations",
"mining-related applications",
"the trends",
"these implementations",
"terms",
"years",
"venues",
"deep learning network types",
"tasks",
"general implementation",
"the value chain operations",
"exploration",
"extraction",
"reclamation",
"the review",
"shortcomings",
"progress",
"the research context",
"the proprietary nature",
"data",
"small datasets",
"tens to thousands",
"data points",
"single operations",
"unique geology",
"mine design",
"equipment",
"large scale publicly available mining related datasets",
"limited sensor types",
"the majority",
"applications",
"image-based analysis",
"gaps",
"future research",
"application",
"the usage",
"a wider range",
"sensor data",
"understanding",
"the outputs",
"mining practitioners",
"adversarial testing",
"the deep learning models",
"development",
"public datasets",
"the extensive range",
"conditions",
"mines",
"one",
"the past 5 years",
"tens to thousands"
] |
Surface wave inversion with unknown number of soil layers based on a hybrid learning procedure of deep learning and genetic algorithm | [
"Zan Zhou",
"Thomas Man-Hoi Lok",
"Wan-Huan Zhou"
] | Surface wave inversion is a key step in the application of surface waves to soil velocity profiling. Currently, common practice in the inversion process is either to assume that the number of soil layers is known before using heuristic search algorithms to compute the shear wave velocity profile, or to treat the number of soil layers as an optimization variable. However, an improper selection of the number of layers may lead to an incorrect shear wave velocity profile. In this study, a deep learning and genetic algorithm hybrid learning procedure is proposed to perform the surface wave inversion without the need to assume the number of soil layers. First, a deep neural network is adapted to learn from a large number of synthetic dispersion curves for inferring the layer number. Then, the shear-wave velocity profile is determined by a genetic algorithm with the known layer number. When this procedure is applied to both simulated and real-world cases, the results indicate that the proposed method is reliable and efficient for surface wave inversion. | 10.1007/s11803-024-2240-1 | surface wave inversion with unknown number of soil layers based on a hybrid learning procedure of deep learning and genetic algorithm | surface wave inversion is a key step in the application of surface waves to soil velocity profiling. currently, common practice in the inversion process is either to assume that the number of soil layers is known before using heuristic search algorithms to compute the shear wave velocity profile, or to treat the number of soil layers as an optimization variable. however, an improper selection of the number of layers may lead to an incorrect shear wave velocity profile. in this study, a deep learning and genetic algorithm hybrid learning procedure is proposed to perform the surface wave inversion without the need to assume the number of soil layers. first, a deep neural network is adapted to learn from a large number of synthetic dispersion curves for inferring the layer number. then, the shear-wave velocity profile is determined by a genetic algorithm with the known layer number. when this procedure is applied to both simulated and real-world cases, the results indicate that the proposed method is reliable and efficient for surface wave inversion. | [
"surface wave inversion",
"a key step",
"the application",
"surface waves",
"velocity profiling",
"a common practice",
"the process",
"inversion",
"the number",
"soil layers",
"heuristic search algorithms",
"the shear wave velocity profile",
"the number",
"soil layers",
"an optimization variable",
"an improper selection",
"the number",
"layers",
"an incorrect shear wave velocity profile",
"this study",
"a deep learning and genetic algorithm hybrid learning procedure",
"the surface wave inversion",
"the need",
"the number",
"soil layers",
"a deep neural network",
"a large number",
"synthetic dispersion curves",
"the layer number",
"the shear-wave velocity profile",
"a genetic algorithm",
"the known layer number",
"this procedure",
"both simulated and real-world cases",
"the results",
"the proposed method",
"surface wave inversion",
"first"
] |
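A minimal sketch of the two-stage idea this abstract describes, assuming a toy dispersion-curve forward model: a small neural classifier first infers the number of soil layers, then a simple genetic algorithm searches for the shear-wave velocities with that layer count fixed. The network size, GA settings, and the forward model below are illustrative stand-ins, not the authors' implementation.

```python
# Stage 1: classifier maps dispersion curves -> layer count.
# Stage 2: genetic algorithm searches velocities with the count fixed.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def toy_dispersion_curve(vels, n_freq=32):
    """Toy stand-in for a surface-wave forward model (assumption)."""
    freqs = np.linspace(1.0, 10.0, n_freq)
    # Higher frequencies 'see' shallower (earlier) layers more strongly.
    weights = np.exp(-np.outer(freqs, np.arange(len(vels))) / 4.0)
    return weights @ vels / weights.sum(axis=1)

# Stage 1: train on synthetic curves labelled with their layer number.
X, y = [], []
for _ in range(2000):
    n_layers = rng.integers(2, 6)
    vels = np.sort(rng.uniform(100, 800, n_layers))
    X.append(toy_dispersion_curve(vels))
    y.append(n_layers)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)

# Stage 2: GA over velocity profiles with the inferred layer count.
observed = toy_dispersion_curve(np.array([150.0, 300.0, 550.0]))
n_layers = int(clf.predict([observed])[0])

pop = rng.uniform(100, 800, size=(200, n_layers))
for gen in range(100):
    misfit = np.linalg.norm(
        np.array([toy_dispersion_curve(p) for p in pop]) - observed, axis=1)
    elite = pop[np.argsort(misfit)[:40]]                    # selection
    parents = elite[rng.integers(0, 40, size=(200, 2))]
    mask = rng.random((200, n_layers)) < 0.5                # uniform crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    jitter = rng.normal(0, 10, pop.shape)                   # mutation
    pop = np.clip(pop + jitter * (rng.random(pop.shape) < 0.1), 100, 800)

best = pop[np.argmin([np.linalg.norm(toy_dispersion_curve(p) - observed)
                      for p in pop])]
print("inferred layers:", n_layers, "estimated profile:", np.sort(best))
```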
An Extensive Review on Deep Learning and Machine Learning Intervention in Prediction and Classification of Types of Aneurysms | [
"Renugadevi Ammapalayam Sinnaswamy",
"Natesan Palanisamy",
"Kavitha Subramaniam",
"Suresh Muthusamy",
"Ravita Lamba",
"Sreejith Sekaran"
] | An aneurysm (rupture of blood vessels) may occur in the cerebrum, abdominal aorta or thoracic aorta of humans and has a high fatality rate. Advances in artificial intelligence technologies, specifically machine learning algorithms and deep learning models, have been applied to predict aneurysms, which may reduce the death rate. The main objective of this paper is to provide a review of various algorithms and models for the early prediction of the various types of aneurysms. The focused literature review covers preferred journals from 2007 to 2022 and examines parameters such as the way images were collected, the techniques used, the number of images in the dataset, performance metrics and future work. It summarizes advances in the prediction of aneurysms using machine learning algorithms, from non-linear kernel support regression algorithms to 3D U-Net deep learning architectures, covering the pipeline from CT scan images to final performance analysis. The sensitivity, specificity and area under the receiver operating characteristic curve ranged from 0.7 to 1 for abdominal aortic aneurysm detection and intracranial aneurysm detection. Thoracic aortic aneurysm has received little attention in the literature review, so the prediction of thoracic aortic aneurysm using machine learning as well as deep learning models is recommended. | 10.1007/s11277-023-10532-y | an extensive review on deep learning and machine learning intervention in prediction and classification of types of aneurysms | an aneurysm (rupture of blood vessels) may occur in the cerebrum, abdominal aorta or thoracic aorta of humans and has a high fatality rate. advances in artificial intelligence technologies, specifically machine learning algorithms and deep learning models, have been applied to predict aneurysms, which may reduce the death rate. the main objective of this paper is to provide a review of various algorithms and models for the early prediction of the various types of aneurysms. the focused literature review covers preferred journals from 2007 to 2022 and examines parameters such as the way images were collected, the techniques used, the number of images in the dataset, performance metrics and future work. it summarizes advances in the prediction of aneurysms using machine learning algorithms, from non-linear kernel support regression algorithms to 3d u-net deep learning architectures, covering the pipeline from ct scan images to final performance analysis. the sensitivity, specificity and area under the receiver operating characteristic curve ranged from 0.7 to 1 for abdominal aortic aneurysm detection and intracranial aneurysm detection. thoracic aortic aneurysm has received little attention in the literature review, so the prediction of thoracic aortic aneurysm using machine learning as well as deep learning models is recommended. | [
"aneurysm",
"rupture",
"blood vessels",
"the cerebrum",
"abdominal aorta",
"aorta",
"humans",
"which",
"a high fatal rate",
"the advancement",
"the artificial technologies",
"specifically machine learning algorithms",
"deep learning models",
"the aneurysm",
"which",
"the death rate",
"the main objective",
"this paper",
"the review",
"various algorithms",
"models",
"the early prediction",
"the various types",
"aneurysms",
"the focused literature review",
"the preferred journals",
"various parameters",
"way",
"images",
"the techniques",
"images",
"data set",
"performance metrics",
"future work",
"the summarized overview",
"advances",
"prediction",
"aneurysms",
"the machine learning algorithms",
"non linear kernel support regression algorithm",
"3d unet architecture",
"deep learning models",
"ct scan images",
"final performance analysis",
"prediction",
"the range",
"sensitivity",
"specificity",
"area",
"operating characteristic",
"the abdominal aortic aneurysm detection",
"intracranial aneurysm detection",
"the thoracic aortic aneurysm",
"the literature review",
"the prediction",
"thoracic aortic aneurysm",
"machine learning",
"deep learning model",
"2007",
"2022",
"3d",
"scan",
"0",
"7 to 1",
"intracranial aneurysm detection",
"the thoracic aortic aneurysm"
] |
Deep learning based active image steganalysis: a review | [
"Punam Bedi",
"Anuradha Singhal",
"Veenu Bhasin"
] | Steganalysis plays a vital role in cybersecurity in today’s digital era, where malicious information can easily be exchanged across web pages. Steganography techniques are used to hide data in an object such that the existence of the hidden information is also obscured. Steganalysis is the process of detecting steganography within an object and can be categorized as active or passive steganalysis. Passive steganalysis tries to classify a given object as clean or modified. Active steganalysis aims to extract further details about the hidden contents, such as the length of the embedded message, the region of the inserted message and the key used for embedding, which cybersecurity experts require for comprehensive analysis. Images, being a viable medium for information exchange in the era of the internet and social media, are the most susceptible source for such transmission. Many researchers have developed techniques to detect and raise alerts about such counterfeit exchanges over the internet. The literature on passive and active image steganalysis addresses these issues by detecting such obscured communication and unveiling its details, respectively. This paper provides a systematic and comprehensive review of work done on active image steganalysis using deep learning techniques. This review will help new researchers become aware of, and build a strong foundation in, the literature on active image steganalysis using deep learning techniques. The paper also covers the various steganographic algorithms, datasets and performance evaluation metrics used in the literature. Open research challenges and possible future research directions are also discussed. | 10.1007/s13198-023-02203-9 | deep learning based active image steganalysis: a review | steganalysis plays a vital role in cybersecurity in today’s digital era, where malicious information can easily be exchanged across web pages. steganography techniques are used to hide data in an object such that the existence of the hidden information is also obscured. steganalysis is the process of detecting steganography within an object and can be categorized as active or passive steganalysis. passive steganalysis tries to classify a given object as clean or modified. active steganalysis aims to extract further details about the hidden contents, such as the length of the embedded message, the region of the inserted message and the key used for embedding, which cybersecurity experts require for comprehensive analysis. images, being a viable medium for information exchange in the era of the internet and social media, are the most susceptible source for such transmission. many researchers have developed techniques to detect and raise alerts about such counterfeit exchanges over the internet. the literature on passive and active image steganalysis addresses these issues by detecting such obscured communication and unveiling its details, respectively. this paper provides a systematic and comprehensive review of work done on active image steganalysis using deep learning techniques. this review will help new researchers become aware of, and build a strong foundation in, the literature on active image steganalysis using deep learning techniques. the paper also covers the various steganographic algorithms, datasets and performance evaluation metrics used in the literature. open research challenges and possible future research directions are also discussed. | [
"steganalysis",
"a vital role",
"cybersecurity",
"today’s digital era",
"exchange",
"malicious information",
"web pages",
"steganography techniques",
"data",
"an object",
"the existence",
"hidden information",
"steganalysis",
"the process",
"detection",
"steganography",
"an object",
"active and passive steganalysis",
"passive steganalysis",
"a given object",
"a clean or modified object",
"active steganalysis",
"more details",
"hidden contents",
"length",
"embedded message",
"region",
"inserted message",
"cybersecurity experts",
"comprehensive analysis",
"images",
"a viable source",
"exchange",
"information",
"the era",
"internet",
"social media",
"the most susceptible source",
"such transmission",
"many researchers",
"techniques",
"such counterfeit exchanges",
"the internet",
"literature",
"passive and active image steganalysis techniques",
"these issues",
"details",
"such obscured communication",
"this paper",
"a systematic and comprehensive review",
"work",
"active image steganalysis techniques",
"deep learning techniques",
"this review",
"the new researchers",
"a strong foundation",
"literature",
"active image steganalysis",
"deep learning techniques",
"the paper",
"various steganographic algorithms",
"dataset",
"performance evaluation metrics",
"literature",
"open research challenges",
"possible future research directions",
"the paper",
"today"
] |
Early detection and prediction of Heart Disease using Wearable devices and Deep Learning algorithms | [
"S. Sivasubramaniam",
"S. P. Balamurugan"
] | In this paper, we propose a multimodal deep learning algorithm that combines convolutional neural networks (CNNs) and long short-term memory (LSTM) networks for early detection and prediction of heart disease using data collected from wearable devices. The combined multimodal deep learning algorithm is evaluated in terms of precision and accuracy. First, we consider ECG and PPG signals collected from the dataset. Then, the features from ECG and PPG are extracted using the CNN and the accelerometer features are extracted using the LSTM model. The combined features are then classified using a hybrid CNN-LSTM network architecture. The algorithm is evaluated using a publicly available benchmark dataset. The model achieved an accuracy of 99.33% in detecting heart disease, outperforming several state-of-the-art deep learning models. In addition, the model can predict the likelihood of developing heart disease with a precision of 99.33%, providing an early warning system for at-risk patients. The results demonstrate the potential of a multimodal approach for early detection and prediction of heart disease using wearable devices and deep learning algorithms. | 10.1007/s11042-024-19127-6 | early detection and prediction of heart disease using wearable devices and deep learning algorithms | in this paper, we propose a multimodal deep learning algorithm that combines convolutional neural networks (cnns) and long short-term memory (lstm) networks for early detection and prediction of heart disease using data collected from wearable devices. the combined multimodal deep learning algorithm is evaluated in terms of precision and accuracy. first, we consider ecg and ppg signals collected from the dataset. then, the features from ecg and ppg are extracted using the cnn and the accelerometer features are extracted using the lstm model. the combined features are then classified using a hybrid cnn-lstm network architecture. the algorithm is evaluated using a publicly available benchmark dataset. the model achieved an accuracy of 99.33% in detecting heart disease, outperforming several state-of-the-art deep learning models. in addition, the model can predict the likelihood of developing heart disease with a precision of 99.33%, providing an early warning system for at-risk patients. the results demonstrate the potential of a multimodal approach for early detection and prediction of heart disease using wearable devices and deep learning algorithms. | [
"this paper",
"we",
"a multimodal deep learning algorithm",
"that",
"cnns",
"lstm",
"early detection",
"prediction",
"heart disease",
"data",
"wearable devices",
"this combined multi-model deep learning algorithm",
"the accurate precision",
"accuracy value",
"we",
"ecg and ppg signals",
"which",
"the dataset",
"the features",
"ecg",
"ppg",
"cnn",
"the accelerometer features",
"the lstm model",
"the combined features",
"hybrid cnn-lstm network architecture",
"the algorithm",
"a publicly available benchmark dataset",
"the model",
"an accuracy",
"99.33%",
"heart disease",
"the-art",
"addition",
"the model",
"the likelihood",
"heart disease",
"a precision",
"99.33%",
"an early warning system",
"risk",
"the results",
"the potential",
"a multimodal approach",
"early detection",
"prediction",
"heart disease",
"wearable devices",
"deep learning algorithms",
"first",
"cnn",
"cnn",
"99.33%",
"99.33%"
] |
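A minimal PyTorch sketch of the multimodal fusion this abstract describes: a 1-D CNN encodes ECG/PPG windows, an LSTM encodes accelerometer sequences, and the concatenated features feed a binary classifier. All shapes, layer sizes, and the random inputs are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HybridCnnLstm(nn.Module):
    def __init__(self, ecg_channels=2, acc_features=3, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(              # ECG/PPG branch
            nn.Conv1d(ecg_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # -> (batch, 64, 1)
        )
        self.lstm = nn.LSTM(acc_features, hidden, batch_first=True)
        self.head = nn.Sequential(             # fused classifier
            nn.Linear(64 + hidden, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, ecg_ppg, accel):
        cnn_feat = self.cnn(ecg_ppg).squeeze(-1)       # (batch, 64)
        _, (h_n, _) = self.lstm(accel)                 # h_n: (1, batch, hidden)
        fused = torch.cat([cnn_feat, h_n[-1]], dim=1)  # feature-level fusion
        return self.head(fused)                        # logits

model = HybridCnnLstm()
ecg_ppg = torch.randn(8, 2, 1000)   # 8 windows, 2 signals, 1000 samples
accel = torch.randn(8, 50, 3)       # 8 windows, 50 timesteps, 3 axes
logits = model(ecg_ppg, accel)
loss = nn.BCEWithLogitsLoss()(logits.squeeze(1),
                              torch.randint(0, 2, (8,)).float())
print(logits.shape, loss.item())
```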
Deep cross-domain transfer for emotion recognition via joint learning | [
"Dung Nguyen",
"Duc Thanh Nguyen",
"Sridha Sridharan",
"Mohamed Abdelrazek",
"Simon Denman",
"Son N. Tran",
"Rui Zeng",
"Clinton Fookes"
] | Deep learning has been applied to achieve significant progress in emotion recognition from multimedia data. Despite such substantial progress, existing approaches are hindered by insufficient training data, leading to weak generalisation under mismatched conditions. To address these challenges, we propose a learning strategy which jointly transfers emotional knowledge learnt from rich datasets to source-poor datasets. Our method is also able to learn cross-domain features, leading to improved recognition performance. To demonstrate the robustness of the proposed learning strategy, we conducted extensive experiments on several benchmark datasets including eNTERFACE, SAVEE, EMODB, and RAVDESS. Experimental results show that the proposed method surpassed existing transfer learning schemes by a significant margin. | 10.1007/s11042-023-15441-7 | deep cross-domain transfer for emotion recognition via joint learning | deep learning has been applied to achieve significant progress in emotion recognition from multimedia data. despite such substantial progress, existing approaches are hindered by insufficient training data, leading to weak generalisation under mismatched conditions. to address these challenges, we propose a learning strategy which jointly transfers emotional knowledge learnt from rich datasets to source-poor datasets. our method is also able to learn cross-domain features, leading to improved recognition performance. to demonstrate the robustness of the proposed learning strategy, we conducted extensive experiments on several benchmark datasets including enterface, savee, emodb, and ravdess. experimental results show that the proposed method surpassed existing transfer learning schemes by a significant margin. | [
"deep learning",
"significant progress",
"emotion recognition",
"multimedia data",
"such substantial progress",
"existing approaches",
"insufficient training data",
"weak generalisation",
"mismatched conditions",
"these challenges",
"we",
"a learning strategy",
"which",
"emotional knowledge",
"rich datasets",
"source-poor datasets",
"our method",
"cross-domain features",
"improved recognition performance",
"the robustness",
"the proposed learning strategy",
"we",
"extensive experiments",
"several benchmark datasets",
"enterface",
"emodb",
"ravdess",
"experimental results",
"the proposed method",
"existing transfer learning schemes",
"a significant margin"
] |
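The joint-transfer idea above can be sketched as a shared encoder optimized simultaneously on a label-rich source corpus and a label-poor target corpus, so that gradients from the rich domain regularize the cross-domain features. The dimensions, heads, and random data below are toy assumptions, not the authors' method.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # shared features
src_head = nn.Linear(64, 6)   # e.g. 6 emotion classes in the source corpus
tgt_head = nn.Linear(64, 6)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(src_head.parameters())
    + list(tgt_head.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

for step in range(100):
    xs, ys = torch.randn(32, 128), torch.randint(0, 6, (32,))  # rich domain
    xt, yt = torch.randn(8, 128), torch.randint(0, 6, (8,))    # poor domain
    # Joint objective: both domains share the encoder, so knowledge learnt
    # from the rich dataset transfers to the source-poor one.
    loss = ce(src_head(encoder(xs)), ys) + ce(tgt_head(encoder(xt)), yt)
    opt.zero_grad(); loss.backward(); opt.step()
print("final joint loss:", loss.item())
```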
Scoring method of English composition integrating deep learning in higher vocational colleges | [
"Shuo Feng",
"Lixia Yu",
"Fen Liu"
] | With the progress of natural language processing technology and deep learning, the subjectivity, slow feedback, and long grading time of traditional English essay grading have been addressed. Intelligent automatic English scoring has attracted wide attention from scholars. Given the limitations of topic relevance feature extraction methods and traditional automatic grading methods for English compositions, a topic decision model is proposed to calculate a topic relevance score based on topic richness in English compositions. Then, based on the Score of Relevance Based on Topic Richness (TRSR) calculation method, an intelligent English composition scoring method combining artificial feature extraction and deep learning is designed. The findings show that the Topic Decision (TD) model achieved its best performance after 80 iterations. The corresponding accuracy, recall and F1 value were 0.97, 0.93 and 0.95 respectively. The model training loss finally stabilized at 0.03. The Intelligent English Composition Grading Method Integrating Deep Learning (DLIECG) has the best overall performance and the best performance on dataset P. To sum up, the intelligent English composition scoring method offers better effectiveness and reliability. | 10.1038/s41598-024-57419-x | scoring method of english composition integrating deep learning in higher vocational colleges | with the progress of natural language processing technology and deep learning, the subjectivity, slow feedback, and long grading time of traditional english essay grading have been addressed. intelligent automatic english scoring has attracted wide attention from scholars. given the limitations of topic relevance feature extraction methods and traditional automatic grading methods for english compositions, a topic decision model is proposed to calculate a topic relevance score based on topic richness in english compositions. then, based on the score of relevance based on topic richness (trsr) calculation method, an intelligent english composition scoring method combining artificial feature extraction and deep learning is designed. the findings show that the topic decision (td) model achieved its best performance after 80 iterations. the corresponding accuracy, recall and f1 value were 0.97, 0.93 and 0.95 respectively. the model training loss finally stabilized at 0.03. the intelligent english composition grading method integrating deep learning (dliecg) has the best overall performance and the best performance on dataset p. to sum up, the intelligent english composition scoring method offers better effectiveness and reliability. | [
"the progress",
"natural language processing technology",
"deep learning",
"the subjectivity",
"slow feedback",
"long grading time",
"intelligent english automatic scoring",
"scholars",
"the limitations",
"topic relevance feature extraction methods",
"traditional automatic grading methods",
"english compositions",
"a topic decision model",
"the topic relevance score",
"the topic richness",
"english composition",
"the score",
"relevance",
"topic richness (trsr) calculation method",
"an intelligent english composition scoring method",
"artificial feature extraction",
"deep learning",
"the findings",
"the topic decision",
"(td) model",
"the best effect",
"it",
"the corresponding accuracy",
"recall",
"f1 value",
"the model training loss",
"the intelligent english composition",
"method",
"deep learning (dliecg) method",
"the best overall performance",
"the best performance",
"dataset p.",
"the intelligent english composition scoring method",
"better effectiveness",
"reliability",
"english",
"english",
"english",
"english",
"english",
"80",
"0.97",
"0.93",
"0.95",
"0.03",
"english",
"english"
] |
Machine learning and deep learning techniques for breast cancer diagnosis and classification: a comprehensive review of medical imaging studies | [
"Mehran Radak",
"Haider Yabr Lafta",
"Hossein Fallahi"
] | Background: Breast cancer is a major public health concern, and early diagnosis and classification are critical for effective treatment. Machine learning and deep learning techniques have shown great promise in the classification and diagnosis of breast cancer. Purpose: In this review, we examine studies that have used these techniques for breast cancer classification and diagnosis, focusing on five groups of medical images: mammography, ultrasound, MRI, histology, and thermography. We discuss the use of five popular machine learning techniques, including Nearest Neighbor, SVM, Naive Bayesian Network, DT, and ANN, as well as deep learning architectures and convolutional neural networks. Conclusion: Our review finds that machine learning and deep learning techniques have achieved high accuracy rates in breast cancer classification and diagnosis across various medical imaging modalities. Furthermore, these techniques have the potential to improve clinical decision-making and ultimately lead to better patient outcomes. | 10.1007/s00432-023-04956-z | machine learning and deep learning techniques for breast cancer diagnosis and classification: a comprehensive review of medical imaging studies | background: breast cancer is a major public health concern, and early diagnosis and classification are critical for effective treatment. machine learning and deep learning techniques have shown great promise in the classification and diagnosis of breast cancer. purpose: in this review, we examine studies that have used these techniques for breast cancer classification and diagnosis, focusing on five groups of medical images: mammography, ultrasound, mri, histology, and thermography. we discuss the use of five popular machine learning techniques, including nearest neighbor, svm, naive bayesian network, dt, and ann, as well as deep learning architectures and convolutional neural networks. conclusion: our review finds that machine learning and deep learning techniques have achieved high accuracy rates in breast cancer classification and diagnosis across various medical imaging modalities. furthermore, these techniques have the potential to improve clinical decision-making and ultimately lead to better patient outcomes. | [
"backgroundbreast cancer",
"a major public health concern",
"early diagnosis",
"classification",
"effective treatment",
"machine learning",
"deep learning techniques",
"great promise",
"the classification",
"diagnosis",
"breast cancer.purposein",
"this review",
"we",
"studies",
"that",
"these techniques",
"breast cancer classification",
"diagnosis",
"five groups",
"medical images",
"mammography",
"ultrasound",
"mri",
"histology",
"thermography",
"we",
"the use",
"five popular machine learning techniques",
"neighbor",
"svm",
"naive bayesian network",
"dt",
"ann",
"deep learning architectures",
"convolutional neural networks.conclusionour review",
"machine learning",
"deep learning techniques",
"high accuracy rates",
"breast cancer classification",
"diagnosis",
"various medical imaging modalities",
"these techniques",
"the potential",
"clinical decision-making",
"better patient outcomes",
"five",
"five"
] |
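For illustration, the five classical learners this review names can be compared in a few lines with scikit-learn. The tabular breast-cancer dataset used here is only a convenient stand-in (the review itself concerns imaging modalities), so the scores say nothing about the reviewed studies.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "Nearest Neighbor": KNeighborsClassifier(),
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                         random_state=0),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)   # scale, then classify
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:>16}: {score:.3f} (5-fold accuracy)")
```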
Deep learning based vessel arrivals monitoring via autoregressive statistical control charts | [
"Sara El Mekkaoui",
"Ghait Boukachab",
"Loubna Benabbou",
"Abdelaziz Berrado"
] | This paper introduces a methodology for monitoring the vessel arrival process, a critical factor in enhancing maritime operational efficiency. This approach uses deep learning sequence models and Statistical Process Control Charts to track the variability in a vessel arrival process. The proposed solution uses the predictive deep learning model to get a vessel’s estimated time of arrival, produces quality characteristics, and applies statistical control charts to monitor their variability. The paper presents the results of applying the proposed methodology for vessel arrivals at a coal terminal, which demonstrates the effectiveness of the method. By enabling precise monitoring of arrival times, this methodology not only supports efficient ship and port operations planning but also aids in the timely adoption of operational adjustments. This can significantly contribute to operational measures aimed at reducing shipping emissions and optimizing resource utilization. | 10.1007/s13437-024-00342-9 | deep learning based vessel arrivals monitoring via autoregressive statistical control charts | this paper introduces a methodology for monitoring the vessel arrival process, a critical factor in enhancing maritime operational efficiency. this approach uses deep learning sequence models and statistical process control charts to track the variability in a vessel arrival process. the proposed solution uses the predictive deep learning model to get a vessel’s estimated time of arrival, produces quality characteristics, and applies statistical control charts to monitor their variability. the paper presents the results of applying the proposed methodology for vessel arrivals at a coal terminal, which demonstrates the effectiveness of the method. by enabling precise monitoring of arrival times, this methodology not only supports efficient ship and port operations planning but also aids in the timely adoption of operational adjustments. this can significantly contribute to operational measures aimed at reducing shipping emissions and optimizing resource utilization. | [
"this paper",
"a methodology",
"the vessel arrival process",
"a critical factor",
"maritime operational efficiency",
"this approach",
"deep learning sequence models",
"statistical process control charts",
"the variability",
"a vessel arrival process",
"the proposed solution",
"the predictive deep learning model",
"a vessel’s estimated time",
"arrival",
"quality characteristics",
"statistical control charts",
"their variability",
"the paper",
"the results",
"the proposed methodology",
"vessel arrivals",
"a coal terminal",
"which",
"the effectiveness",
"the method",
"precise monitoring",
"arrival times",
"this methodology",
"efficient ship and port operations planning",
"the timely adoption",
"operational adjustments",
"this",
"operational measures",
"shipping emissions",
"resource utilization"
] |
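A compact sketch of the monitoring pattern this abstract describes: a predictive model supplies estimated times of arrival, the prediction residual serves as the quality characteristic, and a Shewhart-style chart with three-sigma limits flags drift. The "model" and the simulated shift below are synthetic stand-ins, not the paper's data or code.

```python
import numpy as np

rng = np.random.default_rng(1)
actual = 48 + rng.normal(0, 2.0, 200)          # arrival times (hours)
predicted = actual + rng.normal(0, 1.0, 200)   # stand-in deep-model ETAs
residual = actual - predicted                  # quality characteristic

# Phase I: estimate control limits from an in-control reference window.
mu, sigma = residual[:100].mean(), residual[:100].std(ddof=1)
ucl, lcl = mu + 3 * sigma, mu - 3 * sigma

# Phase II: simulate a process shift (e.g. port congestion) and monitor.
residual[150:] += 4.0
for t, r in enumerate(residual[100:], start=100):
    if r > ucl or r < lcl:
        print(f"t={t}: residual {r:+.2f} h outside [{lcl:.2f}, {ucl:.2f}]")
        break
```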
SAKMR: Industrial control anomaly detection based on semi-supervised hybrid deep learning | [
"Shijie Tang",
"Yong Ding",
"Meng Zhao",
"Huiyong Wang"
] | With the advent of Industry 4.0, industrial control systems (ICS) are ever more closely connected with the Internet, leading to a rapid increase in the types and quantities of security threats that arise from ICS. Anomaly detection is an effective defense measure against attacks. At present, the main trend is to use hybrid deep learning methods to realize ICS anomaly detection. However, we found that many ICS anomaly detection methods based on hybrid deep learning adopt phased learning, in which each phase is optimized separately with optimization goals that deviate from the overall goal. In view of this issue, we propose SAKMR, an end-to-end anomaly detection method based on hybrid deep learning. Our method uses a radial basis function network (RBFN) to realize K-means clustering and combines it with a stacked auto-encoder (SAE), which makes it possible to combine the reconstruction error and clustering error into a single objective function and thereby ensure joint optimization of feature extraction and classification. Experiments were conducted on the commonly used KDDCUP99 and SWAT datasets. The results show that SAKMR is effective in detecting abnormal industrial control data and outperforms the baseline methods on multiple performance indicators such as F1-Measure. | 10.1007/s12083-023-01586-7 | sakmr: industrial control anomaly detection based on semi-supervised hybrid deep learning | with the advent of industry 4.0, industrial control systems (ics) are ever more closely connected with the internet, leading to a rapid increase in the types and quantities of security threats that arise from ics. anomaly detection is an effective defense measure against attacks. at present, the main trend is to use hybrid deep learning methods to realize ics anomaly detection. however, we found that many ics anomaly detection methods based on hybrid deep learning adopt phased learning, in which each phase is optimized separately with optimization goals that deviate from the overall goal. in view of this issue, we propose sakmr, an end-to-end anomaly detection method based on hybrid deep learning. our method uses a radial basis function network (rbfn) to realize k-means clustering and combines it with a stacked auto-encoder (sae), which makes it possible to combine the reconstruction error and clustering error into a single objective function and thereby ensure joint optimization of feature extraction and classification. experiments were conducted on the commonly used kddcup99 and swat datasets. the results show that sakmr is effective in detecting abnormal industrial control data and outperforms the baseline methods on multiple performance indicators such as f1-measure. | [
"the advent",
"industry",
"industrial control systems",
"ics",
"the internet",
"a rapid increase",
"the types",
"quantities",
"security threats",
"that",
"ics",
"anomaly detection",
"an effective defense measure",
"attacks",
"present",
"it",
"the main trend",
"hybrid deep learning methods",
"ics anomaly detection",
"we",
"that many ics anomaly detection methods",
"hybrid deep learning adopt phased learning",
"which",
"each phase",
"optimization goals",
"the overall goal",
"view",
"this issue",
"we",
"end",
"hybrid deep learning",
"our method",
"radial basis function network",
"rbfn",
"k",
"it",
"stacked auto-encoder",
"sae",
"which",
"reconstruction error",
"error",
"an objective function",
"joint optimization",
"feature extraction",
"classification",
"experiments",
"the commonly used kddcup99",
"swat datasets",
"the results",
"sakmr",
"abnormal industrial control data",
"the baseline methods",
"multiple performance indicators",
"f1-measure",
"4.0",
"anomaly detection",
"anomaly detection method",
"kddcup99"
] |
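The end-to-end objective described above can be sketched as an auto-encoder whose reconstruction error is summed with a K-means-style clustering error over its latent codes, so feature extraction and clustering share one optimum. The network sizes, the 0.1 weighting, and the random 41-feature input (a nod to the 41 KDDCUP99 features) are illustrative assumptions, not the SAKMR implementation.

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(41, 16), nn.ReLU(), nn.Linear(16, 8))
dec = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 41))
centroids = nn.Parameter(torch.randn(2, 8))   # 2 clusters: normal/anomalous
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters())
                       + [centroids], lr=1e-3)

x = torch.randn(256, 41)   # stand-in for preprocessed ICS traffic features
for step in range(200):
    z = enc(x)
    recon_err = ((dec(z) - x) ** 2).mean()                      # SAE term
    d2 = ((z.unsqueeze(1) - centroids.unsqueeze(0)) ** 2).sum(-1)
    cluster_err = d2.min(dim=1).values.mean()                   # k-means term
    loss = recon_err + 0.1 * cluster_err    # single objective, joint optimum
    opt.zero_grad(); loss.backward(); opt.step()
print("reconstruction:", recon_err.item(), "clustering:", cluster_err.item())
```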
Single sample face recognition using deep learning: a survey | [
"Vivek Tomar",
"Nitin Kumar",
"Ayush Raj Srivastava"
] | Face recognition has become popular in the last few decades among researchers across the globe due to its applicability in several domains. This problem becomes more challenging when only a single training image is available, and it is then popularly known as the single sample face recognition (SSFR) problem. SSFR becomes even more complex when images are captured under varying illumination conditions, different poses, occlusion, and expression. Further, deep learning methods have recently shown performance on par with humans. Due to the emergence of deep learning methods in the last decade, it has become possible to recognize faces with excellent accuracy even in a single sample scenario. In this paper, we present a comprehensive survey of SSFR using deep learning. We also propose a novel taxonomy and broadly divide these methods into three categories, viz. virtual sample generation, feature-based, and hybrid methods. A performance comparison of these methods as reported in the literature has also been performed. Finally, we review publicly available databases used by the researchers and give some important future research directions which will help aspiring researchers in this fascinating area. | 10.1007/s10462-023-10551-y | single sample face recognition using deep learning: a survey | face recognition has become popular in the last few decades among researchers across the globe due to its applicability in several domains. this problem becomes more challenging when only a single training image is available, and it is then popularly known as the single sample face recognition (ssfr) problem. ssfr becomes even more complex when images are captured under varying illumination conditions, different poses, occlusion, and expression. further, deep learning methods have recently shown performance on par with humans. due to the emergence of deep learning methods in the last decade, it has become possible to recognize faces with excellent accuracy even in a single sample scenario. in this paper, we present a comprehensive survey of ssfr using deep learning. we also propose a novel taxonomy and broadly divide these methods into three categories, viz. virtual sample generation, feature-based, and hybrid methods. a performance comparison of these methods as reported in the literature has also been performed. finally, we review publicly available databases used by the researchers and give some important future research directions which will help aspiring researchers in this fascinating area. | [
"face recognition",
"the last few decades",
"researchers",
"the globe",
"its applicability",
"several domains",
"this problem",
"only a single training image",
"single sample face recognition (ssfr) problem",
"images",
"illumination conditions",
"different poses",
"occlusion",
"expression",
"deep learning methods",
"performance",
"par",
"humans",
"the emergence",
"deep learning methods",
"the last decade",
"it",
"faces",
"excellent accuracy",
"a single sample scenario",
"this paper",
"we",
"a comprehensive survey",
"deep learning",
"we",
"a novel taxonomy",
"these methods",
"three categories",
"virtual sample generation",
"feature-based, and hybrid methods",
"performance comparison",
"these methods",
"the literature",
"we",
"publicly available databases",
"the researchers",
"some important future research directions",
"which",
"researchers",
"this fascinating area",
"the last few decades",
"the last decade",
"three"
] |
RNA contact prediction by data efficient deep learning | [
"Oskar Taubert",
"Fabrice von der Lehr",
"Alina Bazarova",
"Christian Faber",
"Philipp Knechtges",
"Marie Weiel",
"Charlotte Debus",
"Daniel Coquelin",
"Achim Basermann",
"Achim Streit",
"Stefan Kesselheim",
"Markus Götz",
"Alexander Schug"
] | On the path to full understanding of the structure-function relationship or even design of RNA, structure prediction would offer an intriguing complement to experimental efforts. Any deep learning on RNA structure, however, is hampered by the sparsity of labeled training data. Utilizing the limited data available, we here focus on predicting spatial adjacencies ("contact maps") as a proxy for 3D structure. Our model, BARNACLE, combines the utilization of unlabeled data through self-supervised pre-training and efficient use of the sparse labeled data through an XGBoost classifier. BARNACLE shows a considerable improvement over both the established classical baseline and a deep neural network. In order to demonstrate that our approach can be applied to tasks with similar data constraints, we show that our findings generalize to the related setting of accessible surface area prediction. | 10.1038/s42003-023-05244-9 | rna contact prediction by data efficient deep learning | on the path to full understanding of the structure-function relationship or even design of rna, structure prediction would offer an intriguing complement to experimental efforts. any deep learning on rna structure, however, is hampered by the sparsity of labeled training data. utilizing the limited data available, we here focus on predicting spatial adjacencies ("contact maps") as a proxy for 3d structure. our model, barnacle, combines the utilization of unlabeled data through self-supervised pre-training and efficient use of the sparse labeled data through an xgboost classifier. barnacle shows a considerable improvement over both the established classical baseline and a deep neural network. in order to demonstrate that our approach can be applied to tasks with similar data constraints, we show that our findings generalize to the related setting of accessible surface area prediction. | [
"the path",
"full understanding",
"the structure-function relationship",
"even design",
"rna",
"structure prediction",
"an intriguing complement",
"experimental efforts",
"any deep learning",
"rna structure",
"the sparsity",
"labeled training data",
"the limited data",
"we",
"spatial adjacencies",
"(\"contact maps",
"a proxy",
"3d structure",
"our model",
"barnacle",
"the utilization",
"unlabeled data",
"self-supervised pre",
"-",
"training and efficient use",
"the sparse",
"data",
"an xgboost classifier",
"barnacle",
"a considerable improvement",
"both the established classical baseline",
"a deep neural network",
"order",
"our approach",
"tasks",
"similar data constraints",
"we",
"our findings",
"the related setting",
"accessible surface area prediction",
"3d"
] |
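A minimal sketch of the two-stage pattern the abstract describes: embeddings from a frozen, nominally pre-trained encoder stand in for self-supervised RNA representations, and an XGBoost classifier is fit on the scarce labeled pairs. The random projection, synthetic labels, and hyperparameters are stand-in assumptions; the real BARNACLE model is substantially more involved.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
W = rng.normal(size=(20, 32))            # frozen 'pre-trained' projection

def embed_pair(pair_feats):
    """Stand-in for self-supervised embeddings of a residue pair."""
    return np.tanh(pair_feats @ W)

# Sparse labeled data: few residue pairs with known contact labels.
pairs = rng.normal(size=(500, 20))
labels = (pairs[:, 0] + pairs[:, 1] > 0).astype(int)   # synthetic contacts
X = np.array([embed_pair(p) for p in pairs])

# Gradient-boosted classifier makes efficient use of the small labeled set.
clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(X[:400], labels[:400])
probs = clf.predict_proba(X[400:])[:, 1]
print("held-out contact accuracy:",
      ((probs > 0.5).astype(int) == labels[400:]).mean())
```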
Deep learning for the harmonization of structural MRI scans: a survey | [
"Soolmaz Abbasi",
"Haoyu Lan",
"Jeiran Choupan",
"Nasim Sheikh-Bahaei",
"Gaurav Pandey",
"Bino Varghese"
] | Medical imaging datasets for research are frequently collected from multiple imaging centers using different scanners, protocols, and settings. These variations affect data consistency and compatibility across different sources. Image harmonization is a critical step to mitigate the effects of factors like inherent differences between various vendors, hardware upgrades, protocol changes, and scanner calibration drift, as well as to ensure consistent data for medical image processing techniques. Given the critical importance and widespread relevance of this issue, a vast array of image harmonization methodologies have emerged, with deep learning-based approaches driving substantial advancements in recent times. The goal of this review paper is to examine the latest deep learning techniques employed for image harmonization by analyzing cutting-edge architectural approaches in the field of medical image harmonization, evaluating both their strengths and limitations. This paper begins by providing a comprehensive fundamental overview of image harmonization strategies, covering three critical aspects: established imaging datasets, commonly used evaluation metrics, and characteristics of different scanners. Subsequently, this paper analyzes recent structural MRI (Magnetic Resonance Imaging) harmonization techniques based on network architecture, network learning algorithm, network supervision strategy, and network output. The underlying architectures include U-Net, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), flow-based generative models, transformer-based approaches, as well as custom-designed network architectures. This paper investigates the effectiveness of Disentangled Representation Learning (DRL) as a pivotal learning algorithm in harmonization. Lastly, the review highlights the primary limitations in harmonization techniques, specifically the lack of comprehensive quantitative comparisons across different methods. The overall aim of this review is to serve as a guide for researchers and practitioners to select appropriate architectures based on their specific conditions and requirements. It also aims to foster discussions around ongoing challenges in the field and shed light on promising future research directions with the potential for significant advancements. | 10.1186/s12938-024-01280-6 | deep learning for the harmonization of structural mri scans: a survey | medical imaging datasets for research are frequently collected from multiple imaging centers using different scanners, protocols, and settings. these variations affect data consistency and compatibility across different sources. image harmonization is a critical step to mitigate the effects of factors like inherent differences between various vendors, hardware upgrades, protocol changes, and scanner calibration drift, as well as to ensure consistent data for medical image processing techniques. given the critical importance and widespread relevance of this issue, a vast array of image harmonization methodologies have emerged, with deep learning-based approaches driving substantial advancements in recent times. the goal of this review paper is to examine the latest deep learning techniques employed for image harmonization by analyzing cutting-edge architectural approaches in the field of medical image harmonization, evaluating both their strengths and limitations. 
this paper begins by providing a comprehensive fundamental overview of image harmonization strategies, covering three critical aspects: established imaging datasets, commonly used evaluation metrics, and characteristics of different scanners. subsequently, this paper analyzes recent structural mri (magnetic resonance imaging) harmonization techniques based on network architecture, network learning algorithm, network supervision strategy, and network output. the underlying architectures include u-net, generative adversarial networks (gans), variational autoencoders (vaes), flow-based generative models, transformer-based approaches, as well as custom-designed network architectures. this paper investigates the effectiveness of disentangled representation learning (drl) as a pivotal learning algorithm in harmonization. lastly, the review highlights the primary limitations in harmonization techniques, specifically the lack of comprehensive quantitative comparisons across different methods. the overall aim of this review is to serve as a guide for researchers and practitioners to select appropriate architectures based on their specific conditions and requirements. it also aims to foster discussions around ongoing challenges in the field and shed light on promising future research directions with the potential for significant advancements. | [
"medical imaging datasets",
"research",
"multiple imaging centers",
"different scanners",
"protocols",
"settings",
"these variations",
"data consistency",
"compatibility",
"different sources",
"image harmonization",
"a critical step",
"the effects",
"factors",
"inherent differences",
"various vendors",
"hardware upgrades",
"protocol changes",
"scanner calibration drift",
"consistent data",
"medical image processing techniques",
"the critical importance",
"widespread relevance",
"this issue",
"a vast array",
"image harmonization methodologies",
"deep learning-based approaches",
"substantial advancements",
"recent times",
"the goal",
"this review paper",
"the latest deep learning techniques",
"image harmonization",
"cutting-edge architectural approaches",
"the field",
"medical image harmonization",
"both their strengths",
"limitations",
"this paper",
"a comprehensive fundamental overview",
"image harmonization strategies",
"three critical aspects",
"imaging datasets",
"commonly used evaluation metrics",
"characteristics",
"different scanners",
"this paper",
"recent structural mri",
"magnetic resonance imaging",
"harmonization techniques",
"network architecture",
"network learning algorithm",
"network supervision strategy",
"network output",
"the underlying architectures",
"u",
"-",
"net",
"generative adversarial networks",
"gans",
"variational autoencoders",
"flow-based generative models",
"transformer-based approaches",
"custom-designed network architectures",
"this paper",
"the effectiveness",
"drl",
"a pivotal learning algorithm",
"harmonization",
"the review",
"the primary limitations",
"harmonization techniques",
"specifically the lack",
"comprehensive quantitative comparisons",
"different methods",
"the overall aim",
"this review",
"a guide",
"researchers",
"practitioners",
"appropriate architectures",
"their specific conditions",
"requirements",
"it",
"discussions",
"ongoing challenges",
"the field",
"light",
"future research directions",
"the potential",
"significant advancements",
"three"
] |
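One family the survey covers, paired image-to-image harmonization, can be sketched with a tiny convolutional net trained to map scans from one scanner's intensity profile to another's under an L1 loss. The synthetic "scanner effects" and the three-layer net below are illustrative stand-ins for the U-Net, GAN, and DRL architectures discussed.

```python
import torch
import torch.nn as nn

harmonizer = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(harmonizer.parameters(), lr=1e-3)

base = torch.rand(16, 1, 64, 64)           # underlying anatomy (synthetic)
scanner_a = base * 0.8 + 0.1               # site-specific intensity mappings
scanner_b = base.clamp(0.05, 0.95) ** 1.2  # (stand-ins for scanner effects)

for step in range(200):
    out = harmonizer(scanner_a)
    loss = (out - scanner_b).abs().mean()  # L1 loss toward the target site
    opt.zero_grad(); loss.backward(); opt.step()
print("final L1:", loss.item())
```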
A review on vision-based deep learning techniques for damage detection in bolted joints | [
"Zahir Malik",
"Ansh Mirani",
"Tanneru Gopi",
"Mallika Alapati"
] | Bolted connections are widely used in steel structures. Detection of bolt loosening is the prime concern in the bolted joints to avoid sudden failure leading to catastrophe. Loosening of the bolts causes interfacial movement by reducing the pre-torque when subjected to vibrations due to dynamic loads. With the advent of computing capabilities, sensor technologies, and machine learning model accuracy in bolt loosening detection, damage recognition efficiency in bolted joints has increased. Integrating deep learning with machine vision, effective models can be proposed without human interventions. The present paper summarizes the research review on bolt loosening detection using machine vision and deep learning techniques from the past decade. | 10.1007/s42107-024-01139-0 | a review on vision-based deep learning techniques for damage detection in bolted joints | bolted connections are widely used in steel structures. detection of bolt loosening is the prime concern in the bolted joints to avoid sudden failure leading to catastrophe. loosening of the bolts causes interfacial movement by reducing the pre-torque when subjected to vibrations due to dynamic loads. with the advent of computing capabilities, sensor technologies, and machine learning model accuracy in bolt loosening detection, damage recognition efficiency in bolted joints has increased. integrating deep learning with machine vision, effective models can be proposed without human interventions. the present paper summarizes the research review on bolt loosening detection using machine vision and deep learning techniques from the past decade. | [
"bolted connections",
"steel structures",
"detection",
"bolt loosening",
"the prime concern",
"the bolted joints",
"sudden failure",
"catastrophe",
"loosening",
"the bolts",
"interfacial movement",
"the pre",
"torque",
"vibrations",
"dynamic loads",
"the advent",
"computing capabilities",
"sensor technologies",
"machine learning model accuracy",
"bolt loosening detection",
"damage recognition efficiency",
"bolted joints",
"deep learning",
"machine vision",
"effective models",
"human interventions",
"the present paper",
"the research review",
"bolt loosening detection",
"machine vision",
"deep learning techniques",
"the past decade",
"the past decade"
] |
Real-time thermography for breast cancer detection with deep learning | [
"Mohammed Abdulla Salim Al Husaini",
"Mohamed Hadi Habaebi",
"Md Rafiqul Islam"
] | In this study, we propose a framework that enhances breast cancer classification accuracy by preserving spatial features and leveraging in situ cooling support. The framework utilizes real-time thermography video streaming for early breast cancer detection using Deep Learning models. Inception v3, Inception v4, and a modified Inception Mv4 were developed using MATLAB 2019. A thermal camera connected to a mobile phone was used to capture images of the breast area for classification of normal and abnormal breasts. This study’s training dataset included 1000 thermal images, captured with a FLIR One Pro thermal camera connected to a mobile device. Of the 1000 images obtained, 700 were assigned to the normal breast thermography class while the remaining 300 were suitable for the abnormal class. We evaluate Deep Convolutional Neural Network models, such as Inception v3, Inception v4, and a modified Inception Mv4. Our results demonstrate that Inception Mv4, with real-time video streaming, efficiently detects even the slightest temperature contrast in breast tissue sequences, achieving a 99.748% accuracy in comparison to 99.712% and 96.8% for Inception v4 and v3, respectively. The use of in situ cooling gel further enhances image acquisition efficiency and detection accuracy. Interestingly, increasing the tumor surface temperature by 0.1% leads to an average 7% improvement in detection and classification accuracy. | 10.1007/s44163-024-00157-w | real-time thermography for breast cancer detection with deep learning | in this study, we propose a framework that enhances breast cancer classification accuracy by preserving spatial features and leveraging in situ cooling support. the framework utilizes real-time thermography video streaming for early breast cancer detection using deep learning models. inception v3, inception v4, and a modified inception mv4 were developed using matlab 2019. a thermal camera connected to a mobile phone was used to capture images of the breast area for classification of normal and abnormal breasts. this study’s training dataset included 1000 thermal images, captured with a flir one pro thermal camera connected to a mobile device. of the 1000 images obtained, 700 were assigned to the normal breast thermography class while the remaining 300 were suitable for the abnormal class. we evaluate deep convolutional neural network models, such as inception v3, inception v4, and a modified inception mv4. our results demonstrate that inception mv4, with real-time video streaming, efficiently detects even the slightest temperature contrast in breast tissue sequences, achieving a 99.748% accuracy in comparison to 99.712% and 96.8% for inception v4 and v3, respectively. the use of in situ cooling gel further enhances image acquisition efficiency and detection accuracy. interestingly, increasing the tumor surface temperature by 0.1% leads to an average 7% improvement in detection and classification accuracy.
our findings support the effectiveness of inception mv4 for real-time breast cancer detection, especially when combined with in situ cooling gel and varying tumor temperatures. in conclusion, future research directions should focus on incorporating thermal video clips into the thermal images database, utilizing high-quality thermal cameras, and exploring alternative deep learning models for improved breast cancer detection. | [
"this study",
"we",
"a framework",
"that",
"breast cancer classification accuracy",
"spatial features",
"situ cooling support",
"the framework",
"real-time thermography video streaming",
"early breast cancer detection",
"deep learning models",
"inception v3",
"inception v4",
"a modified inception mv4",
"matlab",
"the thermal camera",
"a mobile phone",
"images",
"the breast area",
"classification",
"normal and abnormal breast",
"this study’s training dataset",
"1000 thermal images",
"a flir one pro thermal camera",
"a mobile device",
"the imaging process",
"the 1000 images",
"700 images",
"the normal breast thermography class",
"the 300 images",
"the abnormal class",
"we",
"deep convolutional neural network models",
"inception v3",
"inception v4",
"a modified inception mv4",
"our results",
"inception mv4",
"real-time video streaming",
"even the slightest temperature contrast",
"breast tissue sequences",
"a 99.748% accuracy",
"comparison",
"a 99.712%",
"96.8%",
"inception v4",
"v3",
"the use",
"situ",
"gel",
"image acquisition efficiency",
"detection accuracy",
"the tumor surface temperature",
"0.1%",
"an average 7% improvement",
"detection and classification accuracy",
"our findings",
"the effectiveness",
"inception mv4",
"real-time breast cancer detection",
"gel and varying tumor temperatures",
"conclusion",
"future research directions",
"thermal video clips",
"the thermal images database",
"high-quality thermal cameras",
"alternative deep learning models",
"improved breast cancer detection",
"v3",
"2019",
"1000",
"one",
"1000",
"700",
"300",
"99.748%",
"99.712%",
"96.8%",
"v3",
"0.1%",
"an average",
"7%"
] |
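The record above describes a transfer-learning pipeline: ImageNet-pretrained Inception backbones fine-tuned on roughly 1000 thermograms for a binary normal/abnormal decision. A minimal sketch of that kind of setup follows, written in tf.keras with a stock Inception v3 rather than the authors' MATLAB models; the folder path, image size, and hyperparameters are illustrative assumptions.

    import tensorflow as tf

    # Frozen ImageNet backbone; only the new classification head trains at first.
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3))
    base.trainable = False

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # Inception expects [-1, 1]
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. abnormal
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])

    # Hypothetical folder layout: thermograms/train/{normal,abnormal}/*.jpg
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "thermograms/train", image_size=(299, 299), batch_size=32,
        label_mode="binary")
    model.fit(train_ds, epochs=10)

Full fine-tuning would typically unfreeze the upper Inception blocks at a lower learning rate once this head has converged.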
How deep learning is complementing deep thinking in ATLAS | [
"Deepak Kar"
] | The ATLAS collaboration uses machine learning (ML) algorithms in many different ways in its physics programme, starting from object reconstruction, simulation of calorimeter showers, signal to background discrimination in searches and measurements, tagging jets based on their origin and so on. Anomaly detection (AD) techniques are also gaining popularity where they are used to find hidden patterns in the data, with less dependence on simulated samples than in the case of supervised learning-based methods. ML methods used in detector simulation and in jet tagging in ATLAS will be discussed, along with four searches using ML/AD techniques. | 10.1140/epjs/s11734-024-01238-8 | how deep learning is complementing deep thinking in atlas | the atlas collaboration uses machine learning (ml) algorithms in many different ways in its physics programme, starting from object reconstruction, simulation of calorimeter showers, signal to background discrimination in searches and measurements, tagging jets based on their origin and so on. anomaly detection (ad) techniques are also gaining popularity where they are used to find hidden patterns in the data, with less dependence on simulated samples than in the case of supervised learning-based methods. ml methods used in detector simulation and in jet tagging in atlas will be discussed, along with four searches using ml/ad techniques. | [
"atlas collaboration",
"machine learning",
"ml",
"many different ways",
"its physics programme",
"object reconstruction",
"simulation",
"calorimeter showers",
"background discrimination",
"searches",
"measurements",
"tagging jets",
"their origin",
"anomaly detection",
"(ad) techniques",
"popularity",
"they",
"hidden patterns",
"the data",
"lesser dependence",
"simulated samples",
"the case",
"supervised learning-based methods",
"ml methods",
"detector simulation",
"jet tagging",
"atlas",
"four searches",
"ml/ad techniques",
"four"
] |
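The anomaly-detection idea mentioned in the record above, finding unusual events with little reliance on simulated signal samples, is often realized with an autoencoder trained on (mostly background) data, flagging events that reconstruct poorly. A minimal sketch under that interpretation follows; the 20 input features, layer sizes, and 99th-percentile cut are assumptions, not ATLAS code.

    import numpy as np
    import tensorflow as tf

    n_features = 20  # hypothetical per-event kinematic features

    inputs = tf.keras.Input(shape=(n_features,))
    h = tf.keras.layers.Dense(32, activation="relu")(inputs)
    z = tf.keras.layers.Dense(4, activation="relu")(h)    # bottleneck
    h = tf.keras.layers.Dense(32, activation="relu")(z)
    outputs = tf.keras.layers.Dense(n_features)(h)
    autoencoder = tf.keras.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="mse")

    # Train on (mostly background-like) events; no signal labels are required.
    x = np.random.normal(size=(10000, n_features)).astype("float32")  # stand-in data
    autoencoder.fit(x, x, epochs=5, batch_size=256, verbose=0)

    # Events that reconstruct poorly are flagged as anomalous.
    errors = np.mean((x - autoencoder.predict(x, verbose=0)) ** 2, axis=1)
    anomalous = errors > np.quantile(errors, 0.99)
    print(anomalous.sum(), "events above the 99th-percentile error cut")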
Multiclass skin lesion classification using deep learning networks optimal information fusion | [
"Muhammad Attique Khan",
"Ameer Hamza",
"Mohammad Shabaz",
"Seifeine Kadry",
"Saddaf Rubab",
"Muhammad Abdullah Bilal",
"Muhammad Naeem Akbar",
"Suresh Manic Kesavan"
] | A serious, all-encompassing, and deadly cancer that affects every part of the body is skin cancer. The most prevalent causes of skin lesions are UV radiation, which can damage human skin, and moles. If skin cancer is discovered early, it may be adequately treated. In order to diagnose skin lesions with less effort, dermatologists are increasingly turning to machine learning (ML) techniques and computer-aided diagnostic (CAD) systems. This paper proposes a computerized method for multiclass lesion classification using a fusion of optimal deep-learning model features. The dataset used in this work, ISIC2018, is imbalanced; therefore, augmentation is performed based on a few mathematical operations. After that, two pre-trained deep learning models (DarkNet-19 and MobileNet-V2) have been fine-tuned and trained on the selected dataset. After training, features are extracted from the average pool layer and optimized using a hybrid firefly optimization technique. The selected features are fused in two ways: (i) original serial approach and (ii) proposed threshold approach. Machine learning classifiers are then used to classify the chosen features. Using the ISIC2018 dataset, the experimental procedure produced an accuracy of 89.0%. The sensitivity, precision, and F1 score are 87.34, 87.57, and 87.45, respectively. At the end, a comparison is also conducted with recent techniques, and it shows that the proposed method achieves improved accuracy along with other performance measures. | 10.1007/s42452-024-05998-9 | multiclass skin lesion classification using deep learning networks optimal information fusion | a serious, all-encompassing, and deadly cancer that affects every part of the body is skin cancer. the most prevalent causes of skin lesions are uv radiation, which can damage human skin, and moles. if skin cancer is discovered early, it may be adequately treated. in order to diagnose skin lesions with less effort, dermatologists are increasingly turning to machine learning (ml) techniques and computer-aided diagnostic (cad) systems. this paper proposes a computerized method for multiclass lesion classification using a fusion of optimal deep-learning model features. the dataset used in this work, isic2018, is imbalanced; therefore, augmentation is performed based on a few mathematical operations. after that, two pre-trained deep learning models (darknet-19 and mobilenet-v2) have been fine-tuned and trained on the selected dataset. after training, features are extracted from the average pool layer and optimized using a hybrid firefly optimization technique. the selected features are fused in two ways: (i) original serial approach and (ii) proposed threshold approach. machine learning classifiers are then used to classify the chosen features. using the isic2018 dataset, the experimental procedure produced an accuracy of 89.0%. the sensitivity, precision, and f1 score are 87.34, 87.57, and 87.45, respectively. at the end, a comparison is also conducted with recent techniques, and it shows that the proposed method achieves improved accuracy along with other performance measures. | [
"a serious, all-encompassing, and deadly cancer",
"that",
"every part",
"the body",
"skin cancer",
"the most prevalent causes",
"skin lesions",
"uv radiation",
"which",
"human skin",
"skin cancer",
"it",
"order",
"skin lesions",
"less effort",
"dermatologists",
"machine learning",
"(ml) techniques",
"computer-aided diagnostic (cad) systems",
"this paper",
"a computerized method",
"multiclass lesion classification",
"a fusion",
"optimal deep-learning model features",
"the dataset",
"this work",
"isic2018",
"augmentation",
"a few mathematical operations",
"that",
"two pre-trained deep learning models",
"darknet-19",
"mobilenet-v2",
"the selected dataset",
"training",
"features",
"the average pool layer",
"a hybrid firefly optimization technique",
"the selected features",
"two ways",
"(i) original serial approach",
"(ii) proposed threshold approach",
"machine learning classifiers",
"the chosen features",
"the end",
"the isic2018 dataset",
"the experimental procedure",
"an accuracy",
"89.0%",
"sensitivity",
"precision",
"f1 score",
"the end",
"comparison",
"recent techniques",
"it",
"the proposed method",
"improved accuracy",
"other performance measures",
"isic2018",
"two",
"darknet-19",
"two",
"isic2018",
"89.0%",
"87.34",
"87.57",
"87.45"
] |
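The record above fuses average-pool features from two fine-tuned backbones in two ways, a serial (concatenation) approach and a threshold approach, before classical classification. A minimal sketch with placeholder feature matrices follows; the 1000- and 1280-dimensional shapes match the usual DarkNet-19 and MobileNet-V2 pooled outputs, but the variance-based cutoff is only a stand-in for the paper's firefly-optimized selection rule, which the abstract does not specify.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 500
    f_dark = rng.normal(size=(n, 1000))    # stand-in DarkNet-19 average-pool features
    f_mobile = rng.normal(size=(n, 1280))  # stand-in MobileNet-V2 average-pool features
    y = rng.integers(0, 7, size=n)         # seven ISIC2018 lesion categories

    # (i) serial fusion: plain concatenation of both feature vectors.
    serial = np.concatenate([f_dark, f_mobile], axis=1)

    # (ii) threshold fusion: keep only dimensions whose variance clears a cutoff,
    # a stand-in for the paper's (firefly-optimized) selection criterion.
    variances = serial.var(axis=0)
    fused = serial[:, variances > np.median(variances)]

    clf = SVC(kernel="rbf")
    print("serial fusion:", cross_val_score(clf, serial, y, cv=5).mean())
    print("threshold fusion:", cross_val_score(clf, fused, y, cv=5).mean())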
REDQT: a method for automated mobile application GUI testing based on deep reinforcement learning algorithms | [
"Fengyu Wang",
"Chuanqi Tao",
"Jerry Gao"
] | As mobile applications become increasingly prevalent in daily life, the demand for their functionality and reliability continues to grow. Traditional mobile application testing methods, particularly graphical user interface (GUI) testing, face challenges of limited automation and adaptability. Despite the application of various machine learning approaches to GUI testing, enhancing the utilization of limited component samples in complex mobile application environments remains an overlooked issue in many automated testing methods. This study introduces a mobile application testing method based on deep reinforcement learning, aimed at improving performance and adaptability during the testing process. By integrating the feature recognition capabilities of deep learning with the decision-making mechanisms of reinforcement learning, our method can effectively simulate user operations and identify potential application pitfalls. Initially, the study analyzes the limitations of traditional mobile application testing methods and explores the advantages of deep reinforcement learning in handling complex tasks. Subsequently, we present REDQT: an automated mobile application GUI testing method based on a deep reinforcement learning algorithm (REDQ), aimed at enhancing the utilization of application information through the characteristics of the REDQ algorithm. A study testing 18 open-source Android applications on GitHub demonstrated that our method shows promising performance in terms of code coverage and testing speed. | 10.1007/s11761-024-00413-y | redqt: a method for automated mobile application gui testing based on deep reinforcement learning algorithms | as mobile applications become increasingly prevalent in daily life, the demand for their functionality and reliability continues to grow. traditional mobile application testing methods, particularly graphical user interface (gui) testing, face challenges of limited automation and adaptability. despite the application of various machine learning approaches to gui testing, enhancing the utilization of limited component samples in complex mobile application environments remains an overlooked issue in many automated testing methods. this study introduces a mobile application testing method based on deep reinforcement learning, aimed at improving performance and adaptability during the testing process. by integrating the feature recognition capabilities of deep learning with the decision-making mechanisms of reinforcement learning, our method can effectively simulate user operations and identify potential application pitfalls. initially, the study analyzes the limitations of traditional mobile application testing methods and explores the advantages of deep reinforcement learning in handling complex tasks. subsequently, we present redqt: an automated mobile application gui testing method based on a deep reinforcement learning algorithm (redq), aimed at enhancing the utilization of application information through the characteristics of the redq algorithm. a study testing 18 open-source android applications on github demonstrated that our method shows promising performance in terms of code coverage and testing speed. | [
"mobile applications",
"daily life",
"the demand",
"their functionality",
"reliability",
"traditional mobile application testing methods",
"particularly graphical user interface",
"gui",
"testing",
"challenges",
"limited automation",
"the application",
"various machine learning approaches",
"gui testing",
"the utilization",
"limited component samples",
"complex mobile application environments",
"an overlooked issue",
"many automated testing methods",
"this study",
"a mobile application testing method",
"deep reinforcement learning",
"performance",
"adaptability",
"the testing process",
"the feature recognition capabilities",
"deep learning",
"the decision-making mechanisms",
"reinforcement learning",
"our method",
"user operations",
"potential application pitfalls",
"the study",
"the limitations",
"traditional mobile application testing methods",
"the advantages",
"deep reinforcement learning",
"complex tasks",
"we",
"an automated mobile application gui testing method",
"a deep reinforcement learning algorithm",
"redq",
"the utilization",
"application information",
"the characteristics",
"the redq",
"a study",
"18 open-source android applications",
"github",
"our method",
"promising performance",
"terms",
"code coverage",
"testing",
"speed",
"daily",
"18"
] |
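REDQ (randomized ensembled double Q-learning), which the record above builds on, maintains an ensemble of Q-networks and computes bootstrap targets as the minimum over a randomly sampled subset of the ensemble, which curbs value overestimation. A minimal single-update sketch adapted to a discrete GUI-action setting follows; the state encoding, action space, and transition batch are hypothetical, and target-network synchronization is omitted. This is not the paper's implementation.

    import random
    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS = 64, 10   # hypothetical encoded GUI state / widget actions
    N_ENSEMBLE, M_SUBSET, GAMMA = 10, 2, 0.99

    def make_q():
        return nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                             nn.Linear(128, N_ACTIONS))

    q_nets = [make_q() for _ in range(N_ENSEMBLE)]
    # Target copies; a full agent would soft-update these toward q_nets.
    target_nets = [make_q() for _ in range(N_ENSEMBLE)]
    opt = torch.optim.Adam([p for q in q_nets for p in q.parameters()], lr=3e-4)

    # One gradient step on a fake transition batch (s, a, r, s', done).
    s, s2 = torch.randn(32, STATE_DIM), torch.randn(32, STATE_DIM)
    a = torch.randint(0, N_ACTIONS, (32, 1))
    r, done = torch.randn(32), torch.zeros(32)

    with torch.no_grad():
        # REDQ core: min over a random subset of target networks.
        subset = random.sample(target_nets, M_SUBSET)
        q_next = torch.stack([t(s2) for t in subset]).min(dim=0).values
        y = r + GAMMA * (1.0 - done) * q_next.max(dim=1).values

    loss = sum(nn.functional.mse_loss(q(s).gather(1, a).squeeze(1), y)
               for q in q_nets)
    opt.zero_grad()
    loss.backward()
    opt.step()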
Streamflow Prediction Utilizing Deep Learning and Machine Learning Algorithms for Sustainable Water Supply Management | [
"Sarmad Dashti Latif",
"Ali Najah Ahmed"
] | As a result of global climate change, sustainable water supply management is becoming increasingly difficult. Dams and reservoirs are key tools for controlling and managing water resources; they have benefited human cultures in a variety of ways, including enhanced human health, increased food production, water supply for domestic and industrial use, economic growth, irrigation, hydro-power generation, and flood control. This study aims to compare the application of deep learning and conventional machine learning algorithms for predicting daily reservoir inflow. Long short-term memory (LSTM) has been applied as a deep learning algorithm and boosted regression tree (BRT) has been implemented as a machine learning algorithm. Seven statistical indices have been selected to evaluate the performance of the proposed models. The selected statistical measurements are mean absolute error (MAE), root mean square error (RMSE), correlation coefficient (R), coefficient of determination (R2), mean square error (MSE), Nash Sutcliffe Model Efficiency Coefficient (NSE), and the RMSE-observations standard deviation ratio (RSR). The findings showed that LSTM outperformed BRT with a significant difference in terms of accuracy. | 10.1007/s11269-023-03499-9 | streamflow prediction utilizing deep learning and machine learning algorithms for sustainable water supply management | as a result of global climate change, sustainable water supply management is becoming increasingly difficult. dams and reservoirs are key tools for controlling and managing water resources; they have benefited human cultures in a variety of ways, including enhanced human health, increased food production, water supply for domestic and industrial use, economic growth, irrigation, hydro-power generation, and flood control. this study aims to compare the application of deep learning and conventional machine learning algorithms for predicting daily reservoir inflow. long short-term memory (lstm) has been applied as a deep learning algorithm and boosted regression tree (brt) has been implemented as a machine learning algorithm. seven statistical indices have been selected to evaluate the performance of the proposed models. the selected statistical measurements are mean absolute error (mae), root mean square error (rmse), correlation coefficient (r), coefficient of determination (r2), mean square error (mse), nash sutcliffe model efficiency coefficient (nse), and the rmse-observations standard deviation ratio (rsr). the findings showed that lstm outperformed brt with a significant difference in terms of accuracy. | [
"a result",
"global climate change",
"sustainable water supply management",
"dams",
"reservoirs",
"key tools",
"water resources",
"they",
"human cultures",
"a variety",
"ways",
"enhanced human health",
"increased food production",
"water supply",
"domestic and industrial use",
"economic growth",
"irrigation",
"hydro-power generation",
"flood control",
"this study",
"the application",
"deep learning",
"conventional machine",
"algorithms",
"daily reservoir inflow",
"long short-term memory",
"lstm",
"a deep learning algorithm",
"regression tree",
"brt",
"a machine learning algorithm",
"five statistical indices",
"the performance",
"the proposed models",
"the selected statistical measurements",
"mean absolute error",
"mae",
"root mean square error",
"rmse",
"correlation coefficient",
"r",
"coefficient",
"determination",
"r2",
"square error",
"mse",
"nash sutcliffe model efficiency coefficient",
"nse",
"the rmse-observations standard deviation ratio",
"rsr",
"the findings",
"lstm",
"brt",
"a significant difference",
"terms",
"accuracy",
"daily",
"five",
"nash sutcliffe"
] |
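The evaluation measures named in the record above have standard closed forms, e.g. NSE = 1 - sum((s - o)^2) / sum((o - mean(o))^2) and RSR = RMSE / std(o). A minimal NumPy sketch follows; the observed/simulated arrays are placeholders, and R2 is taken here as the squared correlation coefficient, one common convention.

    import numpy as np

    def metrics(obs, sim):
        err = sim - obs
        mae = np.abs(err).mean()
        mse = (err ** 2).mean()
        rmse = np.sqrt(mse)
        r = np.corrcoef(obs, sim)[0, 1]
        nse = 1.0 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
        rsr = rmse / obs.std()  # RMSE normalized by the observations' std. dev.
        return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R": r, "R2": r ** 2,
                "NSE": nse, "RSR": rsr}

    # Placeholder daily inflow values; real use would pass the test-period series.
    obs = np.array([120.0, 135.2, 110.4, 98.7, 140.1, 150.3, 133.8])
    sim = np.array([118.3, 130.9, 115.0, 101.2, 137.6, 146.9, 136.2])
    print(metrics(obs, sim))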
Deep learning enables fast, gentle STED microscopy | [
"Vahid Ebrahimi",
"Till Stephan",
"Jiah Kim",
"Pablo Carravilla",
"Christian Eggeling",
"Stefan Jakobs",
"Kyu Young Han"
] | STED microscopy is widely used to image subcellular structures with super-resolution. Here, we report that restoring STED images with deep learning can mitigate photobleaching and photodamage by reducing the pixel dwell time by one or two orders of magnitude. Our method allows for efficient and robust restoration of noisy 2D and 3D STED images with multiple targets and facilitates long-term imaging of mitochondrial dynamics. | 10.1038/s42003-023-05054-z | deep learning enables fast, gentle sted microscopy | sted microscopy is widely used to image subcellular structures with super-resolution. here, we report that restoring sted images with deep learning can mitigate photobleaching and photodamage by reducing the pixel dwell time by one or two orders of magnitude. our method allows for efficient and robust restoration of noisy 2d and 3d sted images with multiple targets and facilitates long-term imaging of mitochondrial dynamics. | [
"sted microscopy",
"subcellular structures",
"super",
"-",
"resolution",
"we",
"sted images",
"deep learning",
"photobleaching",
"photodamage",
"the pixel",
"dwell time",
"one or two orders",
"magnitude",
"our method",
"efficient and robust restoration",
"noisy 2d",
"3d sted images",
"multiple targets",
"long-term imaging",
"mitochondrial dynamics",
"one",
"two",
"2d",
"3d"
] |
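The restoration approach summarized above follows the content-aware (CARE-style) pattern: train a convolutional network on paired fast/noisy (short dwell time) and slow/clean acquisitions, then apply it to fast scans. A minimal residual-denoiser sketch in tf.keras follows; the architecture, synthetic noise model, and array shapes are assumptions rather than the authors' network.

    import numpy as np
    import tensorflow as tf

    inp = tf.keras.Input(shape=(None, None, 1))   # single-channel STED frame
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    res = tf.keras.layers.Conv2D(1, 3, padding="same")(x)
    out = tf.keras.layers.Add()([inp, res])       # predict a residual correction
    denoiser = tf.keras.Model(inp, out)
    denoiser.compile(optimizer="adam", loss="mae")

    # Stand-in training pairs: clean targets plus simulated noise as input;
    # real training would use registered short/long dwell-time acquisitions.
    clean = np.random.rand(64, 64, 64, 1).astype("float32")
    noisy = np.clip(clean + 0.2 * np.random.randn(*clean.shape), 0, 1).astype("float32")
    denoiser.fit(noisy, clean, epochs=3, batch_size=8, verbose=0)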
Biological gender identification in Turkish news text using deep learning models | [
"Pınar Tüfekci",
"Melike Bektaş Kösesoy"
] | Identifying the biological gender of authors based on the content of their written work is a crucial task in Natural Language Processing (NLP). Accurate biological gender identification finds numerous applications in fields such as linguistics, sociology, and marketing. However, achieving high accuracy in identifying the biological gender of the author is heavily dependent on the quality of the collected data and its proper splitting. Therefore, determining the best-performing model necessitates experimental evaluation. This study aimed to develop and evaluate four learning algorithms for biological gender identification in news texts. To this end, a comprehensive dataset, IAG-TNKU, was created from a Turkish newspaper, comprising 43,292 news articles. Four models utilizing popular machine learning algorithms, including Naive Bayes and Random Forest, and two deep learning algorithms, Long Short Term Memory and Convolutional Neural Networks, were developed and evaluated rigorously. The results indicated that the Long Short Term Memory (LSTM) algorithm outperformed the other three models, exhibiting an exceptional accuracy of 88.51%. This model's outstanding performance underscores the importance of utilizing innovative deep learning algorithms for biological gender identification tasks in NLP. The present study contributes to extant literature by developing a new dataset for biological gender identification in news texts and evaluating four machine learning algorithms. Our findings highlight the significance of utilizing innovative techniques for biological gender identification tasks. The dataset and deep learning algorithm can be applied in many areas such as sociolinguistics, marketing research, and journalism, where the identification of biological gender in written content plays a pivotal role. | 10.1007/s11042-023-17622-w | biological gender identification in turkish news text using deep learning models | identifying the biological gender of authors based on the content of their written work is a crucial task in natural language processing (nlp). accurate biological gender identification finds numerous applications in fields such as linguistics, sociology, and marketing. however, achieving high accuracy in identifying the biological gender of the author is heavily dependent on the quality of the collected data and its proper splitting. therefore, determining the best-performing model necessitates experimental evaluation. this study aimed to develop and evaluate four learning algorithms for biological gender identification in news texts. to this end, a comprehensive dataset, iag-tnku, was created from a turkish newspaper, comprising 43,292 news articles. four models utilizing popular machine learning algorithms, including naive bayes and random forest, and two deep learning algorithms, long short term memory and convolutional neural networks, were developed and evaluated rigorously. the results indicated that the long short term memory (lstm) algorithm outperformed the other three models, exhibiting an exceptional accuracy of 88.51%. this model's outstanding performance underscores the importance of utilizing innovative deep learning algorithms for biological gender identification tasks in nlp. the present study contributes to extant literature by developing a new dataset for biological gender identification in news texts and evaluating four machine learning algorithms. our findings highlight the significance of utilizing innovative techniques for biological gender identification tasks. the dataset and deep learning algorithm can be applied in many areas such as sociolinguistics, marketing research, and journalism, where the identification of biological gender in written content plays a pivotal role. | [
"the biological gender",
"authors",
"the content",
"their written work",
"a crucial task",
"natural language processing",
"nlp",
"accurate biological gender identification",
"numerous applications",
"fields",
"linguistics",
"sociology",
"marketing",
"high accuracy",
"the biological gender",
"the author",
"the quality",
"the collected data",
"its proper splitting",
"the best-performing model necessitates experimental evaluation",
"this study",
"four learning algorithms",
"biological gender identification",
"news texts",
"this end",
"a comprehensive dataset",
"iag",
"tnku",
"a turkish newspaper",
"43,292 news articles",
"four models",
"popular machine learning algorithms",
"naive bayes",
"random forest",
"two deep learning algorithms",
"long short term memory",
"convolutional neural networks",
"the results",
"the long short term memory",
"lstm",
"algorithm",
"the other three models",
"an exceptional accuracy",
"88.51%",
"this model's outstanding performance",
"the importance",
"innovative deep learning algorithms",
"biological gender identification tasks",
"nlp",
"the present study",
"extant literature",
"a new dataset",
"biological gender identification",
"news texts",
"four machine learning algorithms",
"our findings",
"the significance",
"innovative techniques",
"biological gender identification tasks",
"the dataset and deep learning algorithm",
"many areas",
"sociolinguistics",
"marketing research",
"journalism",
"the identification",
"biological gender",
"written content",
"a pivotal role",
"four",
"turkish",
"43,292",
"four",
"two",
"three",
"88.51%",
"four"
] |
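An LSTM text classifier of the kind the record above found best is typically a tokenizer feeding an embedding layer, a recurrent layer, and a sigmoid output. A minimal tf.keras sketch follows; the vocabulary size, sequence length, and toy Turkish strings are illustrative assumptions, not the IAG-TNKU corpus or the paper's exact architecture.

    import tensorflow as tf

    VOCAB, MAXLEN = 20000, 200
    vectorize = tf.keras.layers.TextVectorization(
        max_tokens=VOCAB, output_sequence_length=MAXLEN)
    # In practice adapt() runs over the full news corpus; two toy strings here.
    vectorize.adapt(["ekonomi buyume rakamlarini acikladi", "takim kupayi kazandi"])

    model = tf.keras.Sequential([
        vectorize,
        tf.keras.layers.Embedding(VOCAB, 128),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary author-gender label
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # model.fit(texts, labels, ...) would follow with the real corpus and labels.
    print(model.predict(tf.constant(["ekonomi buyume rakamlarini acikladi"]),
                        verbose=0).shape)  # (1, 1)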
Analysis, characterization, prediction, and attribution of extreme atmospheric events with machine learning and deep learning techniques: a review | [
"Sancho Salcedo-Sanz",
"Jorge Pérez-Aracil",
"Guido Ascenso",
"Javier Del Ser",
"David Casillas-Pérez",
"Christopher Kadow",
"Dušan Fister",
"David Barriopedro",
"Ricardo García-Herrera",
"Matteo Giuliani",
"Andrea Castelletti"
] | Atmospheric extreme events cause severe damage to human societies and ecosystems. The frequency and intensity of extremes and other associated events are continuously increasing due to climate change and global warming. The accurate prediction, characterization, and attribution of atmospheric extreme events is, therefore, a key research field in which many groups are currently working by applying different methodologies and computational tools. Machine learning and deep learning methods have arisen in the last years as powerful techniques to tackle many of the problems related to atmospheric extreme events. This paper reviews machine learning and deep learning approaches applied to the analysis, characterization, prediction, and attribution of the most important atmospheric extremes. A summary of the most used machine learning and deep learning techniques in this area, and a comprehensive critical review of literature related to ML in extreme events (EEs), are provided. The critical literature review has been extended to extreme events related to rainfall and floods, heatwaves and extreme temperatures, droughts, severe weather events and fog, and low-visibility episodes. A case study focused on the analysis of extreme atmospheric temperature prediction with ML and DL techniques is also presented in the paper. Conclusions, perspectives, and outlooks on the field are finally drawn. | 10.1007/s00704-023-04571-5 | analysis, characterization, prediction, and attribution of extreme atmospheric events with machine learning and deep learning techniques: a review | atmospheric extreme events cause severe damage to human societies and ecosystems. the frequency and intensity of extremes and other associated events are continuously increasing due to climate change and global warming. the accurate prediction, characterization, and attribution of atmospheric extreme events is, therefore, a key research field in which many groups are currently working by applying different methodologies and computational tools. machine learning and deep learning methods have arisen in the last years as powerful techniques to tackle many of the problems related to atmospheric extreme events. this paper reviews machine learning and deep learning approaches applied to the analysis, characterization, prediction, and attribution of the most important atmospheric extremes. a summary of the most used machine learning and deep learning techniques in this area, and a comprehensive critical review of literature related to ml in extreme events (ees), are provided. the critical literature review has been extended to extreme events related to rainfall and floods, heatwaves and extreme temperatures, droughts, severe weather events and fog, and low-visibility episodes. a case study focused on the analysis of extreme atmospheric temperature prediction with ml and dl techniques is also presented in the paper. conclusions, perspectives, and outlooks on the field are finally drawn. | [
"atmospheric extreme events",
"severe damage",
"human societies",
"ecosystems",
"the frequency",
"intensity",
"extremes",
"other associated events",
"climate change",
"global warming",
"the accurate prediction",
"characterization",
"attribution",
"atmospheric extreme events",
"a key research field",
"which",
"many groups",
"different methodologies",
"computational tools",
"machine learning",
"deep learning methods",
"the last years",
"powerful techniques",
"the problems",
"atmospheric extreme events",
"this paper",
"machine learning",
"deep learning approaches",
"the analysis",
"characterization",
"prediction",
"attribution",
"the most important atmospheric extremes",
"a summary",
"the most used machine learning",
"deep learning techniques",
"this area",
"a comprehensive critical review",
"literature",
"ml",
"ees",
"the critical literature review",
"extreme events",
"rainfall",
"floods",
"heatwaves",
"extreme temperatures",
"droughts",
"severe weather events",
"fog",
"low-visibility episodes",
"a case study",
"the analysis",
"extreme atmospheric temperature prediction",
"ml",
"dl techniques",
"the paper",
"conclusions",
"perspectives",
"outlooks",
"the field",
"the last years"
] |