Dataset schema (column: type, min–max length):
- title: string, 31–206
- authors: sequence, 1–85
- abstract: string, 428–3.21k
- doi: string, 21–31
- cleaned_title: string, 31–206
- cleaned_abstract: string, 428–3.21k
- key_phrases: sequence, 19–150
Fisheye freshness detection using common deep learning algorithms and machine learning methods with a developed mobile application
[ "Muslume Beyza Yildiz", "Elham Tahsin Yasin", "Murat Koklu" ]
Fish is commonly ingested as a source of protein and essential nutrients for humans. To fully benefit from the proteins and substances in fish, it is crucial to ensure its freshness. If fish is stored for an extended period, its freshness deteriorates. The freshness of fish can be determined by examining its eyes, smell, skin, and gills. In this study, artificial intelligence techniques are employed to assess fish freshness. The authors' objective is to evaluate the freshness of fish by analyzing its eye characteristics. To achieve this, we developed a combination of deep learning and machine learning models that accurately classify the freshness of fish. Furthermore, an application that utilizes both deep learning and machine learning to instantly detect the freshness of any given fish sample was created. Two deep learning algorithms (SqueezeNet and VGG19) were implemented to extract features from image data. Additionally, five machine learning models (k-NN, RF, SVM, LR, and ANN) were applied to classify the freshness levels of fish samples. Based on the results, it can be inferred that employing the VGG19 model for feature extraction in conjunction with an Artificial Neural Network (ANN) for classification yields the most favorable success rate of 77.3% on the FFE dataset.
10.1007/s00217-024-04493-0
fisheye freshness detection using common deep learning algorithms and machine learning methods with a developed mobile application
fish is commonly ingested as a source of protein and essential nutrients for humans. to fully benefit from the proteins and substances in fish, it is crucial to ensure its freshness. if fish is stored for an extended period, its freshness deteriorates. the freshness of fish can be determined by examining its eyes, smell, skin, and gills. in this study, artificial intelligence techniques are employed to assess fish freshness. the authors' objective is to evaluate the freshness of fish by analyzing its eye characteristics. to achieve this, we developed a combination of deep learning and machine learning models that accurately classify the freshness of fish. furthermore, an application that utilizes both deep learning and machine learning to instantly detect the freshness of any given fish sample was created. two deep learning algorithms (squeezenet and vgg19) were implemented to extract features from image data. additionally, five machine learning models (k-nn, rf, svm, lr, and ann) were applied to classify the freshness levels of fish samples. based on the results, it can be inferred that employing the vgg19 model for feature extraction in conjunction with an artificial neural network (ann) for classification yields the most favorable success rate of 77.3% on the ffe dataset.
[ "abstractfish", "a source", "protein", "essential nutrients", "humans", "the proteins", "substances", "fish", "it", "its freshness", "fish", "an extended period", "its freshness", "the freshness", "fish", "its eyes", "smell", "skin", "gills", "this study", "artificial intelligence techniques", "fish freshness", "the author’s objective", "the freshness", "fish", "its eye characteristics", "this", "we", "a combination", "deep and machine learning models", "that", "the freshness", "fish", "an application", "that", "both deep learning", "machine learning", "the freshness", "any given fish sample", "two deep learning algorithms", "squeezenet", "vgg19", "features", "image data", "five machine learning models", "the freshness levels", "fish samples", "machine learning models", "k", ", rf, svm", "lr", "ann", "the results", "it", "the vgg19 model", "feature selection", "conjunction", "an artificial neural network", "ann", "classification yields", "the most favorable success rate", "77.3%", "the ffe dataset.graphical abstract", "two", "five", "77.3%" ]
Deep learning for transesophageal echocardiography view classification
[ "Kirsten R. Steffner", "Matthew Christensen", "George Gill", "Michael Bowdish", "Justin Rhee", "Abirami Kumaresan", "Bryan He", "James Zou", "David Ouyang" ]
Transesophageal echocardiography (TEE) imaging is a vital tool used in the evaluation of complex cardiac pathology and the management of cardiac surgery patients. A key limitation to the application of deep learning strategies to intraoperative and intraprocedural TEE data is the complexity and unstructured nature of these images. In the present study, we developed a deep learning-based, multi-category TEE view classification model that can be used to add structure to intraoperative and intraprocedural TEE imaging data. More specifically, we trained a convolutional neural network (CNN) to predict standardized TEE views using labeled intraoperative and intraprocedural TEE videos from Cedars-Sinai Medical Center (CSMC). We externally validated our model on intraoperative TEE videos from Stanford University Medical Center (SUMC). Accuracy of our model was high across all labeled views. The highest performance was achieved for the Trans-Gastric Left Ventricular Short Axis View (area under the receiver operating curve [AUC] = 0.971 at CSMC, 0.957 at SUMC), the Mid-Esophageal Long Axis View (AUC = 0.954 at CSMC, 0.905 at SUMC), the Mid-Esophageal Aortic Valve Short Axis View (AUC = 0.946 at CSMC, 0.898 at SUMC), and the Mid-Esophageal 4-Chamber View (AUC = 0.939 at CSMC, 0.902 at SUMC). Ultimately, we demonstrate that our deep learning model can accurately classify standardized TEE views, which will facilitate further downstream deep learning analyses for intraoperative and intraprocedural TEE imaging.
10.1038/s41598-023-50735-8
deep learning for transesophageal echocardiography view classification
transesophageal echocardiography (tee) imaging is a vital tool used in the evaluation of complex cardiac pathology and the management of cardiac surgery patients. a key limitation to the application of deep learning strategies to intraoperative and intraprocedural tee data is the complexity and unstructured nature of these images. in the present study, we developed a deep learning-based, multi-category tee view classification model that can be used to add structure to intraoperative and intraprocedural tee imaging data. more specifically, we trained a convolutional neural network (cnn) to predict standardized tee views using labeled intraoperative and intraprocedural tee videos from cedars-sinai medical center (csmc). we externally validated our model on intraoperative tee videos from stanford university medical center (sumc). accuracy of our model was high across all labeled views. the highest performance was achieved for the trans-gastric left ventricular short axis view (area under the receiver operating curve [auc] = 0.971 at csmc, 0.957 at sumc), the mid-esophageal long axis view (auc = 0.954 at csmc, 0.905 at sumc), the mid-esophageal aortic valve short axis view (auc = 0.946 at csmc, 0.898 at sumc), and the mid-esophageal 4-chamber view (auc = 0.939 at csmc, 0.902 at sumc). ultimately, we demonstrate that our deep learning model can accurately classify standardized tee views, which will facilitate further downstream deep learning analyses for intraoperative and intraprocedural tee imaging.
[ "transesophageal echocardiography", "(tee) imaging", "a vital tool", "the evaluation", "complex cardiac pathology", "the management", "cardiac surgery patients", "a key limitation", "the application", "deep learning strategies", "to intraoperative and intraprocedural tee data", "the complexity", "unstructured nature", "these images", "the present study", "we", "a deep learning-based, multi-category tee view classification model", "that", "structure", "intraoperative and intraprocedural tee imaging data", "we", "a convolutional neural network", "cnn", "standardized tee views", "labeled intraoperative and intraprocedural tee videos", "cedars-sinai medical center", "csmc", "we", "our model", "intraoperative tee videos", "stanford university medical center", "sumc", "accuracy", "our model", "all labeled views", "the highest performance", "ventricular short axis view", "area", "the receiver operating curve", "csmc", "sumc", ", the mid-esophageal long axis view", "auc =", "csmc", "sumc", "the mid-esophageal aortic valve short axis view", "csmc", "sumc", "the mid-esophageal 4-chamber view", "csmc", "sumc", "we", "our deep learning model", "standardized tee views", "which", "further downstream deep learning analyses", "intraoperative and intraprocedural tee imaging", "cnn", "stanford university medical center", "0.971", "0.957", "0.954", "0.905", "0.946", "0.898", "4", "0.939", "0.902" ]
Context-aware geometric deep learning for protein sequence design
[ "Lucien F. Krapp", "Fernando A. Meireles", "Luciano A. Abriata", "Jean Devillard", "Sarah Vacle", "Maria J. Marcaida", "Matteo Dal Peraro" ]
Protein design and engineering are evolving at an unprecedented pace leveraging the advances in deep learning. Current models nonetheless cannot natively consider non-protein entities within the design process. Here, we introduce a deep learning approach based solely on a geometric transformer of atomic coordinates and element names that predicts protein sequences from backbone scaffolds aware of the restraints imposed by diverse molecular environments. To validate the method, we show that it can produce highly thermostable, catalytically active enzymes with high success rates. This concept is anticipated to improve the versatility of protein design pipelines for crafting desired functions.
10.1038/s41467-024-50571-y
context-aware geometric deep learning for protein sequence design
protein design and engineering are evolving at an unprecedented pace leveraging the advances in deep learning. current models nonetheless cannot natively consider non-protein entities within the design process. here, we introduce a deep learning approach based solely on a geometric transformer of atomic coordinates and element names that predicts protein sequences from backbone scaffolds aware of the restraints imposed by diverse molecular environments. to validate the method, we show that it can produce highly thermostable, catalytically active enzymes with high success rates. this concept is anticipated to improve the versatility of protein design pipelines for crafting desired functions.
[ "protein design", "engineering", "an unprecedented pace", "the advances", "deep learning", "current models", "non-protein entities", "the design process", "we", "a deep learning approach", "a geometric transformer", "atomic coordinates", "element names", "that", "protein sequences", "backbone scaffolds", "the restraints", "diverse molecular environments", "the method", "we", "it", "highly thermostable, catalytically active enzymes", "high success rates", "this concept", "the versatility", "protein design pipelines", "desired functions" ]
Automated optical inspection based on synthetic mechanisms combining deep learning and machine learning
[ "Chung-Ming Lo", "Ting-Yi Lin" ]
The quality inspection of products before delivery plays a critical role in ensuring manufacturing quality. Quick and accurate inspection of samples is realized by highly automated inspection based on pattern recognition in smart manufacturing. Conventional ensemble methods have been demonstrated to be effective for defect detection. This study further proposed synthetic mechanisms based on using various features and learning classifiers. A database of 6000 sample images of printed circuit board (PCB) connectors collected from factories was compiled. A novel confidence synthesis mechanism was proposed to prescreen images using deep learning features. Spatially connected texture features were then used to reclassify images with low reliabilities. The synthetic mechanism was found to outperform a single classifier. In particular, the highest improvement in accuracy (from 96.00 to 97.83%) was obtained using the confidence-based synthesis. The synthetic mechanism can be used to achieve high accuracy in defect detection and make automation in smart manufacturing more practicable.
10.1007/s10845-024-02474-4
automated optical inspection based on synthetic mechanisms combining deep learning and machine learning
the quality inspection of products before delivery plays a critical role in ensuring manufacturing quality. quick and accurate inspection of samples is realized by highly automated inspection based on pattern recognition in smart manufacturing. conventional ensemble methods have been demonstrated to be effective for defect detection. this study further proposed synthetic mechanisms based on using various features and learning classifiers. a database of 6000 sample images of printed circuit board (pcb) connectors collected from factories was compiled. a novel confidence synthesis mechanism was proposed to prescreen images using deep learning features. spatially connected texture features were then used to reclassify images with low reliabilities. the synthetic mechanism was found to outperform a single classifier. in particular, the highest improvement in accuracy (from 96.00 to 97.83%) was obtained using the confidence-based synthesis. the synthetic mechanism can be used to achieve high accuracy in defect detection and make automation in smart manufacturing more practicable.
[ "the quality inspection", "products", "delivery", "a critical role", "manufacturing quality", "quick and accurate inspection", "samples", "highly automated inspection", "pattern recognition", "smart manufacturing", "conventional ensemble methods", "defect detection", "this study", "further proposed synthetic mechanisms", "various features", "learning classifiers", "a database", "6000 sample images", "printed circuit board (pcb) connectors", "factories", "a novel confidence synthesis mechanism", "images", "deep learning features", "spatially connected texture features", "images", "low reliabilities", "the synthetic mechanism", "a single classifier", "the highest improvement", "accuracy", "97.83%", "the confidence-based synthesis", "the synthetic mechanism", "high accuracy", "defect detection", "automation", "smart manufacturing", "6000", "96.00", "97.83%" ]
Healthcare predictive analytics using machine learning and deep learning techniques: a survey
[ "Mohammed Badawy", "Nagy Ramadan", "Hesham Ahmed Hefny" ]
Healthcare prediction has been a significant factor in saving lives in recent years. In the domain of health care, there is a rapid development of intelligent systems for analyzing complicated data relationships and transforming them into real information for use in the prediction process. Consequently, artificial intelligence is rapidly transforming the healthcare industry, and with it comes the role of systems based on machine learning and deep learning that diagnose and predict diseases, whether from clinical data or from images. Such systems provide tremendous clinical support by simulating human perception and can even diagnose diseases that are difficult to detect by human intelligence. Predictive analytics for healthcare is a critical imperative in the healthcare industry. It can significantly affect the accuracy of disease prediction, which may lead to saving patients' lives in the case of accurate and timely prediction; on the contrary, in the case of an incorrect prediction, it may endanger patients' lives. Therefore, diseases must be accurately predicted and estimated. Hence, reliable and efficient methods for healthcare predictive analysis are essential. Therefore, this paper aims to present a comprehensive survey of existing machine learning and deep learning approaches utilized in healthcare prediction and identify the inherent obstacles to applying these approaches in the healthcare domain.
10.1186/s43067-023-00108-y
healthcare predictive analytics using machine learning and deep learning techniques: a survey
healthcare prediction has been a significant factor in saving lives in recent years. in the domain of health care, there is a rapid development of intelligent systems for analyzing complicated data relationships and transforming them into real information for use in the prediction process. consequently, artificial intelligence is rapidly transforming the healthcare industry, and with it comes the role of systems based on machine learning and deep learning that diagnose and predict diseases, whether from clinical data or from images. such systems provide tremendous clinical support by simulating human perception and can even diagnose diseases that are difficult to detect by human intelligence. predictive analytics for healthcare is a critical imperative in the healthcare industry. it can significantly affect the accuracy of disease prediction, which may lead to saving patients' lives in the case of accurate and timely prediction; on the contrary, in the case of an incorrect prediction, it may endanger patients' lives. therefore, diseases must be accurately predicted and estimated. hence, reliable and efficient methods for healthcare predictive analysis are essential. therefore, this paper aims to present a comprehensive survey of existing machine learning and deep learning approaches utilized in healthcare prediction and identify the inherent obstacles to applying these approaches in the healthcare domain.
[ "healthcare prediction", "a significant factor", "lives", "recent years", "the domain", "health care", "a rapid development", "intelligent systems", "complicated data relationships", "them", "real information", "use", "the prediction process", "artificial intelligence", "the healthcare industry", "the role", "systems", "machine learning", "deep learning", "the creation", "steps", "that", "diseases", "clinical data", "images", "that", "tremendous clinical support", "human perception", "diseases", "that", "human intelligence", "predictive analytics", "healthcare", "the healthcare industry", "it", "the accuracy", "disease prediction", "which", "patients' lives", "the case", "accurate and timely prediction", "the contrary", "the case", "an incorrect prediction", "it", "patients' lives", "diseases", "reliable and efficient methods", "healthcare predictive analysis", "this paper", "a comprehensive survey", "existing machine learning", "deep learning approaches", "healthcare prediction", "the inherent obstacles", "these approaches", "the healthcare domain", "healthcare", "recent years" ]
LSNet: a deep learning based method for skin lesion classification using limited samples and transfer learning
[ "Xiaodan Deng" ]
When analyzing skin lesion image data using deep learning, the lack of a sufficient amount of effective training data poses a challenge. Although transfer learning can alleviate the problem of a small amount of data, the difference between the source data and the target data causes transfer learning to miss some key knowledge. It is important to find the key knowledge neglected by transfer learning. This paper argues that this key knowledge is contained in challenging samples. We propose a novel method named Limited Samples Network (LSNet) to search for challenging samples and strengthen the learning of them. Specifically, LSNet utilizes patch-based structured input and employs a pseudoinverse learning autoencoder to quickly obtain a position-sensitive loss. Challenging samples can be obtained by searching for position-sensitive loss. Subsequently, challenging-sample-augmented transfer learning is employed to enhance the classification performance of deep learning models on skin lesion datasets with limited samples. We carry out comparison experiments with existing state-of-the-art methods. Experiments are carried out on the ISIC 2017, ISIC 2018 and ISIC 2019 skin lesion datasets. The results demonstrate that our training strategy significantly improves the vanilla transfer learning procedure for different types of pre-trained DCNNs. In particular, our method achieves state-of-the-art performance on different skin lesion datasets without using any extra training data.
10.1007/s11042-023-17975-2
lsnet: a deep learning based method for skin lesion classification using limited samples and transfer learning
when analyzing skin lesion image data using deep learning, the lack of a sufficient amount of effective training data poses a challenge. although transfer learning can alleviate the problem of a small amount of data, the difference between the source data and the target data causes transfer learning to miss some key knowledge. it is important to find the key knowledge neglected by transfer learning. this paper argues that this key knowledge is contained in challenging samples. we propose a novel method named limited samples network (lsnet) to search for challenging samples and strengthen the learning of them. specifically, lsnet utilizes patch-based structured input and employs a pseudoinverse learning autoencoder to quickly obtain a position-sensitive loss. challenging samples can be obtained by searching for position-sensitive loss. subsequently, challenging-sample-augmented transfer learning is employed to enhance the classification performance of deep learning models on skin lesion datasets with limited samples. we carry out comparison experiments with existing state-of-the-art methods. experiments are carried out on the isic 2017, isic 2018 and isic 2019 skin lesion datasets. the results demonstrate that our training strategy significantly improves the vanilla transfer learning procedure for different types of pre-trained dcnns. in particular, our method achieves state-of-the-art performance on different skin lesion datasets without using any extra training data.
[ "skin lesion image data", "deep learning", "the lack", "a sufficient amount", "effective training data", "a challenge", "transfer learning", "the problem", "a small amount", "data", "the difference", "the source data", "the target data", "the tranfer", "some key knowledge", "it", "the key knowledge", "transfer learning", "this paper", "this key knowledge", "challenging samples", "we", "a novel method", "limited samples network", "lsnet", "challenging samples", "the learning", "them", "lsnet", "patch-based structured input", "pseudoinverse", "autoencoder", "position-sensitive loss", "challenging samples", "position-sensitive loss", "challenging samples-augmented transfer learning", "the classification performance", "deep learning models", "skin lesion datasets", "limited samples", "we", "comparison experiment", "the-art", "experiments", "the isic", "2019 skin lesion datasets", "the results", "our training strategy", "the vanilla transfer learning procedure", "different types", "pre-trained dcnns", "our method", "the-art", "different skin lesion datasets", "any extra training data", "2017", "2018", "2019" ]
Classical learning or deep learning: a study on food photo aesthetic assessment
[ "Zhaotong Li", "Zeru Zhang", "Song Gao" ]
Food photo aesthetic assessment has gained increasing attention in both commercial activity and social life. However, there has been little research dedicated to the quality classification of food photos. This paper presents a study on food photo aesthetic evaluation, covering dataset collection and evaluation methods. First, a dataset of food photos was collected by web crawler from food-sharing websites, and the appropriate images were selected and labeled using a WeChat applet for binary classification. Then, food photo aesthetic assessment was evaluated using classical machine learning and deep learning methods. Different hand-crafted features, including layout, texture, color, local, and deep features, were manually extracted. Two classifiers, support vector machine and random forest, were used to establish the classical learning models. Meanwhile, three convolutional neural networks (AlexNet, VGGNet, ResNet) were applied to compare with former methods by fine-tuning the model parameters. Four quantitative metrics (accuracy, recall, precision, and f1-score) were used to evaluate the performance of food photo aesthetic assessment, with the accuracy of classical and deep learning methods being 91.09% vs 94.70%, respectively. This demonstrates that classical learning with good enough hand-crafted features is capable of producing performance close to that of CNNs. The dataset for food photo aesthetic assessment can be used as a preliminary exploration of food image aesthetics assessment from both classical learning and deep learning.
10.1007/s11042-023-15791-2
classical learning or deep learning: a study on food photo aesthetic assessment
food photo aesthetic assessment has gained increasing attention in both commercial activity and social life. however, there has been little research dedicated to the quality classification of food photos. this paper presents a study on food photo aesthetic evaluation, covering dataset collection and evaluation methods. first, a dataset of food photos was collected by web crawler from food-sharing websites, and the appropriate images were selected and labeled using a wechat applet for binary classification. then, food photo aesthetic assessment was evaluated using classical machine learning and deep learning methods. different hand-crafted features, including layout, texture, color, local, and deep features, were manually extracted. two classifiers, support vector machine and random forest, were used to establish the classical learning models. meanwhile, three convolutional neural networks (alexnet, vggnet, resnet) were applied to compare with former methods by fine-tuning the model parameters. four quantitative metrics (accuracy, recall, precision, and f1-score) were used to evaluate the performance of food photo aesthetic assessment, with the accuracy of classical and deep learning methods being 91.09% vs 94.70%, respectively. this demonstrates that classical learning with good enough hand-crafted features is capable of producing performance close to that of cnns. the dataset for food photo aesthetic assessment can be used as a preliminary exploration of food image aesthetics assessment from both classical learning and deep learning.
[ "food photo aesthetic assessment", "increasing attention", "both commercial activity", "social life", "little research", "the quality classification", "food photos", "this paper", "a study", "food photo aesthetic evaluation", "dataset collection and evaluation methods", "a dataset", "food photos", "web crawler", "food-sharing websites", "the appropriate images", "a wechat applet", "binary classification", "food photo aesthetic assessment", "classical machine learning", "deep learning methods", "different hand-crafted features", "layout", "texture", "color", "deep features", "two classifiers", "support vector machine", "random forest", "the classical learning models", "three convolutional neural networks", "alexnet", "vggnet", "resnet", "former methods", "the model parameters", "four quantitative metrics", "accuracy", "recall", "precision", "f1-score", "the performance", "food photo aesthetic assessment", "the accuracy", "classical and deep learning methods", "91.09%", "94.70%", "this", "classical learning", "good enough hand-crafted features", "performance", "that", "cnns", "the dataset", "food photo aesthetic assessment", "a preliminary exploration", "food image aesthetics assessment", "both classical learning", "deep learning", "first", "two", "three", "four", "91.09%", "94.70%" ]
Deep learning in terrestrial conservation biology
[ "Zoltán Barta" ]
Biodiversity is being lost at an unprecedented rate on Earth. As a first step to more effectively combat this process, we need efficient methods to monitor biodiversity changes. Recent technological advances can provide powerful tools (e.g. camera traps, digital acoustic recorders, satellite imagery, social media records) that can speed up the collection of biological data. Nevertheless, the processing steps of the raw data served by these tools are still painstakingly slow. A new computer technology, deep learning based artificial intelligence, might, however, help. In this short and subjective review I survey recent technological advances used in conservation biology, highlight problems with processing their data, briefly describe deep learning technology and show case studies of its use in conservation biology. Some of the limitations of the technology are also highlighted.
10.1007/s42977-023-00200-4
deep learning in terrestrial conservation biology
biodiversity is being lost at an unprecedented rate on earth. as a first step to more effectively combat this process, we need efficient methods to monitor biodiversity changes. recent technological advances can provide powerful tools (e.g. camera traps, digital acoustic recorders, satellite imagery, social media records) that can speed up the collection of biological data. nevertheless, the processing steps of the raw data served by these tools are still painstakingly slow. a new computer technology, deep learning based artificial intelligence, might, however, help. in this short and subjective review i survey recent technological advances used in conservation biology, highlight problems with processing their data, briefly describe deep learning technology and show case studies of its use in conservation biology. some of the limitations of the technology are also highlighted.
[ "biodiversity", "an unprecedented rate", "earth", "a first step", "this process", "we", "efficient methods", "biodiversity changes", "recent technological advance", "powerful tools", "e.g. camera traps", "digital acoustic recorders", "satellite imagery", "social media records", "that", "the collection", "biological data", "the processing steps", "the raw data", "these tools", "a new computer technology", "based artificial intelligence", "this short and subjective review", "i", "recent technological advances", "conservation biology", "problems", "their data", "deep learning technology", "show case studies", "its use", "conservation biology", "some", "the limitations", "the technology", "earth", "first" ]
Medical images classification using deep learning: a survey
[ "Rakesh Kumar", "Pooja Kumbharkar", "Sandeep Vanam", "Sanjeev Sharma" ]
Deep learning has made significant advancements in recent years. The technology is rapidly evolving and has been used in numerous automated applications with minimal loss. With these deep learning methods, medical image analysis for disease detection can be performed with minimal errors and losses. A survey of deep learning-based medical image classification is presented in this paper. As a result of their automatic feature representations, these methods have high accuracy and precision. This paper reviews various models like CNN, Transfer learning, Long short term memory, Generative adversarial networks, and Autoencoders and their combinations for various purposes in medical image classification. The total number of papers reviewed is 158. In the study, we discussed the advantages and limitations of the methods. A discussion is provided on the various applications of medical imaging, the available datasets for medical imaging, and the evaluation metrics. We also discuss the future trends in medical imaging using artificial intelligence.
10.1007/s11042-023-15576-7
medical images classification using deep learning: a survey
deep learning has made significant advancements in recent years. the technology is rapidly evolving and has been used in numerous automated applications with minimal loss. with these deep learning methods, medical image analysis for disease detection can be performed with minimal errors and losses. a survey of deep learning-based medical image classification is presented in this paper. as a result of their automatic feature representations, these methods have high accuracy and precision. this paper reviews various models like cnn, transfer learning, long short term memory, generative adversarial networks, and autoencoders and their combinations for various purposes in medical image classification. the total number of papers reviewed is 158. in the study, we discussed the advantages and limitations of the methods. a discussion is provided on the various applications of medical imaging, the available datasets for medical imaging, and the evaluation metrics. we also discuss the future trends in medical imaging using artificial intelligence.
[ "deep learning", "significant advancements", "recent years", "the technology", "numerous automated applications", "minimal loss", "these deep learning methods", "medical image analysis", "disease detection", "minimal errors", "losses", "a survey", "deep learning-based medical image classification", "this paper", "a result", "their automatic feature representations", "these methods", "high accuracy", "precision", "this paper", "various models", "cnn", "generative adversarial networks", "autoencoders", "their combinations", "various purposes", "medical image classification", "the total number", "papers", "the study", "we", "the advantages", "limitations", "the methods", "a discussion", "the various applications", "medical imaging", "the available datasets", "medical imaging", "the evaluation metrics", "we", "the future trends", "medical imaging", "artificial intelligence", "recent years", "cnn", "158" ]
Super-resolution Deep Learning Reconstruction Cervical Spine 1.5T MRI: Improved Interobserver Agreement in Evaluations of Neuroforaminal Stenosis Compared to Conventional Deep Learning Reconstruction
[ "Koichiro Yasaka", "Shunichi Uehara", "Shimpei Kato", "Yusuke Watanabe", "Taku Tajima", "Hiroyuki Akai", "Naoki Yoshioka", "Masaaki Akahane", "Kuni Ohtomo", "Osamu Abe", "Shigeru Kiryu" ]
The aim of this study was to investigate whether super-resolution deep learning reconstruction (SR-DLR) is superior to conventional deep learning reconstruction (DLR) with respect to interobserver agreement in the evaluation of neuroforaminal stenosis using 1.5T cervical spine MRI. This retrospective study included 39 patients who underwent 1.5T cervical spine MRI. T2-weighted sagittal images were reconstructed with SR-DLR and DLR. Three blinded radiologists independently evaluated the images in terms of the degree of neuroforaminal stenosis, depictions of the vertebrae, spinal cord and neural foramina, sharpness, noise, artefacts and diagnostic acceptability. In quantitative image analyses, a fourth radiologist evaluated the signal-to-noise ratio (SNR) by placing a circular or ovoid region of interest on the spinal cord, and the edge slope based on a linear region of interest placed across the surface of the spinal cord. Interobserver agreement in the evaluations of neuroforaminal stenosis using SR-DLR and DLR was 0.422–0.571 and 0.410–0.542, respectively. The kappa values between reader 1 vs. reader 2 and reader 2 vs. reader 3 significantly differed. Two of the three readers rated depictions of the spinal cord, sharpness, and diagnostic acceptability as significantly better with SR-DLR than with DLR. Both SNR and edge slope (/mm) were also significantly better with SR-DLR (12.9 and 6031, respectively) than with DLR (11.5 and 3741, respectively) (p < 0.001 for both). In conclusion, compared to DLR, SR-DLR improved interobserver agreement in the evaluations of neuroforaminal stenosis using 1.5T cervical spine MRI.
10.1007/s10278-024-01112-y
super-resolution deep learning reconstruction cervical spine 1.5t mri: improved interobserver agreement in evaluations of neuroforaminal stenosis compared to conventional deep learning reconstruction
the aim of this study was to investigate whether super-resolution deep learning reconstruction (sr-dlr) is superior to conventional deep learning reconstruction (dlr) with respect to interobserver agreement in the evaluation of neuroforaminal stenosis using 1.5t cervical spine mri. this retrospective study included 39 patients who underwent 1.5t cervical spine mri. t2-weighted sagittal images were reconstructed with sr-dlr and dlr. three blinded radiologists independently evaluated the images in terms of the degree of neuroforaminal stenosis, depictions of the vertebrae, spinal cord and neural foramina, sharpness, noise, artefacts and diagnostic acceptability. in quantitative image analyses, a fourth radiologist evaluated the signal-to-noise ratio (snr) by placing a circular or ovoid region of interest on the spinal cord, and the edge slope based on a linear region of interest placed across the surface of the spinal cord. interobserver agreement in the evaluations of neuroforaminal stenosis using sr-dlr and dlr was 0.422–0.571 and 0.410–0.542, respectively. the kappa values between reader 1 vs. reader 2 and reader 2 vs. reader 3 significantly differed. two of the three readers rated depictions of the spinal cord, sharpness, and diagnostic acceptability as significantly better with sr-dlr than with dlr. both snr and edge slope (/mm) were also significantly better with sr-dlr (12.9 and 6031, respectively) than with dlr (11.5 and 3741, respectively) (p < 0.001 for both). in conclusion, compared to dlr, sr-dlr improved interobserver agreement in the evaluations of neuroforaminal stenosis using 1.5t cervical spine mri.
[ "the aim", "this study", "super-resolution deep learning reconstruction", "sr-dlr", "conventional deep learning reconstruction", "dlr", "respect", "interobserver agreement", "the evaluation", "neuroforaminal stenosis", "1.5t cervical spine mri", "this retrospective study", "39 patients", "who", "1.5t cervical spine mri", "t2-weighted sagittal images", "sr-dlr", "dlr", "three blinded radiologists", "the images", "terms", "the degree", "neuroforaminal stenosis", "the vertebrae", "spinal cord", "neural foramina", "sharpness", "noise", "artefacts", "diagnostic acceptability", "quantitative image analyses", "a fourth radiologist", "noise", "snr", "a circular or ovoid region", "interest", "the spinal cord", "the edge slope", "a linear region", "interest", "the surface", "the spinal cord", "interobserver agreement", "the evaluations", "neuroforaminal stenosis", "sr-dlr", "dlr", "0.422–0.571", "the kappa", "reader", "reader", "reader", "the three readers", "depictions", "the spinal cord", "sharpness", "diagnostic acceptability", "sr-dlr", "dlr", "edge slope", "/mm", "dlr", "both", "conclusion", "dlr", "sr-dlr", "the evaluations", "neuroforaminal stenosis", "1.5t cervical spine mri", "1.5", "39", "1.5", "three", "fourth", "0.422–0.571", "0.410–0.542", "1", "2", "2", "3", "two", "three", "12.9", "6031", "11.5", "3741", "1.5" ]
Accelerating three-dimensional phase-field simulations via deep learning approaches
[ "Xuewei Zhou", "Sheng Sun", "Songlin Cai", "Gongyu Chen", "Honghui Wu", "Jie Xiong", "Jiaming Zhu" ]
Phase-field modeling (PFM) is a powerful but computationally expensive technique for simulating three-dimensional (3D) microstructure evolutions. Very recently, integrating machine learning into phase-field simulations provides a promising way to reduce calculation time remarkably. In this study, we propose a deep learning model that combines a convolutional autoencoder with a deep operator network to predict 3D microstructure evolution by using 2D slices of the 3D system. It is found that the deep learning model can shorten the calculation time from 37 min to 3 s after the initial training, while skipping 5-time steps, and reduce the phase-field simulation time by 31% in entire calculation of the evolution process. Interestingly, this model achieves good accuracy in predicting 3D microstructures by utilizing only 2D information. This work demonstrates the efficiency of machine learning in accelerating phase-field simulations while maintaining high accuracy and promotes the application of PFM in fundamental studies.
10.1007/s10853-024-10118-4
accelerating three-dimensional phase-field simulations via deep learning approaches
phase-field modeling (pfm) is a powerful but computationally expensive technique for simulating three-dimensional (3d) microstructure evolutions. very recently, integrating machine learning into phase-field simulations provides a promising way to reduce calculation time remarkably. in this study, we propose a deep learning model that combines a convolutional autoencoder with a deep operator network to predict 3d microstructure evolution by using 2d slices of the 3d system. it is found that the deep learning model can shorten the calculation time from 37 min to 3 s after the initial training, while skipping 5-time steps, and reduce the phase-field simulation time by 31% in entire calculation of the evolution process. interestingly, this model achieves good accuracy in predicting 3d microstructures by utilizing only 2d information. this work demonstrates the efficiency of machine learning in accelerating phase-field simulations while maintaining high accuracy and promotes the application of pfm in fundamental studies.
[ "phase-field modeling", "pfm", "a powerful but computationally expensive technique", "three-dimensional (3d) microstructure evolutions", "phase-field simulations", "a promising way", "calculation time", "this study", "we", "a deep learning model", "that", "a convolutional autoencoder", "a deep operator network", "3d microstructure evolution", "2d slices", "the 3d system", "it", "the deep learning model", "the calculation time", "37 min", "3 s", "the initial training", "5-time steps", "the phase-field simulation time", "31%", "entire calculation", "the evolution process", "this model", "good accuracy", "3d microstructures", "only 2d information", "this work", "the efficiency", "accelerating phase-field simulations", "high accuracy", "the application", "pfm", "fundamental studies", "three", "3d", "3d", "2d", "3d", "37", "3 s", "5", "31%", "3d", "2d" ]
Fostering success in online English education: Exploring the effects of ICT literacy, online learning self-efficacy, and motivation on deep learning
[ "Wei Sun", "Hong Shi" ]
This research explores how motivation and online learning self-efficacy (OLSE) act as mediators in the association between information and communication technology (ICT) literacy and deep learning within the context of online English as a foreign language (EFL) education. A sample of 372 participants were recruited on a voluntary and anonymous basis from a public university in northern China for this study. Confirmatory factor analysis (CFA) was employed to evaluate the reliability and validity of the questionnaires. Subsequently, structural equation modeling (SEM) was conducted to examine the hypothesized model, with both CFA and SEM conducted with AMOS 29.0. The results reveal that ICT literacy, motivation, and OLSE positively and directly predict deep learning in online education. Additionally, ICT literacy also positively predicts deep learning indirectly with motivation and OLSE being significant mediators. The findings underscore the importance of ICT literacy, motivation, and OLSE for EFL learners to achieve deep learning in online EFL education. Drawn from these findings, pedagogical implications for alleviating EFL learners’ online deep learning were provided.
10.1007/s10639-024-12827-4
fostering success in online english education: exploring the effects of ict literacy, online learning self-efficacy, and motivation on deep learning
this research explores how motivation and online learning self-efficacy (olse) act as mediators in the association between information and communication technology (ict) literacy and deep learning within the context of online english as a foreign language (efl) education. a sample of 372 participants were recruited on a voluntary and anonymous basis from a public university in northern china for this study. confirmatory factor analysis (cfa) was employed to evaluate the reliability and validity of the questionnaires. subsequently, structural equation modeling (sem) was conducted to examine the hypothesized model, with both cfa and sem conducted with amos 29.0. the results reveal that ict literacy, motivation, and olse positively and directly predict deep learning in online education. additionally, ict literacy also positively predicts deep learning indirectly with motivation and olse being significant mediators. the findings underscore the importance of ict literacy, motivation, and olse for efl learners to achieve deep learning in online efl education. drawn from these findings, pedagogical implications for alleviating efl learners’ online deep learning were provided.
[ "this research", "motivation", "self-efficacy", "olse", "mediators", "the association", "information", "communication technology", "ict) literacy", "deep learning", "the context", "online english", "a foreign language (efl) education", "a sample", "372 participants", "a voluntary and anonymous basis", "a public university", "northern china", "this study", "confirmatory factor analysis", "cfa", "the reliability", "validity", "the questionnaires", "sem", "the hypothesized model", "both cfa", "sem", "amos", "the results", "ict literacy", "motivation", "olse", "deep learning", "online education", "ict literacy", "motivation", "olse", "significant mediators", "the findings", "the importance", "ict literacy", "motivation", "olse", "efl learners", "deep learning", "online efl education", "these findings", "pedagogical implications", "efl learners", "online deep learning", "english", "372", "china", "cfa", "cfa", "29.0" ]
Efficient socket-based data transmission method and implementation in deep learning
[ "Xin-Jian Wei", "Shu-Ping Li", "Wu-Yang Yang", "Xiang-Yang Zhang", "Hai-Shan Li", "Xin Xu", "Nan Wang", "Zhanbao Fu" ]
The deep learning algorithm, which has been increasingly applied in the field of petroleum geophysical prospecting, has achieved good results in improving efficiency and accuracy based on test applications. To play a greater role in actual production, these algorithm modules must be integrated into software systems and used more often in actual production projects. Deep learning frameworks, such as TensorFlow and PyTorch, basically take Python as the core architecture, while the application program mainly uses Java, C#, and other programming languages. During integration, the seismic data read by the Java and C# data interfaces must be transferred to the Python main program module. The data exchange methods between Java, C#, and Python include shared memory, shared directory, and so on. However, these methods have the disadvantages of low transmission efficiency and unsuitability for asynchronous networks. Considering the large volume of seismic data and the need for network support for deep learning, this paper proposes a method of transmitting seismic data based on Socket. By maximizing Socket’s cross-network and efficient long-distance transmission, this approach solves the problem of inefficient transmission of underlying data while integrating the deep learning algorithm module into a software system. Furthermore, the actual production application shows that this method effectively solves the shortage of data transmission in shared memory, shared directory, and other modes while simultaneously improving the transmission efficiency of massive seismic data across modules at the bottom of the software.
10.1007/s11770-024-1090-y
efficient socket-based data transmission method and implementation in deep learning
the deep learning algorithm, which has been increasingly applied in the field of petroleum geophysical prospecting, has achieved good results in improving efficiency and accuracy based on test applications. to play a greater role in actual production, these algorithm modules must be integrated into software systems and used more often in actual production projects. deep learning frameworks, such as tensorflow and pytorch, basically take python as the core architecture, while the application program mainly uses java, c#, and other programming languages. during integration, the seismic data read by the java and c# data interfaces must be transferred to the python main program module. the data exchange methods between java, c#, and python include shared memory, shared directory, and so on. however, these methods have the disadvantages of low transmission efficiency and unsuitability for asynchronous networks. considering the large volume of seismic data and the need for network support for deep learning, this paper proposes a method of transmitting seismic data based on socket. by maximizing socket’s cross-network and efficient long-distance transmission, this approach solves the problem of inefficient transmission of underlying data while integrating the deep learning algorithm module into a software system. furthermore, the actual production application shows that this method effectively solves the shortage of data transmission in shared memory, shared directory, and other modes while simultaneously improving the transmission efficiency of massive seismic data across modules at the bottom of the software.
[ "the deep learning algorithm", "which", "the field", "petroleum geophysical prospecting", "good results", "efficiency", "accuracy", "test applications", "a greater role", "actual production", "these algorithm modules", "software systems", "actual production projects", "deep learning frameworks", "tensorflow", "pytorch", "python", "the core architecture", "the application program", "integration", "the seismic data", "the java and c# data interfaces", "the python main program module", "the data exchange methods", "java", "python", "shared memory", "shared directory", "these methods", "the disadvantages", "low transmission efficiency", "unsuitability", "asynchronous networks", "the large volume", "seismic data", "the need", "network support", "deep learning", "this paper", "a method", "seismic data", "socket", "socket", "-", "efficient longdistance transmission", "this approach", "the problem", "inefficient transmission", "underlying data", "the deep learning algorithm module", "a software system", "the actual production application", "this method", "the shortage", "data transmission", "shared memory", "shared directory", "other modes", "the transmission efficiency", "massive seismic data", "modules", "the bottom", "the software", "#", "java", "#", "java", "#" ]
Regularization by deep learning in signal processing
[ "Carlos Ramirez Villamarin", "Erwin Suazo", "Tamer Oraby" ]
In this paper, we explore a new idea of using deep learning representations as a principle for regularization in inverse problems for digital signal processing. Specifically, we consider the standard variational formulation, where a composite function encodes a fidelity term that quantifies the proximity of the candidate solution to the observations (under a physical process), and a second regularization term that constrains the space of solutions according to some prior knowledge. In this work, we investigate deep learning representations as a means of fulfilling the role of this second (regularization) term. Several numerical examples are presented for signal restoration under different degradation processes, showing successful recovery under the proposed methodology. Moreover, one of these examples uses real data on energy usage by households in London from 2012 to 2014.
10.1007/s11760-024-03083-7
regularization by deep learning in signal processing
in this paper, we explore a new idea of using deep learning representations as a principle for regularization in inverse problems for digital signal processing. specifically, we consider the standard variational formulation, where a composite function encodes a fidelity term that quantifies the proximity of the candidate solution to the observations (under a physical process), and a second regularization term that constrains the space of solutions according to some prior knowledge. in this work, we investigate deep learning representations as a means of fulfilling the role of this second (regularization) term. several numerical examples are presented for signal restoration under different degradation processes, showing successful recovery under the proposed methodology. moreover, one of these examples uses real data on energy usage by households in london from 2012 to 2014.
[ "this paper", "we", "a new idea", "deep learning representations", "a principle", "regularization", "inverse problems", "digital signal processing", "we", "the standard variational formulation", "a composite function", "a fidelity term", "that", "the proximity", "the candidate solution", "the observations", "a physical process", "a second regularization term", "that", "the space", "solutions", "some prior knowledge", "this work", "we", "deep learning representations", "a means", "the role", "this second (regularization) term", "several numerical examples", "signal restoration", "different degradation processes", "successful recovery", "the proposed methodology", "these examples", "real data", "energy usage", "households", "london", "fidelity", "second", "one", "london", "2012", "2014" ]
Network intrusion detection using feature fusion with deep learning
[ "Abiodun Ayantayo", "Amrit Kaur", "Anit Kour", "Xavier Schmoor", "Fayyaz Shah", "Ian Vickers", "Paul Kearney", "Mohammed M. Abdelsamea" ]
Network intrusion detection systems (NIDSs) are one of the main tools used to defend against cyber-attacks. Deep learning has shown remarkable success in network intrusion detection. However, the effect of feature fusion has yet to be explored in how to boost the performance of the deep learning model and improve its generalisation capability in NIDS. In this paper, we propose novel deep learning architectures with different feature fusion mechanisms aimed at improving the performance of the multi-classification components of NIDS. We propose three different deep learning models, which we call early-fusion, late-fusion, and late-ensemble learning models using feature fusion with fully connected deep networks. Our feature fusion mechanisms were designed to encourage deep learning models to learn relationships between different input features more efficiently and mitigate any potential bias that may occur with a particular feature type. To assess the efficacy of our deep learning solutions and make comparisons with state-of-the-art models, we employ the widely accessible UNSW-NB15 and NSL-KDD datasets specifically designed to enhance the development and evaluation of improved NIDSs. Through quantitative analysis, we demonstrate the resilience of our proposed models in effectively addressing the challenges posed by multi-classification tasks, especially in the presence of class imbalance issues. Moreover, our late-fusion and late-ensemble models showed the best generalisation behaviour (against overfitting) with similar performance on the training and validation sets.
10.1186/s40537-023-00834-0
network intrusion detection using feature fusion with deep learning
network intrusion detection systems (nidss) are one of the main tools used to defend against cyber-attacks. deep learning has shown remarkable success in network intrusion detection. however, the effect of feature fusion has yet to be explored in how to boost the performance of the deep learning model and improve its generalisation capability in nids. in this paper, we propose novel deep learning architectures with different feature fusion mechanisms aimed at improving the performance of the multi-classification components of nids. we propose three different deep learning models, which we call early-fusion, late-fusion, and late-ensemble learning models using feature fusion with fully connected deep networks. our feature fusion mechanisms were designed to encourage deep learning models to learn relationships between different input features more efficiently and mitigate any potential bias that may occur with a particular feature type. to assess the efficacy of our deep learning solutions and make comparisons with state-of-the-art models, we employ the widely accessible unsw-nb15 and nsl-kdd datasets specifically designed to enhance the development and evaluation of improved nidss. through quantitative analysis, we demonstrate the resilience of our proposed models in effectively addressing the challenges posed by multi-classification tasks, especially in the presence of class imbalance issues. moreover, our late-fusion and late-ensemble models showed the best generalisation behaviour (against overfitting) with similar performance on the training and validation sets.
[ "network intrusion detection systems", "nidss", "the main tools", "cyber-attacks", "deep learning", "remarkable success", "network intrusion detection", "the effect", "feature fusion", "the performance", "the deep learning model", "its generalisation capability", "nids", "this paper", "we", "novel deep learning architectures", "different feature fusion mechanisms", "the performance", "the multi-classification components", "nids", "we", "three different deep learning models", "which", "we", "early-fusion, late-fusion, and late-ensemble learning models", "feature fusion", "fully connected deep networks", "our feature fusion mechanisms", "deep learning models", "relationships", "different input features", "any potential bias", "that", "a particular feature type", "the efficacy", "our deep learning solutions", "comparisons", "the-art", "we", "the widely accessible unsw-nb15 and nsl-kdd datasets", "the development", "evaluation", "improved nidss", "quantitative analysis", "we", "the resilience", "our proposed models", "the challenges", "multi-classification tasks", "the presence", "class imbalance issues", "our late-fusion and late-ensemble models", "the best generalisation behaviour", "overfitting", "similar performance", "the training and validation sets", "one", "three" ]
Amplifying document categorization with advanced features and deep learning
[ "M. Kavitha", "K. Akila" ]
The field of natural language processing (NLP) plays a pivotal role in discerning unstructured data from diverse origins. This study employs advanced techniques rooted in machine learning and deep learning to effectively categorize news articles. Notably, deep learning models have demonstrated superior performance over traditional machine learning algorithms, rendering them a popular choice for a range of NLP tasks. The research employs feature extraction techniques to identify multiword tokens, negation words, and out-of-vocabulary words and replace them. Additionally, convolutional neural network models leverage embedding, convolutional layers, and max pooling layers to capture intricate features. For tasks requiring an understanding of dependencies among long phrases, long short-term memory models come into play. The evaluation of the proposed model hinges on training it with datasets like AG News, BBC, and 20 Newsgroup, gauging its efficacy. The study delves into the myriad challenges inherent to text classification. These challenges are thoughtfully discussed, shedding light on the intricacies of the process. Furthermore, the research furnishes comprehensive test outcomes for both conventional machine learning and deep learning models. The significance of this proposed model is that it uses a multiword expression lexicon, wordnet synset, and word embedding techniques for feature extraction. The performance of the models is increased when using these feature extraction techniques.
10.1007/s11042-024-18483-7
amplifying document categorization with advanced features and deep learning
the field of natural language processing (nlp) plays a pivotal role in discerning unstructured data from diverse origins. this study employs advanced techniques rooted in machine learning and deep learning to effectively categorize news articles. notably, deep learning models have demonstrated superior performance over traditional machine learning algorithms, rendering them a popular choice for a range of nlp tasks. the research employs feature extraction techniques to identify multiword tokens, negation words, and out-of-vocabulary words and replace them. additionally, convolutional neural network models leverage embedding, convolutional layers, and max pooling layers to capture intricate features. for tasks requiring an understanding of dependencies among long phrases, long short-term memory models come into play. the evaluation of the proposed model hinges on training it with datasets like ag news, bbc, and 20 newsgroup, gauging its efficacy. the study delves into the myriad challenges inherent to text classification. these challenges are thoughtfully discussed, shedding light on the intricacies of the process. furthermore, the research furnishes comprehensive test outcomes for both conventional machine learning and deep learning models. the significance of this proposed model is that it uses a multiword expression lexicon, wordnet synset, and word embedding techniques for feature extraction. the performance of the models is increased when using these feature extraction techniques.
[ "the field", "natural language processing", "nlp", "a pivotal role", "unstructured data", "diverse origins", "this study", "advanced techniques", "machine learning", "deep learning", "news articles", "deep learning models", "superior performance", "traditional machine learning algorithms", "them", "a popular choice", "a range", "nlp tasks", "the research", "feature extraction techniques", "multiword tokens", "negation words", "out-of-vocabulary words", "them", "convolutional neural network models", "max pooling layers", "intricate features", "tasks", "an understanding", "dependencies", "long phrases", "long short-term memory models", "play", "the evaluation", "the proposed model hinges", "it", "datasets", "ag news", "20 newsgroup", "its efficacy", "the study", "the myriad challenges", "classification", "these challenges", "light", "the intricacies", "the process", "the research", "comprehensive test outcomes", "both conventional machine learning", "deep learning models", "the significance", "this proposed model", "it", "a multiword expression lexicon", "wordnet synset", "techniques", "feature extraction", "the performance", "the models", "these feature extraction techniques", "multiword tokens", "max", "ag news", "bbc", "20", "multiword" ]
Severity Grading of Ulcerative Colitis Using Endoscopy Images: An Ensembled Deep Learning and Transfer Learning Approach
[ "Subhashree Mohapatra", "Pukhraj Singh Jeji", "Girish Kumar Pati", "Janmenjoy Nayak", "Manohar Mishra", "Tripti Swarnkar" ]
Ulcerative colitis (UC) is a persistent condition necessitating prompt treatment to avert potential complications. Detecting UC severity aids treatment decisions. The Mayo-endoscopic subscore is a standard for UC severity grading (UCSG). Deep learning (DL) and transfer learning (TL) have enhanced severity grading, but ensemble learning’s impact remains unexplored. This study designed DL-ensemble and TL-ensemble models for UCSG. Using the HyperKvasir dataset, we classified UCSG into two stages: initial and advanced. Three deep convolutional neural networks were trained from scratch for DL, and three pre-trained networks were trained for TL. UCSG was conducted using a majority voting ensemble scheme. A detailed comparative analysis evaluated individual networks. It is observed that TL models perform better than the DL models, and implementation of ensemble learning enhances the performance of both DL and TL models. Following a comprehensive assessment, it is observed that the TL-ensemble model has delivered the optimal outcome, boasting an accuracy of 90.58% and an MCC of 0.7624. This study highlights the efficacy of our methodology. TL-ensemble models, especially, excelled, providing valuable insights into automatic UCSG systems’ potential enhancement. Ensemble learning offers promise for enhancing accuracy and reliability in UCSG, with implications for future research in this field.
10.1007/s40031-024-01099-8
severity grading of ulcerative colitis using endoscopy images: an ensembled deep learning and transfer learning approach
ulcerative colitis (uc) is a persistent condition necessitating prompt treatment to avert potential complications. detecting uc severity aids treatment decisions. the mayo-endoscopic subscore is a standard for uc severity grading (ucsg). deep learning (dl) and transfer learning (tl) have enhanced severity grading, but ensemble learning’s impact remains unexplored. this study designed dl-ensemble and tl-ensemble models for ucsg. using the hyperkvasir dataset, we classified ucsg into two stages: initial and advanced. three deep convolutional neural networks were trained from scratch for dl, and three pre-trained networks were trained for tl. ucsg was conducted using a majority voting ensemble scheme. a detailed comparative analysis evaluated individual networks. it is observed that tl models perform better than the dl models, and implementation of ensemble learning enhances the performance of both dl and tl models. following a comprehensive assessment, it is observed that the tl-ensemble model has delivered the optimal outcome, boasting an accuracy of 90.58% and an mcc of 0.7624. this study highlights the efficacy of our methodology. tl-ensemble models, especially, excelled, providing valuable insights into automatic ucsg systems’ potential enhancement. ensemble learning offers promise for enhancing accuracy and reliability in ucsg, with implications for future research in this field.
[ "ulcerative colitis", "uc", "a persistent condition", "prompt treatment", "potential complications", "uc severity aids treatment decisions", "the mayo-endoscopic", "a standard", "ucsg", "deep learning", "dl", "learning", "tl", "severity grading", "ensemble learning’s impact", "this study", "dl", "-ensemble and tl-ensemble models", "ucsg", "the hyperkvasir", "we", "ucsg", "two stages", "three deep convolutional neural networks", "scratch", "dl", "three pre-trained networks", "tl. ucsg", "a majority voting ensemble scheme", "a detailed comparative analysis", "it", "tl models", "the dl models", "implementation", "the performance", "both dl and tl models", "a comprehensive assessment", "it", "the tl-ensemble model", "the optimal outcome", "an accuracy", "90.58%", "a mcc", "this study", "the efficacy", "our methodology", "tl-ensemble models", "valuable insights", "automatic ucsg systems’ potential enhancement", "ensemble learning", "promise", "accuracy", "reliability", "ucsg", "implications", "future research", "this field", "two", "three", "three", "ucsg", "90.58%", "0.7624" ]
A systematic review on smart waste biomass production using machine learning and deep learning
[ "Wei Peng", "Omid Karimi Sadaghiani" ]
The utilization of waste materials as an energy resource requires four main steps: production, pre-treatment, bio-refinery, and upgrading. This work reviews Machine Learning applications in the waste biomass production step. By investigating numerous related works, it is concluded that there is a considerable gap in surveying and collecting the applications of Machine Learning in waste biomass. To fill this gap, the current work reviews the kinds and resources of waste biomass as well as the role of Machine Learning and Deep Learning in their development. Moreover, the storage and transportation of the wastes are surveyed, followed by the application of Machine Learning and Deep Learning in these areas. In summary, after analysis of numerous papers, it is concluded that Machine Learning and Deep Learning are widely utilized in waste biomass production to enhance waste collection quantity and quality, improve predictions, diminish losses, and improve storage and transportation conditions.
10.1007/s10163-023-01794-6
a systematic review on smart waste biomass production using machine learning and deep learning
the utilization of waste materials as an energy resource requires four main steps: production, pre-treatment, bio-refinery, and upgrading. this work reviews machine learning applications in the waste biomass production step. by investigating numerous related works, it is concluded that there is a considerable gap in surveying and collecting the applications of machine learning in waste biomass. to fill this gap, the current work reviews the kinds and resources of waste biomass as well as the role of machine learning and deep learning in their development. moreover, the storage and transportation of the wastes are surveyed, followed by the application of machine learning and deep learning in these areas. in summary, after analysis of numerous papers, it is concluded that machine learning and deep learning are widely utilized in waste biomass production to enhance waste collection quantity and quality, improve predictions, diminish losses, and improve storage and transportation conditions.
[ "the utilization", "waste materials", "an energy resources", "four main steps", "production", ", bio", "refinery", "upgrading", "this work", "machine learning applications", "the waste biomass production step", "numerous related works", "it", "a considerable reviewing gap", "the surveying", "the applications", "machine learning", "the waste biomass", "this gap", "the current work", "the kinds", "resources", "waste biomass", "the role", "machine learning", "deep learning", "their development", "the storage", "transportation", "the wastes", "the application", "machine learning", "deep learning", "these areas", "analysis", "numerous papers", "it", "machine learning", "deep learning", "waste biomass production areas", "the waste", "quality", "quality", "the predictions", "the losses", "storage and transformation conditions", "four" ]
Accelerating three-dimensional phase-field simulations via deep learning approaches
[ "Xuewei Zhou", "Sheng Sun", "Songlin Cai", "Gongyu Chen", "Honghui Wu", "Jie Xiong", "Jiaming Zhu" ]
Phase-field modeling (PFM) is a powerful but computationally expensive technique for simulating three-dimensional (3D) microstructure evolutions. Very recently, integrating machine learning into phase-field simulations has provided a promising way to remarkably reduce calculation time. In this study, we propose a deep learning model that combines a convolutional autoencoder with a deep operator network to predict 3D microstructure evolution by using 2D slices of the 3D system. It is found that the deep learning model can shorten the calculation time from 37 min to 3 s after the initial training, while skipping 5 time steps, and reduce the phase-field simulation time by 31% over the entire calculation of the evolution process. Interestingly, this model achieves good accuracy in predicting 3D microstructures by utilizing only 2D information. This work demonstrates the efficiency of machine learning in accelerating phase-field simulations while maintaining high accuracy and promotes the application of PFM in fundamental studies.
10.1007/s10853-024-10118-4
accelerating three-dimensional phase-field simulations via deep learning approaches
phase-field modeling (pfm) is a powerful but computationally expensive technique for simulating three-dimensional (3d) microstructure evolutions. very recently, integrating machine learning into phase-field simulations has provided a promising way to remarkably reduce calculation time. in this study, we propose a deep learning model that combines a convolutional autoencoder with a deep operator network to predict 3d microstructure evolution by using 2d slices of the 3d system. it is found that the deep learning model can shorten the calculation time from 37 min to 3 s after the initial training, while skipping 5 time steps, and reduce the phase-field simulation time by 31% over the entire calculation of the evolution process. interestingly, this model achieves good accuracy in predicting 3d microstructures by utilizing only 2d information. this work demonstrates the efficiency of machine learning in accelerating phase-field simulations while maintaining high accuracy and promotes the application of pfm in fundamental studies.
[ "phase-field modeling", "pfm", "a powerful but computationally expensive technique", "three-dimensional (3d) microstructure evolutions", "phase-field simulations", "a promising way", "calculation time", "this study", "we", "a deep learning model", "that", "a convolutional autoencoder", "a deep operator network", "3d microstructure evolution", "2d slices", "the 3d system", "it", "the deep learning model", "the calculation time", "37 min", "3 s", "the initial training", "5-time steps", "the phase-field simulation time", "31%", "entire calculation", "the evolution process", "this model", "good accuracy", "3d microstructures", "only 2d information", "this work", "the efficiency", "accelerating phase-field simulations", "high accuracy", "the application", "pfm", "fundamental studies", "three", "3d", "3d", "2d", "3d", "37", "3 s", "5", "31%", "3d", "2d" ]
Efficient socket-based data transmission method and implementation in deep learning
[ "Xin-Jian Wei", "Shu-Ping Li", "Wu-Yang Yang", "Xiang-Yang Zhang", "Hai-Shan Li", "Xin Xu", "Nan Wang", "Zhanbao Fu" ]
The deep learning algorithm, which has been increasingly applied in the field of petroleum geophysical prospecting, has achieved good results in improving efficiency and accuracy based on test applications. To play a greater role in actual production, these algorithm modules must be integrated into software systems and used more often in actual production projects. Deep learning frameworks, such as TensorFlow and PyTorch, basically take Python as the core architecture, while the application program mainly uses Java, C#, and other programming languages. During integration, the seismic data read by the Java and C# data interfaces must be transferred to the Python main program module. The data exchange methods between Java, C#, and Python include shared memory, shared directory, and so on. However, these methods have the disadvantages of low transmission efficiency and unsuitability for asynchronous networks. Considering the large volume of seismic data and the need for network support for deep learning, this paper proposes a method of transmitting seismic data based on Socket. By leveraging Socket’s cross-network and efficient long-distance transmission capabilities, this approach solves the problem of inefficient transmission of underlying data while integrating the deep learning algorithm module into a software system. Furthermore, the actual production application shows that this method effectively solves the shortage of data transmission in shared memory, shared directory, and other modes while simultaneously improving the transmission efficiency of massive seismic data across modules at the bottom of the software.
10.1007/s11770-024-1090-y
efficient socket-based data transmission method and implementation in deep learning
the deep learning algorithm, which has been increasingly applied in the field of petroleum geophysical prospecting, has achieved good results in improving efficiency and accuracy based on test applications. to play a greater role in actual production, these algorithm modules must be integrated into software systems and used more often in actual production projects. deep learning frameworks, such as tensorflow and pytorch, basically take python as the core architecture, while the application program mainly uses java, c#, and other programming languages. during integration, the seismic data read by the java and c# data interfaces must be transferred to the python main program module. the data exchange methods between java, c#, and python include shared memory, shared directory, and so on. however, these methods have the disadvantages of low transmission efficiency and unsuitability for asynchronous networks. considering the large volume of seismic data and the need for network support for deep learning, this paper proposes a method of transmitting seismic data based on socket. by leveraging socket’s cross-network and efficient long-distance transmission capabilities, this approach solves the problem of inefficient transmission of underlying data while integrating the deep learning algorithm module into a software system. furthermore, the actual production application shows that this method effectively solves the shortage of data transmission in shared memory, shared directory, and other modes while simultaneously improving the transmission efficiency of massive seismic data across modules at the bottom of the software.
[ "the deep learning algorithm", "which", "the field", "petroleum geophysical prospecting", "good results", "efficiency", "accuracy", "test applications", "a greater role", "actual production", "these algorithm modules", "software systems", "actual production projects", "deep learning frameworks", "tensorflow", "pytorch", "python", "the core architecture", "the application program", "integration", "the seismic data", "the java and c# data interfaces", "the python main program module", "the data exchange methods", "java", "python", "shared memory", "shared directory", "these methods", "the disadvantages", "low transmission efficiency", "unsuitability", "asynchronous networks", "the large volume", "seismic data", "the need", "network support", "deep learning", "this paper", "a method", "seismic data", "socket", "socket", "-", "efficient longdistance transmission", "this approach", "the problem", "inefficient transmission", "underlying data", "the deep learning algorithm module", "a software system", "the actual production application", "this method", "the shortage", "data transmission", "shared memory", "shared directory", "other modes", "the transmission efficiency", "massive seismic data", "modules", "the bottom", "the software", "#", "java", "#", "java", "#" ]
Amplifying document categorization with advanced features and deep learning
[ "M. Kavitha", "K. Akila" ]
The field of natural language processing (NLP) plays a pivotal role in discerning unstructured data from diverse origins. This study employs advanced techniques rooted in machine learning and deep learning to effectively categorize news articles. Notably, deep learning models have demonstrated superior performance over traditional machine learning algorithms, rendering them a popular choice for a range of NLP tasks. The research employs feature extraction techniques to identify multiword tokens, negation words, and out-of-vocabulary words and replace them. Additionally, convolutional neural network models leverage embedding, convolutional layers, and max pooling layers to capture intricate features. For tasks requiring an understanding of dependencies among long phrases, long short-term memory models come into play. The evaluation of the proposed model hinges on training it with datasets like AG News, BBC, and 20 Newsgroup, gauging its efficacy. The study delves into the myriad challenges inherent to text classification. These challenges are thoughtfully discussed, shedding light on the intricacies of the process. Furthermore, the research furnishes comprehensive test outcomes for both conventional machine learning and deep learning models. The significance of this proposed model is that it uses a multiword expression lexicon, wordnet synset, and word embedding techniques for feature extraction. The performance of the models is increased when using these feature extraction techniques.
10.1007/s11042-024-18483-7
amplifying document categorization with advanced features and deep learning
the field of natural language processing (nlp) plays a pivotal role in discerning unstructured data from diverse origins. this study employs advanced techniques rooted in machine learning and deep learning to effectively categorize news articles. notably, deep learning models have demonstrated superior performance over traditional machine learning algorithms, rendering them a popular choice for a range of nlp tasks. the research employs feature extraction techniques to identify multiword tokens, negation words, and out-of-vocabulary words and replace them. additionally, convolutional neural network models leverage embedding, convolutional layers, and max pooling layers to capture intricate features. for tasks requiring an understanding of dependencies among long phrases, long short-term memory models come into play. the evaluation of the proposed model hinges on training it with datasets like ag news, bbc, and 20 newsgroup, gauging its efficacy. the study delves into the myriad challenges inherent to text classification. these challenges are thoughtfully discussed, shedding light on the intricacies of the process. furthermore, the research furnishes comprehensive test outcomes for both conventional machine learning and deep learning models. the significance of this proposed model is that it uses a multiword expression lexicon, wordnet synset, and word embedding techniques for feature extraction. the performance of the models is increased when using these feature extraction techniques.
[ "the field", "natural language processing", "nlp", "a pivotal role", "unstructured data", "diverse origins", "this study", "advanced techniques", "machine learning", "deep learning", "news articles", "deep learning models", "superior performance", "traditional machine learning algorithms", "them", "a popular choice", "a range", "nlp tasks", "the research", "feature extraction techniques", "multiword tokens", "negation words", "out-of-vocabulary words", "them", "convolutional neural network models", "max pooling layers", "intricate features", "tasks", "an understanding", "dependencies", "long phrases", "long short-term memory models", "play", "the evaluation", "the proposed model hinges", "it", "datasets", "ag news", "20 newsgroup", "its efficacy", "the study", "the myriad challenges", "classification", "these challenges", "light", "the intricacies", "the process", "the research", "comprehensive test outcomes", "both conventional machine learning", "deep learning models", "the significance", "this proposed model", "it", "a multiword expression lexicon", "wordnet synset", "techniques", "feature extraction", "the performance", "the models", "these feature extraction techniques", "multiword tokens", "max", "ag news", "bbc", "20", "multiword" ]
Learning and predicting the unknown class using evidential deep learning
[ "Akihito Nagahama" ]
In practical deep-learning applications, such as medical image analysis, autonomous driving, and traffic simulation, the uncertainty of a classification model’s output is critical. Evidential deep learning (EDL) can output this uncertainty for the prediction; however, its accuracy depends on a user-defined threshold, and it cannot handle training data with unknown classes that are unexpectedly contaminated or deliberately mixed for better classification of unknown class. To address these limitations, I propose a classification method called modified-EDL that extends classical EDL such that it outputs a prediction, i.e. an input belongs to a collective unknown class along with a probability. Although other methods handle unknown classes by creating new unknown classes and attempting to learn each class efficiently, the proposed m-EDL outputs, in a natural way, the “uncertainty of the prediction” of classical EDL and uses the output as the probability of an unknown class. Although classical EDL can also classify both known and unknown classes, experiments on three datasets from different domains demonstrated that m-EDL outperformed EDL on known classes when there were instances of unknown classes. Moreover, extensive experiments under different conditions established that m-EDL can predict unknown classes even when the unknown classes in the training and test data have different properties. If unknown class data are to be mixed intentionally during training to increase the discrimination accuracy of unknown classes, it is necessary to mix such data that the characteristics of the mixed data are as close as possible to those of known class data. This ability extends the range of practical applications that can benefit from deep learning-based classification and prediction models.
10.1038/s41598-023-40649-w
learning and predicting the unknown class using evidential deep learning
in practical deep-learning applications, such as medical image analysis, autonomous driving, and traffic simulation, the uncertainty of a classification model’s output is critical. evidential deep learning (edl) can output this uncertainty for the prediction; however, its accuracy depends on a user-defined threshold, and it cannot handle training data with unknown classes that are unexpectedly contaminated or deliberately mixed for better classification of unknown class. to address these limitations, i propose a classification method called modified-edl that extends classical edl such that it outputs a prediction, i.e. an input belongs to a collective unknown class along with a probability. although other methods handle unknown classes by creating new unknown classes and attempting to learn each class efficiently, the proposed m-edl outputs, in a natural way, the “uncertainty of the prediction” of classical edl and uses the output as the probability of an unknown class. although classical edl can also classify both known and unknown classes, experiments on three datasets from different domains demonstrated that m-edl outperformed edl on known classes when there were instances of unknown classes. moreover, extensive experiments under different conditions established that m-edl can predict unknown classes even when the unknown classes in the training and test data have different properties. if unknown class data are to be mixed intentionally during training to increase the discrimination accuracy of unknown classes, it is necessary to mix such data that the characteristics of the mixed data are as close as possible to those of known class data. this ability extends the range of practical applications that can benefit from deep learning-based classification and prediction models.
[ "practical deep-learning applications", "medical image analysis", "autonomous driving", "traffic simulation", "the uncertainty", "a classification model’s output", "evidential deep learning", "edl", "this uncertainty", "the prediction", "its accuracy", "a user-defined threshold", "it", "training data", "unknown classes", "that", "better classification", "unknown class", "these limitations", "i", "a classification method", "modified-edl", "that", "classical edl", "it", "a prediction", "i.e. an input", "a collective unknown class", "a probability", "other methods", "unknown classes", "new unknown classes", "each class", "the proposed m-edl outputs", "a natural way", "the prediction", "classical edl", "the output", "the probability", "an unknown class", "classical edl", "both known and unknown classes", "experiments", "three datasets", "different domains", "m-edl", "edl", "known classes", "instances", "unknown classes", "extensive experiments", "different conditions", "m", "edl", "unknown classes", "the unknown classes", "the training", "test data", "different properties", "unknown class data", "training", "the discrimination accuracy", "unknown classes", "it", "such data", "the characteristics", "the mixed data", "those", "known class data", "this ability", "the range", "practical applications", "that", "deep learning-based classification and prediction models", "three" ]
Optimized model architectures for deep learning on genomic data
[ "Hüseyin Anil Gündüz", "René Mreches", "Julia Moosbauer", "Gary Robertson", "Xiao-Yin To", "Eric A. Franzosa", "Curtis Huttenhower", "Mina Rezaei", "Alice C. McHardy", "Bernd Bischl", "Philipp C. Münch", "Martin Binder" ]
The success of deep learning in various applications depends on task-specific architecture design choices, including the types, hyperparameters, and number of layers. In computational biology, there is no consensus on the optimal architecture design, and decisions are often made using insights from more well-established fields such as computer vision. These may not consider the domain-specific characteristics of genome sequences, potentially limiting performance. Here, we present GenomeNet-Architect, a neural architecture design framework that automatically optimizes deep learning models for genome sequence data. It optimizes the overall layout of the architecture, with a search space specifically designed for genomics. Additionally, it optimizes hyperparameters of individual layers and the model training procedure. On a viral classification task, GenomeNet-Architect reduced the read-level misclassification rate by 19%, with 67% faster inference and 83% fewer parameters, and achieved similar contig-level accuracy with ~100 times fewer parameters compared to the best-performing deep learning baselines.
10.1038/s42003-024-06161-1
optimized model architectures for deep learning on genomic data
the success of deep learning in various applications depends on task-specific architecture design choices, including the types, hyperparameters, and number of layers. in computational biology, there is no consensus on the optimal architecture design, and decisions are often made using insights from more well-established fields such as computer vision. these may not consider the domain-specific characteristics of genome sequences, potentially limiting performance. here, we present genomenet-architect, a neural architecture design framework that automatically optimizes deep learning models for genome sequence data. it optimizes the overall layout of the architecture, with a search space specifically designed for genomics. additionally, it optimizes hyperparameters of individual layers and the model training procedure. on a viral classification task, genomenet-architect reduced the read-level misclassification rate by 19%, with 67% faster inference and 83% fewer parameters, and achieved similar contig-level accuracy with ~100 times fewer parameters compared to the best-performing deep learning baselines.
[ "the success", "deep learning", "various applications", "task-specific architecture design choices", "the types", "hyperparameters", "number", "layers", "computational biology", "no consensus", "the optimal architecture design", "decisions", "insights", "more well-established fields", "computer vision", "these", "the domain-specific characteristics", "genome sequences", "performance", "we", "genomenet-architect", "a neural architecture design framework", "that", "deep learning models", "genome sequence data", "it", "the overall layout", "the architecture", "a search space", "genomics", "it", "hyperparameters", "individual layers", "the model training procedure", "a viral classification task", "genomenet-architect", "the read-level misclassification rate", "19%", "67% faster inference", "83% fewer parameters", "similar contig-level accuracy", "~100 times fewer parameters", "the best-performing deep learning baselines", "19%", "67%", "83%" ]
Stroke detection in the brain using MRI and deep learning models
[ "Subba Rao Polamuri" ]
When it comes to finding solutions to issues, deep learning models are pretty much everywhere. Medical image data is best analysed using models based on Convolutional Neural Networks (CNNs). Better methods for early detection are crucial due to the concerning increase in the number of people suffering from brain stroke. Among the several medical imaging modalities used for brain imaging, magnetic resonance imaging (MRI) stands out. When it comes to analysing medical photos, the deep learning models currently utilised with MRI have shown good outcomes. To improve the efficacy of brain stroke diagnosis, we suggested several upgrades to deep learning models in this work, including DenseNet121, ResNet50, and VGG16. Since these models are not purpose-built to solve any particular issue, they are modified according to the present situation involving the detection of brain strokes. To make use of all of these cutting-edge deep learning models in a pipeline, we proposed a strategy based on supervised learning. Results from the experiments showed that optimised models outperformed baseline models.
10.1007/s11042-024-19318-1
stroke detection in the brain using mri and deep learning models
when it comes to finding solutions to issues, deep learning models are pretty much everywhere. medical image data is best analysed using models based on convolutional neural networks (cnns). better methods for early detection are crucial due to the concerning increase in the number of people suffering from brain stroke. among the several medical imaging modalities used for brain imaging, magnetic resonance imaging (mri) stands out. when it comes to analysing medical photos, the deep learning models currently utilised with mri have shown good outcomes. to improve the efficacy of brain stroke diagnosis, we suggested several upgrades to deep learning models in this work, including densenet121, resnet50, and vgg16. since these models are not purpose-built to solve any particular issue, they are modified according to the present situation involving the detection of brain strokes. to make use of all of these cutting-edge deep learning models in a pipeline, we proposed a strategy based on supervised learning. results from the experiments showed that optimised models outperformed baseline models.
[ "it", "solutions", "issues", "deep learning models", "medical image data", "using models", "convolutional neural networks", "cnns", "better methods", "early detection", "the concerning increase", "the number", "people", "brain stroke", "the several medical imaging modalities", "brain imaging", "magnetic resonance imaging", "mri", "it", "medical photos", "the deep learning models", "mri", "good outcomes", "the efficacy", "brain stroke diagnosis", "we", "several upgrades", "deep learning models", "this work", "densenet121", "resnet50", "vgg16", "these models", "any particular issue", "they", "the present situation", "the detection", "brain strokes", "use", "all", "these cutting-edge deep learning models", "a pipeline", "we", "a strategy", "supervised learning", "results", "the experiments", "optimised models", "baseline models", "resnet50" ]
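The pipeline in the record above combines several fine-tuned networks under supervised learning. A minimal sketch of one common way to combine such models, soft voting over per-model class probabilities, is shown below; the model names and probability values are illustrative stand-ins, not values from the paper.

```python
def soft_vote(prob_lists, weights=None):
    """Average class-probability vectors from several models.

    prob_lists: one probability vector per model.
    weights: optional per-model weights (e.g. validation accuracy).
    """
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    if weights is None:
        weights = [1.0] * n_models
    total = sum(weights)
    return [
        sum(w * probs[c] for w, probs in zip(weights, prob_lists)) / total
        for c in range(n_classes)
    ]

# Hypothetical per-model outputs for one MRI slice: [P(no stroke), P(stroke)].
densenet_out = [0.30, 0.70]
resnet_out = [0.40, 0.60]
vgg_out = [0.20, 0.80]

combined = soft_vote([densenet_out, resnet_out, vgg_out])
predicted_class = combined.index(max(combined))  # class 1 here stands for "stroke"
```

The weights argument allows validation performance of each fine-tuned network to bias the vote; with no weights the combination reduces to a plain average.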
Combining Deep Learning with Good Old-Fashioned Machine Learning
[ "Moshe Sipper" ]
We present a comprehensive, stacking-based framework for combining deep learning with good old-fashioned machine learning, called Deep GOld. Our framework involves ensemble selection from 51 retrained pretrained deep networks as first-level models, and 10 machine-learning algorithms as second-level models. Enabled by today’s state-of-the-art software tools and hardware platforms, Deep GOld delivers consistent improvement when tested on four image-classification datasets: Fashion MNIST, CIFAR10, CIFAR100, and Tiny ImageNet. Of 120 experiments, in all but 10 Deep GOld improved the original networks’ performance.
10.1007/s42979-022-01505-2
combining deep learning with good old-fashioned machine learning
we present a comprehensive, stacking-based framework for combining deep learning with good old-fashioned machine learning, called deep gold. our framework involves ensemble selection from 51 retrained pretrained deep networks as first-level models, and 10 machine-learning algorithms as second-level models. enabled by today’s state-of-the-art software tools and hardware platforms, deep gold delivers consistent improvement when tested on four image-classification datasets: fashion mnist, cifar10, cifar100, and tiny imagenet. of 120 experiments, in all but 10 deep gold improved the original networks’ performance.
[ "we", "a comprehensive, stacking-based framework", "deep learning", "good old-fashioned machine learning", "deep gold", "our framework", "ensemble selection", "deep networks", "first-level models", "10 machine-learning algorithms", "second-level models", "the-art", "hardware platforms", "deep gold", "consistent improvement", "four image-classification datasets", "fashion mnist", "cifar10", "cifar100", "tiny imagenet", "120 experiments", "all but 10 deep gold", "the original networks’ performance", "51", "first", "10", "second", "today", "four", "cifar10", "120", "10" ]
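The two-level design described above can be sketched as follows: first-level models emit class scores, and those scores become the meta-features of a second-level learner. This toy version uses two stand-in scoring functions and a nearest-centroid second level; the real framework selects among 51 retrained networks and 10 machine-learning algorithms.

```python
def euclid(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def stack_features(base_models, x):
    """First level: concatenate every base model's class scores."""
    feats = []
    for model in base_models:
        feats.extend(model(x))
    return feats

class CentroidStacker:
    """Second level: nearest centroid over stacked first-level scores."""
    def fit(self, stacked, labels):
        self.centroids = {}
        for lab in set(labels):
            rows = [f for f, y in zip(stacked, labels) if y == lab]
            self.centroids[lab] = [sum(col) / len(rows) for col in zip(*rows)]
        return self

    def predict(self, feats):
        return min(self.centroids, key=lambda lab: euclid(feats, self.centroids[lab]))

# Two illustrative "base networks": score functions over a 1-D input.
base_models = [lambda x: [1 - x, x], lambda x: [1 - x * x, x * x]]

xs = [0.1, 0.2, 0.8, 0.9]
ys = [0, 0, 1, 1]
stacked = [stack_features(base_models, x) for x in xs]
meta = CentroidStacker().fit(stacked, ys)
pred = meta.predict(stack_features(base_models, 0.85))
```

The second-level model never sees the raw input, only the base models' scores, which is the defining property of stacking.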
Deep learning evaluation of echocardiograms to identify occult atrial fibrillation
[ "Neal Yuan", "Nathan R. Stein", "Grant Duffy", "Roopinder K. Sandhu", "Sumeet S. Chugh", "Peng-Sheng Chen", "Carine Rosenberg", "Christine M. Albert", "Susan Cheng", "Robert J. Siegel", "David Ouyang" ]
Atrial fibrillation (AF) often escapes detection, given its frequent paroxysmal and asymptomatic presentation. Deep learning of transthoracic echocardiograms (TTEs), which have structural information, could help identify occult AF. We created a two-stage deep learning algorithm using a video-based convolutional neural network model that (1) distinguished whether TTEs were in sinus rhythm or AF and then (2) predicted which of the TTEs in sinus rhythm were in patients who had experienced AF within 90 days. Our model, trained on 111,319 TTE videos, distinguished TTEs in AF from those in sinus rhythm with high accuracy in a held-out test cohort (AUC 0.96 (0.95–0.96), AUPRC 0.91 (0.90–0.92)). Among TTEs in sinus rhythm, the model predicted the presence of concurrent paroxysmal AF (AUC 0.74 (0.71–0.77), AUPRC 0.19 (0.16–0.23)). Model discrimination remained similar in an external cohort of 10,203 TTEs (AUC of 0.69 (0.67–0.70), AUPRC 0.34 (0.31–0.36)). Performance held across patients who were women (AUC 0.76 (0.72–0.81)), older than 65 years (0.73 (0.69–0.76)), or had a CHA2DS2VASc ≥2 (0.73 (0.79–0.77)). The model performed better than using clinical risk factors (AUC 0.64 (0.62–0.67)), TTE measurements (0.64 (0.62–0.67)), left atrial size (0.63 (0.62–0.64)), or CHA2DS2VASc (0.61 (0.60–0.62)). An ensemble model in a cohort subset combining the TTE model with an electrocardiogram (ECGs) deep learning model performed better than using the ECG model alone (AUC 0.81 vs. 0.79, p = 0.01). Deep learning using TTEs can predict patients with active or occult AF and could be used for opportunistic AF screening that could lead to earlier treatment.
10.1038/s41746-024-01090-z
deep learning evaluation of echocardiograms to identify occult atrial fibrillation
atrial fibrillation (af) often escapes detection, given its frequent paroxysmal and asymptomatic presentation. deep learning of transthoracic echocardiograms (ttes), which have structural information, could help identify occult af. we created a two-stage deep learning algorithm using a video-based convolutional neural network model that (1) distinguished whether ttes were in sinus rhythm or af and then (2) predicted which of the ttes in sinus rhythm were in patients who had experienced af within 90 days. our model, trained on 111,319 tte videos, distinguished ttes in af from those in sinus rhythm with high accuracy in a held-out test cohort (auc 0.96 (0.95–0.96), auprc 0.91 (0.90–0.92)). among ttes in sinus rhythm, the model predicted the presence of concurrent paroxysmal af (auc 0.74 (0.71–0.77), auprc 0.19 (0.16–0.23)). model discrimination remained similar in an external cohort of 10,203 ttes (auc of 0.69 (0.67–0.70), auprc 0.34 (0.31–0.36)). performance held across patients who were women (auc 0.76 (0.72–0.81)), older than 65 years (0.73 (0.69–0.76)), or had a cha2ds2vasc ≥2 (0.73 (0.79–0.77)). the model performed better than using clinical risk factors (auc 0.64 (0.62–0.67)), tte measurements (0.64 (0.62–0.67)), left atrial size (0.63 (0.62–0.64)), or cha2ds2vasc (0.61 (0.60–0.62)). an ensemble model in a cohort subset combining the tte model with an electrocardiogram (ecgs) deep learning model performed better than using the ecg model alone (auc 0.81 vs. 0.79, p = 0.01). deep learning using ttes can predict patients with active or occult af and could be used for opportunistic af screening that could lead to earlier treatment.
[ "atrial fibrillation", "detection", "its frequent paroxysmal and asymptomatic presentation", "deep learning", "transthoracic echocardiograms", "ttes", "which", "structural information", "occult", "we", "a two-stage deep learning algorithm", "a video-based convolutional neural network model", "ttes", "sinus rhythm", "which", "the ttes", "sinus rhythm", "patients", "who", "90 days", "our model", "111,319 tte videos", "ttes", "af", "those", "sinus rhythm", "high accuracy", "a held-out test cohort", "auc", "(0.95–0.96", "(0.90–0.92", "ttes", "sinus rhythm", "the model", "the presence", "concurrent paroxysmal", "auc", "model discrimination", "an external cohort", "10,203 ttes", "auc", "performance", "patients", "who", "women", "auc", "(0.72–0.81", "older than 65 years", "0.69–0.76", "a cha2ds2vasc", "the model", "clinical risk factors", "auc", "tte measurements", "(0.62–0.67", "atrial size", "cha2ds2vasc", "an ensemble model", "a cohort subset", "the tte model", "an electrocardiogram (ecgs", "deep learning model", "the ecg model", "auc", "=", "deep learning", "ttes", "patients", "occult", "screening", "that", "earlier treatment", "two", "1", "2", "90 days", "111,319", "0.96", "0.91", "0.19", "10,203", "0.69", "0.34", "0.76", "older than 65 years", "0.73", "0.73", "0.64", "0.64", "0.61", "0.81", "0.79", "0.01" ]
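Discrimination in the record above is reported as AUC, which equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one. That definition gives a direct stdlib-only computation by pairwise comparison; the scores and labels below are hypothetical.

```python
def auc(scores, labels):
    """Area under the ROC curve via pairwise comparison.

    Counts the fraction of (positive, negative) pairs the score
    orders correctly; ties count as half a win.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for six echocardiograms (label 1 = AF within 90 days).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
model_auc = auc(scores, labels)
```

The pairwise form is O(n²) but exact; production code would use a rank-based formula or a library routine for large cohorts.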
Relationship constraint deep metric learning
[ "Yanbing Zhang", "Ting Xiao", "Zhe Wang", "Xinru Wang", "Wenyi Feng", "Zhiling Fu", "Hai Yang" ]
AbstractDeep metric learning (DML) models aim to learn semantically meaningful representations in which similar samples are pulled together and dissimilar samples are pushed apart. However, the classification effect is limited due to the high time complexity of previous models and their poor performance in extracting data relationships. This paper presents a novel relationship constraint deep metric learning (RCDML) approach, including proxy relationship constraint (PRC) and sample relationship constraint (SRC) for inter-class separability and intra-class compactness, to solve the above problems and improve the classification effect. The PRC combines the proxy-to-proxy relationship loss term with the proxy-to-sample relationship loss function to maximize the proxy features, hence enhancing inter-class separability by decreasing proxy similarity. Additionally, the SRC combines the sample-to-sample relationship loss term with the proxy-to-sample relationship loss function to maximize the sample features, which promotes intra-class compactness by increasing the similarity between the most different samples of the same class. Unlike existing proxy-based and pair-based methods, the relationship constraint framework uses a diverse range of proxy and sample data relationships. In addition, the proxy correction (PC) module is used to optimize the proxy. Extensive tests conducted on the widely popular CUB-200-2011, CARS-196, and SOP datasets show that the framework is effective and attains state-of-the-art performance.Graphical abstract
10.1007/s10489-024-05425-x
relationship constraint deep metric learning
abstractdeep metric learning (dml) models aim to learn semantically meaningful representations in which similar samples are pulled together and dissimilar samples are pushed apart. however, the classification effect is limited due to the high time complexity of previous models and their poor performance in extracting data relationships. this paper presents a novel relationship constraint deep metric learning (rcdml) approach, including proxy relationship constraint (prc) and sample relationship constraint (src) for inter-class separability and intra-class compactness, to solve the above problems and improve the classification effect. the prc combines the proxy-to-proxy relationship loss term with the proxy-to-sample relationship loss function to maximize the proxy features, hence enhancing inter-class separability by decreasing proxy similarity. additionally, the src combines the sample-to-sample relationship loss term with the proxy-to-sample relationship loss function to maximize the sample features, which promotes intra-class compactness by increasing the similarity between the most different samples of the same class. unlike existing proxy-based and pair-based methods, the relationship constraint framework uses a diverse range of proxy and sample data relationships. in addition, the proxy correction (pc) module is used to optimize the proxy. extensive tests conducted on the widely popular cub-200-2011, cars-196, and sop datasets show that the framework is effective and attains state-of-the-art performance.graphical abstract
[ "dml", "semantically meaningful representations", "which", "similar samples", "dissimilar samples", "the classification effect", "the high time complexity", "previous models", "their poor performance", "data relationships", "this paper", "a novel relationship", "deep metric learning (rcdml) approach", "proxy relationship constraint", "prc", "sample relationship constraint", "(src", "inter-class separability", "intra-class compactness", "the above problems", "the classification effect", "the prc", "proxy", "sample", "the proxy features", "inter-class separability", "proxy similarity", "the src", "sample", "sample", "the sample features", "which", "intra-class compactness", "the similarity", "the most different samples", "the same class", "existing proxy-based and pair-based methods", "the relationship constraint framework", "a diverse range", "proxy and sample data relationships", "addition", "the proxy correction", "(pc) module", "the proxy", "extensive tests", "the widely popular cub-200", "sop datasets", "the framework", "the-art", "abstractdeep metric learning", "prc", "prc", "cars-196" ]
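Proxy-based losses of the kind RCDML builds on assign each class a learnable proxy vector and pull samples toward their own proxy via a softmax over similarities. Below is a hedged sketch of that basic attraction term in the Proxy-NCA style, with toy vectors; it is not the paper's exact PRC/SRC formulation, only the family it extends.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def proxy_loss(embedding, proxies, label, scale=8.0):
    """Softmax cross-entropy over proxy similarities:
    small when the sample is most similar to its own class proxy."""
    sims = [scale * cosine(embedding, p) for p in proxies]
    m = max(sims)  # stable log-sum-exp
    log_z = m + math.log(sum(math.exp(s - m) for s in sims))
    return log_z - sims[label]

proxies = [[1.0, 0.0], [0.0, 1.0]]  # one proxy per class
aligned = proxy_loss([0.9, 0.1], proxies, label=0)
misaligned = proxy_loss([0.1, 0.9], proxies, label=0)
```

Minimizing this term moves embeddings toward their class proxy and away from the others, which is what gives proxy methods their low time complexity relative to pair-based mining.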
Deep structure-level N-glycan identification using feature-induced structure diagnosis integrated with a deep learning model
[ "Suideng Qin", "Zhixin Tian" ]
Being a widely occurring protein post-translational modification, N-glycosylation features unique multi-dimensional structures including sequence and linkage isomers. There have been successful bioinformatics efforts in N-glycan structure identification using N-glycoproteomics data; however, symmetric “mirror” branch isomers and linkage isomers are largely unresolved. Here, we report deep structure-level N-glycan identification using feature-induced structure diagnosis (FISD) integrated with a deep learning model. A neural network model is integrated to conduct the identification of featured N-glycan motifs and boosts the process of structure diagnosis and distinction for linkage isomers. By adopting publicly available N-glycoproteomics datasets of five mouse tissues (17,136 intact N-glycopeptide spectrum matches) and a consideration of 23 motif features, a deep learning model integrated with a convolutional autoencoder and a multilayer perceptron was trained to be capable of predicting N-glycan featured motifs in the MS/MS spectra with previously identified compositions. In the test of the trained model, a prediction accuracy of 0.8 and AUC value of 0.95 were achieved; 5701 previously unresolved N-glycan structures were assigned by matched structure-diagnostic ions; and by using an explainable learning algorithm, two new fragmentation features of m/z = 674.25 and m/z = 835.28 were found to be significant to three N-glycan structure motifs with fucose, NeuAc, and NeuGc, proving the capability of FISD to discover new features in the MS/MS spectra.Graphical Abstract
10.1007/s00216-024-05505-4
deep structure-level n-glycan identification using feature-induced structure diagnosis integrated with a deep learning model
being a widely occurring protein post-translational modification, n-glycosylation features unique multi-dimensional structures including sequence and linkage isomers. there have been successful bioinformatics efforts in n-glycan structure identification using n-glycoproteomics data; however, symmetric “mirror” branch isomers and linkage isomers are largely unresolved. here, we report deep structure-level n-glycan identification using feature-induced structure diagnosis (fisd) integrated with a deep learning model. a neural network model is integrated to conduct the identification of featured n-glycan motifs and boosts the process of structure diagnosis and distinction for linkage isomers. by adopting publicly available n-glycoproteomics datasets of five mouse tissues (17,136 intact n-glycopeptide spectrum matches) and a consideration of 23 motif features, a deep learning model integrated with a convolutional autoencoder and a multilayer perceptron was trained to be capable of predicting n-glycan featured motifs in the ms/ms spectra with previously identified compositions. in the test of the trained model, a prediction accuracy of 0.8 and auc value of 0.95 were achieved; 5701 previously unresolved n-glycan structures were assigned by matched structure-diagnostic ions; and by using an explainable learning algorithm, two new fragmentation features of m/z = 674.25 and m/z = 835.28 were found to be significant to three n-glycan structure motifs with fucose, neuac, and neugc, proving the capability of fisd to discover new features in the ms/ms spectra.graphical abstract
[ "a widely occurring protein post-translational modification, n-glycosylation features", "unique multi-dimensional structures", "sequence and linkage isomers", "successful bioinformatics efforts", "n-glycan structure identification", "n-glycoproteomics data", "symmetric “mirror” branch isomers", "linkage isomers", "we", "deep structure-level n-glycan identification", "feature-induced structure diagnosis", "(fisd", "a deep learning model", "a neural network model", "the identification", "featured n-glycan motifs", "the process", "structure diagnosis", "distinction", "linkage isomers", "publicly available n-glycoproteomics datasets", "five mouse tissues", "17,136 intact n-glycopeptide spectrum matches", "a consideration", "23 motif features", "a deep learning model", "a convolutional autoencoder", "a multilayer perceptron", "n-glycan featured motifs", "the ms/ms spectra", "previously identified compositions", "the test", "the trained model", "a prediction accuracy", "0.8 and auc value", "5701 previously unresolved n-glycan structures", "matched structure-diagnostic ions", "an explainable learning algorithm", "two new fragmentation features", "m/z", "three n-glycan structure motifs", "fucose", "neuac", "neugc", "the capability", "fisd", "new features", "the ms/ms", "spectra.graphical abstract", "five", "17,136", "23", "0.8", "0.95", "5701", "two", "674.25", "835.28", "three" ]
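The classification stage of the model above feeds compressed spectral features to a multilayer perceptron. As a minimal stand-in for that stage, here is a single perceptron trained with the classic update rule on toy "motif present / absent" features; the paper's actual model is a convolutional autoencoder feeding an MLP over MS/MS spectra.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron rule on bias-augmented features."""
    w = [0.0] * (len(samples[0]) + 1)  # last weight is the bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            xb = list(x) + [1.0]
            pred = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else 0
            err = y - pred
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, xb)]
    return w

def predict(w, x):
    xb = list(x) + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else 0

# Toy 2-D features, e.g. intensities in two diagnostic fragment-ion bins.
samples = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]
w = train_perceptron(samples, labels)
```

On linearly separable data the rule converges to a separating hyperplane; a real MLP adds hidden layers and gradient-based training to handle the non-separable structure of spectra.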
An empirical study of sentiment analysis utilizing machine learning and deep learning algorithms
[ "Betul Erkantarci", "Gokhan Bakal" ]
Among text-mining studies, one of the most studied topics is the text classification task applied in various domains, including medicine, social media, and academia. As a sub-problem in text classification, sentiment analysis has been widely investigated to classify often opinion-based textual elements. Specifically, user reviews and experiential feedback for products or services have been employed as fundamental data sources for sentiment analysis efforts. As a result of rapidly emerging technological advancements, social media platforms such as Twitter, Facebook, and Reddit, have become central opinion-sharing mediums since the early 2000s. In this sense, we build various machine-learning models to solve the sentiment analysis problem on the Reddit comments dataset in this work. The experimental models we constructed achieve F1 scores within intervals of 73–76%. Consequently, we present comparative performance scores obtained by traditional machine learning and deep learning models and discuss the results.
10.1007/s42001-023-00236-5
an empirical study of sentiment analysis utilizing machine learning and deep learning algorithms
among text-mining studies, one of the most studied topics is the text classification task applied in various domains, including medicine, social media, and academia. as a sub-problem in text classification, sentiment analysis has been widely investigated to classify often opinion-based textual elements. specifically, user reviews and experiential feedback for products or services have been employed as fundamental data sources for sentiment analysis efforts. as a result of rapidly emerging technological advancements, social media platforms such as twitter, facebook, and reddit, have become central opinion-sharing mediums since the early 2000s. in this sense, we build various machine-learning models to solve the sentiment analysis problem on the reddit comments dataset in this work. the experimental models we constructed achieve f1 scores within intervals of 73–76%. consequently, we present comparative performance scores obtained by traditional machine learning and deep learning models and discuss the results.
[ "text-mining studies", "the most studied topics", "the text classification task", "various domains", "medicine", "social media", "academia", "a sub", "-", "problem", "text classification", "sentiment analysis", "opinion-based textual elements", "user reviews", "experiential feedback", "products", "services", "fundamental data sources", "sentiment analysis efforts", "a result", "rapidly emerging technological advancements", "social media platforms", "twitter", "facebook", "reddit", "central opinion-sharing mediums", "this sense", "we", "various machine-learning models", "the sentiment analysis problem", "the reddit comments", "this work", "the experimental models", "we", "f1 scores", "intervals", "73–76%", "we", "comparative performance scores", "traditional machine learning", "deep learning models", "the results", "one", "the early 2000s", "73–76%" ]
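The F1 scores quoted above are the harmonic mean of precision and recall on the positive class. A stdlib sketch with hypothetical predictions follows; this toy split happens to land at 0.75, inside the reported 73–76% band.

```python
def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical sentiment predictions on eight Reddit comments (1 = positive).
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
score = f1_score(y_true, y_pred)
```

Because F1 ignores true negatives, it is a common choice when the positive sentiment class is the one of interest or the classes are imbalanced.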
Deep learning in neglected vector-borne diseases: a systematic review
[ "Atmika Mishra", "Arya Pandey", "Ruchika Malhotra" ]
This study explores the application of Deep Learning in combating neglected vector-borne Diseases, a significant global health concern, particularly in resource-limited areas. It examines areas where Deep Learning has proven effective, compares popular Deep Learning techniques, focuses on interdisciplinary approaches with translational impact, and finds untapped potential for deep learning application. Thorough searches across multiple databases yielded 64 pertinent studies, from which 16 were selected based on inclusion criteria and quality assessment. Deep Learning applications in disease transmission risk prediction, vector detection, parasite classification, and treatment procedure optimization were investigated and focused on diseases such as Schistosomiasis, Chagas disease, Leishmaniasis, Echinococcosis, and Trachoma. Convolutional neural networks, artificial neural networks, multilayer perceptrons, and AutoML algorithms surpassed traditional methods for disease prediction, species identification, and diagnosis. The interdisciplinary integration of Deep Learning with public health, entomology, and epidemiology provides prospects for improved disease control and understanding. Deep Learning models automate disease surveillance, simplify epidemiological data processing, and enable early detection, particularly in resource-constrained settings. Smartphone apps driven by deep learning allow for rapid disease diagnosis and identification, boosting healthcare accessibility and global health outcomes. Improved algorithms, broadening the scope of applications to areas such as one health approach, and community engagement, and expanding deep learning applications to diseases such as lymphatic filariasis, hydatidosis, and onchocerciasis hold promise for improving global health outcomes.
10.1007/s13198-024-02380-1
deep learning in neglected vector-borne diseases: a systematic review
this study explores the application of deep learning in combating neglected vector-borne diseases, a significant global health concern, particularly in resource-limited areas. it examines areas where deep learning has proven effective, compares popular deep learning techniques, focuses on interdisciplinary approaches with translational impact, and finds untapped potential for deep learning application. thorough searches across multiple databases yielded 64 pertinent studies, from which 16 were selected based on inclusion criteria and quality assessment. deep learning applications in disease transmission risk prediction, vector detection, parasite classification, and treatment procedure optimization were investigated and focused on diseases such as schistosomiasis, chagas disease, leishmaniasis, echinococcosis, and trachoma. convolutional neural networks, artificial neural networks, multilayer perceptrons, and automl algorithms surpassed traditional methods for disease prediction, species identification, and diagnosis. the interdisciplinary integration of deep learning with public health, entomology, and epidemiology provides prospects for improved disease control and understanding. deep learning models automate disease surveillance, simplify epidemiological data processing, and enable early detection, particularly in resource-constrained settings. smartphone apps driven by deep learning allow for rapid disease diagnosis and identification, boosting healthcare accessibility and global health outcomes. improved algorithms, broadening the scope of applications to areas such as one health approach, and community engagement, and expanding deep learning applications to diseases such as lymphatic filariasis, hydatidosis, and onchocerciasis hold promise for improving global health outcomes.
[ "this study", "the application", "deep learning", "neglected vector-borne diseases", "a significant global health concern", "resource-limited areas", "it", "areas", "deep learning", "popular deep learning techniques", "interdisciplinary approaches", "translational impact", "untapped potential", "deep learning application", "thorough searches", "multiple databases", "64 pertinent studies", "which", "inclusion criteria", "quality assessment", "deep learning applications", "disease transmission risk prediction", "vector detection", "classification", "treatment procedure optimization", "diseases", "schistosomiasis", "chagas disease", "leishmaniasis", "echinococcosis", "trachoma", "convolutional neural networks", "artificial neural networks", "multilayer perceptrons", "automl algorithms", "traditional methods", "disease prediction", "species identification", "diagnosis", "the interdisciplinary integration", "deep learning", "public health", "entomology", "epidemiology", "prospects", "improved disease control", "understanding", "deep learning models", "automate disease surveillance", "epidemiological data processing", "early detection", "resource-constrained settings", "smartphone apps", "deep learning", "rapid disease diagnosis", "identification", "healthcare accessibility", "global health outcomes", "improved algorithms", "the scope", "applications", "areas", "one health approach", "community engagement", "deep learning applications", "diseases", "lymphatic filariasis", "hydatidosis", "onchocerciasis hold promise", "global health outcomes", "64", "16", "one" ]
Machine Learning and Deep Learning-Based Students’ Grade Prediction
[ "Adil Korchi", "Fayçal Messaoudi", "Ahmed Abatal", "Youness Manzali" ]
Predicting student performance in a curriculum or program offers the prospect of improving academic outcomes. By using effective performance prediction methods, instructional leaders can allocate adequate resources and instruction more accurately. This paper aims to identify machine learning algorithm features for predicting student grades as an early intervention. Predictive models spot at-risk students early, allowing educators to provide timely support. Educators can customize teaching methods, and these models assess program success, helping institutions refine or expand them through data-driven decisions. But the problem definition of student grade prediction is to develop predictive models or algorithms that can forecast or estimate the future academic performance or grades of students based on various input features and historical data, and to do so, we utilized a student dataset comprising personal information and grades, employing various regression algorithms, including decision tree, random forest, linear regression, k-nearest neighbor, XGBoost, and deep neural network. We chose these algorithms for their suitability and distinct strengths. We assessed their performance using determination coefficient, mean average error, mean squared error, and root mean squared error. The results showed that the deep neural network outperformed others with a determination coefficient of 99.97%, confirming its reliability in predicting student grades and assessing performance, and this will certainly help to develop predictive models that can accurately forecast or estimate students’ academic performance based on various input features and enable teaching staff to provide timely assistance in addressing these issues.
10.1007/s43069-023-00267-8
machine learning and deep learning-based students’ grade prediction
predicting student performance in a curriculum or program offers the prospect of improving academic outcomes. by using effective performance prediction methods, instructional leaders can allocate adequate resources and instruction more accurately. this paper aims to identify machine learning algorithm features for predicting student grades as an early intervention. predictive models spot at-risk students early, allowing educators to provide timely support. educators can customize teaching methods, and these models assess program success, helping institutions refine or expand them through data-driven decisions. but the problem definition of student grade prediction is to develop predictive models or algorithms that can forecast or estimate the future academic performance or grades of students based on various input features and historical data, and to do so, we utilized a student dataset comprising personal information and grades, employing various regression algorithms, including decision tree, random forest, linear regression, k-nearest neighbor, xgboost, and deep neural network. we chose these algorithms for their suitability and distinct strengths. we assessed their performance using determination coefficient, mean average error, mean squared error, and root mean squared error. the results showed that the deep neural network outperformed others with a determination coefficient of 99.97%, confirming its reliability in predicting student grades and assessing performance, and this will certainly help to develop predictive models that can accurately forecast or estimate students’ academic performance based on various input features and enable teaching staff to provide timely assistance in addressing these issues.
[ "student performance", "a curriculum", "program", "the prospect", "academic outcomes", "effective performance prediction methods", "instructional leaders", "adequate resources", "instruction", "this paper", "machine learning", "algorithm", "student grades", "an early intervention", "predictive models", "risk", "educators", "timely support", "educators", "teaching methods", "these models", "program success", "institutions", "them", "data-driven decisions", "the problem definition", "student grade prediction", "predictive models", "algorithms", "that", "the future academic performance", "grades", "students", "various input features", "historical data", "we", "a student dataset", "personal information", "grades", "various regression algorithms", "decision tree", "random forest", "linear regression", "k-nearest neighbor", "xgboost", "deep neural network", "we", "these algorithms", "their suitability", "distinct strengths", "we", "their performance", "determination coefficient", "average error", "mean squared error", "root mean squared error", "the results", "the deep neural network", "others", "a determination coefficient", "99.97%", "its reliability", "student grades", "performance", "this", "predictive models", "that", "students’ academic performance", "various input features", "teaching staff", "timely assistance", "these issues", "linear", "99.97%" ]
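The evaluation metrics named above (MAE, MSE, RMSE, and the determination coefficient R²) can be computed directly from true and predicted values; the grades below are hypothetical.

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE, and coefficient of determination R^2."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - sum(e * e for e in errors) / ss_tot
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}

# Hypothetical true vs. predicted grades on a 0-100 scale.
true_grades = [85.0, 70.0, 92.0, 60.0]
pred_grades = [83.0, 72.0, 90.0, 61.0]
metrics = regression_metrics(true_grades, pred_grades)
```

R² compares model error against predicting the mean grade, which is why the paper can report it as a percentage (99.97% for the deep neural network).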
Rf-based fingerprinting for indoor localization: deep transfer learning approach
[ "Rokaya Safwat", "Eman Shaaban", "Shahinaz. M. Al-Tabbakh", "Karim Emara" ]
Transfer Learning (TL) has emerged as a powerful approach for improving the performance of Deep Learning systems in various domains by leveraging pre-trained models. It was proven that features learned by deep learning can smoothly be reused across similar domains. Deep transfer learning schemes compensate for limited training data via transfer learning of a rich data environment. This paper investigates the effectiveness of applying TL schemes in indoor localization. It proposes four deep TL models where the knowledge is transferred from the rich-measurement data source domain to multiple target domains with limited data measurements. The architecture of the source domain is based on Convolutional Neural Network (CNN), where the four deep TL models for the target domain are: standalone feature extractor, integrated feature extractor, selective fine-tuning, and weight initialization. We employed a dataset of RF fingerprinting measurement signals representing common interior conditions, including extremely crowded, medium cluttered, low cluttered, and open environments, to test the effectiveness of the proposed TL models. We measured the accuracy and computation time of target-domain models trained, with varied percentages of restricted data sizes: 40%, 30%, 20%, 15%, 10%, 5%, and 2.5%. The experimental results show that all TL models are effective in achieving significant improvement in accuracy when compared to non-transferred models, even with minimal training data size. However, the proper determination of the TL model and the amount of training data profoundly influence the performance results.
10.1007/s12652-024-04819-6
rf-based fingerprinting for indoor localization: deep transfer learning approach
transfer learning (tl) has emerged as a powerful approach for improving the performance of deep learning systems in various domains by leveraging pre-trained models. it was proven that features learned by deep learning can smoothly be reused across similar domains. deep transfer learning schemes compensate for limited training data via transfer learning of a rich data environment. this paper investigates the effectiveness of applying tl schemes in indoor localization. it proposes four deep tl models where the knowledge is transferred from the rich-measurement data source domain to multiple target domains with limited data measurements. the architecture of the source domain is based on convolutional neural network (cnn), where the four deep tl models for the target domain are: standalone feature extractor, integrated feature extractor, selective fine-tuning, and weight initialization. we employed a dataset of rf fingerprinting measurement signals representing common interior conditions, including extremely crowded, medium cluttered, low cluttered, and open environments, to test the effectiveness of the proposed tl models. we measured the accuracy and computation time of target-domain models trained, with varied percentages of restricted data sizes: 40%, 30%, 20%, 15%, 10%, 5%, and 2.5%. the experimental results show that all tl models are effective in achieving significant improvement in accuracy when compared to non-transferred models, even with minimal training data size. however, the proper determination of the tl model and the amount of training data profoundly influence the performance results.
[ "tl", "a powerful approach", "the performance", "deep learning systems", "various domains", "pre-trained models", "it", "features", "deep learning", "similar domains", "deep transfer learning schemes", "limited training data", "transfer learning", "a rich data environment", "this paper", "the effectiveness", "tl schemes", "indoor localization", "it", "four deep tl models", "the knowledge", "the rich-measurement data source domain", "multiple target domains", "limited data measurements", "the architecture", "the source domain", "convolutional neural network", "cnn", "the four deep tl models", "the target domain", "standalone feature extractor", "integrated feature extractor", "selective fine-tuning, and weight initialization", "we", "a dataset", "rf fingerprinting measurement signals", "common interior conditions", "extremely crowded, medium cluttered, low cluttered, and open environments", "the effectiveness", "the proposed tl models", "we", "the accuracy", "computation time", "target-domain models", "varied percentages", "restricted data sizes", "40%", "30%", "20%", "15%", "10%", "5%", "2.5%", "the experimental results", "all tl models", "significant improvement", "accuracy", "non-transferred models", "minimal training data size", "the proper determination", "the tl model", "the amount", "training data", "the performance results", "four", "cnn", "four", "40%", "30%", "20%", "15%", "10%", "5%", "2.5%" ]
Satellite image classification using deep learning approach
[ "Divakar Yadav", "Kritarth Kapoor", "Arun Kumar Yadav", "Mohit Kumar", "Arti Jain", "Jorge Morato" ]
Our planet Earth comprises distinguished topologies based on temperature, location, latitude, longitude, and altitude, which can be captured using Remote Sensing Satellites. In this paper, the classification of satellite images is performed based on their topologies and geographical features. Researchers have worked on several machine learning and deep learning methods, such as support vector machine, k-nearest neighbor, maximum likelihood, and deep belief network, that can be used to solve satellite image classification tasks. All strategies give promising results. Recent trends show that a Convolutional Neural Network (CNN) is an excellent deep learning model for classification purposes, and this is the approach used in this paper. The open-source EuroSAT dataset, which contains 27,000 images distributed among ten classes, is used for classifying the remote images. The 3 baseline CNN models are pre-trained, namely ResNet50, ResNet101, and GoogleNet. They have other sequence layers added to them, and the data is pre-processed using LAB channel operations. The highest accuracy of 99.68%, precision of 99.42%, recall of 99.51%, and F-score of 99.45% are achieved using GoogleNet over the pre-processed dataset. The proposed work is compared with state-of-the-art methods, and it is observed that more layers in a CNN do not necessarily provide a better outcome for a medium-sized dataset. GoogleNet, a 22-layer CNN, performs faster and better than the 50-layer ResNet50 and the 101-layer ResNet101.
10.1007/s12145-024-01301-x
satellite image classification using deep learning approach
our planet earth comprises distinguished topologies based on temperature, location, latitude, longitude, and altitude, which can be captured using remote sensing satellites. in this paper, the classification of satellite images is performed based on their topologies and geographical features. researchers have worked on several machine learning and deep learning methods, such as support vector machine, k-nearest neighbor, maximum likelihood, and deep belief network, that can be used to solve satellite image classification tasks. all strategies give promising results. recent trends show that a convolutional neural network (cnn) is an excellent deep learning model for classification purposes, and this is the approach used in this paper. the open-source eurosat dataset, which contains 27,000 images distributed among ten classes, is used for classifying the remote images. the 3 baseline cnn models are pre-trained, namely resnet50, resnet101, and googlenet. they have other sequence layers added to them, and the data is pre-processed using lab channel operations. the highest accuracy of 99.68%, precision of 99.42%, recall of 99.51%, and f-score of 99.45% are achieved using googlenet over the pre-processed dataset. the proposed work is compared with state-of-the-art methods, and it is observed that more layers in a cnn do not necessarily provide a better outcome for a medium-sized dataset. googlenet, a 22-layer cnn, performs faster and better than the 50-layer resnet50 and the 101-layer resnet101.
[ "our planet earth", "distinguished topologies", "temperature", "location", "latitude", "altitude", "which", "remote sensing satellites", "this paper", "the classification", "satellite images", "their topologies", "geographical features", "researchers", "several machine learning", "deep learning methods", "support vector machine", "k-nearest neighbor", "maximum likelihood", "deep belief network", "that", "satellite image classification tasks", "all strategies", "promising results", "recent trends", "a convolutional neural network", "cnn", "an excellent deep learning model", "classification purposes", "which", "this paper", "the open-source eurosat dataset", "the remote images", "which", "27,000 images", "ten classes", "the 3 baseline cnn models", "-trained, namely- resnet50", "resnet101", "googlenet models", "they", "other sequence layers", "them", "respect", "cnn", "data", "-processed using lab channel operations", "the highest accuracy", "99.68%", "precision", "99.42%", "recall", "99.51%", "f- score", "99.45%", "googlenet", "the pre-processed dataset", "the proposed work", "art", "it", "more layers", "cnn", "a better outcome", "a medium-sized dataset", "the googlenet", "a 22-layer cnn", "the 50 layers cnn- resnet50", "101 layers", "resnet101", "cnn", "27,000", "ten", "3", "cnn", "cnn", "99.68%", "99.42%", "99.51%", "99.45%", "cnn", "22", "cnn", "50", "resnet50", "101" ]
Amplifying document categorization with advanced features and deep learning
[ "M. Kavitha", "K. Akila" ]
The field of natural language processing (NLP) plays a pivotal role in discerning unstructured data from diverse origins. This study employs advanced techniques rooted in machine learning and deep learning to effectively categorize news articles. Notably, deep learning models have demonstrated superior performance over traditional machine learning algorithms, rendering them a popular choice for a range of NLP tasks. The research employs feature extraction techniques to identify multiword tokens, negation words, and out-of-vocabulary words and replace them. Additionally, convolutional neural network models leverage embedding, convolutional layers, and max pooling layers to capture intricate features. For tasks requiring an understanding of dependencies among long phrases, long short-term memory models come into play. The evaluation of the proposed model hinges on training it with datasets like AG News, BBC, and 20 Newsgroup, gauging its efficacy. The study delves into the myriad challenges inherent to text classification. These challenges are thoughtfully discussed, shedding light on the intricacies of the process. Furthermore, the research furnishes comprehensive test outcomes for both conventional machine learning and deep learning models. The significance of this proposed model is that it uses a multiword expression lexicon, wordnet synset, and word embedding techniques for feature extraction. The performance of the models is increased when using these feature extraction techniques.
10.1007/s11042-024-18483-7
amplifying document categorization with advanced features and deep learning
the field of natural language processing (nlp) plays a pivotal role in discerning unstructured data from diverse origins. this study employs advanced techniques rooted in machine learning and deep learning to effectively categorize news articles. notably, deep learning models have demonstrated superior performance over traditional machine learning algorithms, rendering them a popular choice for a range of nlp tasks. the research employs feature extraction techniques to identify multiword tokens, negation words, and out-of-vocabulary words and replace them. additionally, convolutional neural network models leverage embedding, convolutional layers, and max pooling layers to capture intricate features. for tasks requiring an understanding of dependencies among long phrases, long short-term memory models come into play. the evaluation of the proposed model hinges on training it with datasets like ag news, bbc, and 20 newsgroup, gauging its efficacy. the study delves into the myriad challenges inherent to text classification. these challenges are thoughtfully discussed, shedding light on the intricacies of the process. furthermore, the research furnishes comprehensive test outcomes for both conventional machine learning and deep learning models. the significance of this proposed model is that it uses a multiword expression lexicon, wordnet synset, and word embedding techniques for feature extraction. the performance of the models is increased when using these feature extraction techniques.
[ "the field", "natural language processing", "nlp", "a pivotal role", "unstructured data", "diverse origins", "this study", "advanced techniques", "machine learning", "deep learning", "news articles", "deep learning models", "superior performance", "traditional machine learning algorithms", "them", "a popular choice", "a range", "nlp tasks", "the research", "feature extraction techniques", "multiword tokens", "negation words", "out-of-vocabulary words", "them", "convolutional neural network models", "max pooling layers", "intricate features", "tasks", "an understanding", "dependencies", "long phrases", "long short-term memory models", "play", "the evaluation", "the proposed model hinges", "it", "datasets", "ag news", "20 newsgroup", "its efficacy", "the study", "the myriad challenges", "classification", "these challenges", "light", "the intricacies", "the process", "the research", "comprehensive test outcomes", "both conventional machine learning", "deep learning models", "the significance", "this proposed model", "it", "a multiword expression lexicon", "wordnet synset", "techniques", "feature extraction", "the performance", "the models", "these feature extraction techniques", "multiword tokens", "max", "ag news", "bbc", "20", "multiword" ]
Image steganalysis using deep learning models
[ "Alexandr Kuznetsov", "Nicolas Luhanko", "Emanuele Frontoni", "Luca Romeo", "Riccardo Rosati" ]
In the domain of digital steganography, the problem of efficient and accurate steganalysis is of utmost importance. Steganalysis seeks to detect the presence of hidden data within digital media, a task that is continually evolving due to advancements in steganographic techniques. This study undertakes a detailed exploration of the SRNet model, a prominent deep learning model for steganalysis. We aim to evaluate the impact of various factors, including choice of deep learning framework, model initialization and optimization parameters, and architectural modifications on the model's steganalysis performance. Three separate implementations of the SRNet model are examined in this study: our custom implementation, an implementation using TensorFlow, and another utilizing PyTorch. Each model is evaluated on its ability to detect different payloads, or bytes of hidden data per pixel, in digital images. This investigation includes a thorough comparative analysis of different performance metrics including accuracy, recall, precision, and F1-score. Our findings indicate that the choice of deep learning framework and the parameters utilized for model initialization and optimization play significant roles in influencing the model's steganalysis effectiveness. Notably, the TensorFlow implementation, enhanced with an additional dense layer, outperforms all other models. In contrast, our custom SRNet implementation, trained with fewer epochs, offers a balance between computational cost and steganalysis performance. This study thus provides valuable insights into the adaptability and potential of the SRNet model for steganalysis, illustrating the model's performance under different configurations and implementations. It underscores the importance of continued exploration and optimization in the field of steganalysis, offering guidance for future research in this evolving domain.
10.1007/s11042-023-17591-0
image steganalysis using deep learning models
in the domain of digital steganography, the problem of efficient and accurate steganalysis is of utmost importance. steganalysis seeks to detect the presence of hidden data within digital media, a task that is continually evolving due to advancements in steganographic techniques. this study undertakes a detailed exploration of the srnet model, a prominent deep learning model for steganalysis. we aim to evaluate the impact of various factors, including choice of deep learning framework, model initialization and optimization parameters, and architectural modifications on the model's steganalysis performance. three separate implementations of the srnet model are examined in this study: our custom implementation, an implementation using tensorflow, and another utilizing pytorch. each model is evaluated on its ability to detect different payloads, or bytes of hidden data per pixel, in digital images. this investigation includes a thorough comparative analysis of different performance metrics including accuracy, recall, precision, and f1-score. our findings indicate that the choice of deep learning framework and the parameters utilized for model initialization and optimization play significant roles in influencing the model's steganalysis effectiveness. notably, the tensorflow implementation, enhanced with an additional dense layer, outperforms all other models. in contrast, our custom srnet implementation, trained with fewer epochs, offers a balance between computational cost and steganalysis performance. this study thus provides valuable insights into the adaptability and potential of the srnet model for steganalysis, illustrating the model's performance under different configurations and implementations. it underscores the importance of continued exploration and optimization in the field of steganalysis, offering guidance for future research in this evolving domain.
[ "the domain", "digital steganography", "the problem", "efficient and accurate steganalysis", "utmost importance", "steganalysis", "the presence", "hidden data", "digital media", "a task", "that", "advancements", "steganographic techniques", "this study", "a detailed exploration", "the srnet model", "a prominent deep learning model", "steganalysis", "we", "the impact", "various factors", "choice", "deep learning framework", "model initialization", "optimization parameters", "architectural modifications", "the model's steganalysis performance", "three separate implementations", "the srnet model", "this study", "our custom implementation", "an implementation", "tensorflow", "another utilizing pytorch", "each model", "its ability", "different payloads", "bytes", "hidden data", "pixel", "digital images", "this investigation", "a thorough comparative analysis", "different performance metrics", "accuracy", "recall", "precision", "f1-score", "our findings", "the choice", "deep learning framework", "the parameters", "model initialization", "optimization", "significant roles", "the model's steganalysis effectiveness", "the tensorflow implementation", "an additional dense layer", "all other models", "contrast", "our custom srnet implementation", "fewer epochs", "a balance", "computational cost", "steganalysis performance", "this study", "valuable insights", "the adaptability", "potential", "the srnet model", "steganalysis", "the model's performance", "different configurations", "implementations", "it", "the importance", "continued exploration", "optimization", "the field", "steganalysis", "guidance", "future research", "this evolving domain", "three" ]
Control learning rate for autism facial detection via deep transfer learning
[ "Abdelkrim El Mouatasim", "Mohamed Ikermane" ]
Autism spectrum disorder (ASD) is a complex neurodevelopmental disorder that affects social interaction and communication. Early detection of ASD can significantly improve outcomes for individuals with the disorder, and there has been increasing interest in using machine learning techniques to aid in the diagnosis of ASD. One promising approach is the use of deep learning techniques, particularly convolutional neural networks (CNNs), to classify facial images as indicative of ASD or not. However, choosing a learning rate for optimizing the performance of these deep CNNs can be tedious and may not always result in optimal convergence. In this paper, we propose a novel approach called the control subgradient algorithm (CSA) for tackling ASD diagnosis based on facial images using deep CNNs. CSA is a variation of the subgradient method in which the learning rate is updated by a control step in each iteration of each epoch. We apply CSA to the popular DenseNet-121 CNN model and evaluate its performance on a publicly available facial ASD dataset. Our results show that CSA is faster than the baseline method and improves the classification accuracy and loss compared to the baseline. We also demonstrate the effectiveness of using CSA with \(L_1\)-regularization to further improve the performance of our deep CNN model.
10.1007/s11760-023-02598-9
control learning rate for autism facial detection via deep transfer learning
autism spectrum disorder (asd) is a complex neurodevelopmental disorder that affects social interaction and communication. early detection of asd can significantly improve outcomes for individuals with the disorder, and there has been increasing interest in using machine learning techniques to aid in the diagnosis of asd. one promising approach is the use of deep learning techniques, particularly convolutional neural networks (cnns), to classify facial images as indicative of asd or not. however, choosing a learning rate for optimizing the performance of these deep cnns can be tedious and may not always result in optimal convergence. in this paper, we propose a novel approach called the control subgradient algorithm (csa) for tackling asd diagnosis based on facial images using deep cnns. csa is a variation of the subgradient method in which the learning rate is updated by a control step in each iteration of each epoch. we apply csa to the popular densenet-121 cnn model and evaluate its performance on a publicly available facial asd dataset. our results show that csa is faster than the baseline method and improves the classification accuracy and loss compared to the baseline. we also demonstrate the effectiveness of using csa with \(l_1\)-regularization to further improve the performance of our deep cnn model.
[ "autism spectrum disorder", "asd", "a complex neurodevelopmental disorder", "that", "social interaction", "communication", "early detection", "asd", "outcomes", "individuals", "the disorder", "interest", "machine learning techniques", "the diagnosis", "asd", "one promising approach", "the use", "deep learning techniques", "particularly convolutional neural networks", "cnns", "facial images", "asd", "a learning rate", "the performance", "these deep cnns", "optimal convergence", "this paper", "we", "a novel approach", "the control subgradient", "algorithm", "csa", "asd diagnosis", "facial images", "deep cnns", "csa", "a variation", "the subgradient method", "which", "the learning rate", "a control step", "each iteration", "each epoch", "we", "the popular densnet-121 cnn model", "its performance", "a publicly available facial asd dataset", "our results", "csa", "the baseline method", "the classification accuracy", "loss", "the baseline", "we", "the effectiveness", "\\(l_1\\)-regularization", "the performance", "our deep cnn model", "one", "cnn" ]
Optimized model architectures for deep learning on genomic data
[ "Hüseyin Anil Gündüz", "René Mreches", "Julia Moosbauer", "Gary Robertson", "Xiao-Yin To", "Eric A. Franzosa", "Curtis Huttenhower", "Mina Rezaei", "Alice C. McHardy", "Bernd Bischl", "Philipp C. Münch", "Martin Binder" ]
The success of deep learning in various applications depends on task-specific architecture design choices, including the types, hyperparameters, and number of layers. In computational biology, there is no consensus on the optimal architecture design, and decisions are often made using insights from more well-established fields such as computer vision. These may not consider the domain-specific characteristics of genome sequences, potentially limiting performance. Here, we present GenomeNet-Architect, a neural architecture design framework that automatically optimizes deep learning models for genome sequence data. It optimizes the overall layout of the architecture, with a search space specifically designed for genomics. Additionally, it optimizes hyperparameters of individual layers and the model training procedure. On a viral classification task, GenomeNet-Architect reduced the read-level misclassification rate by 19%, with 67% faster inference and 83% fewer parameters, and achieved similar contig-level accuracy with ~100 times fewer parameters compared to the best-performing deep learning baselines.
10.1038/s42003-024-06161-1
optimized model architectures for deep learning on genomic data
the success of deep learning in various applications depends on task-specific architecture design choices, including the types, hyperparameters, and number of layers. in computational biology, there is no consensus on the optimal architecture design, and decisions are often made using insights from more well-established fields such as computer vision. these may not consider the domain-specific characteristics of genome sequences, potentially limiting performance. here, we present genomenet-architect, a neural architecture design framework that automatically optimizes deep learning models for genome sequence data. it optimizes the overall layout of the architecture, with a search space specifically designed for genomics. additionally, it optimizes hyperparameters of individual layers and the model training procedure. on a viral classification task, genomenet-architect reduced the read-level misclassification rate by 19%, with 67% faster inference and 83% fewer parameters, and achieved similar contig-level accuracy with ~100 times fewer parameters compared to the best-performing deep learning baselines.
[ "the success", "deep learning", "various applications", "task-specific architecture design choices", "the types", "hyperparameters", "number", "layers", "computational biology", "no consensus", "the optimal architecture design", "decisions", "insights", "more well-established fields", "computer vision", "these", "the domain-specific characteristics", "genome sequences", "performance", "we", "genomenet-architect", "a neural architecture design framework", "that", "deep learning models", "genome sequence data", "it", "the overall layout", "the architecture", "a search space", "genomics", "it", "hyperparameters", "individual layers", "the model training procedure", "a viral classification task", "genomenet-architect", "the read-level misclassification rate", "19%", "67% faster inference", "83% fewer parameters", "similar contig-level accuracy", "~100 times fewer parameters", "the best-performing deep learning baselines", "19%", "67%", "83%" ]
Virtual reality-empowered deep-learning analysis of brain cells
[ "Doris Kaltenecker", "Rami Al-Maskari", "Moritz Negwer", "Luciano Hoeher", "Florian Kofler", "Shan Zhao", "Mihail Todorov", "Zhouyi Rong", "Johannes Christian Paetzold", "Benedikt Wiestler", "Marie Piraud", "Daniel Rueckert", "Julia Geppert", "Pauline Morigny", "Maria Rohm", "Bjoern H. Menze", "Stephan Herzig", "Mauricio Berriel Diaz", "Ali Ertürk" ]
Automated detection of specific cells in three-dimensional datasets such as whole-brain light-sheet image stacks is challenging. Here, we present DELiVR, a virtual reality-trained deep-learning pipeline for detecting c-Fos+ cells as markers for neuronal activity in cleared mouse brains. Virtual reality annotation substantially accelerated training data generation, enabling DELiVR to outperform state-of-the-art cell-segmenting approaches. Our pipeline is available in a user-friendly Docker container that runs with a standalone Fiji plugin. DELiVR features a comprehensive toolkit for data visualization and can be customized to other cell types of interest, as we did here for microglia somata, using Fiji for dataset-specific training. We applied DELiVR to investigate cancer-related brain activity, unveiling an activation pattern that distinguishes weight-stable cancer from cancers associated with weight loss. Overall, DELiVR is a robust deep-learning tool that does not require advanced coding skills to analyze whole-brain imaging data in health and disease.
10.1038/s41592-024-02245-2
virtual reality-empowered deep-learning analysis of brain cells
automated detection of specific cells in three-dimensional datasets such as whole-brain light-sheet image stacks is challenging. here, we present delivr, a virtual reality-trained deep-learning pipeline for detecting c-fos+ cells as markers for neuronal activity in cleared mouse brains. virtual reality annotation substantially accelerated training data generation, enabling delivr to outperform state-of-the-art cell-segmenting approaches. our pipeline is available in a user-friendly docker container that runs with a standalone fiji plugin. delivr features a comprehensive toolkit for data visualization and can be customized to other cell types of interest, as we did here for microglia somata, using fiji for dataset-specific training. we applied delivr to investigate cancer-related brain activity, unveiling an activation pattern that distinguishes weight-stable cancer from cancers associated with weight loss. overall, delivr is a robust deep-learning tool that does not require advanced coding skills to analyze whole-brain imaging data in health and disease.
[ "automated detection", "specific cells", "three-dimensional datasets", "whole-brain light-sheet image stacks", "we", "delivr", "a virtual reality-trained deep-learning pipeline", "c-fos+ cells", "markers", "neuronal activity", "cleared mouse brains", "virtual reality annotation", "training data generation", "delivr", "the-art", "our pipeline", "a user-friendly docker container", "that", "a standalone fiji plugin", "delivr", "a comprehensive toolkit", "data visualization", "other cell types", "interest", "we", "microglia somata", "fiji", "dataset-specific training", "we", "delivr", "cancer-related brain activity", "an activation pattern", "that", "weight-stable cancer", "cancers", "weight loss", "delivr", "a robust deep-learning tool", "that", "advanced coding skills", "whole-brain imaging data", "health", "disease", "three", "delivr", "delivr" ]
Fast flow field prediction of pollutant leakage diffusion based on deep learning
[ "Wan YunBo", "Zhao Zhong", "Liu Jie", "Zuo KuiJun", "Zhang Yong" ]
Predicting pollutant leakage and diffusion processes is crucial for ensuring people’s safety. While the deep learning method offers high simulation efficiency and superior generalization, there is currently a lack of research on predicting pollutant leakage and diffusion flow field using deep learning. Therefore, it is necessary to conduct further studies in this area. This paper introduces a two-level network method to model the flow characteristics of pollutant diffusion. The proposed method in this study demonstrates a significant enhancement in flow field prediction accuracy compared to traditional deep learning methods. Moreover, it improves computational efficiency by over 800 times compared to traditional computational fluid dynamics (CFD) methods. Unlike conventional CFD methods that require grid expansion to calculate all operation conditions, the deep learning method is not confined by grid limitations. While deep learning methods may not entirely replace CFD methods, they can serve as a valuable supplementary tool, expanding the versatility of CFD methods. The findings of this research establish a robust foundation for incorporating deep learning methods in addressing pollutant leakage and diffusion challenges. Graphical abstract: A deep learning method for simulating pollutant leakage diffusion.
10.1007/s11356-024-34462-9
fast flow field prediction of pollutant leakage diffusion based on deep learning
predicting pollutant leakage and diffusion processes is crucial for ensuring people’s safety. while the deep learning method offers high simulation efficiency and superior generalization, there is currently a lack of research on predicting pollutant leakage and diffusion flow field using deep learning. therefore, it is necessary to conduct further studies in this area. this paper introduces a two-level network method to model the flow characteristics of pollutant diffusion. the proposed method in this study demonstrates a significant enhancement in flow field prediction accuracy compared to traditional deep learning methods. moreover, it improves computational efficiency by over 800 times compared to traditional computational fluid dynamics (cfd) methods. unlike conventional cfd methods that require grid expansion to calculate all operation conditions, the deep learning method is not confined by grid limitations. while deep learning methods may not entirely replace cfd methods, they can serve as a valuable supplementary tool, expanding the versatility of cfd methods. the findings of this research establish a robust foundation for incorporating deep learning methods in addressing pollutant leakage and diffusion challenges. graphical abstract: a deep learning method for simulating pollutant leakage diffusion.
[ "pollutant leakage", "diffusion processes", "people’s safety", "the deep learning method", "high simulation efficiency", "superior generalization", "a lack", "research", "pollutant leakage", "diffusion flow field", "deep learning", "it", "further studies", "this area", "this paper", "a two-level network method", "the flow characteristics", "pollutant diffusion", "the proposed method", "this study", "a significant enhancement", "flow field prediction accuracy", "traditional deep learning methods", "it", "computational efficiency", "over 800 times", "traditional computational fluid dynamics", "(cfd) methods", "conventional cfd methods", "that", "grid expansion", "all operation conditions", "the deep learning method", "grid limitations", "deep learning methods", "cfd methods", "they", "a valuable supplementary tool", "the versatility", "cfd methods", "the findings", "this research", "a robust foundation", "deep learning methods", "pollutant leakage", "diffusion challenges.graphical abstracta", "deep learning method", "pollutant leakage diffusion", "two" ]
Stroke detection in the brain using MRI and deep learning models
[ "Subba Rao Polamuri" ]
When it comes to finding solutions to issues, deep learning models are pretty much everywhere. Medical image data is best analysed using models based on Convolutional Neural Networks (CNNs). Better methods for early detection are crucial due to the concerning increase in the number of people suffering from brain stroke. Among the several medical imaging modalities used for brain imaging, magnetic resonance imaging (MRI) stands out. When it comes to analysing medical photos, the deep learning models currently utilised with MRI have shown good outcomes. To improve the efficacy of brain stroke diagnosis, we suggested several upgrades to deep learning models in this work, including DenseNet121, ResNet50, and VGG16. Since these models are not purpose-built to solve any particular issue, they are modified according to the present situation involving the detection of brain strokes. To make use of all of these cutting-edge deep learning models in a pipeline, we proposed a strategy based on supervised learning. Results from the experiments showed that optimised models outperformed baseline models.
10.1007/s11042-024-19318-1
stroke detection in the brain using mri and deep learning models
when it comes to finding solutions to issues, deep learning models are pretty much everywhere. medical image data is best analysed using models based on convolutional neural networks (cnns). better methods for early detection are crucial due to the concerning increase in the number of people suffering from brain stroke. among the several medical imaging modalities used for brain imaging, magnetic resonance imaging (mri) stands out. when it comes to analysing medical photos, the deep learning models currently utilised with mri have shown good outcomes. to improve the efficacy of brain stroke diagnosis, we suggested several upgrades to deep learning models in this work, including densenet121, resnet50, and vgg16. since these models are not purpose-built to solve any particular issue, they are modified according to the present situation involving the detection of brain strokes. to make use of all of these cutting-edge deep learning models in a pipeline, we proposed a strategy based on supervised learning. results from the experiments showed that optimised models outperformed baseline models.
[ "it", "solutions", "issues", "deep learning models", "medical image data", "using models", "convolutional neural networks", "cnns", "better methods", "early detection", "the concerning increase", "the number", "people", "brain stroke", "the several medical imaging modalities", "brain imaging", "magnetic resonance imaging", "mri", "it", "medical photos", "the deep learning models", "mri", "good outcomes", "the efficacy", "brain stroke diagnosis", "we", "several upgrades", "deep learning models", "this work", "densenet121", "resnet50", "vgg16", "these models", "any particular issue", "they", "the present situation", "the detection", "brain strokes", "use", "all", "these cutting-edge deep learning models", "a pipeline", "we", "a strategy", "supervised learning", "results", "the experiments", "optimised models", "baseline models", "resnet50" ]
Combining Deep Learning with Good Old-Fashioned Machine Learning
[ "Moshe Sipper" ]
We present a comprehensive, stacking-based framework for combining deep learning with good old-fashioned machine learning, called Deep GOld. Our framework involves ensemble selection from 51 retrained pretrained deep networks as first-level models, and 10 machine-learning algorithms as second-level models. Enabled by today’s state-of-the-art software tools and hardware platforms, Deep GOld delivers consistent improvement when tested on four image-classification datasets: Fashion MNIST, CIFAR10, CIFAR100, and Tiny ImageNet. Of 120 experiments, in all but 10 Deep GOld improved the original networks’ performance.
10.1007/s42979-022-01505-2
combining deep learning with good old-fashioned machine learning
we present a comprehensive, stacking-based framework for combining deep learning with good old-fashioned machine learning, called deep gold. our framework involves ensemble selection from 51 retrained pretrained deep networks as first-level models, and 10 machine-learning algorithms as second-level models. enabled by today’s state-of-the-art software tools and hardware platforms, deep gold delivers consistent improvement when tested on four image-classification datasets: fashion mnist, cifar10, cifar100, and tiny imagenet. of 120 experiments, in all but 10 deep gold improved the original networks’ performance.
[ "we", "a comprehensive, stacking-based framework", "deep learning", "good old-fashioned machine learning", "deep gold", "our framework", "ensemble selection", "deep networks", "first-level models", "10 machine-learning algorithms", "second-level models", "the-art", "hardware platforms", "deep gold", "consistent improvement", "four image-classification datasets", "fashion mnist", "cifar10", "cifar100", "tiny imagenet", "120 experiments", "all but 10 deep gold", "the original networks’ performance", "51", "first", "10", "second", "today", "four", "cifar10", "120", "10" ]
Image classification of intracranial tumor using deep residual learning technique
[ "G. Vidya Sagar", "M. Ravi Kumar", "Sk. Hasane Ahammad", "Chella Santhosh" ]
Classifying brain tumours is essential for diagnosing tumour progression and planning effective treatments. Different imaging modalities are used to diagnose brain tumours. Magnetic resonance imaging (MRI), by contrast, has gained widespread use due to its superior image quality and the fact that it does not require ionizing radiation. Image classification of intracranial tumors using a deep residual learning technique is an application of deep learning in the field of medical imaging analysis. It involves using convolutional neural networks (CNNs) to automatically classify brain images into different categories based on the presence or absence of tumors. ResNet is a deep neural network that addresses the problem of vanishing gradients during training of very deep networks. The deep learning subfield of machine learning has recently shown remarkable success, especially in classification and segmentation. We trained a deep residual network using picture datasets to distinguish between several brain cancers. The information generated by MRI scans is extensive. A radiologist analyses these images. The three most common brain tumours are meningioma, glioma, and pituitary tumour. Brain tumours are complex diseases, and a manual examination may be fraught with error. Experimental outcomes based on various techniques under augmentation with image-based datasets are presented. The accuracy with no augmentation is about 98%, and under augmentation it is approximately 99.08%. By leveraging deep residual learning techniques, image classification of intracranial tumors can benefit from the ability of deep neural networks to automatically learn complex representations from raw image data. Classification methods that use machine learning to automate the process have proven superior to human curation. Thus, we present a system that can identify and classify utilizing deep CNN-based residual networks.
10.1007/s11042-023-17712-9
image classification of intracranial tumor using deep residual learning technique
classifying brain tumours is essential for diagnosing tumour progression and planning effective treatments. different imaging modalities are used to diagnose brain tumours. the opposite is true for magnetic resonance imaging (mri), which has gained widespread use due to its superior image quality and the fact that it does not require ionizing radiation. image classification of intracranial tumors using deep residual learning technique is an application of deep learning in the field of medical imaging analysis. it involves using convolutional neural networks (cnns) to automatically classify brain images into different categories based on the presence or absence of tumors. resnet is a deep neural network that addresses the problem of vanishing gradients during training of very deep networks. the deep learning subfield of machine learning has recently shown remarkable success, especially in classification and segmentation. we trained a deep residual network using picture datasets to distinguish between several brain cancers. the information generated by mri scans is extensive. a radiologist analyses these images. the three most common brain tumours are meningioma, glioma, and pituitary tumour. brain tumours are complex diseases, and a manual examination may be fraught with error. experimental outcomes based on various techniques under augmentation with image-based datasets are presented. the accuracy with no augmentation is about 98% and under augmentation is approximately 99.08%. by leveraging deep residual learning techniques, image classification of intracranial tumors can benefit from the ability of deep neural networks to automatically learn complex representations from raw image data. classification methods that use machine learning to automate the process have proven superior to human curation. thus, we present a system that can identify and classify utilizing deep cnn-based residual networks.
[ "brain tumours", "tumour progression", "effective treatments", "different imaging modalities", "brain tumours", "magnetic resonance imaging", "mri", "which", "widespread use", "its superior image quality", "the fact", "it", "ionizing radiation", "image classification", "intracranial tumors", "deep residual learning technique", "an application", "deep learning", "the field", "medical imaging analysis", "it", "convolutional neural networks", "cnns", "brain images", "different categories", "the presence", "absence", "tumors", "resnet", "a deep neural network", "that", "the problem", "vanishing gradients", "training", "very deep networks.the deep learning subfield", "machine learning", "remarkable success", "classification", "segmentation", "we", "a deep residual network", "picture datasets", "several brain cancers", "the information", "mri scans", "a radiologist", "these images", "the three most common brain tumours", "brain tumours", "complex diseases", "a manual examination", "error", "experimental outcomes", "various techniques", "augmentation", "image-based datasets", "the accuracy", "no augmentation", "about 98%", "augmentation", "approximately 99.08%.by", "deep residual learning techniques", "image classification", "intracranial tumors", "the ability", "deep neural networks", "complex representations", "raw image data.classification methods", "that", "the process", "human curation", "we", "a system", "that", "deep cnn-based residual networks", "three", "about 98%", "cnn" ]
Attractor Inspired Deep Learning for Modelling Chaotic Systems
[ "Anurag Dutta", "John Harshith", "A. Ramamoorthy", "K. Lakshmanan" ]
Predicting and understanding the behavior of dynamic systems have driven advancements in various approaches, including physics-based models and data-driven techniques like deep neural networks. Chaotic systems, with their stochastic nature and unpredictable behavior, pose challenges for accurate modeling and forecasting, especially during extreme events. In this paper, we propose a novel deep learning framework called Attractor-Inspired Deep Learning (AiDL), which seamlessly integrates actual statistics and mathematical models of system kinetics. AiDL combines the strengths of physics-informed machine learning and data-driven methods, offering a promising solution for modeling nonlinear systems. By leveraging the intricate dynamics of attractors, AiDL bridges the gap between physics-based models and deep neural networks. We demonstrate the effectiveness of AiDL using real-world data from various domains, including catastrophic weather mechanics, El Niño cycles, and disease transmission. Our empirical results showcase AiDL’s ability to substantially enhance the modeling of extreme events. The proposed AiDL paradigm holds promise for advancing research in Time Series Prediction of Extreme Events and has applications in real-world chaotic system transformations.
10.1007/s44230-023-00045-z
attractor inspired deep learning for modelling chaotic systems
predicting and understanding the behavior of dynamic systems have driven advancements in various approaches, including physics-based models and data-driven techniques like deep neural networks. chaotic systems, with their stochastic nature and unpredictable behavior, pose challenges for accurate modeling and forecasting, especially during extreme events. in this paper, we propose a novel deep learning framework called attractor-inspired deep learning (aidl), which seamlessly integrates actual statistics and mathematical models of system kinetics. aidl combines the strengths of physics-informed machine learning and data-driven methods, offering a promising solution for modeling nonlinear systems. by leveraging the intricate dynamics of attractors, aidl bridges the gap between physics-based models and deep neural networks. we demonstrate the effectiveness of aidl using real-world data from various domains, including catastrophic weather mechanics, el niño cycles, and disease transmission. our empirical results showcase aidl’s ability to substantially enhance the modeling of extreme events. the proposed aidl paradigm holds promise for advancing research in time series prediction of extreme events and has applications in real-world chaotic system transformations.
[ "the behavior", "dynamic systems", "advancements", "various approaches", "physics-based models", "data-driven techniques", "deep neural networks", "chaotic systems", "their stochastic nature", "unpredictable behavior", "challenges", "accurate modeling", "forecasting", "extreme events", "this paper", "we", "a novel deep learning framework", "attractor-inspired deep learning", "aidl", "which", "actual statistics", "mathematical models", "system kinetics", "aidl", "the strengths", "physics-informed machine learning", "data-driven methods", "a promising solution", "nonlinear systems", "the intricate dynamics", "attractors", "aidl", "the gap", "physics-based models", "deep neural networks", "we", "the effectiveness", "aidl", "real-world data", "various domains", "catastrophic weather mechanics", "el niño cycles", "disease transmission", "our empirical results", "showcase", "the modeling", "extreme events", "the proposed aidl paradigm", "promise", "research", "time series prediction", "extreme events", "applications", "real-world chaotic system transformations", "el niño" ]
Deep learning for high-resolution seismic imaging
[ "Liyun Ma", "Liguo Han", "Qiang Feng" ]
Seismic imaging techniques play a crucial role in interpreting subsurface geological structures by analyzing the propagation and reflection of seismic waves. However, traditional methods face challenges in achieving high resolution due to theoretical constraints and computational costs. Leveraging recent advancements in deep learning, this study introduces a neural network framework that integrates Transformer and Convolutional Neural Network (CNN) architectures, enhanced through Adaptive Spatial Feature Fusion (ASFF), to achieve high-resolution seismic imaging. Our approach directly maps seismic data to reflection models, eliminating the need for post-processing low-resolution results. Through extensive numerical experiments, we demonstrate the outstanding ability of this method to accurately infer subsurface structures. Evaluation metrics including Root Mean Square Error (RMSE), Correlation Coefficient (CC), and Structural Similarity Index (SSIM) emphasize the model's capacity to faithfully reconstruct subsurface features. Furthermore, noise injection experiments showcase the reliability of this efficient seismic imaging method, further underscoring the potential of deep learning in seismic imaging.
10.1038/s41598-024-61251-8
deep learning for high-resolution seismic imaging
seismic imaging techniques play a crucial role in interpreting subsurface geological structures by analyzing the propagation and reflection of seismic waves. however, traditional methods face challenges in achieving high resolution due to theoretical constraints and computational costs. leveraging recent advancements in deep learning, this study introduces a neural network framework that integrates transformer and convolutional neural network (cnn) architectures, enhanced through adaptive spatial feature fusion (asff), to achieve high-resolution seismic imaging. our approach directly maps seismic data to reflection models, eliminating the need for post-processing low-resolution results. through extensive numerical experiments, we demonstrate the outstanding ability of this method to accurately infer subsurface structures. evaluation metrics including root mean square error (rmse), correlation coefficient (cc), and structural similarity index (ssim) emphasize the model's capacity to faithfully reconstruct subsurface features. furthermore, noise injection experiments showcase the reliability of this efficient seismic imaging method, further underscoring the potential of deep learning in seismic imaging.
[ "seismic imaging techniques", "a crucial role", "subsurface geological structures", "the propagation", "reflection", "seismic waves", "traditional methods", "challenges", "high resolution", "theoretical constraints", "computational costs", "recent advancements", "deep learning", "this study", "a neural network framework", "that", "transformer", "cnn", "adaptive spatial feature fusion", "(asff", "high-resolution seismic imaging", "our approach", "seismic data", "reflection models", "the need", "post-processing low-resolution results", "extensive numerical experiments", "we", "the outstanding ability", "this method", "subsurface structures", "evaluation metrics", "root mean square error", "rmse", "correlation", "structural similarity index", "ssim", "the model's capacity", "subsurface features", "noise injection experiments", "the reliability", "this efficient seismic imaging method", "the potential", "deep learning", "seismic imaging", "cnn" ]
Predicting equilibrium distributions for molecular systems with deep learning
[ "Shuxin Zheng", "Jiyan He", "Chang Liu", "Yu Shi", "Ziheng Lu", "Weitao Feng", "Fusong Ju", "Jiaxi Wang", "Jianwei Zhu", "Yaosen Min", "He Zhang", "Shidi Tang", "Hongxia Hao", "Peiran Jin", "Chi Chen", "Frank Noé", "Haiguang Liu", "Tie-Yan Liu" ]
Advances in deep learning have greatly improved structure prediction of molecules. However, many macroscopic observations that are important for real-world applications are not functions of a single molecular structure but rather determined from the equilibrium distribution of structures. Conventional methods for obtaining these distributions, such as molecular dynamics simulation, are computationally expensive and often intractable. Here we introduce a deep learning framework, called Distributional Graphormer (DiG), in an attempt to predict the equilibrium distribution of molecular systems. Inspired by the annealing process in thermodynamics, DiG uses deep neural networks to transform a simple distribution towards the equilibrium distribution, conditioned on a descriptor of a molecular system such as a chemical graph or a protein sequence. This framework enables the efficient generation of diverse conformations and provides estimations of state densities, orders of magnitude faster than conventional methods. We demonstrate applications of DiG on several molecular tasks, including protein conformation sampling, ligand structure sampling, catalyst–adsorbate sampling and property-guided structure generation. DiG presents a substantial advancement in methodology for statistically understanding molecular systems, opening up new research opportunities in the molecular sciences.
10.1038/s42256-024-00837-3
predicting equilibrium distributions for molecular systems with deep learning
advances in deep learning have greatly improved structure prediction of molecules. however, many macroscopic observations that are important for real-world applications are not functions of a single molecular structure but rather determined from the equilibrium distribution of structures. conventional methods for obtaining these distributions, such as molecular dynamics simulation, are computationally expensive and often intractable. here we introduce a deep learning framework, called distributional graphormer (dig), in an attempt to predict the equilibrium distribution of molecular systems. inspired by the annealing process in thermodynamics, dig uses deep neural networks to transform a simple distribution towards the equilibrium distribution, conditioned on a descriptor of a molecular system such as a chemical graph or a protein sequence. this framework enables the efficient generation of diverse conformations and provides estimations of state densities, orders of magnitude faster than conventional methods. we demonstrate applications of dig on several molecular tasks, including protein conformation sampling, ligand structure sampling, catalyst–adsorbate sampling and property-guided structure generation. dig presents a substantial advancement in methodology for statistically understanding molecular systems, opening up new research opportunities in the molecular sciences.
[ "advances", "deep learning", "structure prediction", "molecules", "many macroscopic observations", "that", "real-world applications", "functions", "a single molecular structure", "the equilibrium distribution", "structures", "conventional methods", "these distributions", "molecular dynamics simulation", "we", "a deep learning framework", "distributional graphormer", "dig", "an attempt", "the equilibrium distribution", "molecular systems", "the annealing process", "thermodynamics", "dig", "deep neural networks", "a simple distribution", "the equilibrium distribution", "a descriptor", "a molecular system", "a chemical graph", "a protein sequence", "this framework", "the efficient generation", "diverse conformations", "estimations", "state densities", "orders", "magnitude", "conventional methods", "we", "applications", "dig", "several molecular tasks", "protein conformation sampling", "ligand structure sampling", "catalyst", "sampling", "property-guided structure generation", "dig", "a substantial advancement", "methodology", "molecular systems", "new research opportunities", "the molecular sciences" ]
Privacy-Preserving Classification on Deep Learning with Exponential Mechanism
[ "Quan Ju", "Rongqing Xia", "Shuhong Li", "Xiaojian Zhang" ]
How to protect the privacy of training data in deep learning has been the subject of increasing amounts of related research in recent years. Private Aggregation of Teacher Ensembles (PATE) uses transfer learning and differential privacy methods to provide a broadly applicable data privacy framework in deep learning. PATE combines the Laplacian mechanism and the voting method to achieve deep learning privacy classification. However, the Laplacian mechanism may greatly distort the histogram vote counts of each class. This paper proposes a novel exponential mechanism with PATE to ensure privacy protection. This proposed method improves the protection effect and accuracy through the screening algorithm and uses the differential privacy combination theorems to reduce the total privacy budget. The data-dependent analysis demonstrates that the exponential mechanism outperforms the original Laplace mechanism. Experimental results show that the proposed method can train models with improved accuracy while requiring a smaller privacy budget when compared to the original PATE framework.
10.1007/s44196-024-00422-x
privacy-preserving classification on deep learning with exponential mechanism
how to protect the privacy of training data in deep learning has been the subject of increasing amounts of related research in recent years. private aggregation of teacher ensembles (pate) uses transfer learning and differential privacy methods to provide a broadly applicable data privacy framework in deep learning. pate combines the laplacian mechanism and the voting method to achieve deep learning privacy classification. however, the laplacian mechanism may greatly distort the histogram vote counts of each class. this paper proposes a novel exponential mechanism with pate to ensure the privacy protection. this proposed method improves the protection effect and accuracy through the screening algorithm and uses the differential privacy combination theorems to reduce the total privacy budget. the data-dependent analysis demonstrates that the exponential mechanism outperforms the original laplace mechanism. experimental results show that the proposed method can train models with improved accuracy while requiring a smaller privacy budget when compared to the original pate framework.
[ "the privacy", "training data", "deep learning", "the subject", "increasing amounts", "related research", "recent years", "private aggregation", "teacher ensembles", "transfer learning", "differential privacy methods", "a broadly applicable data privacy framework", "deep learning", "the laplacian mechanism", "the voting method", "deep learning privacy classification", "the laplacian mechanism", "the histogram vote counts", "each class", "this paper", "a novel exponential mechanism", "pate", "the privacy protection", "this proposed method", "the protection effect", "accuracy", "the screening algorithm", "the differential privacy combination", "the total privacy budget", "the data-dependent analysis", "the exponential mechanism", "the original laplace mechanism", "experimental results", "the proposed method", "models", "improved accuracy", "a smaller privacy budget", "the original pate framework", "recent years" ]
An empirical study of sentiment analysis utilizing machine learning and deep learning algorithms
[ "Betul Erkantarci", "Gokhan Bakal" ]
Among text-mining studies, one of the most studied topics is the text classification task applied in various domains, including medicine, social media, and academia. As a sub-problem in text classification, sentiment analysis has been widely investigated to classify often opinion-based textual elements. Specifically, user reviews and experiential feedback for products or services have been employed as fundamental data sources for sentiment analysis efforts. As a result of rapidly emerging technological advancements, social media platforms such as Twitter, Facebook, and Reddit, have become central opinion-sharing mediums since the early 2000s. In this sense, we build various machine-learning models to solve the sentiment analysis problem on the Reddit comments dataset in this work. The experimental models we constructed achieve F1 scores within intervals of 73–76%. Consequently, we present comparative performance scores obtained by traditional machine learning and deep learning models and discuss the results.
10.1007/s42001-023-00236-5
an empirical study of sentiment analysis utilizing machine learning and deep learning algorithms
among text-mining studies, one of the most studied topics is the text classification task applied in various domains, including medicine, social media, and academia. as a sub-problem in text classification, sentiment analysis has been widely investigated to classify often opinion-based textual elements. specifically, user reviews and experiential feedback for products or services have been employed as fundamental data sources for sentiment analysis efforts. as a result of rapidly emerging technological advancements, social media platforms such as twitter, facebook, and reddit, have become central opinion-sharing mediums since the early 2000s. in this sense, we build various machine-learning models to solve the sentiment analysis problem on the reddit comments dataset in this work. the experimental models we constructed achieve f1 scores within intervals of 73–76%. consequently, we present comparative performance scores obtained by traditional machine learning and deep learning models and discuss the results.
[ "text-mining studies", "the most studied topics", "the text classification task", "various domains", "medicine", "social media", "academia", "a sub", "-", "problem", "text classification", "sentiment analysis", "opinion-based textual elements", "user reviews", "experiential feedback", "products", "services", "fundamental data sources", "sentiment analysis efforts", "a result", "rapidly emerging technological advancements", "social media platforms", "twitter", "facebook", "reddit", "central opinion-sharing mediums", "this sense", "we", "various machine-learning models", "the sentiment analysis problem", "the reddit comments", "this work", "the experimental models", "we", "f1 scores", "intervals", "73–76%", "we", "comparative performance scores", "traditional machine learning", "deep learning models", "the results", "one", "the early 2000s", "73–76%" ]
Application of deep learning in isolated tooth identification
[ "Meng-Xun Li", "Zhi-Wei Wang", "Xin-Ran Chen", "Gui-Song Xia", "Yong Zheng", "Cui Huang", "Zhi Li" ]
Background: Teeth identification has a pivotal role in the dental curriculum and provides one of the important foundations of clinical practice. Accurately identifying teeth is a vital aspect of dental education and clinical practice, but can be challenging due to the anatomical similarities between categories. In this study, we aim to explore the possibility of using a deep learning model to classify isolated tooth by a set of photographs. Methods: A collection of 5,100 photographs from 850 isolated human tooth specimens was assembled to serve as the dataset for this study. Each tooth was carefully labeled during the data collection phase through direct observation. We developed a deep learning model that incorporates the state-of-the-art feature extractor and attention mechanism to classify each tooth based on a set of 6 photographs captured from multiple angles. To increase the validity of model evaluation, a voting-based strategy was applied to refine the test set to generate a more reliable label, and the model was evaluated under different types of classification granularities. Results: This deep learning model achieved top-3 accuracies of over 90% in all classification types, with an average AUC of 0.95. The Cohen’s Kappa demonstrated good agreement between model prediction and the test set. Conclusions: This deep learning model can achieve performance comparable to that of human experts and has the potential to become a valuable tool for dental education and various applications in accurately identifying isolated tooth.
10.1186/s12903-024-04274-x
application of deep learning in isolated tooth identification
background: teeth identification has a pivotal role in the dental curriculum and provides one of the important foundations of clinical practice. accurately identifying teeth is a vital aspect of dental education and clinical practice, but can be challenging due to the anatomical similarities between categories. in this study, we aim to explore the possibility of using a deep learning model to classify isolated tooth by a set of photographs. methods: a collection of 5,100 photographs from 850 isolated human tooth specimens were assembled to serve as the dataset for this study. each tooth was carefully labeled during the data collection phase through direct observation. we developed a deep learning model that incorporates the state-of-the-art feature extractor and attention mechanism to classify each tooth based on a set of 6 photographs captured from multiple angles. to increase the validity of model evaluation, a voting-based strategy was applied to refine the test set to generate a more reliable label, and the model was evaluated under different types of classification granularities. results: this deep learning model achieved top-3 accuracies of over 90% in all classification types, with an average auc of 0.95. the cohen’s kappa demonstrated good agreement between model prediction and the test set. conclusions: this deep learning model can achieve performance comparable to that of human experts and has the potential to become a valuable tool for dental education and various applications in accurately identifying isolated tooth.
[ "backgroundteeth identification", "a pivotal role", "the dental curriculum", "the important foundations", "clinical practice", "teeth", "a vital aspect", "dental education", "clinical practice", "the anatomical similarities", "categories", "this study", "we", "the possibility", "a deep learning model", "isolated tooth", "a set", "photographs.methodsa collection", "5,100 photographs", "850 isolated human tooth specimens", "the dataset", "this study", "each tooth", "the data collection phase", "direct observation", "we", "a deep learning model", "that", "the-art", "attention mechanism", "each tooth", "a set", "6 photographs", "multiple angles", "the validity", "model evaluation", "a voting-based strategy", "the test", "a more reliable label", "the model", "different types", "classification", "granularities.resultsthis deep learning model", "top-3 accuracies", "over 90%", "all classification types", "an average auc", "the cohen’s kappa", "good agreement", "model prediction", "the test", "deep learning model", "performance", "that", "human experts", "the potential", "a valuable tool", "dental education", "various applications", "isolated tooth", "5,100", "850", "6", "over 90%", "0.95" ]
Deep learning in neglected vector-borne diseases: a systematic review
[ "Atmika Mishra", "Arya Pandey", "Ruchika Malhotra" ]
This study explores the application of Deep Learning in combating neglected vector-borne Diseases, a significant global health concern, particularly in resource-limited areas. It examines areas where Deep Learning has proven effective, compares popular Deep Learning techniques, focuses on interdisciplinary approaches with translational impact, and finds untapped potential for deep learning application. Thorough searches across multiple databases yielded 64 pertinent studies, from which 16 were selected based on inclusion criteria and quality assessment. Deep Learning applications in disease transmission risk prediction, vector detection, parasite classification, and treatment procedure optimization were investigated and focused on diseases such as Schistosomiasis, Chagas disease, Leishmaniasis, Echinococcosis, and Trachoma. Convolutional neural networks, artificial neural networks, multilayer perceptrons, and AutoML algorithms surpassed traditional methods for disease prediction, species identification, and diagnosis. The interdisciplinary integration of Deep Learning with public health, entomology, and epidemiology provides prospects for improved disease control and understanding. Deep Learning models automate disease surveillance, simplify epidemiological data processing, and enable early detection, particularly in resource-constrained settings. Smartphone apps driven by deep learning allow for rapid disease diagnosis and identification, boosting healthcare accessibility and global health outcomes. Improved algorithms, broadening the scope of applications to areas such as one health approach, and community engagement, and expanding deep learning applications to diseases such as lymphatic filariasis, hydatidosis, and onchocerciasis hold promise for improving global health outcomes.
10.1007/s13198-024-02380-1
deep learning in neglected vector-borne diseases: a systematic review
this study explores the application of deep learning in combating neglected vector-borne diseases, a significant global health concern, particularly in resource-limited areas. it examines areas where deep learning has proven effective, compares popular deep learning techniques, focuses on interdisciplinary approaches with translational impact, and finds untapped potential for deep learning application. thorough searches across multiple databases yielded 64 pertinent studies, from which 16 were selected based on inclusion criteria and quality assessment. deep learning applications in disease transmission risk prediction, vector detection, parasite classification, and treatment procedure optimization were investigated and focused on diseases such as schistosomiasis, chagas disease, leishmaniasis, echinococcosis, and trachoma. convolutional neural networks, artificial neural networks, multilayer perceptrons, and automl algorithms surpassed traditional methods for disease prediction, species identification, and diagnosis. the interdisciplinary integration of deep learning with public health, entomology, and epidemiology provides prospects for improved disease control and understanding. deep learning models automate disease surveillance, simplify epidemiological data processing, and enable early detection, particularly in resource-constrained settings. smartphone apps driven by deep learning allow for rapid disease diagnosis and identification, boosting healthcare accessibility and global health outcomes. improved algorithms, broadening the scope of applications to areas such as one health approach, and community engagement, and expanding deep learning applications to diseases such as lymphatic filariasis, hydatidosis, and onchocerciasis hold promise for improving global health outcomes.
[ "this study", "the application", "deep learning", "neglected vector-borne diseases", "a significant global health concern", "resource-limited areas", "it", "areas", "deep learning", "popular deep learning techniques", "interdisciplinary approaches", "translational impact", "untapped potential", "deep learning application", "thorough searches", "multiple databases", "64 pertinent studies", "which", "inclusion criteria", "quality assessment", "deep learning applications", "disease transmission risk prediction", "vector detection", "classification", "treatment procedure optimization", "diseases", "schistosomiasis", "chagas disease", "leishmaniasis", "echinococcosis", "trachoma", "convolutional neural networks", "artificial neural networks", "multilayer perceptrons", "automl algorithms", "traditional methods", "disease prediction", "species identification", "diagnosis", "the interdisciplinary integration", "deep learning", "public health", "entomology", "epidemiology", "prospects", "improved disease control", "understanding", "deep learning models", "automate disease surveillance", "epidemiological data processing", "early detection", "resource-constrained settings", "smartphone apps", "deep learning", "rapid disease diagnosis", "identification", "healthcare accessibility", "global health outcomes", "improved algorithms", "the scope", "applications", "areas", "one health approach", "community engagement", "deep learning applications", "diseases", "lymphatic filariasis", "hydatidosis", "onchocerciasis hold promise", "global health outcomes", "64", "16", "one" ]
State of the Art on Deep Learning-enhanced Rendering Methods
[ "Qi Wang", "Zhihua Zhong", "Yuchi Huo", "Hujun Bao", "Rui Wang" ]
Photorealistic rendering of the virtual world is an important and classic problem in the field of computer graphics. With the development of GPU hardware and continuous research on computer graphics, representing and rendering virtual scenes has become easier and more efficient. However, there are still unresolved challenges in efficiently rendering global illumination effects. At the same time, machine learning and computer vision provide real-world image analysis and synthesis methods, which can be exploited by computer graphics rendering pipelines. Deep learning-enhanced rendering combines techniques from deep learning and computer vision into the traditional graphics rendering pipeline to enhance existing rasterization or Monte Carlo integration renderers. This state-of-the-art report summarizes recent studies of deep learning-enhanced rendering in the computer graphics community. Specifically, we focus on works of renderers represented using neural networks, whether the scene is represented by neural networks or traditional scene files. These works are either for general scenes or specific scenes, which are differentiated by the need to retrain the network for new scenes.
10.1007/s11633-022-1400-x
state of the art on deep learning-enhanced rendering methods
photorealistic rendering of the virtual world is an important and classic problem in the field of computer graphics. with the development of gpu hardware and continuous research on computer graphics, representing and rendering virtual scenes has become easier and more efficient. however, there are still unresolved challenges in efficiently rendering global illumination effects. at the same time, machine learning and computer vision provide real-world image analysis and synthesis methods, which can be exploited by computer graphics rendering pipelines. deep learning-enhanced rendering combines techniques from deep learning and computer vision into the traditional graphics rendering pipeline to enhance existing rasterization or monte carlo integration renderers. this state-of-the-art report summarizes recent studies of deep learning-enhanced rendering in the computer graphics community. specifically, we focus on works of renderers represented using neural networks, whether the scene is represented by neural networks or traditional scene files. these works are either for general scenes or specific scenes, which are differentiated by the need to retrain the network for new scenes.
[ "photorealistic rendering", "the virtual world", "an important and classic problem", "the field", "computer graphics", "the development", "gpu hardware", "continuous research", "computer graphics", "virtual scenes", "unresolved challenges", "global illumination effects", "the same time", "machine learning", "computer vision", "real-world image analysis", "synthesis methods", "which", "computer graphics", "pipelines", "deep learning-enhanced rendering", "techniques", "deep learning and computer vision", "the traditional graphics", "pipeline", "existing rasterization", "monte carlo integration renderers", "the-art", "recent studies", "deep learning-enhanced rendering", "the computer graphics community", "we", "works", "renderers", "neural networks", "the scene", "neural networks", "traditional scene files", "these works", "general scenes", "specific scenes", "which", "the need", "the network", "new scenes" ]
Deep neural networks watermark via universal deep hiding and metric learning
[ "Zhicheng Ye", "Xinpeng Zhang", "Guorui Feng" ]
With the rising costs of model training, it is urgent to safeguard the intellectual property of deep neural networks. To achieve this, researchers have proposed various model watermarking techniques. Existing methods utilize visible trigger patterns, which are vulnerable to being detected by humans or detectors. Moreover, these approaches fail to establish active protection mechanisms that link the model with the user’s identity. In this study, we present an innovative imperceptible model watermarking approach that utilizes deep hiding to encode the user’s copyright verification information. This process superimposes a trigger pattern onto clean images, resulting in watermark trigger images. These watermark trigger images closely mimic the original images, achieving excellent stealthiness while enabling the retrieval of the user’s copyright verification information, thus definitively asserting ownership rights. Slight alterations made to the images to maintain stealthiness can weaken the triggering of the watermark pattern. We first leverage the triple loss in metric learning to tackle this challenge of training watermark samples. Using watermark trigger images as anchor samples and selecting appropriate positive and negative samples, we enhance the model’s capability to discern the watermark trigger. Experimental results on CIFAR-10, GTSRB, and Tiny-ImageNet confirm the defender’s capability to embed watermark successfully. The average watermark accuracy exceeds 90%, while the average performance loss is less than 0.05% points. It is also robust to existing watermark removal attacks and backdoor detection methods.
10.1007/s00521-024-09469-5
deep neural networks watermark via universal deep hiding and metric learning
with the rising costs of model training, it is urgent to safeguard the intellectual property of deep neural networks. to achieve this, researchers have proposed various model watermarking techniques. existing methods utilize visible trigger patterns, which are vulnerable to being detected by humans or detectors. moreover, these approaches fail to establish active protection mechanisms that link the model with the user’s identity. in this study, we present an innovative imperceptible model watermarking approach that utilizes deep hiding to encode the user’s copyright verification information. this process superimposes a trigger pattern onto clean images, resulting in watermark trigger images. these watermark trigger images closely mimic the original images, achieving excellent stealthiness while enabling the retrieval of the user’s copyright verification information, thus definitively asserting ownership rights. slight alterations made to the images to maintain stealthiness can weaken the triggering of the watermark pattern. we first leverage the triple loss in metric learning to tackle this challenge of training watermark samples. using watermark trigger images as anchor samples and selecting appropriate positive and negative samples, we enhance the model’s capability to discern the watermark trigger. experimental results on cifar-10, gtsrb, and tiny-imagenet confirm the defender’s capability to embed watermark successfully. the average watermark accuracy exceeds 90%, while the average performance loss is less than 0.05% points. it is also robust to existing watermark removal attacks and backdoor detection methods.
[ "the rising costs", "model training", "it", "the intellectual property", "deep neural networks", "this", "researchers", "various model watermarking techniques", "existing methods", "visible trigger patterns", "which", "humans", "detectors", "these approaches", "active protection mechanisms", "that", "the model", "the user’s identity", "this study", "we", "an innovative imperceptible model watermarking approach", "that", "deep hiding", "the user’s copyright verification information", "this process", "a trigger pattern", "clean images", "watermark trigger images", "these watermark trigger images", "the original images", "excellent stealthiness", "the retrieval", "the user’s copyright verification information", "ownership rights", "slight alterations", "the images", "stealthiness", "the triggering", "the watermark pattern", "we", "the triple loss", "metric learning", "this challenge", "training watermark samples", "watermark trigger images", "anchor samples", "appropriate positive and negative samples", "we", "the model’s capability", "the watermark trigger", "experimental results", "cifar-10", "gtsrb", "tiny-imagenet", "the defender’s capability", "the average watermark accuracy", "90%", "the average performance loss", "less than 0.05% points", "it", "existing watermark removal attacks", "backdoor detection methods", "first", "cifar-10", "90%", "less than", "0.05%" ]
Deep learning for higher-order nonparametric spatial autoregressive model
[ "Zitong Li", "Yunquan Song", "Ling Jian" ]
Deep learning technology has been successfully applied in more and more fields. In this paper, the application of deep neural networks in higher-order nonparametric spatial autoregressive models is studied. For spatial model, we propose the higher-order nonparametric spatial autoregressive neural network (HNSARNN) to fit the model. This method offers both good interpretability and prediction performance, and solves the black box problem in deep learning models to some degree. In various scenarios of spatial data distribution, the proposed method demonstrates superior performance compared to traditional approaches for handling nonparametric functions (such as the B-spline method). Simulation results show the effectiveness of the proposed model.
10.1007/s10489-024-05541-8
deep learning for higher-order nonparametric spatial autoregressive model
deep learning technology has been successfully applied in more and more fields. in this paper, the application of deep neural networks in higher-order nonparametric spatial autoregressive models is studied. for spatial model, we propose the higher-order nonparametric spatial autoregressive neural network (hnsarnn) to fit the model. this method offers both good interpretability and prediction performance, and solves the black box problem in deep learning models to some degree. in various scenarios of spatial data distribution, the proposed method demonstrates superior performance compared to traditional approaches for handling nonparametric functions (such as the b-spline method). simulation results show the effectiveness of the proposed model.
[ "deep learning technology", "more and more fields", "this paper", "the application", "deep neural networks", "higher-order nonparametric spatial autoregressive models", "spatial model", "we", "the higher-order nonparametric spatial autoregressive neural network", "hnsarnn", "the model", "this method", "both good interpretability", "prediction performance", "the black box problem", "deep learning models", "some degree", "various scenarios", "spatial data distribution", "the proposed method", "superior performance", "traditional approaches", "nonparametric functions", "the b-spline method", "simulation results", "the effectiveness", "the proposed model" ]
A meta-analysis on diabetic retinopathy and deep learning applications
[ "Abdüssamed Erciyas", "Necaattin Barişçi" ]
Diabetic retinopathy is one of the negative effects of diabetes on the eye. Early diagnosis of this disease, which can progress to blindness, is very important in this sense. There are many studies that detect and classify diabetic retinopathy, especially Machine Learning and Deep Learning methods. It is known that Deep Learning has been used more and more on disease detection and classification in recent years. There are three important reasons why deep learning is more successful in disease detection than methods such as image processing or machine learning. The first of these is that it achieves higher accuracies. Secondly, there is no need to develop an algorithm for each disease, that is, the algorithm learns the disease itself. Thirdly, faster results can be achieved with GPU (Graphics Processing Unit) support. For these reasons, in this study, articles written between 2015 and 2022 on the classification of diabetic retinopathy with deep learning were examined, and meta and statistical analysis was performed. Considering the work in the last two years the combined SEN value is 0.97 [95% CI, 0.92, 0.98], and the SPE value is 0.99 [95% CI, 0.98, 1.00]. The results obtained show how effective and necessary deep learning is in the early diagnosis of diabetic retinopathy.
10.1007/s11042-023-17784-7
a meta-analysis on diabetic retinopathy and deep learning applications
diabetic retinopathy is one of the negative effects of diabetes on the eye. early diagnosis of this disease, which can progress to blindness, is very important in this sense. there are many studies that detect and classify diabetic retinopathy, especially machine learning and deep learning methods. it is known that deep learning has been used more and more on disease detection and classification in recent years. there are three important reasons why deep learning is more successful in disease detection than methods such as image processing or machine learning. the first of these is that it achieves higher accuracies. secondly, there is no need to develop an algorithm for each disease, that is, the algorithm learns the disease itself. thirdly, faster results can be achieved with gpu (graphics processing unit) support. for these reasons, in this study, articles written between 2015 and 2022 on the classification of diabetic retinopathy with deep learning were examined, and meta and statistical analysis was performed. considering the work in the last two years the combined sen value is 0.97 [95% ci, 0.92, 0.98], and the spe value is 0.99 [95% ci, 0.98, 1.00]. the results obtained show how effective and necessary deep learning is in the early diagnosis of diabetic retinopathy.
[ "diabetic retinopathy", "the negative effects", "diabetes", "the eye", "early diagnosis", "this disease", "which", "blindness", "this sense", "many studies", "that", "diabetic retinopathy", "especially machine learning", "deep learning methods", "it", "deep learning", "disease detection", "classification", "recent years", "three important reasons", "deep learning", "disease detection", "methods", "image processing", "machine learning", "these", "it", "higher accuracies", "no need", "an algorithm", "each disease", "the algorithm", "the disease", "itself", "faster results", "gpu", "graphics processing unit", "support", "these reasons", "this study", "articles", "the classification", "diabetic retinopathy", "deep learning", "meta and statistical analysis", "the work", "the last two years", "the combined sen value", "[95% ci", "the spe value", "[95% ci", "the results", "how effective and necessary deep learning", "the early diagnosis", "diabetic retinopathy", "recent years", "three", "first", "secondly", "thirdly", "gpu", "between 2015 and 2022", "the last two years", "0.97", "95%", "0.92", "0.98", "0.99", "95%", "0.98", "1.00" ]
Naturalistic Scene Modelling: Deep Learning with Insights from Biology
[ "Kofi Appiah", "Zhiyong Jin", "Lei Shi", "Sze Chai Kwok" ]
Advances in machine learning coupled with the abundances of training data has facilitated the deep learning era, which has demonstrated its ability and effectiveness in solving complex detection and recognition problems. In general application areas with elements of machine learning have seen exponential growth with promising new and sophisticated solutions to complex learning problems. In computer vision, the challenge related to the detection of known objects in a scene is a thing of the past. With the tremendous increase in detection accuracies, some close to that of human detection, there are several areas still lagging in computer vision and machine learning where improvements may call for more architectural designs. In this paper, we propose a physiologically inspired model for scene understanding that encodes three key components: object location, size and category. Our aim is to develop an energy efficient artificial intelligent model for naturalistic scene understanding capable of deploying on a low power neuromorphic hardware. We have reviewed recent advances in deep learning architecture that have taken inspiration from human or primate learning systems and provided direct to future advancement on deep learning with inspiration from physiological experiments. Upon a review of areas that have benefitted from deep learning, we provide recommendations for enhancing those areas that might have stalled or grinded to a halt with little or no significant improvement.
10.1007/s11265-023-01894-4
naturalistic scene modelling: deep learning with insights from biology
advances in machine learning coupled with the abundances of training data has facilitated the deep learning era, which has demonstrated its ability and effectiveness in solving complex detection and recognition problems. in general application areas with elements of machine learning have seen exponential growth with promising new and sophisticated solutions to complex learning problems. in computer vision, the challenge related to the detection of known objects in a scene is a thing of the past. with the tremendous increase in detection accuracies, some close to that of human detection, there are several areas still lagging in computer vision and machine learning where improvements may call for more architectural designs. in this paper, we propose a physiologically inspired model for scene understanding that encodes three key components: object location, size and category. our aim is to develop an energy efficient artificial intelligent model for naturalistic scene understanding capable of deploying on a low power neuromorphic hardware. we have reviewed recent advances in deep learning architecture that have taken inspiration from human or primate learning systems and provided direct to future advancement on deep learning with inspiration from physiological experiments. upon a review of areas that have benefitted from deep learning, we provide recommendations for enhancing those areas that might have stalled or grinded to a halt with little or no significant improvement.
[ "advances", "machine learning", "the abundances", "training data", "the deep learning era", "which", "its ability", "effectiveness", "complex detection and recognition problems", "general application areas", "elements", "machine learning", "exponential growth", "new and sophisticated solutions", "complex learning problems", "computer vision", "the challenge", "the detection", "known objects", "a scene", "a thing", "the past", "the tremendous increase", "detection accuracies", "that", "human detection", "several areas", "computer vision", "machine learning", "improvements", "more architectural designs", "this paper", "we", "a physiologically inspired model", "three key components", "object location", "size", "category", "our aim", "an energy efficient artificial intelligent model", "naturalistic scene", "a low power neuromorphic hardware", "we", "recent advances", "deep learning architecture", "that", "inspiration", "learning systems", "future advancement", "deep learning", "inspiration", "physiological experiments", "a review", "areas", "that", "deep learning", "we", "recommendations", "those areas", "that", "a halt", "little or no significant improvement", "three" ]
Needle tracking in low-resolution ultrasound volumes using deep learning
[ "Sarah Grube", "Sarah Latus", "Finn Behrendt", "Oleksandra Riabova", "Maximilian Neidhardt", "Alexander Schlaefer" ]
PurposeClinical needle insertion into tissue, commonly assisted by 2D ultrasound imaging for real-time navigation, faces the challenge of precise needle and probe alignment to reduce out-of-plane movement. Recent studies investigate 3D ultrasound imaging together with deep learning to overcome this problem, focusing on acquiring high-resolution images to create optimal conditions for needle tip detection. However, high-resolution also requires a lot of time for image acquisition and processing, which limits the real-time capability. Therefore, we aim to maximize the US volume rate with the trade-off of low image resolution. We propose a deep learning approach to directly extract the 3D needle tip position from sparsely sampled US volumes.MethodsWe design an experimental setup with a robot inserting a needle into water and chicken liver tissue. In contrast to manual annotation, we assess the needle tip position from the known robot pose. During insertion, we acquire a large data set of low-resolution volumes using a 16 \(\times \) 16 element matrix transducer with a volume rate of 4 Hz. We compare the performance of our deep learning approach with conventional needle segmentation.ResultsOur experiments in water and liver show that deep learning outperforms the conventional approach while achieving sub-millimeter accuracy. We achieve mean position errors of 0.54 mm in water and 1.54 mm in liver for deep learning.ConclusionOur study underlines the strengths of deep learning to predict the 3D needle positions from low-resolution ultrasound volumes. This is an important milestone for real-time needle navigation, simplifying the alignment of needle and ultrasound probe and enabling a 3D motion analysis.
10.1007/s11548-024-03234-8
needle tracking in low-resolution ultrasound volumes using deep learning
purposeclinical needle insertion into tissue, commonly assisted by 2d ultrasound imaging for real-time navigation, faces the challenge of precise needle and probe alignment to reduce out-of-plane movement. recent studies investigate 3d ultrasound imaging together with deep learning to overcome this problem, focusing on acquiring high-resolution images to create optimal conditions for needle tip detection. however, high-resolution also requires a lot of time for image acquisition and processing, which limits the real-time capability. therefore, we aim to maximize the us volume rate with the trade-off of low image resolution. we propose a deep learning approach to directly extract the 3d needle tip position from sparsely sampled us volumes.methodswe design an experimental setup with a robot inserting a needle into water and chicken liver tissue. in contrast to manual annotation, we assess the needle tip position from the known robot pose. during insertion, we acquire a large data set of low-resolution volumes using a 16 × 16 element matrix transducer with a volume rate of 4 hz. we compare the performance of our deep learning approach with conventional needle segmentation.resultsour experiments in water and liver show that deep learning outperforms the conventional approach while achieving sub-millimeter accuracy. we achieve mean position errors of 0.54 mm in water and 1.54 mm in liver for deep learning.conclusionour study underlines the strengths of deep learning to predict the 3d needle positions from low-resolution ultrasound volumes. this is an important milestone for real-time needle navigation, simplifying the alignment of needle and ultrasound probe and enabling a 3d motion analysis.
[ "purposeclinical needle insertion", "tissue", "2d ultrasound imaging", "real-time navigation", "the challenge", "precise needle", "probe alignment", "plane", "recent studies", "3d ultrasound", "deep learning", "this problem", "high-resolution images", "optimal conditions", "needle tip detection", "high-resolution", "a lot", "time", "image acquisition", "processing", "which", "the real-time capability", "we", "the us volume rate", "the trade-off", "low image resolution", "we", "a deep learning approach", "the 3d needle tip position", "us volumes.methodswe design", "a robot", "a needle", "water and chicken liver tissue", "contrast", "manual annotation", "we", "the needle tip position", "the known robot", "insertion", "we", "a large data", "low-resolution volumes", "a 16 × 16 element matrix transducer", "a volume rate", "4 hz", "we", "the performance", "our deep learning approach", "conventional needle", "segmentation.resultsour experiments", "water", "liver", "deep learning", "the conventional approach", "sub-millimeter accuracy", "we", "mean position errors", "0.54 mm", "water", "1.54 mm", "liver", "deep learning.conclusionour study", "the strengths", "deep learning", "the 3d needle positions", "low-resolution ultrasound volumes", "this", "an important milestone", "real-time needle navigation", "the alignment", "needle and ultrasound probe", "a 3d motion analysis", "2d", "3d", "us", "3d", "16", "16", "4", "segmentation.resultsour", "0.54 mm", "1.54 mm", "3d", "3d" ]
Deep learning, textual sentiment, and financial market
[ "Fuwei Jiang", "Yumin Liu", "Lingchao Meng", "Huajing Zhang" ]
In this paper, we apply the BERT model, a cutting-edge deep learning model, to construct a novel textual sentiment index in the Chinese stock market. By introducing the stock market returns as sentiment labels, our BERT model effectively extracts textual sentiment-related information useful for asset pricing. We find that the BERT-based sentiment has much greater predictive power for stock market returns than the traditional dictionary method as well as the Baker–Wurgler investor sentiment index both in and out of sample. The BERT-based sentiment shows strong predictive power during economic downturns and can significantly predict future macroeconomic conditions. Overall, our BERT model offers a better measure of textual investor sentiment, highlighting the potentially significant value of deep learning, AI, and FinTech in financial market.
10.1007/s10799-024-00428-z
deep learning, textual sentiment, and financial market
in this paper, we apply the bert model, a cutting-edge deep learning model, to construct a novel textual sentiment index in the chinese stock market. by introducing the stock market returns as sentiment labels, our bert model effectively extracts textual sentiment-related information useful for asset pricing. we find that the bert-based sentiment has much greater predictive power for stock market returns than the traditional dictionary method as well as the baker–wurgler investor sentiment index both in and out of sample. the bert-based sentiment shows strong predictive power during economic downturns and can significantly predict future macroeconomic conditions. overall, our bert model offers a better measure of textual investor sentiment, highlighting the potentially significant value of deep learning, ai, and fintech in financial markets.
[ "this paper", "we", "the bert model", "a cut-edging deep learning model", "a novel textual sentiment index", "the chinese stock market", "the stock market returns", "sentiment labels", "our bert model", "textual sentiment-related information", "asset pricing", "we", "the bert-based sentiment", "much greater predictive power", "stock market returns", "the traditional dictionary method", "the baker", "wurgler investor sentiment index", "sample", "the bert-based sentiment", "strong predictive power", "economic downturns", "future macroeconomic conditions", "our bert model", "a better measure", "textual investor sentiment", "the potentially significant value", "deep learning", "financial market", "chinese" ]
Research progress on intelligent monitoring of tool condition based on deep learning
[ "Dahu Cao", "Wei Liu", "Jimin Ge", "Shishuai Du", "Wang Liu", "Zhaohui Deng", "Jia Chen" ]
Intelligent monitoring of tool condition is the key to ensuring workshop manufacturing efficiency, product machining quality, and accuracy, and is also an indispensable part of intelligent processing. In the face of complex and massive, multi-source heterogeneous, and low-value density machining process data, the monitoring method based on traditional machine learning struggles to meet the development needs of intelligent manufacturing. In contrast, with its powerful data processing and automatic feature extraction capabilities, deep learning shows broad application prospects in tool condition intelligent monitoring. Given this, this paper first systematically introduces the components of the tool condition intelligent monitoring framework based on deep learning. Subsequently, the basic principles, modeling process, and application status of the four most widely used deep learning models (deep belief network, stacked auto-encoder network, convolutional neural network, and recurrent neural network) in the field of tool condition monitoring are detailed, and the advantages and disadvantages of different models are comparatively discussed. Finally, the challenges and prospects of the current tool condition intelligent monitoring based on deep learning are summarized.
10.1007/s00170-024-14273-5
research progress on intelligent monitoring of tool condition based on deep learning
intelligent monitoring of tool condition is the key to ensuring workshop manufacturing efficiency, product machining quality, and accuracy, and is also an indispensable part of intelligent processing. in the face of complex and massive, multi-source heterogeneous, and low-value density machining process data, the monitoring method based on traditional machine learning struggles to meet the development needs of intelligent manufacturing. in contrast, with its powerful data processing and automatic feature extraction capabilities, deep learning shows broad application prospects in tool condition intelligent monitoring. given this, this paper first systematically introduces the components of the tool condition intelligent monitoring framework based on deep learning. subsequently, the basic principles, modeling process, and application status of the four most widely used deep learning models (deep belief network, stacked auto-encoder network, convolutional neural network, and recurrent neural network) in the field of tool condition monitoring are detailed, and the advantages and disadvantages of different models are comparatively discussed. finally, the challenges and prospects of the current tool condition intelligent monitoring based on deep learning are summarized.
[ "intelligent monitoring", "tool condition", "the key", "workshop manufacturing efficiency", "product machining quality", "accuracy", "an indispensable part", "intelligent processing", "the face", "low-value density machining process data", "the monitoring method", "traditional machine learning", "the development needs", "intelligent manufacturing", "contrast", "its powerful data processing", "automatic feature extraction capabilities", "deep learning", "broad application prospects", "tool condition intelligent monitoring", "this", "this paper", "the components", "tool condition intelligent monitoring framework", "deep learning", "the basic principles", "modeling process", "application status", "the four most widely used deep learning models", "deep belief network", "stacked auto-encoder network", "convolutional neural network", "recurrent neural network", "the field", "tool condition monitoring", "the advantages", "disadvantages", "different models", "the challenges", "prospects", "the current tool condition intelligent monitoring", "deep learning", "first", "four" ]
Prospective de novo drug design with deep interactome learning
[ "Kenneth Atz", "Leandro Cotos", "Clemens Isert", "Maria Håkansson", "Dorota Focht", "Mattis Hilleke", "David F. Nippa", "Michael Iff", "Jann Ledergerber", "Carl C. G. Schiebroek", "Valentina Romeo", "Jan A. Hiss", "Daniel Merk", "Petra Schneider", "Bernd Kuhn", "Uwe Grether", "Gisbert Schneider" ]
De novo drug design aims to generate molecules from scratch that possess specific chemical and pharmacological properties. We present a computational approach utilizing interactome-based deep learning for ligand- and structure-based generation of drug-like molecules. This method capitalizes on the unique strengths of both graph neural networks and chemical language models, offering an alternative to the need for application-specific reinforcement, transfer, or few-shot learning. It enables the “zero-shot” construction of compound libraries tailored to possess specific bioactivity, synthesizability, and structural novelty. In order to proactively evaluate the deep interactome learning framework for protein structure-based drug design, potential new ligands targeting the binding site of the human peroxisome proliferator-activated receptor (PPAR) subtype gamma are generated. The top-ranking designs are chemically synthesized and computationally, biophysically, and biochemically characterized. Potent PPAR partial agonists are identified, demonstrating favorable activity and the desired selectivity profiles for both nuclear receptors and off-target interactions. Crystal structure determination of the ligand-receptor complex confirms the anticipated binding mode. This successful outcome positively advocates interactome-based de novo design for application in bioorganic and medicinal chemistry, enabling the creation of innovative bioactive molecules.
10.1038/s41467-024-47613-w
prospective de novo drug design with deep interactome learning
de novo drug design aims to generate molecules from scratch that possess specific chemical and pharmacological properties. we present a computational approach utilizing interactome-based deep learning for ligand- and structure-based generation of drug-like molecules. this method capitalizes on the unique strengths of both graph neural networks and chemical language models, offering an alternative to the need for application-specific reinforcement, transfer, or few-shot learning. it enables the “zero-shot” construction of compound libraries tailored to possess specific bioactivity, synthesizability, and structural novelty. in order to proactively evaluate the deep interactome learning framework for protein structure-based drug design, potential new ligands targeting the binding site of the human peroxisome proliferator-activated receptor (ppar) subtype gamma are generated. the top-ranking designs are chemically synthesized and computationally, biophysically, and biochemically characterized. potent ppar partial agonists are identified, demonstrating favorable activity and the desired selectivity profiles for both nuclear receptors and off-target interactions. crystal structure determination of the ligand-receptor complex confirms the anticipated binding mode. this successful outcome positively advocates interactome-based de novo design for application in bioorganic and medicinal chemistry, enabling the creation of innovative bioactive molecules.
[ "de novo drug design", "molecules", "scratch", "that", "specific chemical and pharmacological properties", "we", "a computational approach", "interactome-based deep learning", "ligand-", "structure-based generation", "drug-like molecules", "this method capitalizes", "the unique strengths", "both graph neural networks", "chemical language models", "an alternative", "the need", "application-specific reinforcement", "transfer", "few-shot learning", "it", "the “zero-shot\" construction", "compound libraries", "specific bioactivity", "synthesizability", "structural novelty", "order", "the deep interactome learning framework", "protein", "structure-based drug design", "potential new ligands", "the binding site", "the human peroxisome proliferator-activated receptor", "ppar) subtype gamma", "the top-ranking designs", "potent ppar partial agonists", "favorable activity", "the desired selectivity profiles", "both nuclear receptors", "off-target interactions", "crystal structure determination", "the ligand-receptor complex", "the anticipated binding mode", "this successful outcome", "interactome-based de novo design", "application", "bioorganic and medicinal chemistry", "the creation", "innovative bioactive molecules", "zero", "de novo" ]
Handloomed fabrics recognition with deep learning
[ "Lipi B. Mahanta", "Deva Raj Mahanta", "Taibur Rahman", "Chandan Chakraborty" ]
Every nation treasures its handloom heritage, and in India, the handloom industry safeguards cultural traditions, sustains millions of artisans, and preserves ancient weaving techniques. To protect this legacy, a critical need arises to distinguish genuine handloom products, exemplified by the renowned “gamucha” from India’s northeast, from counterfeit powerloom imitations. Our study’s objective is to create an AI tool for effortless detection of authentic handloom items amidst a sea of fakes. Six deep learning architectures—VGG16, VGG19, ResNet50, InceptionV3, InceptionResNetV2, and DenseNet201—were trained on annotated image repositories of handloom and powerloom towels (17,484 images in total, with 14,020 for training and 3464 for validation). A novel deep learning model was also proposed. Despite respectable training accuracies, the pre-trained models exhibited lower performance on the validation dataset compared to our novel model. The proposed model outperformed pre-trained models, demonstrating superior validation accuracy, lower validation loss, computational efficiency, and adaptability to the specific classification problem. Notably, the existing models showed challenges in generalizing to unseen data and raised concerns about practical deployment due to computational expenses. This study pioneers a computer-assisted approach for automated differentiation between authentic handwoven “gamucha”s and counterfeit powerloom imitations—a groundbreaking recognition method. The methodology presented not only holds scalability potential and opportunities for accuracy improvement but also suggests broader applications across diverse fabric products.
10.1038/s41598-024-58750-z
handloomed fabrics recognition with deep learning
every nation treasures its handloom heritage, and in india, the handloom industry safeguards cultural traditions, sustains millions of artisans, and preserves ancient weaving techniques. to protect this legacy, a critical need arises to distinguish genuine handloom products, exemplified by the renowned “gamucha” from india’s northeast, from counterfeit powerloom imitations. our study’s objective is to create an ai tool for effortless detection of authentic handloom items amidst a sea of fakes. six deep learning architectures—vgg16, vgg19, resnet50, inceptionv3, inceptionresnetv2, and densenet201—were trained on annotated image repositories of handloom and powerloom towels (17,484 images in total, with 14,020 for training and 3464 for validation). a novel deep learning model was also proposed. despite respectable training accuracies, the pre-trained models exhibited lower performance on the validation dataset compared to our novel model. the proposed model outperformed pre-trained models, demonstrating superior validation accuracy, lower validation loss, computational efficiency, and adaptability to the specific classification problem. notably, the existing models showed challenges in generalizing to unseen data and raised concerns about practical deployment due to computational expenses. this study pioneers a computer-assisted approach for automated differentiation between authentic handwoven “gamucha”s and counterfeit powerloom imitations—a groundbreaking recognition method. the methodology presented not only holds scalability potential and opportunities for accuracy improvement but also suggests broader applications across diverse fabric products.
[ "every nation", "its handloom heritage", "india", "cultural traditions", "millions", "artisans", "ancient weaving techniques", "this legacy", "a critical need", "genuine handloom products", "the renowned “gamucha", "india’s northeast", "counterfeit powerloom imitations", "our study’s objective", "an ai tool", "effortless detection", "authentic handloom items", "a sea", "fakes", "six deep learning architectures", "vgg16", "resnet50", "inceptionv3", "densenet201", "annotated image repositories", "handloom", "powerloom towels", "17,484 images", "total", "training", "validation", "a novel deep learning model", "respectable training accuracies", "the pre-trained models", "lower performance", "the validation dataset", "our novel model", "the proposed model", "pre-trained models", "superior validation accuracy", "the specific classification problem", "the existing models", "challenges", "unseen data", "concerns", "practical deployment", "computational expenses", "this study", "a computer-assisted approach", "automated differentiation", "authentic handwoven “gamucha”s", "counterfeit powerloom imitations", "a groundbreaking recognition method", "the methodology", "scalability potential", "opportunities", "accuracy improvement", "broader applications", "diverse fabric products", "india", "millions", "india", "six", "resnet50", "inceptionv3", "inceptionresnetv2", "17,484", "14,020", "3464" ]
Deep fake detection using an optimal deep learning model with multi head attention-based feature extraction scheme
[ "R. Raja Sekar", "T. Dhiliphan Rajkumar", "Koteswara Rao Anne" ]
Face forgery, or deep fake, is a frequently used method to produce fake face images, network pornography, blackmail, and other illegal activities. Researchers developed several detection approaches based on the changing traces presented by deep forgery to limit the damage caused by deep fake methods. They obtain limited performance when evaluating cross-datum scenarios. This paper proposes an optimal deep learning approach with an attention-based feature learning scheme to perform deep fake detection (DFD) more accurately. The proposed system mainly comprises ‘5’ phases: face detection, preprocessing, texture feature extraction, spatial feature extraction, and classification. The face regions are initially detected from the collected data using the Viola–Jones (VJ) algorithm. Then, preprocessing is carried out, which resizes and normalizes the detected face regions to improve their quality for detection purposes. Next, texture features are learned using the Butterfly Optimized Gabor Filter to get information about the local features of objects in an image. Then, the spatial features are extracted using Residual Network-50 with Multi Head Attention (RN50MHA) to represent the data globally. Finally, classification is done using the Optimal Long Short-Term Memory (OLSTM), which classifies the data as fake or real, in which optimization of the network is done using the Enhanced Archimedes Optimization Algorithm. The proposed system is evaluated on four benchmark datasets such as FaceForensics++ (FF++), Deepfake Detection Challenge, Celebrity Deepfake (CDF), and Wild Deepfake. The experimental results show that DFD using OLSTM and RN50MHA achieves a higher inter- and intra-dataset detection rate than existing state-of-the-art methods.
10.1007/s00371-024-03567-0
deep fake detection using an optimal deep learning model with multi head attention-based feature extraction scheme
face forgery, or deep fake, is a frequently used method to produce fake face images, network pornography, blackmail, and other illegal activities. researchers developed several detection approaches based on the changing traces presented by deep forgery to limit the damage caused by deep fake methods. they obtain limited performance when evaluating cross-datum scenarios. this paper proposes an optimal deep learning approach with an attention-based feature learning scheme to perform deep fake detection (dfd) more accurately. the proposed system mainly comprises ‘5’ phases: face detection, preprocessing, texture feature extraction, spatial feature extraction, and classification. the face regions are initially detected from the collected data using the viola–jones (vj) algorithm. then, preprocessing is carried out, which resizes and normalizes the detected face regions to improve their quality for detection purposes. next, texture features are learned using the butterfly optimized gabor filter to get information about the local features of objects in an image. then, the spatial features are extracted using residual network-50 with multi head attention (rn50mha) to represent the data globally. finally, classification is done using the optimal long short-term memory (olstm), which classifies the data as fake or real, in which optimization of the network is done using the enhanced archimedes optimization algorithm. the proposed system is evaluated on four benchmark datasets such as faceforensics++ (ff++), deepfake detection challenge, celebrity deepfake (cdf), and wild deepfake. the experimental results show that dfd using olstm and rn50mha achieves a higher inter- and intra-dataset detection rate than existing state-of-the-art methods.
[ "face forgery", "a frequently used method", "fake face images", "researchers", "several detection approaches", "the changing traces", "deep forgery", "the damage", "deep fake methods", "they", "limited performance", "cross-datum scenarios", "this paper", "an optimal deep learning approach", "an attention-based feature learning scheme", "the proposed system", "5’ phases", "face", "detection", "preprocessing", "texture feature extraction", "spatial feature extraction", "classification", "the face regions", "the collected data", "the viola", "jones", "vj", "algorithm", "preprocessing", "which", "the detected face regions", "their quality", "detection purposes", "texture features", "the butterfly optimized gabor filter", "information", "the local features", "objects", "an image", "the spatial features", "residual network-50", "multi head attention", "rn50mha", "the data", "classification", "the optimal long short-term memory", "olstm", "which", "the data", "which optimization", "network", "enhanced archimedes optimization algorithm", "the proposed system", "four benchmark datasets", "face forensics", "+ (ff + +), deepfake detection challenge", "celebrity deepfake", "cdf", "wild deepfake", "the experimental results", "olstm", "rn50mha", "a higher inter", "intra-dataset detection rate", "the-art", "5", "network-50", "four", "olstm" ]
Imaging-based deep learning in kidney diseases: recent progress and future prospects
[ "Meng Zhang", "Zheng Ye", "Enyu Yuan", "Xinyang Lv", "Yiteng Zhang", "Yuqi Tan", "Chunchao Xia", "Jing Tang", "Jin Huang", "Zhenlin Li" ]
Kidney diseases result from various causes, which can generally be divided into neoplastic and non-neoplastic diseases. Deep learning based on medical imaging is an established methodology for further data mining and an evolving field of expertise, which provides the possibility for precise management of kidney diseases. Recently, imaging-based deep learning has been widely applied to many clinical scenarios of kidney diseases including organ segmentation, lesion detection, differential diagnosis, surgical planning, and prognosis prediction, which can provide support for disease diagnosis and management. In this review, we will introduce the basic methodology of imaging-based deep learning and its recent clinical applications in neoplastic and non-neoplastic kidney diseases. Additionally, we further discuss its current challenges and future prospects and conclude that achieving data balance, addressing heterogeneity, and managing data size remain challenges for imaging-based deep learning. Meanwhile, the interpretability of algorithms, ethical risks, and barriers of bias assessment are also issues that require consideration in future development. We hope to provide urologists, nephrologists, and radiologists with clear ideas about imaging-based deep learning and reveal its great potential in clinical practice. Critical relevance statement: The wide clinical applications of imaging-based deep learning in kidney diseases can help doctors to diagnose, treat, and manage patients with neoplastic or non-neoplastic renal diseases. Key points: • Imaging-based deep learning is widely applied to neoplastic and non-neoplastic renal diseases. • Imaging-based deep learning improves the accuracy of the delineation, diagnosis, and evaluation of kidney diseases. • The small dataset, various lesion sizes, and so on are still challenges for deep learning.
10.1186/s13244-024-01636-5
imaging-based deep learning in kidney diseases: recent progress and future prospects
kidney diseases result from various causes, which can generally be divided into neoplastic and non-neoplastic diseases. deep learning based on medical imaging is an established methodology for further data mining and an evolving field of expertise, which provides the possibility for precise management of kidney diseases. recently, imaging-based deep learning has been widely applied to many clinical scenarios of kidney diseases including organ segmentation, lesion detection, differential diagnosis, surgical planning, and prognosis prediction, which can provide support for disease diagnosis and management. in this review, we will introduce the basic methodology of imaging-based deep learning and its recent clinical applications in neoplastic and non-neoplastic kidney diseases. additionally, we further discuss its current challenges and future prospects and conclude that achieving data balance, addressing heterogeneity, and managing data size remain challenges for imaging-based deep learning. meanwhile, the interpretability of algorithms, ethical risks, and barriers of bias assessment are also issues that require consideration in future development. we hope to provide urologists, nephrologists, and radiologists with clear ideas about imaging-based deep learning and reveal its great potential in clinical practice. critical relevance statement: the wide clinical applications of imaging-based deep learning in kidney diseases can help doctors to diagnose, treat, and manage patients with neoplastic or non-neoplastic renal diseases. key points: • imaging-based deep learning is widely applied to neoplastic and non-neoplastic renal diseases. • imaging-based deep learning improves the accuracy of the delineation, diagnosis, and evaluation of kidney diseases. • the small dataset, various lesion sizes, and so on are still challenges for deep learning.
[ "kidney diseases", "various causes", "which", "neoplastic and non-neoplastic diseases", "deep learning", "medical imaging", "an established methodology", "further data mining", "an evolving field", "expertise", "which", "the possibility", "precise management", "kidney diseases", "imaging-based deep learning", "many clinical scenarios", "kidney diseases", "organ segmentation", "lesion detection", "differential diagnosis", "surgical planning", "prognosis prediction", "which", "support", "disease diagnosis", "management", "this review", "we", "the basic methodology", "imaging-based deep learning", "its recent clinical applications", "neoplastic and non-neoplastic kidney diseases", "we", "its current challenges", "future prospects", "data balance", "heterogeneity", "managing data size", "challenges", "imaging-based deep learning", "the interpretability", "algorithms", "ethical risks", "barriers", "bias assessment", "issues", "that", "consideration", "future development", "we", "urologists", "nephrologists", "radiologists", "clear ideas", "imaging-based deep learning", "its great potential", "clinical practice.critical relevance statement", "the wide clinical applications", "imaging-based deep learning", "kidney diseases", "doctors", "patients", "neoplastic or non-neoplastic renal diseases.key", "imaging-based deep learning", "neoplastic and non-neoplastic renal diseases.• imaging-based deep learning", "the accuracy", "the delineation", "diagnosis", "evaluation", "kidney", "the small dataset", ", various lesion sizes", "challenges", "deep learning.graphical abstract" ]
Genotype imputation methods for whole and complex genomic regions utilizing deep learning technology
[ "Tatsuhiko Naito", "Yukinori Okada" ]
The imputation of unmeasured genotypes is essential in human genetic research, particularly in enhancing the power of genome-wide association studies and conducting subsequent fine-mapping. Recently, several deep learning-based genotype imputation methods for genome-wide variants with the capability of learning complex linkage disequilibrium patterns have been developed. Additionally, deep learning-based imputation has been applied to a distinct genomic region known as the major histocompatibility complex, referred to as HLA imputation. Despite their various advantages, the current deep learning-based genotype imputation methods do have certain limitations and have not yet become standard. These limitations include the modest accuracy improvement over statistical and conventional machine learning-based methods. However, their benefits include other aspects, such as their “reference-free” nature, which ensures complete privacy protection, and their higher computational efficiency. Furthermore, the continuing evolution of deep learning technologies is expected to contribute to further improvements in prediction accuracy and usability in the future.
10.1038/s10038-023-01213-6
genotype imputation methods for whole and complex genomic regions utilizing deep learning technology
the imputation of unmeasured genotypes is essential in human genetic research, particularly in enhancing the power of genome-wide association studies and conducting subsequent fine-mapping. recently, several deep learning-based genotype imputation methods for genome-wide variants with the capability of learning complex linkage disequilibrium patterns have been developed. additionally, deep learning-based imputation has been applied to a distinct genomic region known as the major histocompatibility complex, referred to as hla imputation. despite their various advantages, the current deep learning-based genotype imputation methods do have certain limitations and have not yet become standard. these limitations include the modest accuracy improvement over statistical and conventional machine learning-based methods. however, their benefits include other aspects, such as their “reference-free” nature, which ensures complete privacy protection, and their higher computational efficiency. furthermore, the continuing evolution of deep learning technologies is expected to contribute to further improvements in prediction accuracy and usability in the future.
[ "the imputation", "unmeasured genotypes", "human genetic research", "the power", "genome-wide association studies", "subsequent fine-mapping", "several deep learning-based genotype imputation methods", "genome-wide variants", "the capability", "complex linkage disequilibrium patterns", "deep learning-based imputation", "a distinct genomic region", "the major histocompatibility complex", "hla imputation", "their various advantages", "the current deep learning-based genotype imputation methods", "certain limitations", "these limitations", "the modest accuracy improvement", "statistical and conventional machine learning-based methods", "their benefits", "other aspects", "their “reference-free” nature", "which", "complete privacy protection", "their higher computational efficiency", "the continuing evolution", "deep learning technologies", "further improvements", "prediction accuracy", "usability", "the future" ]
Deep learning-assisted medical image compression challenges and opportunities: systematic review
[ "Nour El Houda Bourai", "Hayet Farida Merouani", "Akila Djebbar" ]
Over the preceding decade, there has been a discernible surge in the prominence of artificial intelligence, marked by the development of various methodologies, among which deep learning emerges as a particularly auspicious technique. The captivating attribute of deep learning, characterised by its capacity to glean intricate feature representations from data, has served as a catalyst for pioneering approaches and methodologies spanning a multitude of domains. In the face of the burgeoning exponential growth in digital medical image data, the exigency for adept image compression methodologies has become increasingly pronounced. These methodologies are designed to preserve bandwidth and storage resources, thereby ensuring the seamless and efficient transmission of data within medical applications. The critical nature of medical image compression accentuates the imperative to confront the challenges precipitated by the escalating deluge of medical image data. This review paper undertakes a comprehensive examination of medical image compression, with a predominant focus on sophisticated, research-driven deep learning techniques. It delves into a spectrum of approaches, encompassing the amalgamation of deep learning with conventional compression algorithms and the application of deep learning to enhance compression quality. Additionally, the review endeavours to explicate these fundamental concepts, elucidating their inherent characteristics, merits, and limitations.
10.1007/s00521-024-09660-8
deep learning-assisted medical image compression challenges and opportunities: systematic review
over the preceding decade, there has been a discernible surge in the prominence of artificial intelligence, marked by the development of various methodologies, among which deep learning emerges as a particularly auspicious technique. the captivating attribute of deep learning, characterised by its capacity to glean intricate feature representations from data, has served as a catalyst for pioneering approaches and methodologies spanning a multitude of domains. in the face of the burgeoning exponential growth in digital medical image data, the exigency for adept image compression methodologies has become increasingly pronounced. these methodologies are designed to preserve bandwidth and storage resources, thereby ensuring the seamless and efficient transmission of data within medical applications. the critical nature of medical image compression accentuates the imperative to confront the challenges precipitated by the escalating deluge of medical image data. this review paper undertakes a comprehensive examination of medical image compression, with a predominant focus on sophisticated, research-driven deep learning techniques. it delves into a spectrum of approaches, encompassing the amalgamation of deep learning with conventional compression algorithms and the application of deep learning to enhance compression quality. additionally, the review endeavours to explicate these fundamental concepts, elucidating their inherent characteristics, merits, and limitations.
[ "the preceding decade", "a discernible surge", "the prominence", "artificial intelligence", "the development", "various methodologies", "which", "deep learning", "a particularly auspicious technique", "the captivating attribute", "deep learning", "its capacity", "intricate feature representations", "data", "a catalyst", "approaches", "methodologies", "a multitude", "domains", "the face", "the burgeoning exponential growth", "digital medical image data", "the exigency", "adept image compression methodologies", "these methodologies", "bandwidth and storage resources", "the seamless and efficient transmission", "data", "medical applications", "the critical nature", "medical image compression", "the imperative", "the challenges", "the escalating deluge", "medical image data", "this review paper", "a comprehensive examination", "medical image compression", "a predominant focus", "sophisticated, research-driven deep learning techniques", "it", "a spectrum", "approaches", "the amalgamation", "deep learning", "conventional compression algorithms", "the application", "deep learning", "compression quality", "the review", "these fundamental concepts", "their inherent characteristics", "merits", "limitations", "the preceding decade" ]
Open and reusable deep learning for pathology with WSInfer and QuPath
[ "Jakub R. Kaczmarzyk", "Alan O’Callaghan", "Fiona Inglis", "Swarad Gat", "Tahsin Kurc", "Rajarsi Gupta", "Erich Bremer", "Peter Bankhead", "Joel H. Saltz" ]
Digital pathology has seen a proliferation of deep learning models in recent years, but many models are not readily reusable. To address this challenge, we developed WSInfer: an open-source software ecosystem designed to streamline the sharing and reuse of deep learning models for digital pathology. The increased access to trained models can augment research on the diagnostic, prognostic, and predictive capabilities of digital pathology.
10.1038/s41698-024-00499-9
open and reusable deep learning for pathology with wsinfer and qupath
digital pathology has seen a proliferation of deep learning models in recent years, but many models are not readily reusable. to address this challenge, we developed wsinfer: an open-source software ecosystem designed to streamline the sharing and reuse of deep learning models for digital pathology. the increased access to trained models can augment research on the diagnostic, prognostic, and predictive capabilities of digital pathology.
[ "digital pathology", "a proliferation", "deep learning models", "recent years", "many models", "this challenge", "we", "an open-source software ecosystem", "the sharing", "reuse", "deep learning models", "digital pathology", "the increased access", "trained models", "research", "predictive capabilities", "digital pathology", "digital pathology", "recent years" ]
Adversarial robustness of deep reinforcement learning-based intrusion detection
[ "Mohamed Amine Merzouk", "Christopher Neal", "Joséphine Delas", "Reda Yaich", "Nora Boulahia-Cuppens", "Frédéric Cuppens" ]
Machine learning techniques, including Deep Reinforcement Learning (DRL), enhance intrusion detection systems by adapting to new threats. However, DRL’s reliance on vulnerable deep neural networks leads to susceptibility to adversarial examples: perturbations designed to evade detection. While adversarial examples are well-studied in deep learning, their impact on DRL-based intrusion detection remains underexplored, particularly in critical domains. This article conducts a thorough analysis of DRL-based intrusion detection’s vulnerability to adversarial examples. It systematically evaluates key hyperparameters, such as the DRL algorithm, neural network depth, and width, which impact agents’ robustness. The study extends to black-box attacks, demonstrating adversarial transferability across DRL algorithms. Findings emphasize neural network architecture’s critical role in DRL agent robustness, addressing underfitting and overfitting challenges. Practical implications include insights for optimizing DRL-based intrusion detection agents to enhance performance and resilience. Experiments encompass multiple DRL algorithms tested on three datasets: NSL-KDD, UNSW-NB15, and CICIoV2024, against gradient-based adversarial attacks, with publicly available implementation code.
10.1007/s10207-024-00903-2
adversarial robustness of deep reinforcement learning-based intrusion detection
machine learning techniques, including deep reinforcement learning (drl), enhance intrusion detection systems by adapting to new threats. however, drl’s reliance on vulnerable deep neural networks leads to susceptibility to adversarial examples: perturbations designed to evade detection. while adversarial examples are well-studied in deep learning, their impact on drl-based intrusion detection remains underexplored, particularly in critical domains. this article conducts a thorough analysis of drl-based intrusion detection’s vulnerability to adversarial examples. it systematically evaluates key hyperparameters, such as the drl algorithm, neural network depth, and width, which impact agents’ robustness. the study extends to black-box attacks, demonstrating adversarial transferability across drl algorithms. findings emphasize neural network architecture’s critical role in drl agent robustness, addressing underfitting and overfitting challenges. practical implications include insights for optimizing drl-based intrusion detection agents to enhance performance and resilience. experiments encompass multiple drl algorithms tested on three datasets: nsl-kdd, unsw-nb15, and ciciov2024, against gradient-based adversarial attacks, with publicly available implementation code.
[ "machine learning techniques", "deep reinforcement learning", "drl", "intrusion detection systems", "new threats", "drl’s reliance", "vulnerable deep neural networks", "susceptibility", "adversarial examples-perturbations", "detection", "adversarial examples", "deep learning", "their impact", "drl-based intrusion detection", "critical domains", "this article", "a thorough analysis", "drl-based intrusion detection’s vulnerability", "adversarial examples", "it", "key hyperparameters", "drl algorithms", "neural network depth", "width", "impacting agents’ robustness", "the study", "black-box attacks", "adversarial transferability", "drl algorithms", "findings", "neural network architecture’s critical role", "drl agent robustness", "underfitting and overfitting challenges", "practical implications", "insights", "drl-based intrusion detection agents", "performance", "resilience", "experiments", "multiple drl algorithms", "three datasets", "nsl-kdd, unsw-nb15", "ciciov2024", "gradient-based adversarial attacks", "publicly available implementation code", "drl’s", "three", "ciciov2024" ]
Yoga with Deep Learning: Linking Mind and Machine
[ "Sakshi", "Sandeep Saini" ]
Health and fitness play a crucial role in every aspect of an individual’s life. In an era where well-being is an absolute target, Yoga is one of the most straightforward ways to remain healthy and achieve mental focus, breath control and body balance without undergoing useless expenditures. The motive of this study is to combine the knowledge of Yoga with the continuously developing power of deep learning. In this study, we observe yoga pose estimation along with breath control and focus instructions for 10 different poses with the help of deep learning and Mediapipe. The proposed work predicts the pose and instructions through model matching, with landmark extraction performed by obtaining keypoints on the pose from the initialized Mediapipe pose model. Here, a hybrid model is created using ANN with existing state-of-the-art models to achieve the pose estimation. The maximum accuracy achieved during the model implementation was around 93.43%. Later, when the model was deployed on Raspberry Pi-4 hardware, the maximum accuracy achieved was around 90.9%. The user also gets guidance on breath and focus control during a particular pose. Integration with deep learning and data science is seen to have created a whole new personalized Yoga experience with on-demand posture monitoring, customized feedback, and global access while creating future scopes for risk analysis and injury prevention.
10.1007/s42979-024-02784-7
yoga with deep learning: linking mind and machine
health and fitness play a crucial role in every aspect of an individual’s life. in an era where well-being is an absolute target, yoga is one of the most straightforward ways to remain healthy and achieve mental focus, breath control and body balance without undergoing useless expenditures. the motive of this study is to combine the knowledge of yoga with the continuously developing power of deep learning. in this study, we observe yoga pose estimation along with breath control and focus instructions for 10 different poses with the help of deep learning and mediapipe. the proposed work predicts pose and instructions through model matching with landmark extraction done by obtaining keypoints on the pose by initializing the mediapipe pose model. here, a hybrid model is created using ann with existing state-of-the-art models to achieve the pose estimation. the maximum accuracy achieved during the model implementation was around 93.43%. later, when the model was deployed on raspberry pi-4 hardware, the maximum accuracy achieved was around 90.9%. the user also gets guidance on breath and focus control during a particular pose. integration with deep learning and data science is seen to have created a whole new personalized yoga experience with on-demand posture monitoring, customized feedback, and global access while creating future scopes for risk analysis and injury prevention.
[ "health", "fitness", "a crucial role", "every aspect", "an individual’s life", "an era", "well-being", "an absolute target", "yoga", "the most straightforward ways", "mental focus", "breath control", "body balance", "useless expenditures", "the motive", "this study", "the knowledge", "yoga", "the continuously developing power", "deep learning", "this study", "we", "yoga pose estimation", "breath control", "instructions", "10 different poses", "the help", "deep learning", "mediapipe", "the proposed work predicts", "instructions", "model", "landmark extraction", "keypoints", "the pose", "the mediapipe pose model", "a hybrid model", "ann", "the-art", "the pose estimation", "the maximum accuracy", "the model implementation", "93.43%", "the model", "raspberry pi-4 hardware", "the maximum accuracy", "around 90.9%", "the user", "guidance", "breath", "control", "a particular pose", "integration", "deep learning and data science", "a whole new personalized yoga experience", "demand", "customized feedback", "global access", "future scopes", "risk analysis", "injury prevention", "one", "10", "around 93.43%", "around 90.9%" ]
A review of deep learning and machine learning techniques for hydrological inflow forecasting
[ "Sarmad Dashti Latif", "Ali Najah Ahmed" ]
Conventional machine learning models have been widely used for reservoir inflow and rainfall prediction. Nowadays, researchers focus on a new computing architecture in the area of AI, namely, deep learning for hydrological forecasting parameters. This review paper tends to broadcast more of the intriguing interest in reservoir inflow prediction utilizing deep learning and machine learning algorithms. The AI models utilized for different hydrology sectors, as well as the most prevalent machine learning techniques, will be explored in this thorough study, which divides AI techniques into two primary categories: deep learning and machine learning. In this study, we look at the long short-term memory deep learning method as well as three traditional machine learning algorithms: support vector machine, random forest, and boosted regression tree. Under each part, a summary of the findings is provided. For convenience of reference, some of the benefits and drawbacks discovered through literature reviews have been listed. Finally, future recommendations and overall conclusions based on research findings are given. This review focuses on papers from high-impact factor periodicals published over a 4-year period beginning in 2018.
10.1007/s10668-023-03131-1
a review of deep learning and machine learning techniques for hydrological inflow forecasting
conventional machine learning models have been widely used for reservoir inflow and rainfall prediction. nowadays, researchers focus on a new computing architecture in the area of ai, namely, deep learning for hydrological forecasting parameters. this review paper tends to broadcast more of the intriguing interest in reservoir inflow prediction utilizing deep learning and machine learning algorithms. the ai models utilized for different hydrology sectors, as well as the most prevalent machine learning techniques, will be explored in this thorough study, which divides ai techniques into two primary categories: deep learning and machine learning. in this study, we look at the long short-term memory deep learning method as well as three traditional machine learning algorithms: support vector machine, random forest, and boosted regression tree. under each part, a summary of the findings is provided. for convenience of reference, some of the benefits and drawbacks discovered through literature reviews have been listed. finally, future recommendations and overall conclusions based on research findings are given. this review focuses on papers from high-impact factor periodicals published over a 4 years period beginning in 2018 onwards.
[ "conventional machine learning models", "reservoir inflow", "rainfall prediction", "researchers", "a new computing architecture", "the area", "hydrological forecasting parameters", "this review paper", "the intriguing interest", "reservoir inflow prediction", "deep learning and machine learning algorithms", "the ai models", "different hydrology sectors", "the most prevalent machine learning techniques", "this thorough study", "which", "techniques", "two primary categories", "deep learning", "machine learning", "this study", "we", "the long short-term memory deep learning method", "three traditional machine learning algorithms", "vector machine", "random forest", "regression tree", "each part", "a summary", "the findings", "convenience", "reference", "some", "the benefits", "drawbacks", "literature reviews", "future recommendations", "overall conclusions", "research findings", "this review", "papers", "high-impact factor periodicals", "a 4 years period", "two", "three", "4 years", "2018" ]
Deep Metric Learning: Loss Functions Comparison
[ "R. L. Vasilev", "A. G. D’yakonov" ]
An overview of deep metric learning methods is presented. Although they have appeared in recent years, these methods were compared only with their predecessors, with neural networks of outdated architectures used for representation learning (representations on which the metric is calculated). The described methods were compared on different datasets from several domains, using pre-trained neural networks comparable in performance to SotA (state of the art): ConvNeXt for images and DistilBERT for texts. Labeled datasets were used, divided into two parts (train and test) so that the classes did not overlap (i.e., for each class its objects are fully in train or fully in test). Such a large-scale honest comparison was made for the first time and led to unexpected conclusions, viz. some “old” methods, for example, Tuplet Margin Loss, are superior in performance to their modern modifications and methods proposed in very recent works.
10.1134/S1064562423701053
deep metric learning: loss functions comparison
an overview of deep metric learning methods is presented. although they have appeared in recent years, these methods were compared only with their predecessors, with neural networks of outdated architectures used for representation learning (representations on which the metric is calculated). the described methods were compared on different datasets from several domains, using pre-trained neural networks comparable in performance to sota (state of the art): convnext for images and distilbert for texts. labeled datasets were used, divided into two parts (train and test) so that the classes did not overlap (i.e., for each class its objects are fully in train or fully in test). such a large-scale honest comparison was made for the first time and led to unexpected conclusions, viz. some “old” methods, for example, tuplet margin loss, are superior in performance to their modern modifications and methods proposed in very recent works.
[ "an overview", "deep metric learning methods", "they", "recent years", "these methods", "their predecessors", "neural networks", "outdated architectures", "representation learning", "representations", "which", "the metric", "the described methods", "different datasets", "several domains", "pre-trained neural networks", "performance", "sota", "(state", "the art", "convnext", "images", "texts", "labeled datasets", "two parts", "train", "test", "the classes", "each class", "its objects", "train", "test", "such a large-scale honest comparison", "the first time", "unexpected conclusions", "some “old” methods", "example", "margin loss", "performance", "their modern modifications", "methods", "very recent works", "recent years", "two", "first" ]
Comparing Machine Learning and Deep Learning Techniques for Text Analytics: Detecting the Severity of Hate Comments Online
[ "Alaa Marshan", "Farah Nasreen Mohamed Nizar", "Athina Ioannou", "Konstantina Spanaki" ]
Social media platforms have become an increasingly popular tool for individuals to share their thoughts and opinions with other people. However, very often people tend to misuse social media by posting abusive comments. Abusive and harassing behaviours can have adverse effects on people's lives. This study takes a novel approach to combat harassment in online platforms by detecting the severity of abusive comments, which has not been investigated before. The study compares the performance of machine learning models such as Naïve Bayes, Random Forest, and Support Vector Machine, with deep learning models such as Convolutional Neural Network (CNN) and Bi-directional Long Short-Term Memory (Bi-LSTM). Moreover, in this work we investigate the effect of text pre-processing on the performance of the machine and deep learning models; the feature set for the abusive comments was made using unigrams and bigrams for the machine learning models and word embeddings for the deep learning models. The comparison of the models’ performances showed that the Random Forest with bigrams achieved the best overall performance with an accuracy of 0.94, a precision of 0.91, a recall of 0.94, and an F1 score of 0.92. The study develops an efficient model to detect severity of abusive language in online platforms, offering important implications both to theory and practice.
10.1007/s10796-023-10446-x
comparing machine learning and deep learning techniques for text analytics: detecting the severity of hate comments online
social media platforms have become an increasingly popular tool for individuals to share their thoughts and opinions with other people. however, very often people tend to misuse social media posting abusive comments. abusive and harassing behaviours can have adverse effects on people's lives. this study takes a novel approach to combat harassment in online platforms by detecting the severity of abusive comments, that has not been investigated before. the study compares the performance of machine learning models such as naïve bayes, random forest, and support vector machine, with deep learning models such as convolutional neural network (cnn) and bi-directional long short-term memory (bi-lstm). moreover, in this work we investigate the effect of text pre-processing on the performance of the machine and deep learning models, the feature set for the abusive comments was made using unigrams and bigrams for the machine learning models and word embeddings for the deep learning models. the comparison of the models’ performances showed that the random forest with bigrams achieved the best overall performance with an accuracy of (0.94), a precision of (0.91), a recall of (0.94), and an f1 score of (0.92). the study develops an efficient model to detect severity of abusive language in online platforms, offering important implications both to theory and practice.
[ "social media platforms", "an increasingly popular tool", "individuals", "their thoughts", "opinions", "other people", "people", "social media", "abusive comments", "abusive and harassing behaviours", "adverse effects", "people's lives", "this study", "a novel approach", "combat harassment", "online platforms", "the severity", "abusive comments", "that", "the study", "the performance", "machine learning models", "naïve bayes", "random forest", "vector machine", "deep learning models", "convolutional neural network", "cnn", "bi-directional long short-term memory", "bi", "-", "lstm", "this work", "we", "the effect", "text pre", "the performance", "the machine", "deep learning models", "the feature", "the abusive comments", "unigrams", "bigrams", "the machine learning models", "word embeddings", "the deep learning models", "the comparison", "the models’ performances", "the random forest", "bigrams", "the best overall performance", "an accuracy", "the study", "an efficient model", "severity", "abusive language", "online platforms", "important implications", "theory", "practice", "cnn", "0.94", "0.91", "0.94", "0.92" ]
Interpretable predictions of chaotic dynamical systems using dynamical system deep learning
[ "Mingyu Wang", "Jianping Li" ]
Making accurate predictions of chaotic dynamical systems is an essential but challenging task with many practical applications in various disciplines. However, the current dynamical methods can only provide short-term precise predictions, while prevailing deep learning techniques with better performances always suffer from model complexity and interpretability. Here, we propose a new dynamic-based deep learning method, namely the dynamical system deep learning (DSDL), to achieve interpretable long-term precise predictions by the combination of nonlinear dynamics theory and deep learning methods. As validated by four chaotic dynamical systems with different complexities, the DSDL framework significantly outperforms other dynamical and deep learning methods. Furthermore, the DSDL also reduces the model complexity and realizes the model transparency to make it more interpretable. We firmly believe that the DSDL framework is a promising and effective method for comprehending and predicting chaotic dynamical systems.
10.1038/s41598-024-53169-y
interpretable predictions of chaotic dynamical systems using dynamical system deep learning
making accurate predictions of chaotic dynamical systems is an essential but challenging task with many practical applications in various disciplines. however, the current dynamical methods can only provide short-term precise predictions, while prevailing deep learning techniques with better performances always suffer from model complexity and interpretability. here, we propose a new dynamic-based deep learning method, namely the dynamical system deep learning (dsdl), to achieve interpretable long-term precise predictions by the combination of nonlinear dynamics theory and deep learning methods. as validated by four chaotic dynamical systems with different complexities, the dsdl framework significantly outperforms other dynamical and deep learning methods. furthermore, the dsdl also reduces the model complexity and realizes the model transparency to make it more interpretable. we firmly believe that the dsdl framework is a promising and effective method for comprehending and predicting chaotic dynamical systems.
[ "accurate predictions", "chaotic dynamical systems", "an essential but challenging task", "many practical applications", "various disciplines", "the current dynamical methods", "short-term precise predictions", "deep learning techniques", "better performances", "model complexity", "interpretability", "we", "a new dynamic-based deep learning method", "namely the dynamical system", "deep learning", "dsdl", "interpretable long-term precise predictions", "the combination", "nonlinear dynamics theory", "deep learning methods", "four chaotic dynamical systems", "different complexities", "the dsdl framework", "other dynamical and deep learning methods", "the dsdl", "the model complexity", "the model transparency", "it", "we", "the dsdl framework", "a promising and effective method", "chaotic dynamical systems", "four" ]
A comprehensive review of COVID-19 detection with machine learning and deep learning techniques
[ "Sreeparna Das", "Ishan Ayus", "Deepak Gupta" ]
Purpose: The first transmission of coronavirus to humans started in Wuhan city of China, took the shape of a pandemic called Corona Virus Disease 2019 (COVID-19), and posed a principal threat to the entire world. The researchers are trying to inculcate artificial intelligence (machine learning or deep learning models) for the efficient detection of COVID-19. This research explores all the existing machine learning (ML) or deep learning (DL) models used for COVID-19 detection, which may help the researcher to explore in different directions. The main purpose of this review article is to present a compact overview of the application of artificial intelligence to the research experts, helping them to explore the future scopes of improvement. Methods: The researchers have used various machine learning, deep learning, and a combination of machine and deep learning models for extracting significant features and classifying various health conditions in COVID-19 patients. For this purpose, the researchers have utilized different image modalities such as CT-Scan, X-Ray, etc. This study has collected over 200 research papers from various repositories like Google Scholar, PubMed, Web of Science, etc. These research papers were passed through various levels of scrutiny and finally, 50 research articles were selected. Results: In those listed articles, the ML/DL models showed an accuracy of 99% and above while performing the classification of COVID-19. This study has also presented various clinical applications of various research. This study specifies the importance of various machine and deep learning models in the field of medical diagnosis and research. Conclusion: In conclusion, it is evident that ML/DL models have made significant progress in recent years, but there are still limitations that need to be addressed. Overfitting is one such limitation that can lead to incorrect predictions and overburdening of the models.
The research community must continue to work towards finding ways to overcome these limitations and make machine and deep learning models even more effective and efficient. Through this ongoing research and development, we can expect even greater advances in the future.
10.1007/s12553-023-00757-z
a comprehensive review of covid-19 detection with machine learning and deep learning techniques
purpose: the first transmission of coronavirus to humans started in wuhan city of china, took the shape of a pandemic called corona virus disease 2019 (covid-19), and posed a principal threat to the entire world. the researchers are trying to inculcate artificial intelligence (machine learning or deep learning models) for the efficient detection of covid-19. this research explores all the existing machine learning (ml) or deep learning (dl) models used for covid-19 detection, which may help the researcher to explore in different directions. the main purpose of this review article is to present a compact overview of the application of artificial intelligence to the research experts, helping them to explore the future scopes of improvement. methods: the researchers have used various machine learning, deep learning, and a combination of machine and deep learning models for extracting significant features and classifying various health conditions in covid-19 patients. for this purpose, the researchers have utilized different image modalities such as ct-scan, x-ray, etc. this study has collected over 200 research papers from various repositories like google scholar, pubmed, web of science, etc. these research papers were passed through various levels of scrutiny and finally, 50 research articles were selected. results: in those listed articles, the ml/dl models showed an accuracy of 99% and above while performing the classification of covid-19. this study has also presented various clinical applications of various research. this study specifies the importance of various machine and deep learning models in the field of medical diagnosis and research. conclusion: in conclusion, it is evident that ml/dl models have made significant progress in recent years, but there are still limitations that need to be addressed. overfitting is one such limitation that can lead to incorrect predictions and overburdening of the models.
the research community must continue to work towards finding ways to overcome these limitations and make machine and deep learning models even more effective and efficient. through this ongoing research and development, we can expect even greater advances in the future.
[ "the first transmission", "coronavirus", "humans", "wuhan city", "china", "the shape", "a pandemic called corona virus disease", "covid-19", "a principal threat", "the entire world", "the researchers", "artificial intelligence", "machine learning", "deep learning models", "the efficient detection", "covid-19", "this research", "all the existing machine learning", "ml", "deep learning", "(dl) models", "covid-19 detection", "which", "the researcher", "different directions", "the main purpose", "this review article", "a compact overview", "the application", "artificial intelligence", "the research experts", "them", "the future scopes", "the researchers", "various machine learning", "deep learning", "a combination", "machine", "deep learning models", "significant features", "various health conditions", "covid-19 patients", "this purpose", "the researchers", "different image modalities", "ct-scan", "x", "-", "ray", "this study", "over 200 research papers", "various repositories", "google scholar", "pubmed", "web", "science", "these research papers", "various levels", "scrutiny", "50 research articles", "those listed articles", "the ml / dl models", "an accuracy", "99%", "the classification", "covid-19", "this study", "various clinical applications", "various research", "this study", "the importance", "various machine", "deep learning models", "the field", "medical diagnosis", "conclusion", "it", "ml/dl models", "significant progress", "recent years", "limitations", "that", "one such limitation", "that", "incorrect predictions", "the models", "the research community", "ways", "these limitations", "machine", "deep learning models", "this ongoing research", "development", "we", "even greater advances", "the future", "first", "wuhan city", "china", "2019", "covid-19", "covid-19", "covid-19", "covid-19", "200", "50", "99%", "covid-19", "recent years", "one" ]
Continual learning, deep reinforcement learning, and microcircuits: a novel method for clever game playing
[ "Oscar Chang", "Leo Ramos", "Manuel Eugenio Morocho-Cayamcela", "Rolando Armas", "Luis Zhinin-Vera" ]
Contemporary neural networks frequently encounter the challenge of catastrophic forgetting, wherein newly acquired learning can overwrite and erase previously learned information. The paradigm of continual learning offers a promising solution by enabling intelligent systems to retain and build upon their acquired knowledge over time. This paper introduces a novel approach within the continual learning framework, employing deep reinforcement learning agents that process unprocessed pixel data and interact with microcircuit-like components. These agents autonomously advance through a series of learning stages, culminating in the development of a sophisticated neural network system optimized for predictive performance in the game of tic-tac-toe. Structured to operate in sequential order, each agent is tasked with achieving forward-looking objectives based on Bellman’s principles of reinforcement learning. Knowledge retention is facilitated through the integration of specific microcircuits, which securely store the insights gained by each agent. During the training phase, these microcircuits work in concert, employing high-energy, sparse encoding techniques to enhance learning efficiency and effectiveness. The core contribution of this paper is the establishment of an artificial neural network system capable of accurately predicting tic-tac-toe moves, akin to the observational strategies employed by humans. Our experimental results demonstrate that after approximately 5000 cycles of backpropagation, the system significantly reduced the training loss to \(L_{DQN}<0.1\), thereby increasing the expected cumulative reward. This advancement in training efficiency translates into superior predictive capabilities, enabling the system to secure consistent victories by anticipating up to four moves ahead.
10.1007/s11042-024-18925-2
continual learning, deep reinforcement learning, and microcircuits: a novel method for clever game playing
contemporary neural networks frequently encounter the challenge of catastrophic forgetting, wherein newly acquired learning can overwrite and erase previously learned information. the paradigm of continual learning offers a promising solution by enabling intelligent systems to retain and build upon their acquired knowledge over time. this paper introduces a novel approach within the continual learning framework, employing deep reinforcement learning agents that process unprocessed pixel data and interact with microcircuit-like components. these agents autonomously advance through a series of learning stages, culminating in the development of a sophisticated neural network system optimized for predictive performance in the game of tic-tac-toe. structured to operate in sequential order, each agent is tasked with achieving forward-looking objectives based on bellman’s principles of reinforcement learning. knowledge retention is facilitated through the integration of specific microcircuits, which securely store the insights gained by each agent. during the training phase, these microcircuits work in concert, employing high-energy, sparse encoding techniques to enhance learning efficiency and effectiveness. the core contribution of this paper is the establishment of an artificial neural network system capable of accurately predicting tic-tac-toe moves, akin to the observational strategies employed by humans. our experimental results demonstrate that after approximately 5000 cycles of backpropagation, the system significantly reduced the training loss to \(l_{dqn}<0.1\), thereby increasing the expected cumulative reward. this advancement in training efficiency translates into superior predictive capabilities, enabling the system to secure consistent victories by anticipating up to four moves ahead.
[ "contemporary neural networks", "the challenge", "catastrophic forgetting", "newly acquired learning", "erase", "information", "the paradigm", "continual learning", "a promising solution", "intelligent systems", "their acquired knowledge", "time", "this paper", "a novel approach", "the continual learning framework", "deep reinforcement learning agents", "that", "unprocessed pixel data", "interact", "microcircuit-like components", "these agents", "a series", "learning stages", "the development", "a sophisticated neural network system", "predictive performance", "the game", "tic-tac-toe", "sequential order", "each agent", "forward-looking objectives", "bellman’s principles", "reinforcement learning", "knowledge retention", "the integration", "specific microcircuits", "which", "the insights", "each agent", "the training phase", "these microcircuits", "concert", "high-energy", "techniques", "efficiency", "effectiveness", "the core contribution", "this paper", "the establishment", "an artificial neural network system", "tic-tac-toe moves", "the observational strategies", "humans", "our experimental results", "approximately 5000 cycles", "backpropagation", "the system", "the training loss", "\\(l_{dqn}<0.1\\", "the expected cumulative reward", "this advancement", "training efficiency", "superior predictive capabilities", "the system", "consistent victories", "up to four moves", "contemporary neural networks", "approximately 5000", "\\(l_{dqn}<0.1\\", "up to four" ]
Application of deep learning for characterizing microstructures in SBS modified asphalt
[ "Enhao Zhang", "Liyan Shan", "Yapeng Guo", "Shuang Liu" ]
Microstructures in asphalt, often resembling bee structures, are pivotal in influencing asphalt performance and, by extension, sustainable fuel production. This study employs deep learning techniques to investigate the impact of different Styrene–Butadiene–Styrene (SBS) modifiers on asphalt microstructures, akin to bee structures. The employed deep learning model was trained on a diverse dataset comprising 200 images sourced from testing. The dataset was carefully curated to address specific challenges in data labeling precision. This involved individualized labeling sessions and adjustments in the number of targets per image, contributing to enhanced precision and increased dataset size. The research begins with the development of a deep learning model trained on a dataset comprising images featuring bee-like structures within asphalt. The model excels in accurately identifying and segmenting these structures. Subsequently, the deep learning approach is compared with existing methods for bee structure segmentation to establish its precision and superiority. Employing frequency distribution histograms, the distribution patterns of bee structures within various types of SBS-modified asphalt are analyzed, quantitatively assessing the influence of diverse modifier types on these microstructural attributes. The findings in this study underscore the deep learning model's efficacy in recognizing and segmenting bee structures with introduced metrics effectively capturing the distinctive characteristics of various asphalt microstructures. This study paves the way for comprehensive analyses of microstructural metrics, including parameters such as perimeter, area, quantity, and related indicators, thus contributing to the development of fundamental asphalt structural units suitable for processes like molecular simulation and finite element analysis.
Moreover, it propels the application of deep learning methodologies in the realm of road materials research, opening new avenues for innovative explorations that can ultimately benefit sustainable fuel production.
10.1617/s11527-024-02341-x
application of deep learning for characterizing microstructures in sbs modified asphalt
microstructures in asphalt, often resembling bee structures, are pivotal in influencing asphalt performance and, by extension, sustainable fuel production. this study employs deep learning techniques to investigate the impact of different styrene–butadiene–styrene (sbs) modifiers on asphalt microstructures, akin to bee structures. the employed deep learning model was trained on a diverse dataset comprising 200 images sourced from testing. the dataset was carefully curated to address specific challenges in data labeling precision. this involved individualized labeling sessions and adjustments in the number of targets per image, contributing to enhanced precision and increased dataset size. the research begins with the development of a deep learning model trained on a dataset comprising images featuring bee-like structures within asphalt. the model excels in accurately identifying and segmenting these structures. subsequently, the deep learning approach is compared with existing methods for bee structure segmentation to establish its precision and superiority. employing frequency distribution histograms, the distribution patterns of bee structures within various types of sbs-modified asphalt are analyzed, quantitatively assessing the influence of diverse modifier types on these microstructural attributes. the findings in this study underscore the deep learning model's efficacy in recognizing and segmenting bee structures with introduced metrics effectively capturing the distinctive characteristics of various asphalt microstructures. this study paves the way for comprehensive analyses of microstructural metrics, including parameters such as perimeter, area, quantity, and related indicators, thus contributing to the development of fundamental asphalt structural units suitable for processes like molecular simulation and finite element analysis.
moreover, it propels the application of deep learning methodologies in the realm of road materials research, opening new avenues for innovative explorations that can ultimately benefit sustainable fuel production.
[ "microstructures", "asphalt", "bee structures", "asphalt performance", "extension", "sustainable fuel production", "this study", "deep learning techniques", "the impact", "different styrene", "butadiene–styrene (sbs) modifiers", "asphalt microstructures", "bee structures", "the employed deep learning model", "a diverse dataset", "images", "the dataset", "specific challenges", "data labeling precision", "this involved individualized labeling sessions", "adjustments", "the number", "targets", "image", "enhanced precision", "the research", "the development", "a deep learning model", "a dataset comprising images", "bee-like structures", "asphalt", "the model excels", "these structures", "the deep learning approach", "existing methods", "bee structure segmentation", "its precision", "superiority", "frequency distribution histograms", "the distribution patterns", "bee structures", "various types", "sbs-modified asphalt", "the influence", "diverse modifier types", "these microstructural attributes", "the findings", "this study", "the deep learning model's efficacy", "segmenting bee structures", "introduced metrics", "the distinctive characteristics", "various asphalt microstructures", "this study", "the way", "comprehensive analyses", "microstructural metrics", "parameters", "perimeter", "area", "quantity", "related indicators", "the development", "fundamental asphalt structural units", "processes", "molecular simulation", "finite element analysis", "it", "the application", "deep learning methodologies", "the realm", "road materials research", "new avenues", "innovative explorations", "that", "sustainable fuel production", "200" ]
Deep Machine Learning in Optimization of Scientific Research Activities
[ "E. V. Melnikova" ]
Abstract—This article provides a general overview of machine learning, a subdomain of artificial intelligence. The substance of the deep learning process is explained, and key features of deep learning as a high-level artificial intelligence technology are outlined. Differences between deep and conventional machine learning are analyzed. The architecture of deep learning models is considered. Issues with using deep learning in neural networks are outlined, and key processes of the functioning of neural networks are described. The importance of deep learning neural networks for processing big data is noted. Specific examples of application of deep learning algorithms in various research fields, specifically, scientometrics, bibliometrics, medicine, geoseismic research, and others, are provided. It is shown that deep learning plays an important role in optimizing research activities and improving research productivity.
10.3103/S0147688223010082
deep machine learning in optimization of scientific research activities
abstract—this article provides a general overview of machine learning, a subdomain of artificial intelligence. the substance of the deep learning process is explained, and key features of deep learning as a high-level artificial intelligence technology are outlined. differences between deep and conventional machine learning are analyzed. the architecture of deep learning models is considered. issues with using deep learning in neural networks are outlined, and key processes of the functioning of neural networks are described. the importance of deep learning neural networks for processing big data is noted. specific examples of application of deep learning algorithms in various research fields, specifically, scientometrics, bibliometrics, medicine, geoseismic research, and others, are provided. it is shown that deep learning plays an important role in optimizing research activities and improving research productivity.
[ "this article", "a general overview", "machine learning", "a subdomain", "artificial intelligence", "the substance", "the deep learning process", "key features", "deep learning", "a high-level artificial intelligence technology", "differences", "deep and conventional machine learning", "the architecture", "deep learning models", "issues", "deep learning", "neural networks", "key processes", "the functioning", "neural networks", "the importance", "deep learning neural networks", "big data", "specific examples", "application", "deep learning algorithms", "various research fields", "specifically, scientometrics", "bibliometrics", "medicine", "geoseismic research", "others", "it", "deep learning", "an important role", "research activities", "research productivity" ]
Detecting and Mitigating Encoded Bias in Deep Learning-Based Stealth Assessment Models for Reflection-Enriched Game-Based Learning Environments
[ "Anisha Gupta", "Dan Carpenter", "Wookhee Min", "Jonathan Rowe", "Roger Azevedo", "James Lester" ]
Reflection plays a critical role in learning. Game-based learning environments have significant potential to elicit and support student reflection by prompting learners to think critically about their own learning processes and performance. Stealth assessment models, used for unobtrusively assessing student competencies from evidence of game interaction data and facilitating learning through adaptive feedback, can be enhanced by incorporating evidence from students’ written reflections. We present a deep learning-based stealth assessment framework that predicts depth of student reflections and science content post-test scores during game-based learning. With the increasing adoption of AI techniques in decision-making processes, it is important to evaluate the fairness of these models. To address this concern, we investigate encoded bias in our stealth assessment model with respect to student gender and prior game-playing experience in deep learning-based stealth assessment models and examine the impact of debiasing on the models’ predictive performance. We evaluate the predictive performance of the deep learning-based stealth assessment models and measure encoded bias with the Absolute Between-ROC Area (ABROCA) statistic using gameplay data from 119 students collected in a series of classroom studies with a reflection-enriched game-based learning environment for middle school microbiology, Crystal Island. The results demonstrate the effectiveness of deep learning-based stealth assessment models and multiple debiasing techniques for deriving algorithmically fair stealth assessment models.
10.1007/s40593-023-00379-6
detecting and mitigating encoded bias in deep learning-based stealth assessment models for reflection-enriched game-based learning environments
reflection plays a critical role in learning. game-based learning environments have significant potential to elicit and support student reflection by prompting learners to think critically about their own learning processes and performance. stealth assessment models, used for unobtrusively assessing student competencies from evidence of game interaction data and facilitating learning through adaptive feedback, can be enhanced by incorporating evidence from students’ written reflections. we present a deep learning-based stealth assessment framework that predicts depth of student reflections and science content post-test scores during game-based learning. with the increasing adoption of ai techniques in decision-making processes, it is important to evaluate the fairness of these models. to address this concern, we investigate encoded bias in our stealth assessment model with respect to student gender and prior game-playing experience in deep learning-based stealth assessment models and examine the impact of debiasing on the models’ predictive performance. we evaluate the predictive performance of the deep learning-based stealth assessment models and measure encoded bias with the absolute between-roc area (abroca) statistic using gameplay data from 119 students collected in a series of classroom studies with a reflection-enriched game-based learning environment for middle school microbiology, crystal island. the results demonstrate the effectiveness of deep learning-based stealth assessment models and multiple debiasing techniques for deriving algorithmically fair stealth assessment models.
[ "reflection", "a critical role", "game-based learning environments", "significant potential", "student reflection", "learners", "their own learning processes", "performance", "stealth assessment models", "student competencies", "evidence", "game interaction data", "adaptive feedback", "evidence", "students’ written reflections", "we", "a deep learning-based stealth assessment framework", "that", "depth", "student reflections", "science content post-test scores", "game-based learning", "the increasing adoption", "ai techniques", "decision-making processes", "it", "the fairness", "these models", "this concern", "we", "encoded bias", "our stealth assessment model", "respect", "student gender", "prior game-playing experience", "deep learning-based stealth assessment models", "the impact", "the models’ predictive performance", "we", "the predictive performance", "the deep learning-based stealth assessment models", "measure", "bias", "-roc", "gameplay data", "119 students", "a series", "classroom studies", "a reflection-enriched game-based learning environment", "middle school microbiology", "crystal island", "the results", "the effectiveness", "deep learning-based stealth assessment models", "multiple debiasing techniques", "algorithmically fair stealth assessment models", "119", "crystal island" ]
Reading Between the Lines: Machine Learning Ensemble and Deep Learning for Implied Threat Detection in Textual Data
[ "Muhammad Owais Raza", "Areej Fatemah Meghji", "Naeem Ahmed Mahoto", "Mana Saleh Al Reshan", "Hamad Ali Abosaq", "Adel Sulaiman", "Asadullah Shaikh" ]
With the increase in the generation and spread of textual content on social media, natural language processing (NLP) has become an important area of research for detecting underlying threats, racial abuse, violence, and implied warnings in the content. The subtlety and ambiguity of language make the development of effective models for detecting threats in text a challenging task. This task is further complicated when the threat is not explicitly conveyed. This study focuses on the task of implied threat detection using an explicitly designed machine-generated dataset with both linguistic and lexical features. We evaluated the performance of different machine learning algorithms on these features including Support Vector Machines, Logistic Regression, Naive Bayes, Decision Tree, and K-nearest neighbors. The ensembling approaches of Adaboost, Random Forest, and Gradient Boosting were also explored. Deep learning modeling was performed using Long Short-Term Memory, Deep Neural Networks (DNN), and Bidirectional Long Short-Term Memory (BiLSTM). Based on the evaluation, it was observed that classical and ensemble models overfit while working with linguistic features. The performance of these models improved when working with lexical features. The model based on logistic regression exhibited superior performance with an F1 score of 77.13%. While experimenting with deep learning models, DNN achieved an F1 score of 91.49% while the BiLSTM achieved an F1 score of 91.61% while working with lexical features. The current study provides a baseline for future research in the domain of implied threat detection.
10.1007/s44196-024-00580-y
reading between the lines: machine learning ensemble and deep learning for implied threat detection in textual data
with the increase in the generation and spread of textual content on social media, natural language processing (nlp) has become an important area of research for detecting underlying threats, racial abuse, violence, and implied warnings in the content. the subtlety and ambiguity of language make the development of effective models for detecting threats in text a challenging task. this task is further complicated when the threat is not explicitly conveyed. this study focuses on the task of implied threat detection using an explicitly designed machine-generated dataset with both linguistic and lexical features. we evaluated the performance of different machine learning algorithms on these features including support vector machines, logistic regression, naive bayes, decision tree, and k-nearest neighbors. the ensembling approaches of adaboost, random forest, and gradient boosting were also explored. deep learning modeling was performed using long short-term memory, deep neural networks (dnn), and bidirectional long short-term memory (bilstm). based on the evaluation, it was observed that classical and ensemble models overfit while working with linguistic features. the performance of these models improved when working with lexical features. the model based on logistic regression exhibited superior performance with an f1 score of 77.13%. while experimenting with deep learning models, dnn achieved an f1 score of 91.49% while the bilstm achieved an f1 score of 91.61% while working with lexical features. the current study provides a baseline for future research in the domain of implied threat detection.
[ "the increase", "the generation", "spread", "textual content", "social media", "natural language processing", "nlp", "an important area", "research", "underlying threats", "racial abuse", "violence", "implied warnings", "the content", "the subtlety", "ambiguity", "language", "the development", "effective models", "threats", "text", "a challenging task", "this task", "the threat", "this study", "the task", "implied threat detection", "an explicitly designed machine-generated dataset", "both linguistic and lexical features", "we", "the performance", "different machine learning algorithms", "these features", "support vector machines", "logistic regression", "naive bayes", "decision tree", "k-nearest neighbors", "the ensembling approaches", "adaboost, random forest", "gradient boosting", "deep learning modeling", "long short-term memory", "deep neural networks", "dnn", "bidirectional long short-term memory", "bilstm", "the evaluation", "it", "classical and ensemble models", "linguistic features", "the performance", "these models", "lexical features", "the model", "logistic regression", "superior performance", "an f1 score", "77.13%", "deep learning models", "dnn", "an f1 score", "91.49%", "the bilstm", "an f1 score", "91.61%", "lexical features", "the current study", "a baseline", "future research", "the domain", "implied threat detection", "77.13%", "91.49%", "91.61%" ]
Deep learning-based phenotyping reclassifies combined hepatocellular-cholangiocarcinoma
[ "Julien Calderaro", "Narmin Ghaffari Laleh", "Qinghe Zeng", "Pascale Maille", "Loetitia Favre", "Anaïs Pujals", "Christophe Klein", "Céline Bazille", "Lara R. Heij", "Arnaud Uguen", "Tom Luedde", "Luca Di Tommaso", "Aurélie Beaufrère", "Augustin Chatain", "Delphine Gastineau", "Cong Trung Nguyen", "Hiep Nguyen-Canh", "Khuyen Nguyen Thi", "Viviane Gnemmi", "Rondell P. Graham", "Frédéric Charlotte", "Dominique Wendum", "Mukul Vij", "Daniela S. Allende", "Federico Aucejo", "Alba Diaz", "Benjamin Rivière", "Astrid Herrero", "Katja Evert", "Diego Francesco Calvisi", "Jérémy Augustin", "Wei Qiang Leow", "Howard Ho Wai Leung", "Emmanuel Boleslawski", "Mohamed Rela", "Arnaud François", "Anthony Wing-Hung Cha", "Alejandro Forner", "Maria Reig", "Manon Allaire", "Olivier Scatton", "Denis Chatelain", "Camille Boulagnon-Rombi", "Nathalie Sturm", "Benjamin Menahem", "Eric Frouin", "David Tougeron", "Christophe Tournigand", "Emmanuelle Kempf", "Haeryoung Kim", "Massih Ningarhari", "Sophie Michalak-Provost", "Purva Gopal", "Raffaele Brustia", "Eric Vibert", "Kornelius Schulze", "Darius F. Rüther", "Sören A. Weidemann", "Rami Rhaiem", "Jean-Michel Pawlotsky", "Xuchen Zhang", "Alain Luciani", "Sébastien Mulé", "Alexis Laurent", "Giuliana Amaddeo", "Hélène Regnault", "Eleonora De Martin", "Christine Sempoux", "Pooja Navale", "Maria Westerhoff", "Regina Cheuk-Lam Lo", "Jan Bednarsch", "Annette Gouw", "Catherine Guettier", "Marie Lequoy", "Kenichi Harada", "Pimsiri Sripongpun", "Poowadon Wetwittayaklang", "Nicolas Loménie", "Jarukit Tantipisit", "Apichat Kaewdech", "Jeanne Shen", "Valérie Paradis", "Stefano Caruso", "Jakob Nikolas Kather" ]
Primary liver cancer arises either from hepatocytic or biliary lineage cells, giving rise to hepatocellular carcinoma (HCC) or intrahepatic cholangiocarcinoma (ICCA). Combined hepatocellular- cholangiocarcinomas (cHCC-CCA) exhibit equivocal or mixed features of both, causing diagnostic uncertainty and difficulty in determining proper management. Here, we perform a comprehensive deep learning-based phenotyping of multiple cohorts of patients. We show that deep learning can reproduce the diagnosis of HCC vs. CCA with a high performance. We analyze a series of 405 cHCC-CCA patients and demonstrate that the model can reclassify the tumors as HCC or ICCA, and that the predictions are consistent with clinical outcomes, genetic alterations and in situ spatial gene expression profiling. This type of approach could improve treatment decisions and ultimately clinical outcome for patients with rare and biphenotypic cancers such as cHCC-CCA.
10.1038/s41467-023-43749-3
deep learning-based phenotyping reclassifies combined hepatocellular-cholangiocarcinoma
primary liver cancer arises either from hepatocytic or biliary lineage cells, giving rise to hepatocellular carcinoma (hcc) or intrahepatic cholangiocarcinoma (icca). combined hepatocellular- cholangiocarcinomas (chcc-cca) exhibit equivocal or mixed features of both, causing diagnostic uncertainty and difficulty in determining proper management. here, we perform a comprehensive deep learning-based phenotyping of multiple cohorts of patients. we show that deep learning can reproduce the diagnosis of hcc vs. cca with a high performance. we analyze a series of 405 chcc-cca patients and demonstrate that the model can reclassify the tumors as hcc or icca, and that the predictions are consistent with clinical outcomes, genetic alterations and in situ spatial gene expression profiling. this type of approach could improve treatment decisions and ultimately clinical outcome for patients with rare and biphenotypic cancers such as chcc-cca.
[ "primary liver cancer", "hepatocytic or biliary lineage cells", "rise", "hepatocellular carcinoma", "hcc", "intrahepatic cholangiocarcinoma", "icca", "combined hepatocellular- cholangiocarcinomas", "chcc-cca", "equivocal or mixed features", "both", "diagnostic uncertainty", "difficulty", "proper management", "we", "a comprehensive deep learning-based phenotyping", "multiple cohorts", "patients", "we", "deep learning", "the diagnosis", "hcc", "cca", "a high performance", "we", "a series", "405 chcc-cca patients", "the model", "the tumors", "hcc", "icca", "the predictions", "clinical outcomes", "genetic alterations", "situ spatial gene expression profiling", "this type", "approach", "treatment decisions", "ultimately clinical outcome", "patients", "rare and biphenotypic cancers", "chcc-cca", "405" ]
Advancing Naturalistic Affective Science with Deep Learning
[ "Chujun Lin", "Landry S. Bulls", "Lindsey J. Tepfer", "Amisha D. Vyas", "Mark A. Thornton" ]
People express their own emotions and perceive others’ emotions via a variety of channels, including facial movements, body gestures, vocal prosody, and language. Studying these channels of affective behavior offers insight into both the experience and perception of emotion. Prior research has predominantly focused on studying individual channels of affective behavior in isolation using tightly controlled, non-naturalistic experiments. This approach limits our understanding of emotion in more naturalistic contexts where different channels of information tend to interact. Traditional methods struggle to address this limitation: manually annotating behavior is time-consuming, making it infeasible to do at large scale; manually selecting and manipulating stimuli based on hypotheses may neglect unanticipated features, potentially generating biased conclusions; and common linear modeling approaches cannot fully capture the complex, nonlinear, and interactive nature of real-life affective processes. In this methodology review, we describe how deep learning can be applied to address these challenges to advance a more naturalistic affective science. First, we describe current practices in affective research and explain why existing methods face challenges in revealing a more naturalistic understanding of emotion. Second, we introduce deep learning approaches and explain how they can be applied to tackle three main challenges: quantifying naturalistic behaviors, selecting and manipulating naturalistic stimuli, and modeling naturalistic affective processes. Finally, we describe the limitations of these deep learning methods, and how these limitations might be avoided or mitigated. By detailing the promise and the peril of deep learning, this review aims to pave the way for a more naturalistic affective science.
10.1007/s42761-023-00215-z
advancing naturalistic affective science with deep learning
people express their own emotions and perceive others’ emotions via a variety of channels, including facial movements, body gestures, vocal prosody, and language. studying these channels of affective behavior offers insight into both the experience and perception of emotion. prior research has predominantly focused on studying individual channels of affective behavior in isolation using tightly controlled, non-naturalistic experiments. this approach limits our understanding of emotion in more naturalistic contexts where different channels of information tend to interact. traditional methods struggle to address this limitation: manually annotating behavior is time-consuming, making it infeasible to do at large scale; manually selecting and manipulating stimuli based on hypotheses may neglect unanticipated features, potentially generating biased conclusions; and common linear modeling approaches cannot fully capture the complex, nonlinear, and interactive nature of real-life affective processes. in this methodology review, we describe how deep learning can be applied to address these challenges to advance a more naturalistic affective science. first, we describe current practices in affective research and explain why existing methods face challenges in revealing a more naturalistic understanding of emotion. second, we introduce deep learning approaches and explain how they can be applied to tackle three main challenges: quantifying naturalistic behaviors, selecting and manipulating naturalistic stimuli, and modeling naturalistic affective processes. finally, we describe the limitations of these deep learning methods, and how these limitations might be avoided or mitigated. by detailing the promise and the peril of deep learning, this review aims to pave the way for a more naturalistic affective science.
[ "people", "their own emotions", "others’ emotions", "a variety", "channels", "facial movements", "body gestures", "vocal prosody", "language", "these channels", "affective behavior", "insight", "both the experience", "perception", "emotion", "prior research", "individual channels", "affective behavior", "isolation", "tightly controlled, non-naturalistic experiments", "this approach", "our understanding", "emotion", "more naturalistic contexts", "different channels", "information", "traditional methods", "this limitation", "behavior", "it", "large scale", "stimuli", "hypotheses", "unanticipated features", "biased conclusions", "common linear modeling approaches", "the complex, nonlinear, and interactive nature", "real-life affective processes", "this methodology review", "we", "how deep learning", "these challenges", "a more naturalistic affective science", "we", "current practices", "affective research", "existing methods", "challenges", "a more naturalistic understanding", "emotion", "we", "deep learning approaches", "they", "three main challenges", "naturalistic behaviors", "naturalistic stimuli", "naturalistic affective processes", "we", "the limitations", "these deep learning methods", "these limitations", "the promise", "the peril", "deep learning", "this review", "the way", "a more naturalistic affective science", "linear", "first", "second", "three" ]
Bearing Fault Diagnosis Based on Artificial Intelligence Methods: Machine Learning and Deep Learning
[ "Ahmed Ghorbel", "Sarra Eddai", "Bouthayna Limam", "Nabih Feki", "Mohamed Haddar" ]
This paper presents a comprehensive study on the application of Artificial Intelligence (AI) methods, specifically machine learning and deep learning, for the diagnosis of bearing faults. The study explores both data preprocessing-dependent methods (Support Vector Machine, Nearest Neighbor, and Decision Tree) and a preprocessing-independent method (1D Convolutional Neural Network). The experimental setup utilizes the Case Western Reserve University dataset for signal acquisition. A detailed strategy for data processing is developed, encompassing initialization, data loading, signal filtration, decomposition, feature extraction in both the time and frequency domains, and feature selection. The study involves working with four datasets, selected based on the distribution curves of the indicators as a function of the number of observations. The results demonstrate the remarkable performance of the AI methods in bearing fault diagnosis. The 1D-CNN model, in particular, shows high robustness and accuracy, even in the presence of load variations. The findings of this study shed light on the significant potential of AI methods in improving the accuracy and efficiency of bearing fault diagnosis.
10.1007/s13369-024-09488-3
bearing fault diagnosis based on artificial intelligence methods: machine learning and deep learning
this paper presents a comprehensive study on the application of artificial intelligence (ai) methods, specifically machine learning and deep learning, for the diagnosis of bearing faults. the study explores both data preprocessing-dependent methods (support vector machine, nearest neighbor, and decision tree) and a preprocessing-independent method (1d convolutional neural network). the experimental setup utilizes the case western reserve university dataset for signal acquisition. a detailed strategy for data processing is developed, encompassing initialization, data loading, signal filtration, decomposition, feature extraction in both the time and frequency domains, and feature selection. the study involves working with four datasets, selected based on the distribution curves of the indicators as a function of the number of observations. the results demonstrate the remarkable performance of the ai methods in bearing fault diagnosis. the 1d-cnn model, in particular, shows high robustness and accuracy, even in the presence of load variations. the findings of this study shed light on the significant potential of ai methods in improving the accuracy and efficiency of bearing fault diagnosis.
[ "this paper", "a comprehensive study", "the application", "artificial intelligence (ai) methods", "specifically machine learning", "deep learning", "the diagnosis", "faults", "the study", "both data preprocessing-dependent methods", "vector machine", "neighbor", "decision tree", "a preprocessing-independent method", "1d convolutional neural network", "the experiment setup", "the case", "western reserve university dataset", "signal acquisition", "a detailed strategy", "data processing", "initialization", "data loading", "signal filtration", "decomposition", "extraction", "both time- and frequency-domains", "the study", "four datasets", "the distribution curves", "the indicators", "a function", "the number", "observations", "the results", "remarkable performance", "the ai methods", "fault diagnosis", "the 1d-cnn model", "high robustness", "accuracy", "the presence", "load variations", "the findings", "this study", "light", "the significant potential", "ai methods", "the accuracy", "efficiency", "fault diagnosis", "1d", "western reserve university", "four", "1d-cnn" ]
Exploring the effects of digital technology on deep learning: a meta-analysis
[ "Xiu-Yi Wu" ]
The impact of digital technology on learning outcomes, specifically deep learning, has been a subject of considerable debate and scrutiny in educational settings. This study aims to provide clarity by conducting a meta-analysis of empirical publications that examine students' deep learning outcomes in relation to digital technology. A comprehensive search of databases and a thorough literature review yielded 60 high-quality, peer-reviewed journal articles that met the inclusion criteria. Using Review Manager 5.4.1 software, a meta-analysis was conducted to assess the overall effectiveness of digital technology. The calculated effect size indicates a positive influence of digital technology on students' deep learning outcomes. Furthermore, a moderator variable analysis revealed several significant findings: 1. Different categories of digital technology tools have a favorable impact on deep learning outcomes; 2. The duration of digital technology treatment does not significantly affect deep learning outcomes; 3. Digital technology demonstrates a highly positive influence on deep learning within the humanities and social sciences disciplines; 4. Combining online and offline utilization of digital technology in education leads to a substantially greater enhancement in deep learning compared to relying solely on online methods; 5. The effectiveness of digital technology on deep learning is enhanced when accompanied by appropriate instructional guidance; 6. Utilizing digital technology in a systematic manner produces different outcomes compared to fragmented approaches, highlighting the importance of a cohesive implementation; 7. Integrating digital technology with collaborative learning has a more pronounced effect on deep learning compared to independent learning. 
These findings contribute to our understanding of the impact of digital technology on deep learning outcomes and underscore the importance of thoughtful integration and instructional support in educational contexts.
10.1007/s10639-023-12307-1
exploring the effects of digital technology on deep learning: a meta-analysis
the impact of digital technology on learning outcomes, specifically deep learning, has been a subject of considerable debate and scrutiny in educational settings. this study aims to provide clarity by conducting a meta-analysis of empirical publications that examine students' deep learning outcomes in relation to digital technology. a comprehensive search of databases and a thorough literature review yielded 60 high-quality, peer-reviewed journal articles that met the inclusion criteria. using review manager 5.4.1 software, a meta-analysis was conducted to assess the overall effectiveness of digital technology. the calculated effect size indicates a positive influence of digital technology on students' deep learning outcomes. furthermore, a moderator variable analysis revealed several significant findings: 1. different categories of digital technology tools have a favorable impact on deep learning outcomes; 2. the duration of digital technology treatment does not significantly affect deep learning outcomes; 3. digital technology demonstrates a highly positive influence on deep learning within the humanities and social sciences disciplines; 4. combining online and offline utilization of digital technology in education leads to a substantially greater enhancement in deep learning compared to relying solely on online methods; 5. the effectiveness of digital technology on deep learning is enhanced when accompanied by appropriate instructional guidance; 6. utilizing digital technology in a systematic manner produces different outcomes compared to fragmented approaches, highlighting the importance of a cohesive implementation; 7. integrating digital technology with collaborative learning has a more pronounced effect on deep learning compared to independent learning. 
these findings contribute to our understanding of the impact of digital technology on deep learning outcomes and underscore the importance of thoughtful integration and instructional support in educational contexts.
[ "the impact", "digital technology", "outcomes", "specifically deep learning", "a subject", "considerable debate", "scrutiny", "educational settings", "this study", "clarity", "a meta-analysis", "empirical publications", "that", "outcomes", "relation", "digital technology", "a comprehensive search", "databases", "a thorough literature review", "60 high-quality, peer-reviewed journal articles", "that", "the inclusion criteria", "review manager 5.4.1 software", "a meta-analysis", "the overall effectiveness", "digital technology", "the calculated effect size", "a positive influence", "digital technology", "students' deep learning outcomes", "a moderator variable analysis", "several significant findings", "1. different categories", "digital technology tools", "a favorable impact", "deep learning outcomes", "the duration", "digital technology treatment", "deep learning outcomes", "3. digital technology", "a highly positive influence", "deep learning", "the humanities", "social sciences disciplines", "online and offline utilization", "digital technology", "education", "a substantially greater enhancement", "deep learning", "online methods", "the effectiveness", "digital technology", "deep learning", "appropriate instructional guidance", "digital technology", "a systematic manner", "different outcomes", "fragmented approaches", "the importance", "a cohesive implementation", "digital technology", "collaborative learning", "a more pronounced effect", "deep learning", "independent learning", "these findings", "our understanding", "the impact", "digital technology", "deep learning outcomes", "the importance", "thoughtful integration", "instructional support", "educational contexts", "60", "5.4.1", "1", "2", "3", "4", "5", "6", "7" ]
Abstraction, mimesis and the evolution of deep learning
[ "Jon Eklöf", "Thomas Hamelryck", "Cadell Last", "Alexander Grima", "Ulrika Lundh Snis" ]
Deep learning developers typically rely on deep learning software frameworks (DLSFs)—simply described as pre-packaged libraries of programming components that provide high-level access to deep learning functionality. New DLSFs progressively encapsulate mathematical, statistical and computational complexity. Such higher levels of abstraction subsequently make it easier for deep learning methodology to spread through mimesis (i.e., imitation of models perceived as successful). In this study, we quantify this increase in abstraction and discuss its implications. Analyzing publicly available code from Github, we found that the introduction of DLSFs correlates both with significant increases in the number of deep learning projects and substantial reductions in the number of lines of code used. We subsequently discuss and argue the importance of abstraction in deep learning with respect to ephemeralization, technological advancement, democratization, adopting timely levels of abstraction, the emergence of mimetic deadlocks, issues related to the use of black box methods including privacy and fairness, and the concentration of technological power. Finally, we also discuss abstraction as a symptom of an ongoing technological metatransition.
10.1007/s00146-023-01688-z
abstraction, mimesis and the evolution of deep learning
deep learning developers typically rely on deep learning software frameworks (dlsfs)—simply described as pre-packaged libraries of programming components that provide high-level access to deep learning functionality. new dlsfs progressively encapsulate mathematical, statistical and computational complexity. such higher levels of abstraction subsequently make it easier for deep learning methodology to spread through mimesis (i.e., imitation of models perceived as successful). in this study, we quantify this increase in abstraction and discuss its implications. analyzing publicly available code from github, we found that the introduction of dlsfs correlates both with significant increases in the number of deep learning projects and substantial reductions in the number of lines of code used. we subsequently discuss and argue the importance of abstraction in deep learning with respect to ephemeralization, technological advancement, democratization, adopting timely levels of abstraction, the emergence of mimetic deadlocks, issues related to the use of black box methods including privacy and fairness, and the concentration of technological power. finally, we also discuss abstraction as a symptom of an ongoing technological metatransition.
[ "deep learning developers", "deep learning software frameworks", "pre-packaged libraries", "programming components", "that", "high-level access", "deep learning functionality", "mathematical, statistical and computational complexity", "such higher levels", "abstraction", "it", "deep learning methodology", "mimesis", "i.e., imitation", "models", "this study", "we", "this increase", "abstraction", "its implications", "publicly available code", "github", "we", "the introduction", "significant increases", "the number", "deep learning projects", "substantial reductions", "the number", "lines", "code", "we", "the importance", "abstraction", "deep learning", "respect", "ephemeralization", "technological advancement", "democratization", "timely levels", "abstraction", "the emergence", "mimetic deadlocks", "issues", "the use", "black box methods", "privacy", "fairness", "the concentration", "technological power", "we", "abstraction", "a symptom", "an ongoing technological metatransition" ]
COVID-19 classification based on a deep learning and machine learning fusion technique using chest CT images
[ "Gerges M. Salama", "Asmaa Mohamed", "Mahmoud Khaled Abd-Ellah" ]
Coronavirus disease (COVID-19), caused by SARS-CoV-2, is one of the greatest challenges of the twenty-first century. COVID-19 broke out worldwide over the last 2 years and has caused widespread illness and death. Computer-aided diagnosis has become a necessary tool to prevent the spread of this virus. Detecting COVID-19 at an early stage is essential to reduce the mortality risk of patients. Researchers seek rapid solutions based on machine learning and deep learning techniques. In this paper, we introduce a hybrid model for COVID-19 detection based on machine learning and deep learning models. We used 10 different deep CNN network models to extract features from CT images. We extract features from different layers in each network and find the optimum layer that gives the best-extracted features for each CNN network. Then, for classifying these features, we used five different classifiers based on machine learning. The dataset consists of 2481 CT images divided into COVID-19 and non-COVID-19 categories. Three folds are extracted with different splits between testing and training. Through experiments, we identify the best layer for each CNN network, the best network, and the best classifier. The measured performance shows the superiority of the proposed system over the literature, with the highest accuracy of 99.39%. Our models are tested with the three folds that gained maximum average accuracy. The result is 98.69%.
10.1007/s00521-023-09346-7
covid-19 classification based on a deep learning and machine learning fusion technique using chest ct images
coronavirus disease (covid-19), caused by sars-cov-2, is one of the greatest challenges of the twenty-first century. covid-19 broke out worldwide over the last 2 years and has caused widespread illness and death. computer-aided diagnosis has become a necessary tool to prevent the spread of this virus. detecting covid-19 at an early stage is essential to reduce the mortality risk of patients. researchers seek rapid solutions based on machine learning and deep learning techniques. in this paper, we introduce a hybrid model for covid-19 detection based on machine learning and deep learning models. we used 10 different deep cnn network models to extract features from ct images. we extract features from different layers in each network and find the optimum layer that gives the best-extracted features for each cnn network. then, for classifying these features, we used five different classifiers based on machine learning. the dataset consists of 2481 ct images divided into covid-19 and non-covid-19 categories. three folds are extracted with different splits between testing and training. through experiments, we identify the best layer for each cnn network, the best network, and the best classifier. the measured performance shows the superiority of the proposed system over the literature, with the highest accuracy of 99.39%. our models are tested with the three folds that gained maximum average accuracy. the result is 98.69%.
[ "coronavirus disease", "covid-19", "sars", "the greatest challenges", "the twenty-first century", "covid-19", "the world", "the last 2 years", "many injuries", "persons", "computer-aided diagnosis", "a necessary tool", "the spreading", "this virus", "covid-19", "an early stage", "the mortality risk", "patients", "researchers", "rapid solutions", "techniques", "machine learning", "deep learning", "this paper", "we", "a hybrid model", "covid-19 detection", "machine learning", "deep learning models", "we", "10 different deep cnn network models", "features", "ct images", "we", "features", "different layers", "each network", "the optimum layer", "that", "the best-extracted features", "each cnn network", "these features", "we", "five different classifiers", "machine learning", "the dataset", "2481 ct images", "covid-19 and non-covid-19 categories", "three folds", "a different size", "testing", "training", "experiments", "we", "the best layer", "all used cnn networks", "the best network", "the best-used classifier", "the measured performance", "the superiority", "the proposed system", "the literature", "a highest accuracy", "99.39%", "our models", "the three folds", "that", "maximum average accuracy", "the result", "98.69%", "covid-19", "the twenty-first century", "covid-19", "the last 2 years", "covid-19", "covid-19", "10", "cnn", "cnn", "five", "2481", "covid-19", "non-covid-19", "three", "cnn", "99.39%", "three", "98.69%" ]
A framework for training larger networks for deep Reinforcement learning
[ "Kei Ota", "Devesh K. Jha", "Asako Kanezaki" ]
The success of deep learning in computer vision and natural language processing communities can be attributed to the training of very deep neural networks with millions or billions of parameters, which can then be trained with massive amounts of data. However, a similar trend has largely eluded the training of deep reinforcement learning (RL) algorithms where larger networks do not lead to performance improvement. Previous work has shown that this is mostly due to instability during the training of deep RL agents when using larger networks. In this paper, we make an attempt to understand and address the training of larger networks for deep RL. We first show that naively increasing network capacity does not improve performance. Then, we propose a novel method that consists of (1) wider networks with DenseNet connection, (2) decoupling representation learning from the training of RL, and (3) a distributed training method to mitigate overfitting problems. Using this three-fold technique, we show that we can train very large networks that result in significant performance gains. We present several ablation studies to demonstrate the efficacy of the proposed method and some intuitive understanding of the reasons for performance gain. We show that our proposed method outperforms other baseline algorithms on several challenging locomotion tasks.
10.1007/s10994-024-06547-6
a framework for training larger networks for deep reinforcement learning
the success of deep learning in computer vision and natural language processing communities can be attributed to the training of very deep neural networks with millions or billions of parameters, which can then be trained with massive amounts of data. however, a similar trend has largely eluded the training of deep reinforcement learning (rl) algorithms where larger networks do not lead to performance improvement. previous work has shown that this is mostly due to instability during the training of deep rl agents when using larger networks. in this paper, we make an attempt to understand and address the training of larger networks for deep rl. we first show that naively increasing network capacity does not improve performance. then, we propose a novel method that consists of (1) wider networks with densenet connection, (2) decoupling representation learning from the training of rl, and (3) a distributed training method to mitigate overfitting problems. using this three-fold technique, we show that we can train very large networks that result in significant performance gains. we present several ablation studies to demonstrate the efficacy of the proposed method and some intuitive understanding of the reasons for performance gain. we show that our proposed method outperforms other baseline algorithms on several challenging locomotion tasks.
[ "the success", "deep learning", "computer vision", "natural language processing communities", "the training", "very deep neural networks", "millions", "billions", "parameters", "which", "massive amounts", "data", "a similar trend", "the training", "deep reinforcement learning", "rl) algorithms", "larger networks", "performance improvement", "previous work", "this", "instability", "the training", "deep rl agents", "larger networks", "this paper", "we", "an attempt", "the training", "larger networks", "deep rl", "we", "naively increasing network capacity", "performance", "we", "a novel method", "that", "wider networks", "densenet connection", "representation", "the training", "rl", "overfitting problems", "this three-fold technique", "we", "we", "very large networks", "that", "significant performance gains", "we", "several ablation studies", "the efficacy", "the proposed method", "some intuitive understanding", "the reasons", "performance gain", "we", "our proposed method", "other baseline algorithms", "several challenging locomotion tasks", "millions or billions", "first", "1", "2", "3", "three-fold" ]
Deep Learning-Based Watermarking Techniques Challenges: A Review of Current and Future Trends
[ "Saoussen Ben Jabra", "Mohamed Ben Farah" ]
The digital revolution places great emphasis on digital media watermarking due to the increased vulnerability of multimedia content to unauthorized alterations. With the recent boom in data-hiding technology, research has increasingly turned to watermarking with numerous deep learning architectures, which have been applied to a variety of problems since their inception. Several watermarking approaches based on deep learning have been proposed, and they have proven more efficient than traditional methods. This paper summarizes recent developments in conventional and deep learning image and video watermarking techniques. It shows that although there are many conventional techniques for video watermarking, no deep learning models yet focus on this area; for image watermarking, however, various deep learning-based techniques are observed, whose invisibility and robustness depend on the network architecture used. The study concludes by discussing possible research directions in deep learning-based video watermarking.
10.1007/s00034-024-02651-z
deep learning-based watermarking techniques challenges: a review of current and future trends
the digital revolution places great emphasis on digital media watermarking due to the increased vulnerability of multimedia content to unauthorized alterations. with the recent boom in data-hiding technology, research has increasingly turned to watermarking with numerous deep learning architectures, which have been applied to a variety of problems since their inception. several watermarking approaches based on deep learning have been proposed, and they have proven more efficient than traditional methods. this paper summarizes recent developments in conventional and deep learning image and video watermarking techniques. it shows that although there are many conventional techniques for video watermarking, no deep learning models yet focus on this area; for image watermarking, however, various deep learning-based techniques are observed, whose invisibility and robustness depend on the network architecture used. the study concludes by discussing possible research directions in deep learning-based video watermarking.
[ "the digital revolution", "great emphasis", "digital media watermarking", "the increased vulnerability", "multimedia content", "unauthorized alterations", "the digital boom", "the technology", "hiding data", "research", "numerous architectures", "deep learning", "which", "a variety", "problems", "its inception", "several watermarking approaches", "deep learning", "they", "their efficiency", "traditional methods", "this paper", "recent developments", "conventional and deep learning image", "video watermarking techniques", "it", "many conventional techniques", "video watermarking", "any deep learning models", "this area", "image watermarking", ", different deep learning-based techniques", "efficiency", "invisibility", "robustness", "the used network architecture", "this study", "possible research directions", "deep learning-based video watermarking" ]
Advancing plant biology through deep learning-powered natural language processing
[ "Shuang Peng", "Loïc Rajjou" ]
The application of deep learning methods, specifically the utilization of Large Language Models (LLMs), in the field of plant biology holds significant promise for generating novel knowledge on plant cell systems. The LLM framework exhibits exceptional potential, particularly with the development of Protein Language Models (PLMs), allowing for in-depth analyses of nucleic acid and protein sequences. This analytical capacity facilitates the discernment of intricate patterns and relationships within biological data, encompassing multi-scale information within DNA or protein sequences. The contribution of PLMs extends beyond mere sequence patterns and structure–function recognition; it also supports advancements in genetic improvements for agriculture. The integration of deep learning approaches into the domain of plant sciences offers opportunities for major breakthroughs in basic research across multi-scale plant traits. Consequently, the strategic application of deep learning methodologies, particularly leveraging the potential of LLMs, will undoubtedly play a pivotal role in advancing plant sciences, plant production, plant uses and propelling the trajectory toward sustainable agroecological and agro-food transitions.
10.1007/s00299-024-03294-9
advancing plant biology through deep learning-powered natural language processing
the application of deep learning methods, specifically the utilization of large language models (llms), in the field of plant biology holds significant promise for generating novel knowledge on plant cell systems. the llm framework exhibits exceptional potential, particularly with the development of protein language models (plms), allowing for in-depth analyses of nucleic acid and protein sequences. this analytical capacity facilitates the discernment of intricate patterns and relationships within biological data, encompassing multi-scale information within dna or protein sequences. the contribution of plms extends beyond mere sequence patterns and structure–function recognition; it also supports advancements in genetic improvements for agriculture. the integration of deep learning approaches into the domain of plant sciences offers opportunities for major breakthroughs in basic research across multi-scale plant traits. consequently, the strategic application of deep learning methodologies, particularly leveraging the potential of llms, will undoubtedly play a pivotal role in advancing plant sciences, plant production, plant uses and propelling the trajectory toward sustainable agroecological and agro-food transitions.
[ "the application", "deep learning methods", "specifically the utilization", "large language models", "llms", "the field", "plant biology", "significant promise", "novel knowledge", "plant cell systems", "the llm framework", "exceptional potential", "the development", "protein language models", "plms", "in-depth analyses", "nucleic acid and protein sequences", "this analytical capacity", "the discernment", "intricate patterns", "relationships", "biological data", "multi-scale information", "dna", "protein sequences", "the contribution", "plms", "mere sequence patterns", "structure––function recognition", "it", "advancements", "genetic improvements", "agriculture", "the integration", "deep learning approaches", "the domain", "plant sciences", "opportunities", "major breakthroughs", "basic research", "multi-scale plant traits", "the strategic application", "deep learning methodologies", "the potential", "llms", "a pivotal role", "plant sciences", "plant production", "the trajectory", "sustainable agroecological and agro-food transitions" ]
Machine learning and deep learning techniques for detecting malicious android applications: An empirical analysis
[ "Parnika Bhat", "Sunny Behal", "Kamlesh Dutta" ]
The open system architecture of Android makes it vulnerable to a variety of cyberattacks. Cybercriminals use Android applications to intrude into the system and steal confidential data. This situation poses a threat to user privacy and the integrity of the system. This paper proposes a static analysis approach to detect malicious and benign Android applications using various machine learning and deep learning algorithms. The proposed work has been validated using a benchmarked dataset comprising 11,449 benign and malicious Android applications. The proposed approach applies a wrapper-based feature selection method to filter irrelevant features. The results clearly show that the deep learning algorithms DBN and MLP outperformed machine learning algorithms in detecting malicious Android applications.
10.1007/s43538-023-00182-w
machine learning and deep learning techniques for detecting malicious android applications: an empirical analysis
the open system architecture of android makes it vulnerable to a variety of cyberattacks. cybercriminals use android applications to intrude into the system and steal confidential data. this situation poses a threat to user privacy and the integrity of the system. this paper proposes a static analysis approach to detect malicious and benign android applications using various machine learning and deep learning algorithms. the proposed work has been validated using a benchmarked dataset comprising 11,449 benign and malicious android applications. the proposed approach applies a wrapper-based feature selection method to filter irrelevant features. the results clearly show that the deep learning algorithms dbn and mlp outperformed machine learning algorithms in detecting malicious android applications.
[ "the open system architecture", "android", "it", "a variety", "cyberattacks", "cybercriminals", "android applications", "the system", "confidential data", "this situation", "a threat", "user privacy", "integrity", "the system", "this paper", "a static analysis approach", "malicious and benign android applications", "various machine learning", "deep learning algorithms", "the proposed work", "a bench", "marked dataset", "11,449 benign and malicious android applications", "the proposed approach", "a wrapper-based feature selection method", "irrelevant features", "the results", "the deep learning algorithms", "dbn", "mlp", "algorithms", "malicious android applications", "11,449" ]
Deep reinforcement learning-based scheduling in distributed systems: a critical review
[ "Zahra Jalali Khalil Abadi", "Najme Mansouri", "Mohammad Masoud Javidi" ]
Many fields of research use parallelized and distributed computing environments, including astronomy, earth science, and bioinformatics. Due to an increase in client requests, service providers face various challenges, such as task scheduling, security, resource management, and virtual machine migration. NP-hard scheduling problems require a long time to implement an optimal or suboptimal solution due to their large solution space. With recent advances in artificial intelligence, deep reinforcement learning (DRL) can be used to solve scheduling problems. The DRL approach combines the strength of deep learning and neural networks with reinforcement learning’s feedback-based learning. This paper provides a comprehensive overview of DRL-based scheduling algorithms in distributed systems by categorizing algorithms and applications. As a result, several articles are assessed based on their main objectives, quality of service and scheduling parameters, as well as evaluation environments (i.e., simulation tools, real-world environment). The literature review indicates that algorithms based on RL, such as Q-learning, are effective for learning scaling and scheduling policies in a cloud environment. Additionally, the challenges and directions for further research on deep reinforcement learning to address scheduling problems were summarized (e.g., edge intelligence, ideal dynamic task scheduling framework, human–machine interaction, resource-hungry artificial intelligence (AI) and sustainability).
10.1007/s10115-024-02167-7
deep reinforcement learning-based scheduling in distributed systems: a critical review
many fields of research use parallelized and distributed computing environments, including astronomy, earth science, and bioinformatics. due to an increase in client requests, service providers face various challenges, such as task scheduling, security, resource management, and virtual machine migration. np-hard scheduling problems require a long time to implement an optimal or suboptimal solution due to their large solution space. with recent advances in artificial intelligence, deep reinforcement learning (drl) can be used to solve scheduling problems. the drl approach combines the strength of deep learning and neural networks with reinforcement learning’s feedback-based learning. this paper provides a comprehensive overview of drl-based scheduling algorithms in distributed systems by categorizing algorithms and applications. as a result, several articles are assessed based on their main objectives, quality of service and scheduling parameters, as well as evaluation environments (i.e., simulation tools, real-world environment). the literature review indicates that algorithms based on rl, such as q-learning, are effective for learning scaling and scheduling policies in a cloud environment. additionally, the challenges and directions for further research on deep reinforcement learning to address scheduling problems were summarized (e.g., edge intelligence, ideal dynamic task scheduling framework, human–machine interaction, resource-hungry artificial intelligence (ai) and sustainability).
[ "many fields", "research use", "astronomy", "earth science", "bioinformatics", "an increase", "client requests", "service providers", "various challenges", "task scheduling", "security", "resource management", "virtual machine migration", "np-hard scheduling problems", "a long time", "an optimal or suboptimal solution", "their large solution space", "recent advances", "artificial intelligence", "deep reinforcement learning", "drl", "scheduling problems", "the drl approach", "the strength", "deep learning", "neural networks", "reinforcement learning’s feedback-based learning", "this paper", "a comprehensive overview", "drl-based scheduling algorithms", "distributed systems", "algorithms", "applications", "a result", "several articles", "their main objectives", "quality", "service and scheduling parameters", "evaluation environments", "i.e., simulation tools", "real-world environment", "the literature review", "algorithms", "rl", "q-learning", "scaling and scheduling policies", "a cloud environment", "the challenges", "directions", "further research", "deep reinforcement learning", "scheduling problems", "edge intelligence", "ideal dynamic task scheduling framework", "human–machine interaction", "resource-hungry artificial intelligence", "ai", "sustainability" ]
Deep-WET: a deep learning-based approach for predicting DNA-binding proteins using word embedding techniques with weighted features
[ "S. M. Hasan Mahmud", "Kah Ong Michael Goh", "Md. Faruk Hosen", "Dip Nandi", "Watshara Shoombuatong" ]
DNA-binding proteins (DBPs) play a significant role in all phases of genetic processes, including DNA recombination, repair, and modification. They are often utilized in drug discovery as fundamental elements of steroids, antibiotics, and anticancer drugs. Predicting them poses the most challenging task in proteomics research. Conventional experimental methods for DBP identification are costly and sometimes biased toward prediction. Therefore, developing powerful computational methods that can accurately and rapidly identify DBPs from sequence information is an urgent need. In this study, we propose a novel deep learning-based method called Deep-WET to accurately identify DBPs from primary sequence information. In Deep-WET, we employed three powerful feature encoding schemes containing Global Vectors, Word2Vec, and fastText to encode the protein sequence. Subsequently, these three features were sequentially combined and weighted using the weights obtained from the elements learned through the differential evolution (DE) algorithm. To enhance the predictive performance of Deep-WET, we applied the SHapley Additive exPlanations approach to remove irrelevant features. Finally, the optimal feature subset was input into convolutional neural networks to construct the Deep-WET predictor. Both cross-validation and independent tests indicated that Deep-WET achieved superior predictive performance compared to conventional machine learning classifiers. In addition, in an extensive independent test, Deep-WET was effective and outperformed several state-of-the-art methods for DBP prediction, with accuracy of 78.08%, MCC of 0.559, and AUC of 0.805. This superior performance shows that Deep-WET has a tremendous predictive capacity to predict DBPs. The web server of Deep-WET and curated datasets in this study are available at https://deepwet-dna.monarcatechnical.com/.
The proposed Deep-WET is anticipated to serve the community-wide effort for large-scale identification of potential DBPs.
10.1038/s41598-024-52653-9
deep-wet: a deep learning-based approach for predicting dna-binding proteins using word embedding techniques with weighted features
dna-binding proteins (dbps) play a significant role in all phases of genetic processes, including dna recombination, repair, and modification. they are often utilized in drug discovery as fundamental elements of steroids, antibiotics, and anticancer drugs. predicting them poses the most challenging task in proteomics research. conventional experimental methods for dbp identification are costly and sometimes biased toward prediction. therefore, developing powerful computational methods that can accurately and rapidly identify dbps from sequence information is an urgent need. in this study, we propose a novel deep learning-based method called deep-wet to accurately identify dbps from primary sequence information. in deep-wet, we employed three powerful feature encoding schemes containing global vectors, word2vec, and fasttext to encode the protein sequence. subsequently, these three features were sequentially combined and weighted using the weights obtained from the elements learned through the differential evolution (de) algorithm. to enhance the predictive performance of deep-wet, we applied the shapley additive explanations approach to remove irrelevant features. finally, the optimal feature subset was input into convolutional neural networks to construct the deep-wet predictor. both cross-validation and independent tests indicated that deep-wet achieved superior predictive performance compared to conventional machine learning classifiers. in addition, in an extensive independent test, deep-wet was effective and outperformed several state-of-the-art methods for dbp prediction, with accuracy of 78.08%, mcc of 0.559, and auc of 0.805. this superior performance shows that deep-wet has a tremendous predictive capacity to predict dbps. the web server of deep-wet and curated datasets in this study are available at https://deepwet-dna.monarcatechnical.com/.
the proposed deep-wet is anticipated to serve the community-wide effort for large-scale identification of potential dbps.
[ "dna-binding proteins", "dbps", "a significant role", "all phases", "genetic processes", "dna recombination", "repair", "modification", "they", "drug discovery", "fundamental elements", "steroids", "antibiotics", "anticancer drugs", "them", "the most challenging task", "proteomics research", "conventional experimental methods", "dbp identification", "prediction", "powerful computational methods", "that", "dbps", "sequence information", "an urgent need", "this study", "we", "a novel deep learning-based method", "dbps", "primary sequence information", "we", "three powerful feature encoding schemes", "global vectors", "word2vec", "fasttext", "the protein sequence", "these three features", "the weights", "the elements", "the differential evolution", "de", "algorithm", "the predictive performance", "we", "the shapley additive explanations approach", "irrelevant features", "the optimal feature subset", "input", "convolutional neural networks", "the deep-wet predictor", "both cross-validation and independent tests", "superior predictive performance", "conventional machine learning classifiers", "addition", "extensive independent test", "the-art", "dbp prediction", "accuracy", "78.08%", "mcc", "auc", "this superior performance", "a tremendous predictive capacity", "dbps", "the web server", "datasets", "this study", "https://deepwet-dna.monarcatechnical.com/.", "the community-wide effort", "large-scale identification", "potential dbps", "three", "three", "78.08%", "0.559", "0.805" ]
DDoS attack traffic classification in SDN using deep learning
[ "Nisha Ahuja", "Debajyoti Mukhopadhyay", "Gaurav Singal" ]
Software-defined networking will be a critical component of the networking domain as it transitions from a standard networking design to an automation network. To meet the needs of the current scenario, this architecture redesign becomes mandatory. Besides, machine learning (ML) and deep learning (DL) techniques provide a significant solution in network attack detection, traffic classification, etc. The DDoS attack is still wreaking havoc. Previous work for DDoS attack detection in SDN has not yielded significant results, so the authors have used the most recent deep learning techniques to detect the attacks. In this paper, we aim to classify the network traffic into normal and malicious classes based on features in the available dataset by using various deep learning techniques. TCP, UDP, and ICMP traffic are considered normal; however, malicious traffic includes TCP Syn Attack, UDP Flood, and ICMP Flood, all of which are DDoS attack traffic. The major contribution of this paper is the identification of novel features for DDoS attack detection. Novel features are logged into a CSV file to create the dataset, and machine learning algorithms are trained on the created SDN dataset. Prior work on DDoS attack detection has either used a non-SDN dataset or not made its research data public. A novel hybrid machine learning model is utilized to perform the classification. The dataset used by the ML/DL algorithms is a collection of public datasets on DDoS attacks as well as an experimental DDoS dataset generated by us and publicly available on the Mendeley Data repository. A Python application performs the classification of traffic into one of the classes. From the various classifiers used, the accuracy score of 99.75% is achieved with Stacked Auto-Encoder Multi-layer Perceptron (SAE-MLP).
To measure the effectiveness of the SDN-DDoS dataset, the other publicly available datasets are also evaluated against the same deep learning algorithms, and traffic classification accuracy is found to be significantly higher with the SDN-DDoS dataset. The attack detection time of 216.39 s also serves as experimental evidence.
10.1007/s00779-023-01785-2
ddos attack traffic classification in sdn using deep learning
software-defined networking will be a critical component of the networking domain as it transitions from a standard networking design to an automation network. to meet the needs of the current scenario, this architecture redesign becomes mandatory. besides, machine learning (ml) and deep learning (dl) techniques provide a significant solution in network attack detection, traffic classification, etc. the ddos attack is still wreaking havoc. previous work for ddos attack detection in sdn has not yielded significant results, so the authors have used the most recent deep learning techniques to detect the attacks. in this paper, we aim to classify the network traffic into normal and malicious classes based on features in the available dataset by using various deep learning techniques. tcp, udp, and icmp traffic are considered normal; however, malicious traffic includes tcp syn attack, udp flood, and icmp flood, all of which are ddos attack traffic. the major contribution of this paper is the identification of novel features for ddos attack detection. novel features are logged into a csv file to create the dataset, and machine learning algorithms are trained on the created sdn dataset. prior work on ddos attack detection has either used a non-sdn dataset or not made its research data public. a novel hybrid machine learning model is utilized to perform the classification. the dataset used by the ml/dl algorithms is a collection of public datasets on ddos attacks as well as an experimental ddos dataset generated by us and publicly available on the mendeley data repository. a python application performs the classification of traffic into one of the classes. from the various classifiers used, the accuracy score of 99.75% is achieved with stacked auto-encoder multi-layer perceptron (sae-mlp).
to measure the effectiveness of the sdn-ddos dataset, the other publicly available datasets are also evaluated against the same deep learning algorithms, and traffic classification accuracy is found to be significantly higher with the sdn-ddos dataset. the attack detection time of 216.39 s also serves as experimental evidence.
[ "software-defined networking", "a critical component", "the networking domain", "it", "a standard networking design", "an automation network", "the needs", "the current scenario", "this architecture redesign", "machine learning", "ml", "deep learning", "(dl) techniques", "a significant solution", "network attack detection", "traffic classification", "the ddos attack", "havoc", "previous work", "ddos attack detection", "sdn", "significant results", "the author", "the most recent deep learning technique", "the attacks", "this paper", "we", "the network traffic", "normal and malicious classes", "features", "the available dataset", "various deep learning techniques", "tcp", "udp", "icmp traffic", "malicious traffic", "tcp syn attack", "udp flood", "icmp flood", "all", "which", "ddos attack traffic", "the major contribution", "this paper", "the identification", "novel features", "ddos attack detection", "novel features", "the csv file", "the dataset", "machine learning algorithms", "the created sdn dataset", "various work", "which", "ddos attack detection", "a non-sdn dataset", "the research data", "a novel hybrid machine learning model", "the classification", "the dataset", "the ml/dl algorithms", "a collection", "public datasets", "ddos attacks", "an experimental ddos dataset", "us", "the mendeley data repository", "a python application", "the classification", "traffic", "the classes", "the various classifiers", "the accuracy score", "99.75%", "stacked auto-encoder multi-layer perceptron", "sae-mlp", "the effectiveness", "the sdn-ddos dataset", "the other publicly available datasets", "the same deep learning algorithms", "traffic classification accuracy", "the sdn-ddos dataset", "the attack detection time", "216.39 s", "experimental evidence", "icmp", "mendeley", "99.75%", "216.39" ]
A Deep Learning-Based Object Representation Algorithm for Smart Retail Management
[ "Bin Liu" ]
This study underscores the vital role of object representation and detection in smart retail management systems for optimizing customer experiences and operational efficiency. The literature review reveals a preference for deep learning techniques, citing their superior accuracy compared to traditional methods. While acknowledging the challenges of achieving high accuracy and low computation costs simultaneously in deep learning-based object representation, the paper proposes a solution using the YOLOv7 framework. In order to navigate the ever-changing landscape of smart retail technologies, the study clarifies the potential scalability and flexibility of deep learning approaches. The method employs a custom dataset, and experimental results demonstrate the model’s efficacy, showcasing accurate results and enhanced performance in various experiments and analyses.
10.1007/s40031-024-01051-w
a deep learning-based object representation algorithm for smart retail management
this study underscores the vital role of object representation and detection in smart retail management systems for optimizing customer experiences and operational efficiency. the literature review reveals a preference for deep learning techniques, citing their superior accuracy compared to traditional methods. while acknowledging the challenges of achieving high accuracy and low computation costs simultaneously in deep learning-based object representation, the paper proposes a solution using the yolov7 framework. in order to navigate the ever-changing landscape of smart retail technologies, the study clarifies the potential scalability and flexibility of deep learning approaches. the method employs a custom dataset, and experimental results demonstrate the model’s efficacy, showcasing accurate results and enhanced performance in various experiments and analyses.
[ "this study", "the vital role", "object representation", "detection", "smart retail management systems", "customer experiences", "operational efficiency", "the literature review", "a preference", "deep learning techniques", "their superior accuracy", "traditional methods", "the challenges", "high accuracy", "low computation costs", "deep learning-based object representation", "the paper", "a solution", "the yolov7 framework", "order", "the ever-changing landscape", "smart retail technologies", "the study", "the potential scalability", "flexibility", "deep learning approaches", "the method", "a custom dataset", "experimental results", "the model’s efficacy", "accurate results", "enhanced performance", "various experiments", "analyses" ]
Shallow and deep learning classifiers in medical image analysis
[ "Francesco Prinzi", "Tiziana Currieri", "Salvatore Gaglio", "Salvatore Vitabile" ]
An increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians’ decision-making. Artificial intelligence encompasses much more than machine learning, which nevertheless is its most cited and used sub-branch in the last decade. Since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. This review aims to give primary educational insights on the most accessible and widely employed classifiers in the radiology field, distinguishing between “shallow” learning (i.e., traditional machine learning) algorithms, including support vector machines, random forest and XGBoost, and “deep” learning architectures including convolutional neural networks and vision transformers. In addition, the paper outlines the key steps for classifier training and highlights the differences between the most common algorithms and architectures. Although the choice of an algorithm depends on the task and dataset at hand, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements, and available computing resources. Considering the enormous interest in these innovative models and architectures, the problem of machine learning algorithm interpretability is finally discussed, providing a future perspective on trustworthy artificial intelligence. Relevance statement: The growing synergy between artificial intelligence and medicine fosters predictive models aiding physicians. Machine learning classifiers, from shallow learning to deep learning, are offering crucial insights for the development of clinical decision support systems in healthcare. Explainability is a key feature of models that leads systems toward integration into clinical practice.
Key points:
• Training a shallow classifier requires extracting disease-related features from regions of interest (e.g., radiomics).
• Deep classifiers implement automatic feature extraction and classification.
• Classifier selection is based on data and computational resource availability, the task, and explanation needs.
Graphical Abstract
10.1186/s41747-024-00428-2
shallow and deep learning classifiers in medical image analysis
an increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians’ decision-making. artificial intelligence encompasses much more than machine learning, which nevertheless is its most cited and used sub-branch in the last decade. since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. this review aims to give primary educational insights on the most accessible and widely employed classifiers in the radiology field, distinguishing between “shallow” learning (i.e., traditional machine learning) algorithms, including support vector machines, random forest and xgboost, and “deep” learning architectures including convolutional neural networks and vision transformers. in addition, the paper outlines the key steps for classifier training and highlights the differences between the most common algorithms and architectures. although the choice of an algorithm depends on the task and dataset at hand, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements, and available computing resources. considering the enormous interest in these innovative models and architectures, the problem of machine learning algorithm interpretability is finally discussed, providing a future perspective on trustworthy artificial intelligence. relevance statement: the growing synergy between artificial intelligence and medicine fosters predictive models aiding physicians. machine learning classifiers, from shallow learning to deep learning, are offering crucial insights for the development of clinical decision support systems in healthcare. explainability is a key feature of models that leads systems toward integration into clinical practice.
key points: • training a shallow classifier requires extracting disease-related features from regions of interest (e.g., radiomics). • deep classifiers implement automatic feature extraction and classification. • classifier selection is based on data and computational resource availability, the task, and explanation needs. graphical abstract
[ "an increasingly strong connection", "artificial intelligence", "medicine", "the development", "predictive models", "physicians’ decision-making", "artificial intelligence", "machine learning", "which", "sub", "the last decade", "most clinical problems", "machine learning classifiers", "it", "their main elements", "this review", "primary educational insights", "the most accessible and widely employed classifiers", "radiology field", "“shallow” learning", "(i.e., traditional machine learning", "algorithms", "support vector machines", "random forest", "xgboost", "“deep” learning architectures", "convolutional neural networks", "vision transformers", "addition", "the paper", "the key steps", "classifiers training", "the differences", "the most common algorithms", "architectures", "the choice", "an algorithm", "the task", "general guidelines", "classifier selection", "relation", "task analysis", "dataset size", "explainability requirements", "available computing resources", "the enormous interest", "these innovative models", "architectures", "the problem", "machine learning algorithms interpretability", "a future perspective", "trustworthy artificial intelligence.relevance statement", "the growing synergy", "artificial intelligence and medicine fosters predictive models", "physicians", "machine learning classifiers", "shallow learning", "deep learning", "crucial insights", "the development", "clinical decision support systems", "healthcare", "explainability", "a key feature", "models", "that", "systems", "integration", "clinical practice", "key points •", "a shallow classifier", "disease-related features", "region", "interests", "deep classifiers", "automatic feature extraction", "classification.•", "the classifier selection", "data", "computational resources availability", "task", "the last decade", "•" ]
Survey of Research on Application of Deep Learning in Modulation Recognition
[ "Yongjun Sun", "Wanting Wu" ]
Modulation recognition is an important research branch in the field of communication, which is widely used in civil and military fields. Classic methods depend on decision theory, signal features, and the choice of classifier, whereas a deep learning network can extract signal features directly from the data, and its recognition accuracy is higher than that of the classic methods. This paper summarizes the application of deep learning in modulation recognition. Firstly, the basic concepts of deep learning and the common network structures in modulation recognition are introduced. Secondly, the common signal forms and signal preprocessing technologies for input to deep learning networks are given, and the characteristics and performance of different deep learning networks are summarized and analyzed. Finally, the challenges and future research directions in this field are discussed.
10.1007/s11277-023-10826-1
survey of research on application of deep learning in modulation recognition
modulation recognition is an important research branch in the field of communication, which is widely used in civil and military fields. classic methods depend on decision theory, signal features, and the choice of classifier, whereas a deep learning network can extract signal features directly from the data, and its recognition accuracy is higher than that of the classic methods. this paper summarizes the application of deep learning in modulation recognition. firstly, the basic concepts of deep learning and the common network structures in modulation recognition are introduced. secondly, the common signal forms and signal preprocessing technologies for input to deep learning networks are given, and the characteristics and performance of different deep learning networks are summarized and analyzed. finally, the challenges and future research directions in this field are discussed.
[ "modulation recognition", "an important research branch", "the field", "communication", "which", "civil and military fields", "the classic methods", "decision theory", "classifier", "the deep learning network", "the signal feature", "the data", "its recognition accuracy", "the classic methods", "this paper", "the application", "deep learning", "modulation recognition", "the basic concept", "deep learning", "the common network structure", "modulation recognition", "the common signal forms", "signal", "preprocessing technologies", "input deep learning network", "the characteristics", "performance", "different deep learning networks", "the challenges", "future research directions", "this field", "firstly", "secondly" ]
Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs
[ "Eve Martin", "Angus G. Cook", "Shaun M. Frost", "Angus W. Turner", "Fred K. Chen", "Ian L. McAllister", "Janis M. Nolde", "Markus P. Schlaich" ]
Background/Objectives: Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may have unrealized screening potential arising from signals persisting despite training and/or ambiguous signals such as from biomarker overlap or high comorbidity. The study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using diabetic deep learning algorithms. Subjects/Methods: Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. The same 45° colour fundus photograph selected for each of the 433 participants imaged was processed by three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. Results: Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between severity of hypertensive retinopathy and misclassified diabetic retinopathy. Conclusions: The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. Observing that models trained for fewer diseases captured more incidental pathology increases confidence in signalling hypotheses aligned with using self-supervised learning to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
10.1038/s41433-024-03085-2
ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs
background/objectives: artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. hypothetically, false-positive results may have unrealized screening potential arising from signals persisting despite training and/or ambiguous signals such as from biomarker overlap or high comorbidity. the study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using diabetic deep learning algorithms. subjects/methods: patients referred for treatment-resistant hypertension were imaged at a hospital unit in perth, australia, between 2016 and 2022. the same 45° colour fundus photograph selected for each of the 433 participants imaged was processed by three deep learning algorithms. two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. results: of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. the models designed to screen for fewer diseases captured more incidental disease. all three algorithms showed a positive correlation between severity of hypertensive retinopathy and misclassified diabetic retinopathy. conclusions: the results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. observing that models trained for fewer diseases captured more incidental pathology increases confidence in signalling hypotheses aligned with using self-supervised learning to develop autonomous comprehensive screening. meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
[ "background/objectivesartificial intelligence", "ocular image analysis", "screening", "diagnosis", "it", "autonomous full-spectrum screening", "false-positive results", "screening potential", "signals", "training and/or ambiguous signals", "biomarker overlap", "high comorbidity", "the study", "the potential", "clinically useful incidental ocular biomarkers", "fundus photographs", "hypertensive adults", "diabetic deep learning algorithms.subjects/methodspatients", "treatment-resistant hypertension", "a hospital unit", "perth", "australia", "the same 45° colour fundus photograph", "each", "the 433 participants", "three deep learning algorithms", "two expert retinal specialists", "all false-positive results", "diabetic retinopathy", "non-diabetic participants.resultsof", "the 29 non-diabetic participants", "diabetic retinopathy", "28 (97%", "clinically useful retinal biomarkers", "the models", "fewer diseases", "more incidental disease", "all three algorithms", "a positive correlation", "severity", "hypertensive retinopathy", "retinopathy.conclusionsthe results", "diabetic deep learning models", "hypertensive and other clinically useful retinal biomarkers", "an at-risk, hypertensive cohort", "that models", "fewer diseases", "more incidental pathology", "confidence", "hypotheses", "self-supervised learning", "autonomous comprehensive screening", "non-referable and false-positive outputs", "other deep learning screening models", "immediate clinical use", "other populations", "perth", "australia", "between 2016 and 2022", "the same 45", "433", "three", "two", "29", "28", "97%", "three" ]
Deep residual learning with Anscombe transformation for low-dose digital tomosynthesis
[ "Youngjin Lee", "Seungwan Lee", "Chanrok Park" ]
Deep learning-based convolutional neural networks (CNNs) have been proposed for enhancing the quality of digital tomosynthesis (DTS) images. However, the direct applications of conventional CNNs to low-dose DTS imaging are limited in providing acceptable image quality due to the inaccurate recognition of complex texture patterns. In this study, a deep residual learning network combined with the Anscombe transformation was proposed for simplifying the complex texture and restoring low-dose DTS image quality. The proposed network consisted of convolution layers, max-pooling layers, up-sampling layers, and skip connections. The network training was performed to learn the residual images between the ground-truth and low-dose projections, which were converted using the Anscombe transformation. As a result, the proposed network enhanced the quantitative accuracy and noise characteristics of DTS images by 1.01–1.27 and 1.14–1.71 times, respectively, in comparison to low-dose DTS images and other deep learning networks. The spatial resolution of the DTS image restored using the proposed network was 1.12 times higher than that obtained using a deep image learning network. In conclusion, the proposed network can restore low-dose DTS image quality and provide an optimal model for low-dose DTS imaging.
10.1007/s40042-024-01117-4
deep residual learning with anscombe transformation for low-dose digital tomosynthesis
deep learning-based convolutional neural networks (cnns) have been proposed for enhancing the quality of digital tomosynthesis (dts) images. however, the direct applications of conventional cnns to low-dose dts imaging are limited in providing acceptable image quality due to the inaccurate recognition of complex texture patterns. in this study, a deep residual learning network combined with the anscombe transformation was proposed for simplifying the complex texture and restoring low-dose dts image quality. the proposed network consisted of convolution layers, max-pooling layers, up-sampling layers, and skip connections. the network training was performed to learn the residual images between the ground-truth and low-dose projections, which were converted using the anscombe transformation. as a result, the proposed network enhanced the quantitative accuracy and noise characteristics of dts images by 1.01–1.27 and 1.14–1.71 times, respectively, in comparison to low-dose dts images and other deep learning networks. the spatial resolution of the dts image restored using the proposed network was 1.12 times higher than that obtained using a deep image learning network. in conclusion, the proposed network can restore the low-dose dts image quality and provide an optimal model for low-dose dts imaging.
[ "deep learning-based convolutional neural networks", "cnns", "the quality", "digital tomosynthesis (dts) images", "the direct applications", "the conventional cnns", "low-dose dts imaging", "acceptable image quality", "the inaccurate recognition", "complex texture patterns", "this study", "a deep residual learning network", "the anscombe transformation", "the complex texture", "the low-dose dts image quality", "the proposed network", "convolution layers", "max-pooling layers", "up-sampling layers", "skip connections", "the network training", "the residual images", "the ground-truth and low-dose projections", "which", "the anscombe transformation", "a result", "the proposed network", "the quantitative accuracy", "dts images", "1.01–1.27 and 1.14–1.71 times", "comparison", "low-dose dts images", "other deep learning networks", "the spatial resolution", "the dts image", "the proposed network", "that", "a deep image learning network", "conclusion", "the proposed network", "the low-dose dts image quality", "an optimal model", "low-dose dts imaging", "max", "1.01–1.27", "1.14–1.71", "1.12" ]
Colon cancer diagnosis by means of explainable deep learning
[ "Marcello Di Giammarco", "Fabio Martinelli", "Antonella Santone", "Mario Cesarelli", "Francesco Mercaldo" ]
Early detection of the adenocarcinoma cancer in colon tissue by means of explainable deep learning, by classifying histological images and providing visual explainability on model prediction. Considering that in recent years deep learning techniques have emerged as powerful techniques in medical image analysis, offering unprecedented accuracy and efficiency, in this paper we propose a method to automatically detect the presence of cancerous cells in colon tissue images. Various deep learning architectures are considered, with the aim of selecting the best one in terms of quantitative and qualitative results. As a matter of fact, we consider qualitative results by taking into account the so-called prediction explainability, by providing a way to highlight on the tissue images the areas that, from the model's point of view, are related to the presence of colon cancer. The experimental analysis, performed on 10,000 colon tissue images, showed the effectiveness of the proposed method by obtaining an accuracy equal to 0.99. The experimental analysis shows that the proposed method can be successfully exploited for colon cancer detection and localisation from tissue images.
10.1038/s41598-024-63659-8
colon cancer diagnosis by means of explainable deep learning
early detection of the adenocarcinoma cancer in colon tissue by means of explainable deep learning, by classifying histological images and providing visual explainability on model prediction. considering that in recent years deep learning techniques have emerged as powerful techniques in medical image analysis, offering unprecedented accuracy and efficiency, in this paper we propose a method to automatically detect the presence of cancerous cells in colon tissue images. various deep learning architectures are considered, with the aim of selecting the best one in terms of quantitative and qualitative results. as a matter of fact, we consider qualitative results by taking into account the so-called prediction explainability, by providing a way to highlight on the tissue images the areas that, from the model's point of view, are related to the presence of colon cancer. the experimental analysis, performed on 10,000 colon tissue images, showed the effectiveness of the proposed method by obtaining an accuracy equal to 0.99. the experimental analysis shows that the proposed method can be successfully exploited for colon cancer detection and localisation from tissue images.
[ "early detection", "the adenocarcinoma cancer", "colon tissue", "means", "explainable deep learning", "histological images", "visual explainability", "model prediction", "recent years", "deep learning techniques", "powerful techniques", "medical image analysis", "unprecedented accuracy", "efficiency", "this paper", "we", "a method", "the presence", "cancerous cells", "colon tissue images", "various deep learning architectures", "the aim", "terms", "quantitative and qualitative results", "a matter", "fact", "we", "qualitative results", "account", "the so-called prediction explainability", "a way", "the tissue", "the areas", "the model point", "view", "the presence", "colon cancer", "the experimental analysis", "10,000 colon tissue images", "the effectiveness", "the proposed method", "an accuracy", "the experimental analysis", "the proposed method", "colon cancer detection", "localisation", "tissue images", "recent years", "10,000", "0.99" ]
Deep learning based multiclass classification for citrus anomaly detection in agriculture
[ "Ebru Ergün" ]
In regions where citrus crops are threatened by diseases caused by fungi, bacteria, pests and viruses, growers are actively seeking automated technologies that can accurately detect citrus anomalies to minimize economic losses. Recent advances in deep learning techniques have shown potential in automating and improving the accuracy of citrus anomaly categorization. This research explores the use of deep learning methods, specifically DenseNet, to construct robust models capable of accurately distinguishing between different types of citrus anomalies. The dataset, consisting of high-resolution images of different orange leaves of the species Citrus sinensis Osbeck collected from orange groves in the states of Tamaulipas and San Luis Potosi in northeastern Mexico, was used in the study. Experimental results demonstrated the effectiveness of the proposed deep learning models in simultaneously identifying 12 different classes of citrus anomalies. Evaluation metrics, including accuracy, recall, precision and the confusion matrix, underscore the discriminative power of the models. Among the convolutional neural network architectures used, DenseNet achieved the highest classification accuracy at 99.50%. The study concluded by highlighting the potential for scalable and effective citrus anomaly classification and management using deep learning-based systems.
10.1007/s11760-024-03452-2
deep learning based multiclass classification for citrus anomaly detection in agriculture
in regions where citrus crops are threatened by diseases caused by fungi, bacteria, pests and viruses, growers are actively seeking automated technologies that can accurately detect citrus anomalies to minimize economic losses. recent advances in deep learning techniques have shown potential in automating and improving the accuracy of citrus anomaly categorization. this research explores the use of deep learning methods, specifically densenet, to construct robust models capable of accurately distinguishing between different types of citrus anomalies. the dataset, consisting of high-resolution images of different orange leaves of the species citrus sinensis osbeck collected from orange groves in the states of tamaulipas and san luis potosi in northeastern mexico, was used in the study. experimental results demonstrated the effectiveness of the proposed deep learning models in simultaneously identifying 12 different classes of citrus anomalies. evaluation metrics, including accuracy, recall, precision and the confusion matrix, underscore the discriminative power of the models. among the convolutional neural network architectures used, densenet achieved the highest classification accuracy at 99.50%. the study concluded by highlighting the potential for scalable and effective citrus anomaly classification and management using deep learning-based systems.
[ "regions", "citrus crops", "diseases", "fungi", "bacteria", "pests", "viruses", "growers", "automated technologies", "that", "citrus anomalies", "economic losses", "recent advances", "deep learning techniques", "potential", "the accuracy", "citrus anomaly categorization", "this research", "the use", "deep learning methods", "specifically densenet", "robust models", "different types", "citrus anomalies", "the dataset", "high-resolution images", "different orange leaves", "the species citrus sinensis osbeck", "orange groves", "the states", "tamaulipas", "san luis potosi", "northeastern mexico", "study", "experimental results", "the effectiveness", "the proposed deep learning models", "12 different classes", "citrus anomalies", "evaluation metrics", "accuracy", "recall", "precision", "the confusion matrix", "the discriminative power", "the models", "the convolutional neural network architectures", "densenet", "the highest classification accuracy", "99.50%", "the study", "the potential", "scalable and effective citrus anomaly classification", "management", "deep learning-based systems", "san luis", "mexico", "12", "99.50%" ]
Ensemble of deep learning and machine learning approach for classification of handwritten Hindi numerals
[ "Danveer Rajpal", "Akhil Ranjan Garg" ]
Given the vast range of factors, including shape, size, skew, and orientation of handwritten numerals, their machine-based recognition is a difficult challenge for researchers in the pattern recognition field. Due to the abundance of curves and resembling shapes of the symbols, the recognition of Devnagari numerals can elevate the difficulty level of the recognition. The suggested low-classification-cost method for obtaining fine features from given numeral images used benchmark deep learning models, VGG-16Net, VGG-19Net, ResNet-50, and Inception-v3, to address these issues. Principal component analysis, a powerful dimensionality reduction method, was used to efficiently reduce the number of dimensions in the information that pre-trained deep convolutional neural network models provided. The method for improving recognition accuracy by fusing features was provided in the scheme. A machine learning algorithm, the support vector machine, was employed for the recognition task due to its capacity to distinguish between patterns belonging to distinct classes. The system was able to obtain a recognition accuracy of 99.72% and was effective in demonstrating the importance of ensemble machine learning and deep learning approaches.
10.1186/s44147-023-00252-2
ensemble of deep learning and machine learning approach for classification of handwritten hindi numerals
given the vast range of factors, including shape, size, skew, and orientation of handwritten numerals, their machine-based recognition is a difficult challenge for researchers in the pattern recognition field. due to the abundance of curves and resembling shapes of the symbols, the recognition of devnagari numerals can elevate the difficulty level of the recognition. the suggested low-classification-cost method for obtaining fine features from given numeral images used benchmark deep learning models, vgg-16net, vgg-19net, resnet-50, and inception-v3, to address these issues. principal component analysis, a powerful dimensionality reduction method, was used to efficiently reduce the number of dimensions in the information that pre-trained deep convolutional neural network models provided. the method for improving recognition accuracy by fusing features was provided in the scheme. a machine learning algorithm, the support vector machine, was employed for the recognition task due to its capacity to distinguish between patterns belonging to distinct classes. the system was able to obtain a recognition accuracy of 99.72% and was effective in demonstrating the importance of ensemble machine learning and deep learning approaches.
[ "the vast range", "factors", "shape", "size", "skew", "orientation", "handwritten numerals", "their machine-based recognition", "a difficult challenge", "researchers", "the pattern recognition field", "the abundance", "curves", "shapes", "the symbols", "the recognition", "devnagari numerals", "the difficulty level", "the recognition", "the suggested low-classification-cost method", "fine features", "numeral images", "benchmark deep learning models", "vgg-16net", "vgg-19net", "resnet-50", "inception-v3", "these issues", "principal component analysis", "a powerful dimensionality reduction method", "the number", "dimensions", "the information", "that", "pre-trained deep convolutional neural network models", "the method", "recognition accuracy", "fusing features", "the scheme", "a machine learning", "algorithm", "support vector machine", "the recognition task", "its capacity", "patterns", "distinct classes", "the system", "a recognition accuracy", "99.72%", "the importance", "ensemble machine learning", "deep learning approaches", "resnet-50", "99.72%" ]
Strengthening KMS Security with Advanced Cryptography, Machine Learning, Deep Learning, and IoT Technologies
[ "Justin Onyarin Ogala", "Shahnawaz Ahmad", "Iman Shakeel", "Javed Ahmad", "Shabana Mehfuz" ]
This paper presents an innovative approach to strengthening Key Management Systems (KMS) against the escalating landscape of cyber threats by integrating advanced cryptographic technologies, machine learning, deep learning, and the Internet of Things (IoT). As digital reliance and cyber-attacks surge, strengthening KMS security becomes paramount. Our research provides a comprehensive overview of the state-of-the-art in cloud data security, identifying key vulnerabilities in existing KMS. The paper also outlines a distinctive framework based on the combined application of advanced cryptography, machine learning, deep learning, and IoT, which represents a novel approach in the quest for robust KMS security. Our experimental results substantiate the efficacy of this unique blend of technologies, providing solid empirical evidence that such a fusion can successfully strengthen KMS against potential threats. As technologies and threat landscapes continue to evolve, our framework can serve as a benchmark for future research and practical implementations. It highlights the potential of integrated technological solutions to counter complex cybersecurity issues. Moreover, the approach we've developed can be adapted and expanded to cater to the specific needs of different sectors, such as finance, healthcare, and e-commerce, which are particularly vulnerable to cyber threats. The novelty of our work lies in the amalgamation of the four technologies and the creation of an empirically backed, robust framework, marking a significant stride in KMS security.
10.1007/s42979-023-02073-9
strengthening kms security with advanced cryptography, machine learning, deep learning, and iot technologies
this paper presents an innovative approach to strengthening key management systems (kms) against the escalating landscape of cyber threats by integrating advanced cryptographic technologies, machine learning, deep learning, and the internet of things (iot). as digital reliance and cyber-attacks surge, strengthening kms security becomes paramount. our research provides a comprehensive overview of the state-of-the-art in cloud data security, identifying key vulnerabilities in existing kms. the paper also outlines a distinctive framework based on the combined application of advanced cryptography, machine learning, deep learning, and iot, which represents a novel approach in the quest for robust kms security. our experimental results substantiate the efficacy of this unique blend of technologies, providing solid empirical evidence that such a fusion can successfully strengthen kms against potential threats. as technologies and threat landscapes continue to evolve, our framework can serve as a benchmark for future research and practical implementations. it highlights the potential of integrated technological solutions to counter complex cybersecurity issues. moreover, the approach we've developed can be adapted and expanded to cater to the specific needs of different sectors, such as finance, healthcare, and e-commerce, which are particularly vulnerable to cyber threats. the novelty of our work lies in the amalgamation of the four technologies and the creation of an empirically backed, robust framework, marking a significant stride in kms security.
[ "this paper", "an innovative approach", "key management systems", "kms", "the escalating landscape", "cyber threats", "advanced cryptographic technologies", "machine learning", "deep learning", "the internet", "things", "iot", "digital reliance", "cyber-attacks", "kms security", "our research", "a comprehensive overview", "the state", "the-art", "cloud data security", "key vulnerabilities", "existing kms", "the paper", "a distinctive framework", "the combined application", "advanced cryptography", "machine learning", "deep learning", "iot", "which", "a novel approach", "the quest", "robust kms security", "our experimental results", "the efficacy", "this unique blend", "technologies", "solid empirical evidence", "such a fusion", "kms", "potential threats", "technologies", "threat landscapes", "our framework", "a benchmark", "future research and practical implementations", "it", "the potential", "integrated technological solutions", "complex cybersecurity issues", "the approach", "we", "the specific needs", "different sectors", "finance", "healthcare", "e", "-", "commerce", "which", "cyber threats", "the novelty", "our work", "the amalgamation", "the four technologies", "the creation", "an empirically backed, robust framework", "a significant stride", "kms security", "four" ]
Skin cancer detection using ensemble of machine learning and deep learning techniques
[ "Jitendra V. Tembhurne", "Nachiketa Hebbar", "Hemprasad Y. Patil", "Tausif Diwan" ]
Skin cancer is one of the most common forms of cancer, which makes it pertinent to be able to diagnose it accurately. In particular, melanoma is a form of skin cancer that is fatal and accounts for 6 of every 7 skin cancer-related deaths. Moreover, in hospitals where dermatologists have to diagnose multiple cases of skin cancer, there are high possibilities of false negatives in diagnosis. To avoid such incidents, there has been exhaustive research conducted by the research community all over the world to build highly accurate automated tools for skin cancer detection. In this paper, we introduce a novel approach of combining machine learning and deep learning techniques to solve the problem of skin cancer detection. The deep learning model uses state-of-the-art neural networks to extract features from images, whereas the machine learning model processes image features which are obtained after performing techniques such as the Contourlet Transform and the Local Binary Pattern Histogram. Meaningful feature extraction is crucial for any image classification problem. As a result, by combining the manual and automated features, our designed model achieves a higher accuracy of 93% with individual recall scores of 99.7% and 86% for the benign and malignant forms of cancer, respectively. We benchmarked the model on the publicly available Kaggle dataset containing processed images from the ISIC Archive dataset. The proposed ensemble outperforms both expert dermatologists and other state-of-the-art deep learning and machine learning methods. Thus, this novel method can be of high assistance to dermatologists to help prevent any misdiagnosis.
10.1007/s11042-023-14697-3
skin cancer detection using ensemble of machine learning and deep learning techniques
skin cancer is one of the most common forms of cancer, which makes it pertinent to be able to diagnose it accurately. in particular, melanoma is a form of skin cancer that is fatal and accounts for 6 of every 7 skin cancer-related deaths. moreover, in hospitals where dermatologists have to diagnose multiple cases of skin cancer, there are high possibilities of false negatives in diagnosis. to avoid such incidents, there has been exhaustive research conducted by the research community all over the world to build highly accurate automated tools for skin cancer detection. in this paper, we introduce a novel approach of combining machine learning and deep learning techniques to solve the problem of skin cancer detection. the deep learning model uses state-of-the-art neural networks to extract features from images, whereas the machine learning model processes image features which are obtained after performing techniques such as the contourlet transform and the local binary pattern histogram. meaningful feature extraction is crucial for any image classification problem. as a result, by combining the manual and automated features, our designed model achieves a higher accuracy of 93% with individual recall scores of 99.7% and 86% for the benign and malignant forms of cancer, respectively. we benchmarked the model on the publicly available kaggle dataset containing processed images from the isic archive dataset. the proposed ensemble outperforms both expert dermatologists and other state-of-the-art deep learning and machine learning methods. thus, this novel method can be of high assistance to dermatologists to help prevent any misdiagnosis.
[ "skin cancer", "the most common forms", "cancer", "which", "it", "it", "melanoma", "a form", "skin cancer", "that", "every 7 skin cancer-related deaths", "hospitals", "dermatologists", "multiple cases", "skin cancer", "high possibilities", "false negatives", "diagnosis", "such incidents", "exhaustive research", "the research community", "the world", "highly accurate automated tools", "skin cancer detection", "this paper", "we", "a novel approach", "machine learning", "deep learning techniques", "the problem", "skin cancer detection", "the deep learning model", "the-art", "features", "images", "the machine learning model", "image features", "which", "the techniques", "contourlet transform", "local binary pattern histogram", "meaningful feature extraction", "any image classification problem", "a result", "the manual and automated features", "our designed model", "a higher accuracy", "93%", "an individual recall score", "99.7%", "86%", "the benign and malignant forms", "cancer", "we", "the model", "publicly available kaggle dataset", "processed images", "isic archive dataset", "the proposed ensemble outperforms", "both expert dermatologists", "the-art", "this novel method", "high assistance", "dermatologists", "any misdiagnosis", "one", "6", "7", "93%", "99.7% and", "86%" ]