title | authors | abstract | doi | cleaned_title | cleaned_abstract | key_phrases
---|---|---|---|---|---|---
Deep-learning based sleep apnea detection using sleep sound, SpO2, and pulse rate | [
"Chutinan Singtothong",
"Thitirat Siriborvornratanakul"
] | Sleep apnea, a common sleep disorder where breathing is repeatedly interrupted during sleep, poses significant health risks. Traditional diagnostic methods like overnight polysomnography (PSG) are complex and costly, limiting widespread screening. This study suggests a deep learning model to identify sleep apnea using sleep sounds, oxygen saturation (SpO2), and pulse rate. Mel-spectrogram, computed from PSG data of 24 patients, serves as input. The study shows the effectiveness of deep learning, with the combined model achieving 96% accuracy in inferring apnea severity, outperforming individual models using SpO2 and pulse rate (79%) and sleep sound (83%). | 10.1007/s41870-024-01906-x | deep-learning based sleep apnea detection using sleep sound, spo2, and pulse rate | sleep apnea, a common sleep disorder where breathing is repeatedly interrupted during sleep, poses significant health risks. traditional diagnostic methods like overnight polysomnography (psg) are complex and costly, limiting widespread screening. this study suggests a deep learning model to identify sleep apnea using sleep sounds, oxygen saturation (spo2), and pulse rate. mel-spectrogram, computed from psg data of 24 patients, serves as input. the study shows the effectiveness of deep learning, with the combined model achieving 96% accuracy in inferring apnea severity, outperforming individual models using spo2 and pulse rate (79%) and sleep sound (83%). | [
"sleep apnea",
"a common sleep disorder",
"breathing",
"sleep",
"significant health risks",
"traditional diagnostic methods",
"overnight polysomnography",
"psg",
"widespread screening",
"this study",
"a deep learning model",
"sleep apnea",
"sleep sounds",
"oxygen saturation",
"spo2",
"pulse rate",
"mel-spectrogram",
"psg data",
"24 patients",
"input",
"the study",
"the effectiveness",
"deep learning",
"the combined model",
"96% accuracy",
"apnea severity",
"outperforming individual models",
"spo2",
"pulse rate",
"79%",
"sleep sound",
"83%",
"overnight",
"mel-spectrogram",
"24",
"96%",
"spo2",
"79%",
"83%"
] |
Deep learning automatically assesses 2-µm laser-induced skin damage OCT images | [
"Changke Wang",
"Qiong Ma",
"Yu Wei",
"Qi Liu",
"Yuqing Wang",
"Chenliang Xu",
"Caihui Li",
"Qingyu Cai",
"Haiyang Sun",
"Xiaoan Tang",
"Hongxiang Kang"
] | The present study proposed a noninvasive, automated, in vivo assessment method based on optical coherence tomography (OCT) and deep learning techniques to qualitatively and quantitatively analyze the biological effects of 2-µm laser-induced skin damage at different irradiation doses. Different doses of 2-µm laser irradiation established a mouse skin damage model, after which the skin-damaged tissues were imaged non-invasively in vivo using OCT. The acquired images were preprocessed to construct the dataset required for deep learning. The deep learning models used were U-Net, DeepLabV3+, PSP-Net, and HR-Net, and the trained models were used to segment the damage images and further quantify the damage volume of mouse skin under different irradiation doses. The comparison of the qualitative and quantitative results of the four network models showed that HR-Net had the best performance, the highest agreement between the segmentation results and real values, and the smallest error in the quantitative assessment of the damage volume. Based on HR-Net to segment the damage image and quantify the damage volume, the irradiation doses 5.41, 9.55, 13.05, 20.85, 32.71, 52.92, 76.71, and 97.24 J/cm² corresponded to a damage volume of 4.58, 12.56, 16.74, 20.88, 24.52, 30.75, 34.13, and 37.32 mm³. The damage volume increased in a radiation dose-dependent manner. | 10.1007/s10103-024-04053-8 | deep learning automatically assesses 2-µm laser-induced skin damage oct images | the present study proposed a noninvasive, automated, in vivo assessment method based on optical coherence tomography (oct) and deep learning techniques to qualitatively and quantitatively analyze the biological effects of 2-µm laser-induced skin damage at different irradiation doses. different doses of 2-µm laser irradiation established a mouse skin damage model, after which the skin-damaged tissues were imaged non-invasively in vivo using oct. the acquired images were preprocessed to construct the dataset required for deep learning. the deep learning models used were u-net, deeplabv3+, psp-net, and hr-net, and the trained models were used to segment the damage images and further quantify the damage volume of mouse skin under different irradiation doses. the comparison of the qualitative and quantitative results of the four network models showed that hr-net had the best performance, the highest agreement between the segmentation results and real values, and the smallest error in the quantitative assessment of the damage volume. based on hr-net to segment the damage image and quantify the damage volume, the irradiation doses 5.41, 9.55, 13.05, 20.85, 32.71, 52.92, 76.71, and 97.24 j/cm² corresponded to a damage volume of 4.58, 12.56, 16.74, 20.88, 24.52, 30.75, 34.13, and 37.32 mm³. the damage volume increased in a radiation dose-dependent manner. | [
"the present study",
"vivo assessment method",
"optical coherence tomography",
"oct",
"techniques",
"the biological effects",
"2-µm laser-induced skin damage",
"different irradiation doses",
"different doses",
"2-µm laser irradiation",
"a mouse skin damage model",
"which",
"the skin-damaged tissues",
"vivo",
"oct",
"the acquired images",
"the dataset",
"deep learning",
"the deep learning models",
"u",
"-",
"net",
"psp-net",
"hr-net",
"the trained models",
"the damage images",
"the damage volume",
"mouse skin",
"different irradiation doses",
"the comparison",
"the qualitative and quantitative results",
"the four network models",
"hr-net",
"the best performance",
"the highest agreement",
"the segmentation results",
"real values",
"the smallest error",
"the quantitative assessment",
"the damage volume",
"hr-net",
"the damage image",
"the damage volume",
"the irradiation",
"97.24 j/cm²",
"a damage volume",
"37.32 mm³.",
"the damage volume",
"a radiation dose-dependent manner",
"2-µm",
"2-µm",
"oct",
"four",
"5.41",
"9.55",
"13.05",
"20.85",
"32.71",
"52.92",
"76.71",
"97.24",
"4.58",
"12.56",
"16.74",
"20.88",
"24.52",
"30.75",
"34.13",
"37.32"
] |
A hybrid deep model with cumulative learning for few-shot learning | [
"Jiehao Liu",
"Zhao Yang",
"Liufei Luo",
"Mingkai Luo",
"Luyu Hu",
"Jiahao Li"
] | Few-shot learning (FSL) aims to recognize unseen classes with only a few samples for each class. This challenging research endeavors to narrow the gap between the computer vision technology and the human visual system. Recently, mainstream approaches for FSL can be grouped into meta-learning and classification learning. These two methods train the FSL model from local and global classification viewpoints respectively. In our work, we find the former method can effectively learn transferable knowledge (generalization capacity) with an episodic training paradigm but encounters the problem of slow convergence. The latter method can build an essential classification ability quickly (classification capacity) with a mini-batch training paradigm but easily causes an over-fitting problem. In light of this issue, we propose a hybrid deep model with cumulative learning to tackle the FSL problem by absorbing the advantages of the both methods. The proposed hybrid deep model innovatively integrates meta-learning and classification learning (IMC) in a unified two-branch network framework in which a meta-learning branch and a classification learning branch can work simultaneously. Besides, by considering the different characteristics of the two branches, we propose a cumulative learning strategy to take care of both generalization capacity learning and classification capacity learning in our IMC model training. With the proposed method, the model can quickly build the basic classification capability at the initial stage and continually mine discriminative class information during the remaining training for better generalization. Extensive experiments on CIFAR-FS, FC100, mini-ImageNet and tiered-ImageNet datasets are implemented to demonstrate the promising performance of our method. | 10.1007/s11042-022-14218-8 | a hybrid deep model with cumulative learning for few-shot learning | few-shot learning (fsl) aims to recognize unseen classes with only a few samples for each class. this challenging research endeavors to narrow the gap between the computer vision technology and the human visual system. recently, mainstream approaches for fsl can be grouped into meta-learning and classification learning. these two methods train the fsl model from local and global classification viewpoints respectively. in our work, we find the former method can effectively learn transferable knowledge (generalization capacity) with an episodic training paradigm but encounters the problem of slow convergence. the latter method can build an essential classification ability quickly (classification capacity) with a mini-batch training paradigm but easily causes an over-fitting problem. in light of this issue, we propose a hybrid deep model with cumulative learning to tackle the fsl problem by absorbing the advantages of the both methods. the proposed hybrid deep model innovatively integrates meta-learning and classification learning (imc) in a unified two-branch network framework in which a meta-learning branch and a classification learning branch can work simultaneously. besides, by considering the different characteristics of the two branches, we propose a cumulative learning strategy to take care of both generalization capacity learning and classification capacity learning in our imc model training. with the proposed method, the model can quickly build the basic classification capability at the initial stage and continually mine discriminative class information during the remaining training for better generalization. 
extensive experiments on cifar-fs, fc100, mini-imagenet and tiered-imagenet datasets are implemented to demonstrate the promising performance of our method. | [
"few-shot learning",
"unseen classes",
"only a few samples",
"each class",
"this challenging research endeavors",
"the gap",
"the computer vision technology",
"the human visual system",
"mainstream approaches",
"fsl",
"meta-learning and classification learning",
"these two methods",
"the fsl model",
"local and global classification viewpoints",
"our work",
"we",
"the former method",
"transferable knowledge",
"generalization capacity",
"an episodic training paradigm",
"the problem",
"slow convergence",
"the latter method",
"an essential classification ability",
"classification capacity",
"a mini-batch training paradigm",
"an over-fitting problem",
"light",
"this issue",
"we",
"a hybrid deep model",
"cumulative learning",
"the fsl problem",
"the advantages",
"the both methods",
"the proposed hybrid deep model",
"meta-learning and classification learning",
"imc",
"a unified two-branch network framework",
"which",
"a meta-learning branch",
"a classification learning branch",
"the different characteristics",
"the two branches",
"we",
"a cumulative learning strategy",
"care",
"both generalization capacity learning and classification capacity",
"our imc model training",
"the proposed method",
"the model",
"the basic classification capability",
"the initial stage",
"continually mine discriminative class information",
"the remaining training",
"better generalization",
"extensive experiments",
"cifar-fs, fc100, mini",
"-",
"imagenet",
"tiered-imagenet datasets",
"the promising performance",
"our method",
"two",
"two",
"two"
] |
Cytopathic Effect Detection and Clonal Selection using Deep Learning | [
"Yu Yuan",
"Tony Wang",
"Jordan Sims",
"Kim Le",
"Cenk Undey",
"Erdal Oruklu"
] | Purpose:In biotechnology, microscopic cell imaging is often used to identify and analyze cell morphology and cell state for a variety of applications. For example, microscopy can be used to detect the presence of cytopathic effects (CPE) in cell culture samples to determine virus contamination. Another application of microscopy is to verify clonality during cell line development. Conventionally, inspection of these microscopy images is performed manually by human analysts. This is both tedious and time consuming. In this paper, we propose using supervised deep learning algorithms to automate the cell detection processes mentioned above.Methods:The proposed algorithms utilize image processing techniques and convolutional neural networks (CNN) to detect the presence of CPE and to verify the clonality in cell line development.Results:We train and test the algorithms on image data which have been collected and labeled by domain experts. Our experiments have shown promising results in terms of both accuracy and speed.Conclusion:Deep learning algorithms achieve high accuracy (more than 95%) on both CPE detection and clonal selection applications, resulting in a highly efficient and cost-effective automation process. | 10.1007/s11095-024-03749-4 | cytopathic effect detection and clonal selection using deep learning | purpose:in biotechnology, microscopic cell imaging is often used to identify and analyze cell morphology and cell state for a variety of applications. for example, microscopy can be used to detect the presence of cytopathic effects (cpe) in cell culture samples to determine virus contamination. another application of microscopy is to verify clonality during cell line development. conventionally, inspection of these microscopy images is performed manually by human analysts. this is both tedious and time consuming. in this paper, we propose using supervised deep learning algorithms to automate the cell detection processes mentioned above.methods:the proposed algorithms utilize image processing techniques and convolutional neural networks (cnn) to detect the presence of cpe and to verify the clonality in cell line development.results:we train and test the algorithms on image data which have been collected and labeled by domain experts. our experiments have shown promising results in terms of both accuracy and speed.conclusion:deep learning algorithms achieve high accuracy (more than 95%) on both cpe detection and clonal selection applications, resulting in a highly efficient and cost-effective automation process. | [
"purpose",
"biotechnology",
"microscopic cell imaging",
"cell morphology",
"cell state",
"a variety",
"applications",
"example",
"the presence",
"cytopathic effects",
"cell culture samples",
"virus contamination",
"another application",
"microscopy",
"clonality",
"cell line development",
"inspection",
"these microscopy images",
"human analysts",
"this",
"this paper",
"we",
"supervised deep learning algorithms",
"the cell detection processes",
"above.methods",
"the proposed algorithms",
"image processing techniques",
"convolutional neural networks",
"cnn",
"the presence",
"cpe",
"the clonality",
"cell line development.results",
"we",
"the algorithms",
"image data",
"which",
"domain experts",
"our experiments",
"promising results",
"terms",
"both accuracy",
"speed.conclusion",
"deep learning algorithms",
"high accuracy",
"(more than 95%",
"both cpe detection and clonal selection applications",
"a highly efficient and cost-effective automation process",
"cnn",
"more than 95%"
] |
A secured deep learning based smart home automation system | [
"Chitukula Sanjay",
"Konda Jahnavi",
"Shyam Karanth"
] | With the expansion of modern technologies and the Internet of Things (IoT), the concept of smart homes has gained tremendous popularity with a view to making people’s lives easier by ensuring a secured environment. Several home automation systems have been developed to report suspicious activities by capturing the movements of residents. However, these systems are associated with challenges such as weak security, lack of interoperability and integration with IoT devices, timely reporting of suspicious movements, etc. Therefore, the given paper proposes a novel smart home automation framework for controlling home appliances by integrating with sensors, IoT devices, and microcontrollers, which would in turn monitor the movements and send notifications about suspicious movements on the resident’s smartphone. The proposed framework makes use of convolutional neural networks (CNNs) for motion detection and classification based on pre-processing of images. The images related to the movements of residents are captured by a spy camera installed in the system. It helps in identification of outsiders based on differentiation of motion patterns. The performance of the framework is compared with existing deep learning models used in recent studies based on evaluation metrics such as accuracy (%), precision (%), recall (%), and f-1 measure (%). The results show that the proposed framework attains the highest accuracy (98.67%), thereby surpassing the existing deep learning models used in smart home automation systems. | 10.1007/s41870-024-02097-1 | a secured deep learning based smart home automation system | with the expansion of modern technologies and the internet of things (iot), the concept of smart homes has gained tremendous popularity with a view to making people’s lives easier by ensuring a secured environment. several home automation systems have been developed to report suspicious activities by capturing the movements of residents. however, these systems are associated with challenges such as weak security, lack of interoperability and integration with iot devices, timely reporting of suspicious movements, etc. therefore, the given paper proposes a novel smart home automation framework for controlling home appliances by integrating with sensors, iot devices, and microcontrollers, which would in turn monitor the movements and send notifications about suspicious movements on the resident’s smartphone. the proposed framework makes use of convolutional neural networks (cnns) for motion detection and classification based on pre-processing of images. the images related to the movements of residents are captured by a spy camera installed in the system. it helps in identification of outsiders based on differentiation of motion patterns. the performance of the framework is compared with existing deep learning models used in recent studies based on evaluation metrics such as accuracy (%), precision (%), recall (%), and f-1 measure (%). the results show that the proposed framework attains the highest accuracy (98.67%), thereby surpassing the existing deep learning models used in smart home automation systems. | [
"the expansion",
"modern technologies",
"the internet",
"things",
"iot",
"the concept",
"smart homes",
"tremendous popularity",
"a view",
"people",
"a secured environment",
"several home automation systems",
"suspicious activities",
"the movements",
"residents",
"these systems",
"challenges",
"weak security",
"lack",
"interoperability",
"integration",
"iot devices",
"timely reporting",
"suspicious movements",
"the given paper",
"a novel smart home automation framework",
"home appliances",
"sensors",
"iot devices",
"microcontrollers",
"which",
"turn",
"the movements",
"notifications",
"suspicious movements",
"the resident’s smartphone",
"the proposed framework",
"use",
"convolutional neural networks",
"cnns",
"motion detection",
"classification",
"pre",
"processing",
"images",
"the images",
"the movements",
"residents",
"a spy camera",
"the system",
"it",
"identification",
"outsiders",
"differentiation",
"motion patterns",
"the performance",
"the framework",
"existing deep learning models",
"recent studies",
"evaluation metrics",
"accuracy",
"precision",
"recall",
"f-1 measure",
"the results",
"the proposed framework",
"the highest accuracy",
"98.67%",
"the existing deep learning models",
"smart home automation systems",
"98.67%"
] |
Minimization of occurrence of retained surgical items using machine learning and deep learning techniques: a review | [
"Mohammed Abo-Zahhad",
"Ahmed H. Abd El-Malek",
"Mohammed S. Sayed",
"Susan Njeri Gitau"
] | Retained surgical items (RSIs) pose significant risks to patients and healthcare professionals, prompting extensive efforts to reduce their incidence. RSIs are objects inadvertently left within patients’ bodies after surgery, which can lead to severe consequences such as infections and death. The repercussions highlight the critical need to address this issue. Machine learning (ML) and deep learning (DL) have displayed considerable potential for enhancing the prevention of RSIs through heightened precision and decreased reliance on human involvement. ML techniques are finding an expanding number of applications in medicine, ranging from automated imaging analysis to diagnosis. DL has enabled substantial advances in the prediction capabilities of computers by combining the availability of massive volumes of data with extremely effective learning algorithms. This paper reviews and evaluates recently published articles on the application of ML and DL in RSIs prevention and diagnosis, stressing the need for a multi-layered approach that leverages each method’s strengths to mitigate RSI risks. It highlights the key findings, advantages, and limitations of the different techniques used. Extensive datasets for training ML and DL models could enhance RSI detection systems. This paper also discusses the various datasets used by researchers for training the models. In addition, future directions for improving these technologies for RSI diagnosis and prevention are considered. By merging ML and DL with current procedures, it is conceivable to substantially minimize RSIs, enhance patient safety, and elevate surgical care standards. | 10.1186/s13040-024-00367-z | minimization of occurrence of retained surgical items using machine learning and deep learning techniques: a review | retained surgical items (rsis) pose significant risks to patients and healthcare professionals, prompting extensive efforts to reduce their incidence. rsis are objects inadvertently left within patients’ bodies after surgery, which can lead to severe consequences such as infections and death. the repercussions highlight the critical need to address this issue. machine learning (ml) and deep learning (dl) have displayed considerable potential for enhancing the prevention of rsis through heightened precision and decreased reliance on human involvement. ml techniques are finding an expanding number of applications in medicine, ranging from automated imaging analysis to diagnosis. dl has enabled substantial advances in the prediction capabilities of computers by combining the availability of massive volumes of data with extremely effective learning algorithms. this paper reviews and evaluates recently published articles on the application of ml and dl in rsis prevention and diagnosis, stressing the need for a multi-layered approach that leverages each method’s strengths to mitigate rsi risks. it highlights the key findings, advantages, and limitations of the different techniques used. extensive datasets for training ml and dl models could enhance rsi detection systems. this paper also discusses the various datasets used by researchers for training the models. in addition, future directions for improving these technologies for rsi diagnosis and prevention are considered. by merging ml and dl with current procedures, it is conceivable to substantially minimize rsis, enhance patient safety, and elevate surgical care standards. | [
"surgical items",
"rsis",
"significant risks",
"patients",
"healthcare professionals",
"extensive efforts",
"their incidence",
"rsis",
"objects",
"patients’ bodies",
"surgery",
"which",
"severe consequences",
"infections",
"death",
"the repercussions",
"the critical need",
"this issue",
"machine learning",
"ml",
"deep learning",
"dl",
"considerable potential",
"the prevention",
"rsis",
"heightened precision",
"reliance",
"human involvement",
"techniques",
"an expanding number",
"applications",
"medicine",
"automated imaging analysis",
"diagnosis",
"dl",
"substantial advances",
"the prediction capabilities",
"computers",
"the availability",
"massive volumes",
"data",
"extremely effective learning algorithms",
"this paper reviews",
"evaluates",
"articles",
"the application",
"ml",
"dl",
"rsis prevention",
"diagnosis",
"the need",
"a multi-layered approach",
"that",
"each method’s strengths",
"rsi risks",
"it",
"the key findings",
"advantages",
"limitations",
"the different techniques",
"extensive datasets",
"training ml",
"dl models",
"rsi detection systems",
"this paper",
"the various datasets",
"researchers",
"the models",
"addition",
"future directions",
"these technologies",
"rsi diagnosis",
"prevention",
"ml",
"dl",
"current procedures",
"it",
"rsis",
"patient safety",
"surgical care standards"
] |
FPGA implementation of deep learning architecture for kidney cancer detection from histopathological images | [
"Shyam Lal",
"Amit Kumar Chanchal",
"Jyoti Kini",
"Gopal Krishna Upadhyay"
] | Kidney cancer is the most common type of cancer, and designing an automated system to accurately classify the cancer grade is of paramount importance for a better prognosis of the disease from histopathological kidney cancer images. Application of deep learning neural networks (DLNNs) for histopathological image classification is thriving and implementation of these networks on edge devices has been gaining the ground correspondingly due to high computational power and low latency requirements. This paper designs an automated system that classifies histopathological kidney cancer images. For experimentation, we have collected Kidney histopathological images of Non-cancerous, cancerous, and their respective grade of Renal Cell Carcinoma (RCC) from Kasturba Medical College (KMC), Mangalore, Karnataka, India. We have implemented and analyzed performances of deep learning architectures on a Field Programmable Gate Array (FPGA) board. Results yield that the Inception-V3 network provides better accuracy for kidney cancer detection as compared to other deep learning models on Kidney histopathological images. Further, the DenseNet-169 network provides better accuracy for kidney cancer grading as compared to other existing deep learning architecture on the FPGA board. | 10.1007/s11042-023-17895-1 | fpga implementation of deep learning architecture for kidney cancer detection from histopathological images | kidney cancer is the most common type of cancer, and designing an automated system to accurately classify the cancer grade is of paramount importance for a better prognosis of the disease from histopathological kidney cancer images. application of deep learning neural networks (dlnns) for histopathological image classification is thriving and implementation of these networks on edge devices has been gaining the ground correspondingly due to high computational power and low latency requirements. this paper designs an automated system that classifies histopathological kidney cancer images. for experimentation, we have collected kidney histopathological images of non-cancerous, cancerous, and their respective grade of renal cell carcinoma (rcc) from kasturba medical college (kmc), mangalore, karnataka, india. we have implemented and analyzed performances of deep learning architectures on a field programmable gate array (fpga) board. results yield that the inception-v3 network provides better accuracy for kidney cancer detection as compared to other deep learning models on kidney histopathological images. further, the densenet-169 network provides better accuracy for kidney cancer grading as compared to other existing deep learning architecture on the fpga board. | [
"kidney cancer",
"the most common type",
"cancer",
"an automated system",
"the cancer grade",
"paramount importance",
"a better prognosis",
"the disease",
"histopathological kidney cancer images",
"application",
"neural networks",
"dlnns",
"histopathological image classification",
"implementation",
"these networks",
"edge devices",
"the ground",
"high computational power",
"low latency requirements",
"this paper",
"an automated system",
"that",
"histopathological kidney cancer images",
"experimentation",
"we",
"kidney histopathological images",
"their respective grade",
"renal cell carcinoma",
"rcc",
"kasturba medical college",
"mangalore",
"karnataka",
"india",
"we",
"analyzed performances",
"deep learning architectures",
"a field programmable gate array",
"(fpga) board",
"results",
"the inception-v3 network",
"better accuracy",
"kidney cancer detection",
"other deep learning models",
"kidney histopathological images",
"the densenet-169 network",
"better accuracy",
"kidney cancer",
"other existing deep learning architecture",
"the fpga board",
"rcc",
"kasturba medical college (kmc",
"karnataka",
"india"
] |
Sapotech Reveal CAST—Deep Learning Based Surface Quality Analysis of Hot Steel Slabs in Continuous Casting | [
"Saku Kaukonen",
"Eemil Kiviahde",
"Hannu Suopajärvi"
] | In this contribution, the Sapotech Reveal CAST machine vision-based surface inspection solution is presented in the context of assessing the surface quality of hot steel slabs during the process of continuous casting.Further, a Deep Learning (DL) based approach for defect detection and classification is presented. Deep Learning enables the assessment of surface quality and the detection of defects, which is extremely challenging to be performed using classical machine vision methods. In Reveal CAST, the tools and methods for training, testing, and applying Deep Learning are integrated into a single product.Finally, a cloud enabled Reveal CAST case for Deep Learning is presented with example results from a hybrid cloud-edge reference production system. | 10.1007/s00501-023-01404-w | sapotech reveal cast—deep learning based surface quality analysis of hot steel slabs in continuous casting | in this contribution, the sapotech reveal cast machine vision-based surface inspection solution is presented in the context of assessing the surface quality of hot steel slabs during the process of continuous casting.further, a deep learning (dl) based approach for defect detection and classification is presented. deep learning enables the assessment of surface quality and the detection of defects, which is extremely challenging to be performed using classical machine vision methods. in reveal cast, the tools and methods for training, testing, and applying deep learning are integrated into a single product.finally, a cloud enabled reveal cast case for deep learning is presented with example results from a hybrid cloud-edge reference production system. | [
"this contribution",
"the sapotech",
"cast machine vision-based surface inspection solution",
"the context",
"the surface quality",
"hot steel slabs",
"the process",
"continuous casting.further",
"a deep learning",
"(dl) based approach",
"defect detection",
"classification",
"deep learning",
"the assessment",
"surface quality",
"the detection",
"defects",
"which",
"classical machine vision methods",
"reveal cast",
"the tools",
"methods",
"training",
"testing",
"deep learning",
"a cloud",
"reveal cast case",
"deep learning",
"example results",
"a hybrid cloud-edge reference production system"
] |
Classification of Plant Leaf Disease Using Deep Learning | [
"K. Indira",
"H. Mallika"
] | Agriculture is critical in human existence. Practically, 60% of the population is occupied with some sort of farming, either straight forwardly or in a roundabout way. The important factor which determines the quality of crops grown in fields is early detection, classification and severity of plant leaf diseases. Traditional plant disease detection was based on manual design of features and classifiers. The traditional methods faced many challenges in real complex environment such as low contrast, noise in the image being captured. In recent years, deep learning model widely used in image classification task is convolutional neural network, and hence, plant disease detection based on deep learning has important academic research value. In this paper, a CNN model with different convolution layers, AlexNet and MobileNet, was designed to classify the plant leaf diseases of pepper bell, tomato, potato, rice, apple and sorghum with a total of 26 classes. The dataset for pepper bell, tomato, potato, rice and apple is taken from kaggle, whereas sorghum dataset is generated by taking pictures from the field. Along with identification of disease, severity detection is performed on two selected diseases of tomato crop using the best model. Using an available dataset of 24,156 images of diseased and healthy plant leaves, the performance of the model was evaluated. Simulation results revealed an accuracy of 84.24% with CNN model consisting of five layers and an accuracy of 91.19% with pretrained AlexNet model and an accuracy of 97.33% with MobileNet with a learning rate of 0.001. Severity detection was performed on tomato early blight and tomato bacterial spot with levels 1–5 using MobileNet. Simulation result revealed an accuracy of 87.08% and 88.75% for tomato early blight and tomato bacterial spot, respectively. | 10.1007/s40031-024-00993-5 | classification of plant leaf disease using deep learning | agriculture is critical in human existence. practically, 60% of the population is occupied with some sort of farming, either straight forwardly or in a roundabout way. the important factor which determines the quality of crops grown in fields is early detection, classification and severity of plant leaf diseases. traditional plant disease detection was based on manual design of features and classifiers. the traditional methods faced many challenges in real complex environment such as low contrast, noise in the image being captured. in recent years, deep learning model widely used in image classification task is convolutional neural network, and hence, plant disease detection based on deep learning has important academic research value. in this paper, a cnn model with different convolution layers, alexnet and mobilenet, was designed to classify the plant leaf diseases of pepper bell, tomato, potato, rice, apple and sorghum with a total of 26 classes. the dataset for pepper bell, tomato, potato, rice and apple is taken from kaggle, whereas sorghum dataset is generated by taking pictures from the field. along with identification of disease, severity detection is performed on two selected diseases of tomato crop using the best model. using an available dataset of 24,156 images of diseased and healthy plant leaves, the performance of the model was evaluated. simulation results revealed an accuracy of 84.24% with cnn model consisting of five layers and an accuracy of 91.19% with pretrained alexnet model and an accuracy of 97.33% with mobilenet with a learning rate of 0.001. 
severity detection was performed on tomato early blight and tomato bacterial spot with levels 1–5 using mobilenet. simulation result revealed an accuracy of 87.08% and 88.75% for tomato early blight and tomato bacterial spot, respectively. | [
"agriculture",
"human existence",
"60%",
"the population",
"some sort",
"farming",
"a roundabout way",
"the important factor",
"which",
"the quality",
"crops",
"fields",
"early detection",
"classification",
"severity",
"plant leaf diseases",
"traditional plant disease detection",
"manual design",
"features",
"classifiers",
"the traditional methods",
"many challenges",
"real complex environment",
"low contrast",
"the image",
"recent years",
"deep learning model",
"image classification task",
"convolutional neural network",
"plant disease detection",
"deep learning",
"important academic research value",
"this paper",
"a cnn model",
"different convolution layers",
"alexnet",
"mobilenet",
"the plant leaf diseases",
"pepper bell",
"tomato",
"potato",
"rice",
"apple",
"sorghum",
"a total",
"26 classes",
"the dataset",
"pepper bell",
"kaggle",
"sorghum dataset",
"pictures",
"the field",
"identification",
"disease",
"severity detection",
"two selected diseases",
"tomato crop",
"the best model",
"an available dataset",
"24,156 images",
"diseased and healthy plant leaves",
"the performance",
"the model",
"simulation results",
"an accuracy",
"84.24%",
"cnn model",
"five layers",
"an accuracy",
"91.19%",
"pretrained alexnet model",
"an accuracy",
"97.33%",
"mobilenet",
"a learning rate",
"severity detection",
"tomato early blight",
"tomato bacterial spot",
"levels",
"mobilenet",
"simulation result",
"an accuracy",
"87.08%",
"88.75%",
"tomato early blight",
"tomato bacterial spot",
"60%",
"recent years",
"cnn",
"tomato, potato",
"26",
"tomato, potato",
"kaggle",
"two",
"24,156",
"84.24%",
"cnn",
"five",
"91.19%",
"97.33%",
"0.001",
"1–5",
"87.08% and",
"88.75%"
] |
DeepDOF-SE: affordable deep-learning microscopy platform for slide-free histology | [
"Lingbo Jin",
"Yubo Tang",
"Jackson B. Coole",
"Melody T. Tan",
"Xuan Zhao",
"Hawraa Badaoui",
"Jacob T. Robinson",
"Michelle D. Williams",
"Nadarajah Vigneswaran",
"Ann M. Gillenwater",
"Rebecca R. Richards-Kortum",
"Ashok Veeraraghavan"
] | Histopathology plays a critical role in the diagnosis and surgical management of cancer. However, access to histopathology services, especially frozen section pathology during surgery, is limited in resource-constrained settings because preparing slides from resected tissue is time-consuming, labor-intensive, and requires expensive infrastructure. Here, we report a deep-learning-enabled microscope, named DeepDOF-SE, to rapidly scan intact tissue at cellular resolution without the need for physical sectioning. Three key features jointly make DeepDOF-SE practical. First, tissue specimens are stained directly with inexpensive vital fluorescent dyes and optically sectioned with ultra-violet excitation that localizes fluorescent emission to a thin surface layer. Second, a deep-learning algorithm extends the depth-of-field, allowing rapid acquisition of in-focus images from large areas of tissue even when the tissue surface is highly irregular. Finally, a semi-supervised generative adversarial network virtually stains DeepDOF-SE fluorescence images with hematoxylin-and-eosin appearance, facilitating image interpretation by pathologists without significant additional training. We developed the DeepDOF-SE platform using a data-driven approach and validated its performance by imaging surgical resections of suspected oral tumors. Our results show that DeepDOF-SE provides histological information of diagnostic importance, offering a rapid and affordable slide-free histology platform for intraoperative tumor margin assessment and in low-resource settings. | 10.1038/s41467-024-47065-2 | deepdof-se: affordable deep-learning microscopy platform for slide-free histology | histopathology plays a critical role in the diagnosis and surgical management of cancer. however, access to histopathology services, especially frozen section pathology during surgery, is limited in resource-constrained settings because preparing slides from resected tissue is time-consuming, labor-intensive, and requires expensive infrastructure. here, we report a deep-learning-enabled microscope, named deepdof-se, to rapidly scan intact tissue at cellular resolution without the need for physical sectioning. three key features jointly make deepdof-se practical. first, tissue specimens are stained directly with inexpensive vital fluorescent dyes and optically sectioned with ultra-violet excitation that localizes fluorescent emission to a thin surface layer. second, a deep-learning algorithm extends the depth-of-field, allowing rapid acquisition of in-focus images from large areas of tissue even when the tissue surface is highly irregular. finally, a semi-supervised generative adversarial network virtually stains deepdof-se fluorescence images with hematoxylin-and-eosin appearance, facilitating image interpretation by pathologists without significant additional training. we developed the deepdof-se platform using a data-driven approach and validated its performance by imaging surgical resections of suspected oral tumors. our results show that deepdof-se provides histological information of diagnostic importance, offering a rapid and affordable slide-free histology platform for intraoperative tumor margin assessment and in low-resource settings. | [
"histopathology",
"a critical role",
"the diagnosis",
"surgical management",
"cancer",
"access",
"histopathology services",
"especially frozen section pathology",
"surgery",
"resource-constrained settings",
"slides",
"resected tissue",
"expensive infrastructure",
"we",
"a deep-learning-enabled microscope",
"deepdof-se",
"intact tissue",
"cellular resolution",
"the need",
"physical sectioning",
"three key features",
"deepdof-se",
"tissue specimens",
"inexpensive vital fluorescent dyes",
"ultra-violet excitation",
"that",
"fluorescent emission",
"a thin surface layer",
"a deep-learning algorithm",
"the depth",
"field",
"rapid acquisition",
"focus",
"large areas",
"tissue",
"the tissue surface",
"a semi-supervised generative adversarial network",
"deepdof-se fluorescence images",
"hematoxylin-and-eosin appearance",
"image interpretation",
"pathologists",
"significant additional training",
"we",
"the deepdof-se platform",
"a data-driven approach",
"its performance",
"surgical resections",
"suspected oral tumors",
"our results",
"deepdof-se",
"histological information",
"diagnostic importance",
"a rapid and affordable slide-free histology platform",
"intraoperative tumor margin assessment",
"low-resource settings",
"three",
"first",
"second",
"hematoxylin"
] |
Enhancing deep learning classification performance of tongue lesions in imbalanced data: mosaic-based soft labeling with curriculum learning | [
"Sung-Jae Lee",
"Hyun Jun Oh",
"Young-Don Son",
"Jong-Hoon Kim",
"Ik-Jae Kwon",
"Bongju Kim",
"Jong-Ho Lee",
"Hang-Keun Kim"
] | BackgroundOral potentially malignant disorders (OPMDs) are associated with an increased risk of cancer of the oral cavity including the tongue. The early detection of oral cavity cancers and OPMDs is critical for reducing cancer-specific morbidity and mortality. Recently, there have been studies to apply the rapidly advancing technology of deep learning for diagnosing oral cavity cancer and OPMDs. However, several challenging issues such as class imbalance must be resolved to effectively train a deep learning model for medical imaging classification tasks. The aim of this study is to evaluate a new technique of artificial intelligence to improve the classification performance in an imbalanced tongue lesion dataset.MethodsA total of 1,810 tongue images were used for the classification. The class-imbalanced dataset consisted of 372 instances of cancer, 141 instances of OPMDs, and 1,297 instances of noncancerous lesions. The EfficientNet model was used as the feature extraction model for classification. Mosaic data augmentation, soft labeling, and curriculum learning (CL) were employed to improve the classification performance of the convolutional neural network.ResultsUtilizing a mosaic-augmented dataset in conjunction with CL, the final model achieved an accuracy rate of 0.9444, surpassing conventional oversampling and weight balancing methods. The relative precision improvement rate for the minority class OPMD was 21.2%, while the relative \({F}_{1}\) score improvement rate of OPMD was 4.9%.ConclusionsThe present study demonstrates that the integration of mosaic-based soft labeling and curriculum learning improves the classification performance of tongue lesions compared to previous methods, establishing a foundation for future research on effectively learning from imbalanced data. | 10.1186/s12903-024-03898-3 | enhancing deep learning classification performance of tongue lesions in imbalanced data: mosaic-based soft labeling with curriculum learning | backgroundoral potentially malignant disorders (opmds) are associated with an increased risk of cancer of the oral cavity including the tongue. the early detection of oral cavity cancers and opmds is critical for reducing cancer-specific morbidity and mortality. recently, there have been studies to apply the rapidly advancing technology of deep learning for diagnosing oral cavity cancer and opmds. however, several challenging issues such as class imbalance must be resolved to effectively train a deep learning model for medical imaging classification tasks. the aim of this study is to evaluate a new technique of artificial intelligence to improve the classification performance in an imbalanced tongue lesion dataset.methodsa total of 1,810 tongue images were used for the classification. the class-imbalanced dataset consisted of 372 instances of cancer, 141 instances of opmds, and 1,297 instances of noncancerous lesions. the efficientnet model was used as the feature extraction model for classification. mosaic data augmentation, soft labeling, and curriculum learning (cl) were employed to improve the classification performance of the convolutional neural network.resultsutilizing a mosaic-augmented dataset in conjunction with cl, the final model achieved an accuracy rate of 0.9444, surpassing conventional oversampling and weight balancing methods. 
the relative precision improvement rate for the minority class opmd was 21.2%, while the relative \({f}_{1}\) score improvement rate of opmd was 4.9%.conclusionsthe present study demonstrates that the integration of mosaic-based soft labeling and curriculum learning improves the classification performance of tongue lesions compared to previous methods, establishing a foundation for future research on effectively learning from imbalanced data. | [
"backgroundoral potentially malignant disorders",
"an increased risk",
"cancer",
"the oral cavity",
"the tongue",
"the early detection",
"oral cavity cancers",
"opmds",
"cancer-specific morbidity",
"mortality",
"studies",
"the rapidly advancing technology",
"deep learning",
"oral cavity cancer",
"opmds",
"several challenging issues",
"class imbalance",
"a deep learning model",
"medical imaging classification tasks",
"the aim",
"this study",
"a new technique",
"artificial intelligence",
"the classification performance",
"an imbalanced tongue lesion",
"dataset.methodsa total",
"1,810 tongue images",
"the classification",
"the class-imbalanced dataset",
"372 instances",
"cancer",
"141 instances",
"opmds",
"1,297 instances",
"noncancerous lesions",
"the efficientnet model",
"the feature extraction model",
"classification",
"mosaic data augmentation",
"soft labeling",
"curriculum learning",
"cl",
"the classification performance",
"the convolutional neural",
"a mosaic-augmented dataset",
"conjunction",
"cl",
"the final model",
"an accuracy rate",
"conventional oversampling",
"weight balancing methods",
"the relative precision improvement rate",
"the minority class",
"21.2%",
"the relative \\({f}_{1}\\) score improvement rate",
"4.9%.conclusionsthe present study",
"the integration",
"mosaic-based soft labeling",
"curriculum learning",
"the classification performance",
"tongue lesions",
"previous methods",
"a foundation",
"future research",
"imbalanced data",
"dataset.methodsa",
"1,810",
"372",
"141",
"1,297",
"mosaic data",
"mosaic",
"0.9444",
"21.2%"
] |
A deep learning-based car accident detection approach in video-based traffic surveillance | [
"Xinyu Wu",
"Tingting Li"
] | Car accident detection plays a crucial role in video-based traffic surveillance systems, contributing to prompt response and improved road safety. In the literature, various methods have been investigated for accident detection, among which deep learning approaches have shown superior accuracy compared to other methods. The popularity of deep learning stems from its ability to automatically learn complex features from data. However, the current research challenge in deep learning-based accident detection lies in achieving high accuracy rates while meeting real-time requirements. To address this challenge, this study introduces a deep learning approach using convolutional neural networks (CNNs) to enhance car accident detection, prioritizing accuracy and real-time performance. It includes a tailored dataset for evaluation, and the F1-scores reveal reasonably accurate detection for “damaged-rear-window” (62%) and “damaged-window” (63%), while “damaged-windscreen” exhibits exceptional performance at 83%. These results demonstrate the potential of CNNs in improving car accident detection, particularly for certain classes. Following extensive experiments and performance analysis, the proposed method demonstrates accurate results, significantly enhancing car accident detection in video-based traffic surveillance scenarios. | 10.1007/s12596-023-01581-4 | a deep learning-based car accident detection approach in video-based traffic surveillance | car accident detection plays a crucial role in video-based traffic surveillance systems, contributing to prompt response and improved road safety. in the literature, various methods have been investigated for accident detection, among which deep learning approaches have shown superior accuracy compared to other methods. the popularity of deep learning stems from its ability to automatically learn complex features from data. however, the current research challenge in deep learning-based accident detection lies in achieving high accuracy rates while meeting real-time requirements. to address this challenge, this study introduces a deep learning approach using convolutional neural networks (cnns) to enhance car accident detection, prioritizing accuracy and real-time performance. it includes a tailored dataset for evaluation, and the f1-scores reveal reasonably accurate detection for “damaged-rear-window” (62%) and “damaged-window” (63%), while “damaged-windscreen” exhibits exceptional performance at 83%. these results demonstrate the potential of cnns in improving car accident detection, particularly for certain classes. following extensive experiments and performance analysis, the proposed method demonstrates accurate results, significantly enhancing car accident detection in video-based traffic surveillance scenarios. | [
"car accident detection",
"a crucial role",
"video-based traffic surveillance systems",
"response",
"the literature",
"various methods",
"accident detection",
"which",
"deep learning approaches",
"superior accuracy",
"other methods",
"the popularity",
"deep learning",
"its ability",
"complex features",
"data",
"the current research challenge",
"deep learning-based accident detection",
"high accuracy rates",
"real-time requirements",
"this challenge",
"this study",
"a deep learning approach",
"convolutional neural networks",
"cnns",
"car accident detection",
"accuracy",
"real-time performance",
"it",
"a tailored dataset",
"evaluation",
"the f1-scores",
"reasonably accurate detection",
"“damaged-rear-window",
"62%",
"“damaged-window",
"63%",
"exceptional performance",
"83%",
"these results",
"the potential",
"cnns",
"car accident detection",
"certain classes",
"extensive experiments",
"performance analysis",
"the proposed method",
"accurate results",
"car accident detection",
"video-based traffic surveillance scenarios",
"62%",
"63%",
"83%"
] |
Deep learning model for pleural effusion detection via active learning and pseudo-labeling: a multisite study | [
"Joseph Chang",
"Bo-Ru Lin",
"Ti-Hao Wang",
"Chung-Ming Chen"
] | BackgroundThe study aimed to develop and validate a deep learning-based Computer Aided Triage (CADt) algorithm for detecting pleural effusion in chest radiographs using an active learning (AL) framework. This is aimed at addressing the critical need for a clinical grade algorithm that can timely diagnose pleural effusion, which affects approximately 1.5 million people annually in the United States.MethodsIn this multisite study, 10,599 chest radiographs from 2006 to 2018 were retrospectively collected from an institution in Taiwan to train the deep learning algorithm. The AL framework utilized significantly reduced the need for expert annotations. For external validation, the algorithm was tested on a multisite dataset of 600 chest radiographs from 22 clinical sites in the United States and Taiwan, which were annotated by three U.S. board-certified radiologists.ResultsThe CADt algorithm demonstrated high effectiveness in identifying pleural effusion, achieving a sensitivity of 0.95 (95% CI: [0.92, 0.97]) and a specificity of 0.97 (95% CI: [0.95, 0.99]). The area under the receiver operating characteristic curve (AUC) was 0.97 (95% DeLong’s CI: [0.95, 0.99]). Subgroup analyses showed that the algorithm maintained robust performance across various demographics and clinical settings.ConclusionThis study presents a novel approach in developing clinical grade CADt solutions for the diagnosis of pleural effusion. The AL-based CADt algorithm not only achieved high accuracy in detecting pleural effusion but also significantly reduced the workload required for clinical experts in annotating medical data. This method enhances the feasibility of employing advanced technological solutions for prompt and accurate diagnosis in medical settings. | 10.1186/s12880-024-01260-1 | deep learning model for pleural effusion detection via active learning and pseudo-labeling: a multisite study | backgroundthe study aimed to develop and validate a deep learning-based computer aided triage (cadt) algorithm for detecting pleural effusion in chest radiographs using an active learning (al) framework. this is aimed at addressing the critical need for a clinical grade algorithm that can timely diagnose pleural effusion, which affects approximately 1.5 million people annually in the united states.methodsin this multisite study, 10,599 chest radiographs from 2006 to 2018 were retrospectively collected from an institution in taiwan to train the deep learning algorithm. the al framework utilized significantly reduced the need for expert annotations. for external validation, the algorithm was tested on a multisite dataset of 600 chest radiographs from 22 clinical sites in the united states and taiwan, which were annotated by three u.s. board-certified radiologists.resultsthe cadt algorithm demonstrated high effectiveness in identifying pleural effusion, achieving a sensitivity of 0.95 (95% ci: [0.92, 0.97]) and a specificity of 0.97 (95% ci: [0.95, 0.99]). the area under the receiver operating characteristic curve (auc) was 0.97 (95% delong’s ci: [0.95, 0.99]). subgroup analyses showed that the algorithm maintained robust performance across various demographics and clinical settings.conclusionthis study presents a novel approach in developing clinical grade cadt solutions for the diagnosis of pleural effusion. the al-based cadt algorithm not only achieved high accuracy in detecting pleural effusion but also significantly reduced the workload required for clinical experts in annotating medical data. 
this method enhances the feasibility of employing advanced technological solutions for prompt and accurate diagnosis in medical settings. | [
"backgroundthe study",
"a deep learning-based computer aided triage",
"cadt",
"algorithm",
"pleural effusion",
"chest",
"an active learning",
"(al) framework",
"this",
"the critical need",
"a clinical grade",
"algorithm",
"that",
"pleural effusion",
"which",
"approximately 1.5 million people",
"the united states.methodsin",
"this multisite study",
"10,599 chest",
"an institution",
"taiwan",
"the deep learning algorithm",
"the al framework",
"the need",
"expert annotations",
"external validation",
"the algorithm",
"a multisite dataset",
"600 chest",
"22 clinical sites",
"the united states",
"taiwan",
"which",
"radiologists.resultsthe cadt algorithm",
"high effectiveness",
"pleural effusion",
"a sensitivity",
"95% ci",
"a specificity",
"(95% ci",
"the area",
"the receiver operating characteristic curve",
"auc",
"(95% delong",
"subgroup analyses",
"the algorithm",
"robust performance",
"various demographics",
"clinical settings.conclusionthis study",
"a novel approach",
"clinical grade cadt solutions",
"the diagnosis",
"pleural effusion",
"the al-based cadt",
"algorithm",
"high accuracy",
"pleural effusion",
"the workload",
"clinical experts",
"medical data",
"this method",
"the feasibility",
"advanced technological solutions",
"prompt and accurate diagnosis",
"medical settings",
"approximately 1.5 million",
"annually",
"the united states.methodsin",
"10,599",
"2006",
"2018",
"taiwan",
"al",
"600",
"22",
"the united states",
"taiwan",
"three",
"u.s.",
"0.95",
"95%",
"0.92",
"0.97",
"0.97",
"95%",
"0.95",
"0.99",
"0.97",
"95%",
"0.95",
"0.99",
"al"
] |
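Illustrative sketch (not the paper's published code): the abstract above names active learning and pseudo-labeling but gives no implementation detail, so the following minimal numpy version shows the two mechanisms in their most common form, least-confidence selection for expert annotation and confidence-thresholded pseudo-labels. All function names, thresholds, and data here are assumptions.

```python
import numpy as np

def select_for_annotation(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` most uncertain samples (least-confidence strategy).

    probs: (n_samples, n_classes) softmax outputs of the current model.
    Returns indices of samples to route to expert annotators.
    """
    confidence = probs.max(axis=1)          # confidence of the top prediction
    return np.argsort(confidence)[:budget]  # least confident first

def pseudo_label(probs: np.ndarray, threshold: float = 0.95):
    """Keep only highly confident predictions as pseudo-labels."""
    confidence = probs.max(axis=1)
    keep = np.flatnonzero(confidence >= threshold)
    return keep, probs[keep].argmax(axis=1)

# Toy usage: 8 unlabeled chest radiographs, 2 classes (effusion / none).
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 2))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(select_for_annotation(probs, budget=3))
print(pseudo_label(probs, threshold=0.8))
```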
Deep Learning-Based Empirical and Sub-Space Decomposition for Speech Enhancement | [
"Khaoula Mraihi",
"Mohamed Anouar Ben Messaoud"
] | This research presents a single-channel speech enhancement approach based on the combination of the adaptive empirical wavelet transform and the improved sub-space decomposition method followed by a deep learning network. The adaptive empirical wavelet transform is used to determine the boundaries of the segments, then we decompose the obtained spectrogram of the noisy speech into three sub-spaces to determine the low-rank matrix and the sparse matrix of the spectrogram under the perturbation of the residual matrix. The residual noise affecting the speech quality is avoided by the low-rank decomposition using the nonnegative factorization. Then, a cross-domain learning framework is developed to specify the correlations along the frequency and time axes and avoid the disadvantages of the time–frequency domain. Experimental results show that the proposed approach outperforms several competing speech enhancement methods and achieves the highest PESQ, Cov and STOI under different types of noise and at low SNR values in the two datasets. The proposed model is tested on a hardware-level manual design to accelerate the execution of the developed deep learning model on an FPGA. | 10.1007/s00034-024-02606-4 | deep learning-based empirical and sub-space decomposition for speech enhancement | this research presents a single-channel speech enhancement approach based on the combination of the adaptive empirical wavelet transform and the improved sub-space decomposition method followed by a deep learning network. the adaptive empirical wavelet transform is used to determine the boundaries of the segments, then we decompose the obtained spectrogram of the noisy speech into three sub-spaces to determine the low-rank matrix and the sparse matrix of the spectrogram under the perturbation of the residual matrix. the residual noise affecting the speech quality is avoided by the low-rank decomposition using the nonnegative factorization. then, a cross-domain learning framework is developed to specify the correlations along the frequency and time axes and avoid the disadvantages of the time–frequency domain. experimental results show that the proposed approach outperforms several competing speech enhancement methods and achieves the highest pesq, cov and stoi under different types of noise and at low snr values in the two datasets. the proposed model is tested on a hardware-level manual design to accelerate the execution of the developed deep learning model on an fpga. | [
"this research",
"a single-channel speech enhancement approach",
"the combination",
"the adaptive empirical wavelet transform",
"the improved sub-space decomposition method",
"a deep learning network",
"the adaptive empirical wavelet transform",
"the boundaries",
"the segments",
"we",
"the obtained spectrogram",
"the noisy speech",
"three sub",
"-",
"spaces",
"the low-rank matrix",
"the sparse matrix",
"the spectrogram",
"the perturbation",
"the residual matrix",
"the residual noise",
"the speech quality",
"the low-rank decomposition",
"the nonnegative factorization",
"a cross-domain learning framework",
"the correlations",
"the frequency and time axes",
"the disadvantages",
"the time",
"–frequency domain",
"experimental results",
"the proposed approach",
"several competing speech enhancement methods",
"the highest pesq",
"cov",
"stoi",
"different types",
"noise",
"low snr values",
"the two datasets",
"the proposed model",
"a hardware-level manual design",
"the execution",
"the developed deep learning model",
"an fpga",
"three",
"two"
] |
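Hedged sketch of the decomposition idea named above: splitting a magnitude spectrogram into a low-rank part, a sparse part, and a residual perturbation. The paper's sub-space method is more elaborate; this stand-in uses a plain truncated SVD plus hard thresholding, and the rank and threshold values are illustrative assumptions.

```python
import numpy as np

def lowrank_sparse_split(S: np.ndarray, rank: int, tau: float):
    """Low-rank part L (repetitive background), sparse part E (transients),
    and a small residual perturbation, from a magnitude spectrogram S."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-truncated reconstruction
    R = S - L
    E = np.where(np.abs(R) > tau, R, 0.0)      # hard threshold -> sparse matrix
    return L, E, R - E                          # residual perturbation last

# Toy spectrogram: rank-1 background hum plus a few speech-like peaks.
rng = np.random.default_rng(1)
S = np.outer(rng.random(64), rng.random(128))
S[10, 40] += 5.0
S[30, 90] += 4.0
L, E, residual = lowrank_sparse_split(S, rank=1, tau=0.5)
print(np.count_nonzero(E), float(np.abs(residual).max()))
```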
Distributed source DOA estimation based on deep learning networks | [
"Quan Tian",
"Ruiyan Cai",
"Gongrun Qiu",
"Yang Luo"
] | With space electromagnetic environments becoming increasingly complex, the direction of arrival (DOA) estimation based on the point source model can no longer meet the requirements of spatial target location. Based on the characteristics of the distributed source, a new DOA estimation algorithm based on deep learning is proposed. The algorithm first maps the distributed source model into the point source model via a generative adversarial network (GAN) and further combines the subspace-based method to achieve central DOA estimation. Second, by constructing a deep neural network (DNN), the covariance matrix of the received signals is used as the input to estimate the angular spread of the distributed source. The experimental results show that the proposed algorithm can achieve better performance than the existing methods for a distributed source. | 10.1007/s11760-024-03402-y | distributed source doa estimation based on deep learning networks | with space electromagnetic environments becoming increasingly complex, the direction of arrival (doa) estimation based on the point source model can no longer meet the requirements of spatial target location. based on the characteristics of the distributed source, a new doa estimation algorithm based on deep learning is proposed. the algorithm first maps the distributed source model into the point source model via a generative adversarial network (gan) and further combines the subspace-based method to achieve central doa estimation. second, by constructing a deep neural network (dnn), the covariance matrix of the received signals is used as the input to estimate the angular spread of the distributed source. the experimental results show that the proposed algorithm can achieve better performance than the existing methods for a distributed source. | [
"space electromagnetic environments",
"the direction",
"arrival",
"(doa) estimation",
"the point source model",
"the requirements",
"spatial target location",
"the characteristics",
"the distributed source",
"a new doa estimation algorithm",
"deep learning",
"algorithm",
"the distributed source model",
"the point source model",
"a generative adversarial network",
"gan",
"the subspace-based method",
"central doa estimation",
"a deep neural network",
"dnn",
"the covariance matrix",
"the received signals",
"the input",
"the angular spread",
"the distributed source",
"the experimental results",
"the proposed algorithm",
"better performance",
"the existing methods",
"a distributed source",
"first",
"second"
] |
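Illustrative sketch of the second stage described above: the sample covariance matrix of the array snapshots is flattened into a real-valued feature vector and fed to a small DNN that regresses the angular spread. Array size and layer widths are assumptions; the paper's exact architecture is not reproduced.

```python
import numpy as np
import torch
import torch.nn as nn

def covariance_features(snapshots: np.ndarray) -> np.ndarray:
    """snapshots: (n_antennas, n_snapshots) complex array. Returns the
    real/imag parts of the covariance upper triangle as a real vector."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    iu = np.triu_indices(R.shape[0])
    return np.concatenate([R[iu].real, R[iu].imag]).astype(np.float32)

n_antennas = 8
feat_dim = 2 * (n_antennas * (n_antennas + 1) // 2)
net = nn.Sequential(                  # small DNN regressing angular spread
    nn.Linear(feat_dim, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

rng = np.random.default_rng(2)
x = rng.normal(size=(n_antennas, 200)) + 1j * rng.normal(size=(n_antennas, 200))
feats = torch.from_numpy(covariance_features(x)).unsqueeze(0)
print(net(feats).shape)  # torch.Size([1, 1])
```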
Car crash detection using ensemble deep learning | [
"Vani Suthamathi Saravanarajan",
"Rung-Ching Chen",
"Christine Dewi",
"Long-Sheng Chen",
"Lata Ganesan"
] | With the recent advancements in Autonomous Vehicles (AVs), two important factors that play a vital role in avoiding accidents and collisions are obstacles and track detection. AVs must implement an accident detection model to detect accident vehicles and avoid running into rollover vehicles. At present, many trajectories-based and sensor-based multiple-vehicle accident prediction models exist. In Taiwan, the AV Tesla sedan's failure to detect overturned vehicles shows that an efficient deep learning model is still required to detect a single-car crash by taking appropriate actions like slowing down, tracking changes, and informing the concerned authorities. This paper proposes a novel car crash detection system for various car crashes using three deep learning models, namely VGG16 (feature extractor using transfer learning), RPN (region proposal network), and CNN8L (region-based detector). The CNN8L is a novel lightweight sequential convolutional neural network for region-based classification and detection. The model is trained using a customized dataset, evaluated using different metrics and compared with various state-of-the-art models. The experimental results show that the VGG16 combined with the CNN8L model performed much better when compared to other models. The proposed system accurately recognizes car accidents with an Accident Detection Rate (ADR) of 86.25% and False Alarm Rate (FAR) of 33.00%. | 10.1007/s11042-023-15906-9 | car crash detection using ensemble deep learning | with the recent advancements in autonomous vehicles (avs), two important factors that play a vital role in avoiding accidents and collisions are obstacles and track detection. avs must implement an accident detection model to detect accident vehicles and avoid running into rollover vehicles. at present, many trajectories-based and sensor-based multiple-vehicle accident prediction models exist. in taiwan, the av tesla sedan's failure to detect overturned vehicles shows that an efficient deep learning model is still required to detect a single-car crash by taking appropriate actions like slowing down, tracking changes, and informing the concerned authorities. this paper proposes a novel car crash detection system for various car crashes using three deep learning models, namely vgg16 (feature extractor using transfer learning), rpn (region proposal network), and cnn8l (region-based detector). the cnn8l is a novel lightweight sequential convolutional neural network for region-based classification and detection. the model is trained using a customized dataset, evaluated using different metrics and compared with various state-of-the-art models. the experimental results show that the vgg16 combined with the cnn8l model performed much better when compared to other models. the proposed system accurately recognizes car accidents with an accident detection rate (adr) of 86.25% and false alarm rate (far) of 33.00%. | [
"the recent advancements",
"autonomous vehicles",
"two important factors",
"that",
"a vital role",
"accidents",
"collisions",
"obstacles",
"track detection",
"avs",
"an accident detection model",
"accident vehicles",
"rollover vehicles",
"present many trajectories-based and sensor-based multiple-vehicle accident prediction models",
"taiwan",
"the av tesla sedan's failure",
"overturned vehicles",
"an efficient deep learning model",
"a single-car crash",
"appropriate actions",
"changes",
"the concerned authorities",
"this paper",
"a novel car crash detection system",
"various car crashes",
"three deep learning models",
"namely vgg16(feature extractor",
"transfer learning",
"rpn",
"region proposal network",
"cnn8l",
"(region-based detector",
"the cnn8l",
"a novel lightweight sequential convolutional neural network",
"region-based classification",
"detection",
"the model",
"a customized dataset",
"different metrics",
"the-art",
"the experimental results",
"the vgg16",
"the cnn8l model",
"other models",
"the proposed system",
"car accidents",
"an accident detection rate",
"adr",
"86.25%",
"false alarm rate",
"33.00%",
"two",
"taiwan",
"three",
"86.25%",
"33.00%"
] |
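Hedged sketch of the transfer-learning portion of the pipeline above: frozen VGG16 convolutional features with a small sequential classification head standing in for the paper's CNN8L region-based detector, whose exact layers are not given in the abstract. `weights=None` keeps the example offline (older torchvision uses `pretrained=False`); in practice pretrained ImageNet weights would be loaded and region crops would come from the RPN.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

backbone = vgg16(weights=None).features     # VGG16 trunk as feature extractor
for p in backbone.parameters():
    p.requires_grad = False                 # frozen, transfer-learning style

head = nn.Sequential(                       # stand-in lightweight region classifier
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(512, 64), nn.ReLU(),
    nn.Linear(64, 2),                       # crash / no-crash for one region
)

region = torch.randn(1, 3, 224, 224)        # one cropped region proposal
print(head(backbone(region)).shape)         # torch.Size([1, 2])
```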
Meta-heuristic-based hybrid deep learning model for vulnerability detection and prevention in software system | [
"Lijin Shaji",
"R. Suji Pramila"
] | Software vulnerabilities are flaws that may be exploited to cause loss or harm. Various automated machine-learning techniques have been developed in preceding studies to detect software vulnerabilities. This work develops a technique for securing software on the basis of already-known vulnerabilities, by building a hybrid deep learning model to detect those vulnerabilities. Moreover, certain countermeasures are suggested based on the types of vulnerability to prevent the attack further. For different software projects taken as the dataset, feature fusion is done by utilizing canonical correlation analysis together with Deep Residual Network (DRN). A hybrid deep learning technique trained using AdamW-Rat Swarm Optimizer (AdamW-RSO) is designed to detect software vulnerability. Hybrid deep learning makes use of the Deep Belief Network (DBN) and Generative Adversarial Network (GAN). For every vulnerability, its location of occurrence within the software development procedures, together with techniques of alleviation via implementation-level or design-level activities, is described. Thus, it helps in understanding the appearance of vulnerabilities, suggesting the use of various countermeasures during the initial phases of software design, and therefore assures software security. Evaluating the performance of vulnerability detection by the proposed technique regarding recall, precision, and f-measure, it is found to be more effective than the existing methods. | 10.1007/s10878-024-01185-z | meta-heuristic-based hybrid deep learning model for vulnerability detection and prevention in software system | software vulnerabilities are flaws that may be exploited to cause loss or harm. various automated machine-learning techniques have been developed in preceding studies to detect software vulnerabilities. this work develops a technique for securing software on the basis of already-known vulnerabilities, by building a hybrid deep learning model to detect those vulnerabilities. moreover, certain countermeasures are suggested based on the types of vulnerability to prevent the attack further. for different software projects taken as the dataset, feature fusion is done by utilizing canonical correlation analysis together with deep residual network (drn). a hybrid deep learning technique trained using adamw-rat swarm optimizer (adamw-rso) is designed to detect software vulnerability. hybrid deep learning makes use of the deep belief network (dbn) and generative adversarial network (gan). for every vulnerability, its location of occurrence within the software development procedures, together with techniques of alleviation via implementation-level or design-level activities, is described. thus, it helps in understanding the appearance of vulnerabilities, suggesting the use of various countermeasures during the initial phases of software design, and therefore assures software security. evaluating the performance of vulnerability detection by the proposed technique regarding recall, precision, and f-measure, it is found to be more effective than the existing methods. | [
"software vulnerabilities",
"flaws",
"that",
"loss",
"harm",
"various automated machine-learning techniques",
"studies",
"software vulnerabilities",
"this work",
"a technique",
"the software",
"the basis",
"their vulnerabilities",
"that",
"a hybrid deep learning model",
"those vulnerabilities",
"certain countermeasures",
"the types",
"vulnerability",
"the attack",
"different software projects",
"the dataset",
"feature fusion",
"canonical correlation analysis",
"deep residual network",
"drn",
"a hybrid deep learning technique",
"adamw-rat swarm optimizer",
"adamw-rso",
"software vulnerability",
"hybrid deep learning",
"use",
"the deep belief network",
"dbn",
"adversarial network",
"gan",
"every vulnerability",
"its location",
"occurrence",
"the software development procedures",
"techniques",
"alleviation",
"implementation level or design level activities",
"it",
"the appearance",
"vulnerabilities",
"the use",
"various countermeasures",
"the initial phases",
"software design",
"therefore, assures software security",
"the performance",
"vulnerability detection",
"the proposed technique",
"recall",
"precision",
"f-measure",
"it",
"the existing methods"
] |
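The feature-fusion step named above, canonical correlation analysis over two feature views, can be sketched directly with scikit-learn. The view dimensions and synthetic data below are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Two feature views of the same code samples, e.g. hand-crafted metrics and
# deep residual network features (both matrices are synthetic here).
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 40))
Y = X @ rng.normal(size=(40, 30)) + 0.1 * rng.normal(size=(200, 30))

cca = CCA(n_components=10)
Xc, Yc = cca.fit_transform(X, Y)   # maximally correlated projections
fused = np.hstack([Xc, Yc])        # fused feature vector per sample
print(fused.shape)                 # (200, 20)
```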
Deep learning implementations in mining applications: a compact critical review | [
"Faris Azhari",
"Charlotte C. Sennersten",
"Craig A. Lindley",
"Ewan Sellers"
] | Deep learning is a sub-field of artificial intelligence that combines feature engineering and classification in one method. It is a data-driven technique that optimises a predictive model via learning from a large dataset. Digitisation in industry has included acquisition and storage of a variety of large datasets for interpretation and decision making. This has led to the adoption of deep learning in different industries, such as transportation, manufacturing, medicine and agriculture. However, in the mining industry, the adoption and development of new technologies, including deep learning methods, has not progressed at the same rate as in other industries. Nevertheless, in the past 5 years, applications of deep learning have been increasing in the mining research space. Deep learning has been implemented to solve a variety of problems related to mine exploration, ore and metal extraction and reclamation processes. The increased automation adoption in mining provides an avenue for wider application of deep learning as an element within a mine automation framework. This work provides a compact, comprehensive review of deep learning implementations in mining-related applications. The trends of these implementations in terms of years, venues, deep learning network types, tasks and general implementation, categorised by the value chain operations of exploration, extraction and reclamation are outlined. The review enables shortcomings regarding progress within the research context to be highlighted, such as the proprietary nature of data, small datasets (tens to thousands of data points) limited to single operations with unique geology, mine design and equipment, lack of large scale publicly available mining related datasets and limited sensor types leading to the majority of applications being image-based analysis. Gaps identified for future research and application include the usage of a wider range of sensor data, improved understanding of the outputs by mining practitioners, adversarial testing of the deep learning models, development of public datasets covering the extensive range of conditions experienced in mines. | 10.1007/s10462-023-10500-9 | deep learning implementations in mining applications: a compact critical review | deep learning is a sub-field of artificial intelligence that combines feature engineering and classification in one method. it is a data-driven technique that optimises a predictive model via learning from a large dataset. digitisation in industry has included acquisition and storage of a variety of large datasets for interpretation and decision making. this has led to the adoption of deep learning in different industries, such as transportation, manufacturing, medicine and agriculture. however, in the mining industry, the adoption and development of new technologies, including deep learning methods, has not progressed at the same rate as in other industries. nevertheless, in the past 5 years, applications of deep learning have been increasing in the mining research space. deep learning has been implemented to solve a variety of problems related to mine exploration, ore and metal extraction and reclamation processes. the increased automation adoption in mining provides an avenue for wider application of deep learning as an element within a mine automation framework. this work provides a compact, comprehensive review of deep learning implementations in mining-related applications. 
the trends of these implementations in terms of years, venues, deep learning network types, tasks and general implementation, categorised by the value chain operations of exploration, extraction and reclamation are outlined. the review enables shortcomings regarding progress within the research context to be highlighted, such as the proprietary nature of data, small datasets (tens to thousands of data points) limited to single operations with unique geology, mine design and equipment, lack of large scale publicly available mining related datasets and limited sensor types leading to the majority of applications being image-based analysis. gaps identified for future research and application include the usage of a wider range of sensor data, improved understanding of the outputs by mining practitioners, adversarial testing of the deep learning models, development of public datasets covering the extensive range of conditions experienced in mines. | [
"deep learning",
"a sub",
"-",
"field",
"artificial intelligence",
"that",
"feature engineering",
"classification",
"one method",
"it",
"a data-driven technique",
"that",
"a predictive model",
"a large dataset",
"digitisation",
"industry",
"acquisition",
"storage",
"a variety",
"large datasets",
"interpretation",
"decision making",
"this",
"the adoption",
"deep learning",
"different industries",
"transportation",
"manufacturing",
"medicine",
"agriculture",
"the mining industry",
"new technologies",
"deep learning methods",
"the same rate",
"other industries",
"the past 5 years",
"applications",
"deep learning",
"the mining research space",
"deep learning",
"a variety",
"problems",
"mine exploration, ore and metal extraction and reclamation processes",
"the increased automation adoption",
"mining",
"an avenue",
"wider application",
"deep learning",
"an element",
"a mine automation framework",
"this work",
"a compact, comprehensive review",
"deep learning implementations",
"mining-related applications",
"the trends",
"these implementations",
"terms",
"years",
"venues",
"deep learning network types",
"tasks",
"general implementation",
"the value chain operations",
"exploration",
"extraction",
"reclamation",
"the review",
"shortcomings",
"progress",
"the research context",
"the proprietary nature",
"data",
"small datasets",
"tens to thousands",
"data points",
"single operations",
"unique geology",
"mine design",
"equipment",
"large scale publicly available mining related datasets",
"limited sensor types",
"the majority",
"applications",
"image-based analysis",
"gaps",
"future research",
"application",
"the usage",
"a wider range",
"sensor data",
"understanding",
"the outputs",
"mining practitioners",
"adversarial testing",
"the deep learning models",
"development",
"public datasets",
"the extensive range",
"conditions",
"mines",
"one",
"the past 5 years",
"tens to thousands"
] |
Sentiment analysis of Canadian maritime case law: a sentiment case law and deep learning approach | [
"Bola Abimbola",
"Qing Tan",
"Enrique A. De La Cal Marín"
] | Historical information in the Canadian Maritime Judiciary increases with time because of the need to archive data to be utilized in case references and for later application when determining verdicts for similar cases. However, such data are typically stored in multiple systems, making its reachability a technical challenge. Utilizing technologies like deep learning and sentiment analysis provides chances to facilitate faster access to court records. Such practice enhances impartial verdicts, minimizes workloads for court employees, and decreases the time used in legal proceedings for claims during maritime contracts, such as shipping disputes between parties. This paper seeks to develop a sentiment analysis framework that uses deep learning, distributed learning, and machine learning to improve access to statutes, laws, and cases used by maritime judges in making judgments to back their claims. The suggested approach uses deep learning models, including convolutional neural networks (CNNs), deep neural networks, long short-term memory (LSTM), and recurrent neural networks. It extracts court records having crucial sentiments or statements for maritime court verdicts. The suggested approach has been used successfully during sentiment analysis by emphasizing feature selection from a legal repository. The LSTM + CNN model has shown promising results in obtaining sentiments and records from multiple devices and sufficiently proposing practical guidance to judicial personnel regarding the regulations applicable to various situations. | 10.1007/s41870-024-01820-2 | sentiment analysis of canadian maritime case law: a sentiment case law and deep learning approach | historical information in the canadian maritime judiciary increases with time because of the need to archive data to be utilized in case references and for later application when determining verdicts for similar cases. however, such data are typically stored in multiple systems, making its reachability a technical challenge. utilizing technologies like deep learning and sentiment analysis provides chances to facilitate faster access to court records. such practice enhances impartial verdicts, minimizes workloads for court employees, and decreases the time used in legal proceedings for claims during maritime contracts, such as shipping disputes between parties. this paper seeks to develop a sentiment analysis framework that uses deep learning, distributed learning, and machine learning to improve access to statutes, laws, and cases used by maritime judges in making judgments to back their claims. the suggested approach uses deep learning models, including convolutional neural networks (cnns), deep neural networks, long short-term memory (lstm), and recurrent neural networks. it extracts court records having crucial sentiments or statements for maritime court verdicts. the suggested approach has been used successfully during sentiment analysis by emphasizing feature selection from a legal repository. the lstm + cnn model has shown promising results in obtaining sentiments and records from multiple devices and sufficiently proposing practical guidance to judicial personnel regarding the regulations applicable to various situations. | [
"historical information",
"the canadian maritime judiciary",
"time",
"the need",
"data",
"case references",
"later application",
"verdicts",
"similar cases",
"such data",
"multiple systems",
"its reachability",
"technologies",
"deep learning",
"sentiment analysis",
"chances",
"faster access",
"court records",
"such practice",
"impartial verdicts",
"workloads",
"court employees",
"the time",
"legal proceedings",
"claims",
"maritime contracts",
"shipping disputes",
"parties",
"this paper",
"a sentiment analysis framework",
"that",
"deep learning",
"learning",
"access",
"statutes",
"laws",
"cases",
"maritime judges",
"judgments",
"their claims",
"the suggested approach",
"deep learning models",
"convolutional neural networks",
"cnns",
"deep neural networks",
"long short-term memory",
"lstm",
"neural networks",
"it",
"court records",
"crucial sentiments",
"statements",
"maritime court verdicts",
"the suggested approach",
"sentiment analysis",
"feature selection",
"a legal repository",
"the lstm + cnn model",
"promising results",
"sentiments",
"records",
"multiple devices",
"practical guidance",
"judicial personnel",
"the regulations",
"various situations",
"canadian"
] |
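Minimal, illustrative sketch of one building block the abstract lists, an LSTM sentiment classifier over tokenized legal text. Vocabulary size, dimensions, and the three-class output are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SentenceSentiment(nn.Module):
    """Tiny LSTM classifier over token ids: negative/neutral/positive."""
    def __init__(self, vocab_size=5000, embed_dim=64, hidden=128, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):           # (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return self.out(h_n[-1])            # final hidden state -> logits

model = SentenceSentiment()
batch = torch.randint(1, 5000, (4, 32))     # 4 tokenized case-law sentences
print(model(batch).shape)                   # torch.Size([4, 3])
```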
Multi-step framework for glaucoma diagnosis in retinal fundus images using deep learning | [
"Sanli Yi",
"Lingxiang Zhou"
] | Glaucoma is one of the most common causes of blindness in the world. Screening glaucoma from retinal fundus images based on deep learning is a common method at present. In the diagnosis of glaucoma based on deep learning, the blood vessels within the optic disc interfere with the diagnosis, and there is also some pathological information outside the optic disc in fundus images. Therefore, integrating the original fundus image with the vessel-removed optic disc image can improve diagnostic efficiency. In this paper, we propose a novel multi-step framework named MSGC-CNN that can better diagnose glaucoma. In the framework, (1) we combine glaucoma pathological knowledge with deep learning model, fuse the features of original fundus image and optic disc region in which the interference of blood vessel is specifically removed by U-Net, and make glaucoma diagnosis based on the fused features. (2) Aiming at the characteristics of glaucoma fundus images, such as small amount of data, high resolution, and rich feature information, we design a new feature extraction network RA-ResNet and combine it with transfer learning. In order to verify our method, we conduct binary classification experiments on three public datasets, Drishti-GS, RIM-ONE-R3, and ACRIMA, with accuracies of 92.01%, 93.75%, and 97.87%. The results demonstrate a significant improvement over earlier results. | 10.1007/s11517-024-03172-2 | multi-step framework for glaucoma diagnosis in retinal fundus images using deep learning | glaucoma is one of the most common causes of blindness in the world. screening glaucoma from retinal fundus images based on deep learning is a common method at present. in the diagnosis of glaucoma based on deep learning, the blood vessels within the optic disc interfere with the diagnosis, and there is also some pathological information outside the optic disc in fundus images. therefore, integrating the original fundus image with the vessel-removed optic disc image can improve diagnostic efficiency. in this paper, we propose a novel multi-step framework named msgc-cnn that can better diagnose glaucoma. in the framework, (1) we combine glaucoma pathological knowledge with deep learning model, fuse the features of original fundus image and optic disc region in which the interference of blood vessel is specifically removed by u-net, and make glaucoma diagnosis based on the fused features. (2) aiming at the characteristics of glaucoma fundus images, such as small amount of data, high resolution, and rich feature information, we design a new feature extraction network ra-resnet and combine it with transfer learning. in order to verify our method, we conduct binary classification experiments on three public datasets, drishti-gs, rim-one-r3, and acrima, with accuracies of 92.01%, 93.75%, and 97.87%. the results demonstrate a significant improvement over earlier results. | [
"glaucoma",
"the most common causes",
"blindness",
"the world",
"glaucoma",
"retinal fundus images",
"deep learning",
"a common method",
"present",
"the diagnosis",
"glaucoma",
"deep learning",
"the blood vessels",
"the optic disc",
"the diagnosis",
"some pathological information",
"the optic disc",
"fundus images",
"the original fundus image",
"the vessel-removed optic disc image",
"diagnostic efficiency",
"this paper",
"we",
"a novel multi-step framework",
"msgc-cnn",
"that",
"glaucoma",
"the framework",
"we",
"glaucoma pathological knowledge",
"deep learning model",
"the features",
"original fundus image",
"optic disc region",
"which",
"the interference",
"blood vessel",
"u",
"-",
"net",
"glaucoma diagnosis",
"the fused features",
"the characteristics",
"glaucoma fundus images",
"small amount",
"data",
"high resolution",
"rich feature information",
"we",
"a new feature extraction network",
"ra-resnet",
"it",
"transfer learning",
"order",
"our method",
"we",
"binary classification experiments",
"three public datasets",
"drishti-gs",
"rim-one-r3",
"acrima",
"accuracy",
"92.01%",
"93.75%",
"97.87%",
"the results",
"a significant improvement",
"glaucoma",
"msgc-cnn",
"1",
"2",
"three",
"one-r3",
"92.01%",
"93.75%",
"97.87%"
] |
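Illustrative two-branch fusion classifier in the spirit of the framework above: one encoder for the original fundus image, one for the vessel-removed optic-disc crop produced by the U-Net step, with the concatenated features classified jointly. The tiny encoders are stand-ins, not the paper's RA-ResNet.

```python
import torch
import torch.nn as nn

def small_encoder(out_dim=64):
    """Shared-shape CNN encoder used for each input branch."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, out_dim), nn.ReLU(),
    )

class FusionGlaucomaNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fundus_branch = small_encoder()   # whole fundus image
        self.disc_branch = small_encoder()     # vessel-removed disc crop
        self.classifier = nn.Linear(128, 2)    # glaucoma / normal

    def forward(self, fundus, disc):
        z = torch.cat([self.fundus_branch(fundus), self.disc_branch(disc)], 1)
        return self.classifier(z)

net = FusionGlaucomaNet()
print(net(torch.randn(2, 3, 256, 256), torch.randn(2, 3, 128, 128)).shape)
```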
Early detection and prediction of Heart Disease using Wearable devices and Deep Learning algorithms | [
"S. Sivasubramaniam",
"S. P. Balamurugan"
] | In this paper, we propose a multimodal deep learning algorithm that combines convolutional neural networks (CNNs) and long short-term memory (LSTM) networks for early detection and prediction of heart disease using data collected from wearable devices. This combined multi-model deep learning algorithm is used to improve detection precision and accuracy. At first, we consider ECG and PPG signals, which are collected from the dataset. Then, the features from ECG and PPG are extracted using CNN and the accelerometer features are extracted using the LSTM model. The combined features are then classified using hybrid CNN-LSTM network architecture. The algorithm is evaluated using a publicly available benchmark dataset. The model achieved an accuracy of 99.33% in detecting heart disease, outperforming several state-of-the-art deep learning models. In addition, the model can predict the likelihood of developing heart disease with a precision of 99.33%, providing an early warning system for at-risk patients. The results demonstrate the potential of a multimodal approach for early detection and prediction of heart disease using wearable devices and deep learning algorithms. | 10.1007/s11042-024-19127-6 | early detection and prediction of heart disease using wearable devices and deep learning algorithms | in this paper, we propose a multimodal deep learning algorithm that combines convolutional neural networks (cnns) and long short-term memory (lstm) networks for early detection and prediction of heart disease using data collected from wearable devices. this combined multi-model deep learning algorithm is used to improve detection precision and accuracy. at first, we consider ecg and ppg signals, which are collected from the dataset. then, the features from ecg and ppg are extracted using cnn and the accelerometer features are extracted using the lstm model. the combined features are then classified using hybrid cnn-lstm network architecture. the algorithm is evaluated using a publicly available benchmark dataset. the model achieved an accuracy of 99.33% in detecting heart disease, outperforming several state-of-the-art deep learning models. in addition, the model can predict the likelihood of developing heart disease with a precision of 99.33%, providing an early warning system for at-risk patients. the results demonstrate the potential of a multimodal approach for early detection and prediction of heart disease using wearable devices and deep learning algorithms. | [
"this paper",
"we",
"a multimodal deep learning algorithm",
"that",
"cnns",
"lstm",
"early detection",
"prediction",
"heart disease",
"data",
"wearable devices",
"this combined multi-model deep learning algorithm",
"the accurate precision",
"accuracy value",
"we",
"ecg and ppg signals",
"which",
"the dataset",
"the features",
"ecg",
"ppg",
"cnn",
"the accelerometer features",
"the lstm model",
"the combined features",
"hybrid cnn-lstm network architecture",
"the algorithm",
"a publicly available benchmark dataset",
"the model",
"an accuracy",
"99.33%",
"heart disease",
"the-art",
"addition",
"the model",
"the likelihood",
"heart disease",
"a precision",
"99.33%",
"an early warning system",
"risk",
"the results",
"the potential",
"a multimodal approach",
"early detection",
"prediction",
"heart disease",
"wearable devices",
"deep learning algorithms",
"first",
"cnn",
"cnn",
"99.33%",
"99.33%"
] |
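Hedged sketch of the multimodal fusion described above: a 1-D CNN over the ECG and PPG channels, an LSTM over the accelerometer sequence, and a joint classifier on the concatenated features. All shapes and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultimodalHeartNet(nn.Module):
    """CNN features (ECG+PPG) fused with LSTM features (accelerometer)."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(             # ECG + PPG as 2 input channels
            nn.Conv1d(2, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(3, 32, batch_first=True)  # 3-axis accelerometer
        self.head = nn.Linear(32 + 32, 2)             # disease / healthy

    def forward(self, ecg_ppg, accel):
        z_cnn = self.cnn(ecg_ppg)             # (batch, 32)
        _, (h, _) = self.lstm(accel)
        return self.head(torch.cat([z_cnn, h[-1]], dim=1))

net = MultimodalHeartNet()
print(net(torch.randn(4, 2, 1000), torch.randn(4, 50, 3)).shape)  # [4, 2]
```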
A new content-aware image resizing based on Rényi entropy and deep learning | [
"Jila Ayubi",
"Mehdi Chehel Amirani",
"Morteza Valizadeh"
] | One of the most popular techniques for changing the purpose of an image or resizing a digital image with content awareness is the seam-carving method. The performance of image resizing algorithms based on seam carving shows that these algorithms are highly dependent on the extraction of importance map techniques and the detection of salient objects. So far, various algorithms have been proposed to extract the importance map. In this paper, a new method based on Rényi entropy is proposed to extract the importance map. Also, a deep learning network has been used to detect salient objects. The simulation results showed that combining Rényi’s importance map with a deep network of salient object detection performed better than classical seam-carving and other extended seam-carving algorithms based on deep learning. | 10.1007/s00521-024-09517-0 | a new content-aware image resizing based on rényi entropy and deep learning | one of the most popular techniques for changing the purpose of an image or resizing a digital image with content awareness is the seam-carving method. the performance of image resizing algorithms based on seam carving shows that these algorithms are highly dependent on the extraction of importance map techniques and the detection of salient objects. so far, various algorithms have been proposed to extract the importance map. in this paper, a new method based on rényi entropy is proposed to extract the importance map. also, a deep learning network has been used to detect salient objects. the simulation results showed that combining rényi’s importance map with a deep network of salient object detection performed better than classical seam-carving and other extended seam-carving algorithms based on deep learning. | [
"the most popular techniques",
"the purpose",
"an image",
"a digital image",
"content awareness",
"the seam-carving method",
"the performance",
"image resizing algorithms",
"seam machining",
"these algorithms",
"the extraction",
"importance map techniques",
"the detection",
"salient objects",
"various algorithms",
"the importance map",
"this paper",
"a new method",
"rényi entropy",
"the importance map",
"a deep learning network",
"salient objects",
"the simulator results",
"rényi’s importance map",
"a deep network",
"salient object detection",
"classical seam-carving",
"other extended seam-carving algorithms",
"deep learning",
"one"
] |
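The two ingredients named above can be sketched compactly: a blockwise Rényi-entropy importance map and classic dynamic-programming removal of one vertical seam. This is an illustration only (the paper's map construction and its deep saliency branch are not reproduced); block size, bin count, and the order alpha are assumptions.

```python
import numpy as np

def renyi_entropy_map(gray, alpha=2.0, block=8, bins=16):
    """Blockwise Renyi entropy H_a = log(sum p_i**a) / (1 - a):
    busy, information-rich blocks get high importance."""
    h, w = gray.shape
    emap = np.zeros((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            p, _ = np.histogram(gray[i:i+block, j:j+block], bins=bins, range=(0, 256))
            p = p[p > 0] / p.sum()
            emap[i:i+block, j:j+block] = np.log((p ** alpha).sum()) / (1 - alpha)
    return emap

def remove_one_vertical_seam(img, energy):
    """Dynamic-programming seam removal over an energy (importance) map."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for i in range(1, h):
        left = np.r_[np.inf, cost[i - 1, :-1]]
        right = np.r_[cost[i - 1, 1:], np.inf]
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    out = np.empty((h, w - 1), dtype=img.dtype)
    j = int(np.argmin(cost[-1]))          # cheapest seam end at the bottom row
    for i in range(h - 1, -1, -1):
        out[i] = np.delete(img[i], j)     # drop the seam pixel in this row
        if i:
            lo = max(j - 1, 0)
            j = lo + int(np.argmin(cost[i - 1, lo:min(j + 2, w)]))
    return out

gray = (np.random.default_rng(4).random((64, 96)) * 255).astype(np.uint8)
print(remove_one_vertical_seam(gray, renyi_entropy_map(gray)).shape)  # (64, 95)
```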
Deep learning based active image steganalysis: a review | [
"Punam Bedi",
"Anuradha Singhal",
"Veenu Bhasin"
] | Steganalysis plays a vital role in cybersecurity in today’s digital era where exchange of malicious information can be done easily across web pages. Steganography techniques are used to hide data in an object where the existence of hidden information is also obscured. Steganalysis is the process for detection of steganography within an object and can be categorized as active and passive steganalysis. Passive steganalysis tries to classify a given object as a clean or modified object. Active steganalysis aims to extract more details about hidden contents such as length of embedded message, region of inserted message, key used for embedding, required by cybersecurity experts for comprehensive analysis. Images, being a viable source of exchange of information in the era of the internet and social media, are the most susceptible source for such transmission. Many researchers have worked and developed techniques required to detect and alert about such counterfeit exchanges over the internet. Literature present in passive and active image steganalysis techniques addresses these issues by detecting and unveiling details of such obscured communication, respectively. This paper provides a systematic and comprehensive review of work done on active image steganalysis techniques using deep learning techniques. This review will be helpful to the new researchers to become aware and build a strong foundation of literature present in active image steganalysis using deep learning techniques. The paper also includes various steganographic algorithms, dataset and performance evaluation metrics used in literature. Open research challenges and possible future research directions are also discussed in the paper. | 10.1007/s13198-023-02203-9 | deep learning based active image steganalysis: a review | steganalysis plays a vital role in cybersecurity in today’s digital era where exchange of malicious information can be done easily across web pages. steganography techniques are used to hide data in an object where the existence of hidden information is also obscured. steganalysis is the process for detection of steganography within an object and can be categorized as active and passive steganalysis. passive steganalysis tries to classify a given object as a clean or modified object. active steganalysis aims to extract more details about hidden contents such as length of embedded message, region of inserted message, key used for embedding, required by cybersecurity experts for comprehensive analysis. images, being a viable source of exchange of information in the era of the internet and social media, are the most susceptible source for such transmission. many researchers have worked and developed techniques required to detect and alert about such counterfeit exchanges over the internet. literature present in passive and active image steganalysis techniques addresses these issues by detecting and unveiling details of such obscured communication, respectively. this paper provides a systematic and comprehensive review of work done on active image steganalysis techniques using deep learning techniques. this review will be helpful to the new researchers to become aware and build a strong foundation of literature present in active image steganalysis using deep learning techniques. the paper also includes various steganographic algorithms, dataset and performance evaluation metrics used in literature. open research challenges and possible future research directions are also discussed in the paper. | [
"steganalysis",
"a vital role",
"cybersecurity",
"today’s digital era",
"exchange",
"malicious information",
"web pages",
"steganography techniques",
"data",
"an object",
"the existence",
"hidden information",
"steganalysis",
"the process",
"detection",
"steganography",
"an object",
"active and passive steganalysis",
"passive steganalysis",
"a given object",
"a clean or modified object",
"active steganalysis",
"more details",
"hidden contents",
"length",
"embedded message",
"region",
"inserted message",
"cybersecurity experts",
"comprehensive analysis",
"images",
"a viable source",
"exchange",
"information",
"the era",
"internet",
"social media",
"the most susceptible source",
"such transmission",
"many researchers",
"techniques",
"such counterfeit exchanges",
"the internet",
"literature",
"passive and active image steganalysis techniques",
"these issues",
"details",
"such obscured communication",
"this paper",
"a systematic and comprehensive review",
"work",
"active image steganalysis techniques",
"deep learning techniques",
"this review",
"the new researchers",
"a strong foundation",
"literature",
"active image steganalysis",
"deep learning techniques",
"the paper",
"various steganographic algorithms",
"dataset",
"performance evaluation metrics",
"literature",
"open research challenges",
"possible future research directions",
"the paper",
"today"
] |
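The review above contains no code, but nearly all CNN-based image steganalysers it surveys share one preprocessing step: a fixed high-pass filter that suppresses image content and keeps the noise residual in which embedding traces live. A minimal sketch with the widely used 5×5 "KV" kernel follows; the clipping value is an assumption.

```python
import numpy as np
from scipy.signal import convolve2d

# Classic 5x5 KV high-pass kernel from the steganalysis literature.
KV = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype=float) / 12.0

def noise_residual(gray: np.ndarray, clip: float = 3.0) -> np.ndarray:
    """High-pass residual with truncation, a common steganalysis-CNN input."""
    r = convolve2d(gray.astype(float), KV, mode="same")
    return np.clip(r, -clip, clip)

img = (np.random.default_rng(5).random((32, 32)) * 255).astype(np.uint8)
print(noise_residual(img).shape)  # (32, 32)
```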
A deep learning-based car accident detection approach in video-based traffic surveillance | [
"Xinyu Wu",
"Tingting Li"
] | Car accident detection plays a crucial role in video-based traffic surveillance systems, contributing to prompt response and improved road safety. In the literature, various methods have been investigated for accident detection, among which deep learning approaches have shown superior accuracy compared to other methods. The popularity of deep learning stems from its ability to automatically learn complex features from data. However, the current research challenge in deep learning-based accident detection lies in achieving high accuracy rates while meeting real-time requirements. To address this challenge, this study introduces a deep learning approach using convolutional neural networks (CNNs) to enhance car accident detection, prioritizing accuracy and real-time performance. It includes a tailored dataset for evaluation, and the F1-scores reveal reasonably accurate detection for “damaged-rear-window” (62%) and “damaged-window” (63%), while “damaged-windscreen” exhibits exceptional performance at 83%. These results demonstrate the potential of CNNs in improving car accident detection, particularly for certain classes. Following extensive experiments and performance analysis, the proposed method demonstrates accurate results, significantly enhancing car accident detection in video-based traffic surveillance scenarios. | 10.1007/s12596-023-01581-4 | a deep learning-based car accident detection approach in video-based traffic surveillance | car accident detection plays a crucial role in video-based traffic surveillance systems, contributing to prompt response and improved road safety. in the literature, various methods have been investigated for accident detection, among which deep learning approaches have shown superior accuracy compared to other methods. the popularity of deep learning stems from its ability to automatically learn complex features from data. however, the current research challenge in deep learning-based accident detection lies in achieving high accuracy rates while meeting real-time requirements. to address this challenge, this study introduces a deep learning approach using convolutional neural networks (cnns) to enhance car accident detection, prioritizing accuracy and real-time performance. it includes a tailored dataset for evaluation, and the f1-scores reveal reasonably accurate detection for “damaged-rear-window” (62%) and “damaged-window” (63%), while “damaged-windscreen” exhibits exceptional performance at 83%. these results demonstrate the potential of cnns in improving car accident detection, particularly for certain classes. following extensive experiments and performance analysis, the proposed method demonstrates accurate results, significantly enhancing car accident detection in video-based traffic surveillance scenarios. | [
"car accident detection",
"a crucial role",
"video-based traffic surveillance systems",
"response",
"the literature",
"various methods",
"accident detection",
"which",
"deep learning approaches",
"superior accuracy",
"other methods",
"the popularity",
"deep learning",
"its ability",
"complex features",
"data",
"the current research challenge",
"deep learning-based accident detection",
"high accuracy rates",
"real-time requirements",
"this challenge",
"this study",
"a deep learning approach",
"convolutional neural networks",
"cnns",
"car accident detection",
"accuracy",
"real-time performance",
"it",
"a tailored dataset",
"evaluation",
"the f1-scores",
"reasonably accurate detection",
"“damaged-rear-window",
"62%",
"“damaged-window",
"63%",
"exceptional performance",
"83%",
"these results",
"the potential",
"cnns",
"car accident detection",
"certain classes",
"extensive experiments",
"performance analysis",
"the proposed method",
"accurate results",
"car accident detection",
"video-based traffic surveillance scenarios",
"62%",
"63%",
"83%"
] |
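The abstract reports per-class F1-scores; for reference, the standard way to compute them is shown below, with synthetic predictions standing in for the paper's (non-public) test set.

```python
import numpy as np
from sklearn.metrics import f1_score

classes = ["damaged-rear-window", "damaged-window", "damaged-windscreen"]
rng = np.random.default_rng(6)
y_true = rng.integers(0, 3, size=200)                  # ground-truth class ids
y_pred = np.where(rng.random(200) < 0.75, y_true,      # mostly-correct model
                  rng.integers(0, 3, size=200))

for name, f1 in zip(classes, f1_score(y_true, y_pred, average=None)):
    print(f"{name}: F1 = {f1:.2f}")
```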
Scoring method of English composition integrating deep learning in higher vocational colleges | [
"Shuo Feng",
"Lixia Yu",
"Fen Liu"
] | Along with the progress of natural language processing technology and deep learning, the subjectivity, slow feedback, and long grading time of traditional English essay grading have been addressed. Intelligent English automatic scoring has been widely concerned by scholars. Given the limitations of topic relevance feature extraction methods and traditional automatic grading methods for English compositions, a topic decision model is proposed to calculate the topic relevance score of the topic richness in English composition. Then, based on the Score of Relevance Based on Topic Richness (TRSR) calculation method, an intelligent English composition scoring method combining artificial feature extraction and deep learning is designed. From the findings, the Topic Decision (TD) model achieved the best effect only when it was iterated 80 times. The corresponding accuracy, recall and F1 value were 0.97, 0.93 and 0.95 respectively. The model training loss finally stabilized at 0.03. The Intelligent English Composition Grading Method Integrating Deep Learning (DLIECG) method has the best overall performance and the best performance on dataset P. To sum up, the intelligent English composition scoring method has better effectiveness and reliability. | 10.1038/s41598-024-57419-x | scoring method of english composition integrating deep learning in higher vocational colleges | along with the progress of natural language processing technology and deep learning, the subjectivity, slow feedback, and long grading time of traditional english essay grading have been addressed. intelligent english automatic scoring has been widely concerned by scholars. given the limitations of topic relevance feature extraction methods and traditional automatic grading methods for english compositions, a topic decision model is proposed to calculate the topic relevance score of the topic richness in english composition. then, based on the score of relevance based on topic richness (trsr) calculation method, an intelligent english composition scoring method combining artificial feature extraction and deep learning is designed. from the findings, the topic decision (td) model achieved the best effect only when it was iterated 80 times. the corresponding accuracy, recall and f1 value were 0.97, 0.93 and 0.95 respectively. the model training loss finally stabilized at 0.03. the intelligent english composition grading method integrating deep learning (dliecg) method has the best overall performance and the best performance on dataset p. to sum up, the intelligent english composition scoring method has better effectiveness and reliability. | [
"the progress",
"natural language processing technology",
"deep learning",
"the subjectivity",
"slow feedback",
"long grading time",
"intelligent english automatic scoring",
"scholars",
"the limitations",
"topic relevance feature extraction methods",
"traditional automatic grading methods",
"english compositions",
"a topic decision model",
"the topic relevance score",
"the topic richness",
"english composition",
"the score",
"relevance",
"topic richness (trsr) calculation method",
"an intelligent english composition scoring method",
"artificial feature extraction",
"deep learning",
"the findings",
"the topic decision",
"(td) model",
"the best effect",
"it",
"the corresponding accuracy",
"recall",
"f1 value",
"the model training loss",
"the intelligent english composition",
"method",
"deep learning (dliecg) method",
"the best overall performance",
"the best performance",
"dataset p.",
"the intelligent english composition scoring method",
"better effectiveness",
"reliability",
"english",
"english",
"english",
"english",
"english",
"80",
"0.97",
"0.93",
"0.95",
"0.03",
"english",
"english"
] |
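The abstract does not spell out how the TRSR topic-relevance score is computed; as a hedged illustration of the underlying idea, the sketch below scores an essay against its prompt with TF-IDF cosine similarity. This is a stand-in, not the paper's method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def topic_relevance(essay: str, prompt: str) -> float:
    """Cosine similarity of TF-IDF vectors as a rough topic-relevance score."""
    vec = TfidfVectorizer().fit([essay, prompt])
    m = vec.transform([essay, prompt])
    return float(cosine_similarity(m[0], m[1])[0, 0])

prompt = "describe the advantages of online learning for vocational students"
on_topic = "online learning gives vocational students flexible study schedules"
off_topic = "my favourite dish is fried rice with vegetables and eggs"
print(topic_relevance(on_topic, prompt), topic_relevance(off_topic, prompt))
```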
A deep learning framework for students' academic performance analysis | [
"Sumati Pathak",
"Hiral Raja",
"Sumit Srivastava",
"Neelam Sahu",
"Rohit Raja",
"Amit Kumar Dewangan"
] | Students Performance (SP) analysis is regarded as one of the most important steps in the educational system for supporting students' academic success and the institutions' overall outcomes. Nevertheless, it is tremendously challenging due to the numerous details that many students have. Data Mining (DM) is the most widely used approach for SP prediction that extracts imperative information from a bigger raw data set. Even though there are various DM-centered performance prediction approaches, they all have low accuracy and high training time and don't produce the desired output. This paper proposes a hybrid deep learning framework using Deer Hunting Optimization based Deep Learning Neural Networks (DH-DLNN). A self-structured questionnaire covers all aspects of using information and communication technology, including increased access, knowledge building, learning, performance, motivation, classroom management and interaction, collaborative learning, and satisfaction. Data Cleaning and data conversion preprocess the dataset. The prediction of the student's level is then performed by extracting imperative features from the preprocessed data, followed by feature ranking using entropy calculations. The obtained entropy values are inputted into the DH-DLNN, which predicts the students' academic performance. Finally, the accuracy of the proposed system is evaluated using K-fold cross-validation. The experiment results revealed that DH-DLNN outperforms the other classification approaches with an accuracy of 96.33%. | 10.1007/s40012-023-00388-9 | a deep learning framework for students' academic performance analysis | students performance (sp) analysis is regarded as one of the most important steps in the educational system for supporting students' academic success and the institutions' overall outcomes. nevertheless, it is tremendously challenging due to the numerous details that many students have. data mining (dm) is the most widely used approach for sp prediction that extracts imperative information from a bigger raw data set. even though there are various dm-centered performance prediction approaches, they all have low accuracy and high training time and don't produce the desired output. this paper proposes a hybrid deep learning framework using deer hunting optimization based deep learning neural networks (dh-dlnn). a self-structured questionnaire covers all aspects of using information and communication technology, including increased access, knowledge building, learning, performance, motivation, classroom management and interaction, collaborative learning, and satisfaction. data cleaning and data conversion preprocess the dataset. the prediction of the student's level is then performed by extracting imperative features from the preprocessed data, followed by feature ranking using entropy calculations. the obtained entropy values are inputted into the dh-dlnn, which predicts the students' academic performance. finally, the accuracy of the proposed system is evaluated using k-fold cross-validation. the experiment results revealed that dh-dlnn outperforms the other classification approaches with an accuracy of 96.33%. | [
"students",
"performance (sp) analysis",
"the most important steps",
"the educational system",
"students' academic success",
"the institutions' overall outcomes",
"it",
"the numerous details",
"many students",
"data mining",
"(dm",
"the most widely used approach",
"sp prediction",
"that",
"imperative information",
"a bigger raw data set",
"various dm-centered performance prediction approaches",
"they",
"all",
"low accuracy",
"high training time",
"the desired output",
"this paper",
"a hybrid deep learning framework",
"deer hunting optimization",
"neural networks",
"dh-dlnn",
"a self-structured questionnaire",
"all aspects",
"information and communication technology",
"increased access",
"knowledge building",
"learning",
"performance",
"motivation",
"classroom management",
"interaction",
"collaborative learning",
"satisfaction",
"data cleaning",
"data conversion",
"the dataset",
"the prediction",
"the student's level",
"imperative features",
"the preprocessed data",
"feature",
"entropy calculations",
"the obtained entropy values",
"the dh-dlnn",
"which",
"the students' academic performance",
"the accuracy",
"the proposed system",
"k",
"fold cross",
"-",
"validation",
"the experiment results",
"dh-dlnn",
"the other classification approaches",
"an accuracy",
"96.33%",
"one",
"96.33%"
] |
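The pipeline above ranks features by entropy before they reach the DNN; an illustrative version of such information-theoretic ranking, using scikit-learn's mutual-information estimator on synthetic questionnaire data, follows.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(7)
X = rng.random((300, 12))                 # 12 questionnaire features, 300 students
y = (X[:, 0] + 0.1 * rng.random(300) > 0.55).astype(int)  # feature 0 drives labels

scores = mutual_info_classif(X, y, random_state=0)  # entropy-based relevance
ranking = np.argsort(scores)[::-1]
print("features ranked by information content:", ranking[:5])
```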
Car crash detection using ensemble deep learning | [
"Vani Suthamathi Saravanarajan",
"Rung-Ching Chen",
"Christine Dewi",
"Long-Sheng Chen",
"Lata Ganesan"
] | With the recent advancements in Autonomous Vehicles (AVs), two important factors that play a vital role to avoid accidents and collisions are obstacles and track detection. AVs must implement an accident detection model to detect accident vehicles and avoid running into rollover vehicles. At present many trajectories-based and sensor-based multiple-vehicle accident prediction models exist. In Taiwan, the AV Tesla sedan's failure to detect overturned vehicles shows that an efficient deep learning model is still required to detect a single-car crash by taking appropriate actions like slowing down, tracking changes, and informing the concerned authorities. This paper proposes a novel car crash detection system for various car crashes using three deep learning models, namely VGG16(feature extractor using transfer learning), RPN (region proposal network), and CNN8L (region-based detector). The CNN8L is a novel lightweight sequential convolutional neural network for region-based classification and detection. The model is trained using a customized dataset, evaluated using different metrics and compared with various state-of-the-art models. The experimental results show that the VGG16 combined with the CNN8L model performed much better when compared to other models. The proposed system accurately recognizes car accidents with an Accident Detection Rate (ADR) of 86.25% and False Alarm Rate (FAR) of 33.00%. | 10.1007/s11042-023-15906-9 | car crash detection using ensemble deep learning | with the recent advancements in autonomous vehicles (avs), two important factors that play a vital role to avoid accidents and collisions are obstacles and track detection. avs must implement an accident detection model to detect accident vehicles and avoid running into rollover vehicles. at present many trajectories-based and sensor-based multiple-vehicle accident prediction models exist. in taiwan, the av tesla sedan's failure to detect overturned vehicles shows that an efficient deep learning model is still required to detect a single-car crash by taking appropriate actions like slowing down, tracking changes, and informing the concerned authorities. this paper proposes a novel car crash detection system for various car crashes using three deep learning models, namely vgg16(feature extractor using transfer learning), rpn (region proposal network), and cnn8l (region-based detector). the cnn8l is a novel lightweight sequential convolutional neural network for region-based classification and detection. the model is trained using a customized dataset, evaluated using different metrics and compared with various state-of-the-art models. the experimental results show that the vgg16 combined with the cnn8l model performed much better when compared to other models. the proposed system accurately recognizes car accidents with an accident detection rate (adr) of 86.25% and false alarm rate (far) of 33.00%. | [
"the recent advancements",
"autonomous vehicles",
"two important factors",
"that",
"a vital role",
"accidents",
"collisions",
"obstacles",
"track detection",
"avs",
"an accident detection model",
"accident vehicles",
"rollover vehicles",
"present many trajectories-based and sensor-based multiple-vehicle accident prediction models",
"taiwan",
"the av tesla sedan's failure",
"overturned vehicles",
"an efficient deep learning model",
"a single-car crash",
"appropriate actions",
"changes",
"the concerned authorities",
"this paper",
"a novel car crash detection system",
"various car crashes",
"three deep learning models",
"namely vgg16(feature extractor",
"transfer learning",
"rpn",
"region proposal network",
"cnn8l",
"(region-based detector",
"the cnn8l",
"a novel lightweight sequential convolutional neural network",
"region-based classification",
"detection",
"the model",
"a customized dataset",
"different metrics",
"the-art",
"the experimental results",
"the vgg16",
"the cnn8l model",
"other models",
"the proposed system",
"car accidents",
"an accident detection rate",
"adr",
"86.25%",
"false alarm rate",
"33.00%",
"two",
"taiwan",
"three",
"86.25%",
"33.00%"
] |
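The crash-detection record above names VGG16 as a transfer-learning feature extractor feeding a compact region-based classifier. Below is a minimal sketch of that division of labor in Keras; the head is a generic small CNN, not the paper's CNN8L, and the region-proposal (RPN) stage is omitted entirely.

```python
# Hedged sketch: frozen ImageNet VGG16 backbone + small trainable head.
# The head is a stand-in for a compact region-based classifier; it is
# not the CNN8L layout, which is not specified in this record.
import numpy as np
import tensorflow as tf

backbone = tf.keras.applications.VGG16(        # downloads ImageNet weights
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
backbone.trainable = False                     # transfer learning: freeze

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # crash / no crash
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print(model(np.zeros((1, 224, 224, 3), dtype="float32")).shape)  # (1, 2)
```

Freezing the backbone is what makes the approach viable on a small customized dataset: only the head's weights are fit to the crash images.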
Deep learning based vessel arrivals monitoring via autoregressive statistical control charts | [
"Sara El Mekkaoui",
"Ghait Boukachab",
"Loubna Benabbou",
"Abdelaziz Berrado"
] | This paper introduces a methodology for monitoring the vessel arrival process, a critical factor in enhancing maritime operational efficiency. This approach uses deep learning sequence models and Statistical Process Control Charts to track the variability in a vessel arrival process. The proposed solution uses the predictive deep learning model to get a vessel’s estimated time of arrival, produces quality characteristics, and applies statistical control charts to monitor their variability. The paper presents the results of applying the proposed methodology for vessel arrivals at a coal terminal, which demonstrates the effectiveness of the method. By enabling precise monitoring of arrival times, this methodology not only supports efficient ship and port operations planning but also aids in the timely adoption of operational adjustments. This can significantly contribute to operational measures aimed at reducing shipping emissions and optimizing resource utilization. | 10.1007/s13437-024-00342-9 | deep learning based vessel arrivals monitoring via autoregressive statistical control charts | this paper introduces a methodology for monitoring the vessel arrival process, a critical factor in enhancing maritime operational efficiency. this approach uses deep learning sequence models and statistical process control charts to track the variability in a vessel arrival process. the proposed solution uses the predictive deep learning model to get a vessel’s estimated time of arrival, produces quality characteristics, and applies statistical control charts to monitor their variability. the paper presents the results of applying the proposed methodology for vessel arrivals at a coal terminal, which demonstrates the effectiveness of the method. by enabling precise monitoring of arrival times, this methodology not only supports efficient ship and port operations planning but also aids in the timely adoption of operational adjustments. this can significantly contribute to operational measures aimed at reducing shipping emissions and optimizing resource utilization. | [
"this paper",
"a methodology",
"the vessel arrival process",
"a critical factor",
"maritime operational efficiency",
"this approach",
"deep learning sequence models",
"statistical process control charts",
"the variability",
"a vessel arrival process",
"the proposed solution",
"the predictive deep learning model",
"a vessel’s estimated time",
"arrival",
"quality characteristics",
"statistical control charts",
"their variability",
"the paper",
"the results",
"the proposed methodology",
"vessel arrivals",
"a coal terminal",
"which",
"the effectiveness",
"the method",
"precise monitoring",
"arrival times",
"this methodology",
"efficient ship and port operations planning",
"the timely adoption",
"operational adjustments",
"this",
"operational measures",
"shipping emissions",
"resource utilization"
] |
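The monitoring idea in the preceding record reduces to tracking a quality characteristic, here the residual between predicted and actual arrival times, against control limits. The sketch below uses plain 3-sigma Shewhart limits estimated from a stable phase-I window; the deep sequence model and the autoregressive residual treatment are abstracted away, and all numbers are toy values.

```python
# Minimal sketch: Shewhart-style control limits on ETA residuals.
# `phase1` plays the role of residuals collected while the arrival
# process is stable; `new` are later residuals being monitored.
import numpy as np

phase1 = np.array([0.2, -0.1, 0.3, 0.0, -0.2, 0.1])  # hours, toy data
center, sigma = phase1.mean(), phase1.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma     # 3-sigma limits

new = np.array([0.1, -0.3, 1.5])                      # incoming residuals
for t, r in enumerate(new):
    flag = "OUT OF CONTROL" if not (lcl <= r <= ucl) else "in control"
    print(f"arrival {t}: residual {r:+.2f} h -> {flag}")
```

An out-of-control signal is the trigger for the operational adjustments the record mentions, e.g. replanning berth or resource allocation.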
RNA contact prediction by data efficient deep learning | [
"Oskar Taubert",
"Fabrice von der Lehr",
"Alina Bazarova",
"Christian Faber",
"Philipp Knechtges",
"Marie Weiel",
"Charlotte Debus",
"Daniel Coquelin",
"Achim Basermann",
"Achim Streit",
"Stefan Kesselheim",
"Markus Götz",
"Alexander Schug"
] | On the path to full understanding of the structure-function relationship or even design of RNA, structure prediction would offer an intriguing complement to experimental efforts. Any deep learning on RNA structure, however, is hampered by the sparsity of labeled training data. Utilizing the limited data available, we here focus on predicting spatial adjacencies ("contact maps”) as a proxy for 3D structure. Our model, BARNACLE, combines the utilization of unlabeled data through self-supervised pre-training and efficient use of the sparse labeled data through an XGBoost classifier. BARNACLE shows a considerable improvement over both the established classical baseline and a deep neural network. In order to demonstrate that our approach can be applied to tasks with similar data constraints, we show that our findings generalize to the related setting of accessible surface area prediction. | 10.1038/s42003-023-05244-9 | rna contact prediction by data efficient deep learning | on the path to full understanding of the structure-function relationship or even design of rna, structure prediction would offer an intriguing complement to experimental efforts. any deep learning on rna structure, however, is hampered by the sparsity of labeled training data. utilizing the limited data available, we here focus on predicting spatial adjacencies ("contact maps”) as a proxy for 3d structure. our model, barnacle, combines the utilization of unlabeled data through self-supervised pre-training and efficient use of the sparse labeled data through an xgboost classifier. barnacle shows a considerable improvement over both the established classical baseline and a deep neural network. in order to demonstrate that our approach can be applied to tasks with similar data constraints, we show that our findings generalize to the related setting of accessible surface area prediction. | [
"the path",
"full understanding",
"the structure-function relationship",
"even design",
"rna",
"structure prediction",
"an intriguing complement",
"experimental efforts",
"any deep learning",
"rna structure",
"the sparsity",
"labeled training data",
"the limited data",
"we",
"spatial adjacencies",
"(\"contact maps",
"a proxy",
"3d structure",
"our model",
"barnacle",
"the utilization",
"unlabeled data",
"self-supervised pre",
"-",
"training and efficient use",
"the sparse",
"data",
"an xgboost classifier",
"barnacle",
"a considerable improvement",
"both the established classical baseline",
"a deep neural network",
"order",
"our approach",
"tasks",
"similar data constraints",
"we",
"our findings",
"the related setting",
"accessible surface area prediction",
"3d"
] |
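A rough sketch of the division described above: representations from a pretrained encoder are consumed by an XGBoost classifier that decides contact versus no contact per residue pair. Random vectors stand in for BARNACLE's self-supervised embeddings and the labels are synthetic, so this shows only the plumbing, not the method's accuracy.

```python
# Plumbing sketch: pretrained per-position embeddings -> pairwise
# features -> XGBoost contact/no-contact classifier. Random embeddings
# and labels replace BARNACLE's actual pretraining and data.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
L, D = 40, 16                      # toy sequence length, embedding dim
emb = rng.normal(size=(L, D))      # stand-in for pretrained embeddings

pairs, labels = [], []
for i in range(L):
    for j in range(i + 4, L):      # skip trivially adjacent positions
        pairs.append(np.concatenate([emb[i], emb[j], emb[i] * emb[j]]))
        labels.append(int(rng.random() < 0.1))  # toy contact labels

clf = xgb.XGBClassifier(n_estimators=100, max_depth=4)
clf.fit(np.array(pairs), np.array(labels))
print(clf.predict_proba(np.array(pairs))[:5, 1])  # contact probabilities
```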
Analysis, characterization, prediction, and attribution of extreme atmospheric events with machine learning and deep learning techniques: a review | [
"Sancho Salcedo-Sanz",
"Jorge Pérez-Aracil",
"Guido Ascenso",
"Javier Del Ser",
"David Casillas-Pérez",
"Christopher Kadow",
"Dušan Fister",
"David Barriopedro",
"Ricardo García-Herrera",
"Matteo Giuliani",
"Andrea Castelletti"
] | Atmospheric extreme events cause severe damage to human societies and ecosystems. The frequency and intensity of extremes and other associated events are continuously increasing due to climate change and global warming. The accurate prediction, characterization, and attribution of atmospheric extreme events is, therefore, a key research field in which many groups are currently working by applying different methodologies and computational tools. Machine learning and deep learning methods have arisen in the last years as powerful techniques to tackle many of the problems related to atmospheric extreme events. This paper reviews machine learning and deep learning approaches applied to the analysis, characterization, prediction, and attribution of the most important atmospheric extremes. A summary of the most used machine learning and deep learning techniques in this area, and a comprehensive critical review of literature related to ML in EEs, are provided. The critical literature review has been extended to extreme events related to rainfall and floods, heatwaves and extreme temperatures, droughts, severe weather events and fog, and low-visibility episodes. A case study focused on the analysis of extreme atmospheric temperature prediction with ML and DL techniques is also presented in the paper. Conclusions, perspectives, and outlooks on the field are finally drawn. | 10.1007/s00704-023-04571-5 | analysis, characterization, prediction, and attribution of extreme atmospheric events with machine learning and deep learning techniques: a review | atmospheric extreme events cause severe damage to human societies and ecosystems. the frequency and intensity of extremes and other associated events are continuously increasing due to climate change and global warming. the accurate prediction, characterization, and attribution of atmospheric extreme events is, therefore, a key research field in which many groups are currently working by applying different methodologies and computational tools. machine learning and deep learning methods have arisen in the last years as powerful techniques to tackle many of the problems related to atmospheric extreme events. this paper reviews machine learning and deep learning approaches applied to the analysis, characterization, prediction, and attribution of the most important atmospheric extremes. a summary of the most used machine learning and deep learning techniques in this area, and a comprehensive critical review of literature related to ml in ees, are provided. the critical literature review has been extended to extreme events related to rainfall and floods, heatwaves and extreme temperatures, droughts, severe weather events and fog, and low-visibility episodes. a case study focused on the analysis of extreme atmospheric temperature prediction with ml and dl techniques is also presented in the paper. conclusions, perspectives, and outlooks on the field are finally drawn. | [
"atmospheric extreme events",
"severe damage",
"human societies",
"ecosystems",
"the frequency",
"intensity",
"extremes",
"other associated events",
"climate change",
"global warming",
"the accurate prediction",
"characterization",
"attribution",
"atmospheric extreme events",
"a key research field",
"which",
"many groups",
"different methodologies",
"computational tools",
"machine learning",
"deep learning methods",
"the last years",
"powerful techniques",
"the problems",
"atmospheric extreme events",
"this paper",
"machine learning",
"deep learning approaches",
"the analysis",
"characterization",
"prediction",
"attribution",
"the most important atmospheric extremes",
"a summary",
"the most used machine learning",
"deep learning techniques",
"this area",
"a comprehensive critical review",
"literature",
"ml",
"ees",
"the critical literature review",
"extreme events",
"rainfall",
"floods",
"heatwaves",
"extreme temperatures",
"droughts",
"severe weather events",
"fog",
"low-visibility episodes",
"a case study",
"the analysis",
"extreme atmospheric temperature prediction",
"ml",
"dl techniques",
"the paper",
"conclusions",
"perspectives",
"outlooks",
"the field",
"the last years"
] |
Application of deep learning and XGBoost in predicting pathological staging of breast cancer MR images | [
"Yue Miao",
"Siyuan Tang",
"Zhuqiang Zhang",
"Jukun Song",
"Zhi Liu",
"Qiang Chen",
"Miao Zhang"
] | The methods of deep learning and traditional radiomics feature extraction were preliminarily discussed, and a multimodal data prediction model for breast cancer clinical stage was established. The MR images and clinical staging data of breast cancer were obtained from the official websites of the American Cancer Center TCGA and TCIA, respectively, with a total of 139 patient samples. The region of interest was delineated on the enhanced image of breast cancer MR, and then the feature extraction of radiomics and deep learning was performed, and 108 radiomics features and 1024 deep-learning features were extracted for each case. After feature screening and processing, clinical data were integrated, and a machine-learning model was used to predict clinical stage I and non-stage I. Results 26 radiomic features and 12 deep features related to staging were screened out by LASSO algorithm, and a classification model was constructed based on XGBoost machine learning. The patients were predicted with an accuracy rate of 80.00%, and the area under the curve of the receiver operating characteristic curve was 0.833. It is feasible to predict the clinical stage of breast cancer through radiomics and deep-learning feature extraction and machine-learning technology. The classification model based on multimodal data established by using machine-learning classifier can distinguish clinical stage I and non-stage I in breast cancer and have higher accuracy. This study confirms the feasibility and accuracy of combining data from different modalities to contribute to clinical staging prediction. The research contributions include demonstrating the superiority of deep-learning models for feature extraction and classification, as well as highlighting the potential of combining deep learning and traditional machine-learning algorithms for improved classification performance. | 10.1007/s11227-023-05797-w | application of deep learning and xgboost in predicting pathological staging of breast cancer mr images | the methods of deep learning and traditional radiomics feature extraction were preliminarily discussed, and a multimodal data prediction model for breast cancer clinical stage was established. the mr images and clinical staging data of breast cancer were obtained from the official websites of the american cancer center tcga and tcia, respectively, with a total of 139 patient samples. the region of interest was delineated on the enhanced image of breast cancer mr, and then the feature extraction of radiomics and deep learning was performed, and 108 radiomics features and 1024 deep-learning features were extracted for each case. after feature screening and processing, clinical data were integrated, and a machine-learning model was used to predict clinical stage i and non-stage i. results 26 radiomic features and 12 deep features related to staging were screened out by lasso algorithm, and a classification model was constructed based on xgboost machine learning. the patients were predicted with an accuracy rate of 80.00%, and the area under the curve of the receiver operating characteristic curve was 0.833. it is feasible to predict the clinical stage of breast cancer through radiomics and deep-learning feature extraction and machine-learning technology. the classification model based on multimodal data established by using machine-learning classifier can distinguish clinical stage i and non-stage i in breast cancer and have higher accuracy. 
this study confirms the feasibility and accuracy of combining data from different modalities to contribute to clinical staging prediction. the research contributions include demonstrating the superiority of deep-learning models for feature extraction and classification, as well as highlighting the potential of combining deep learning and traditional machine-learning algorithms for improved classification performance. | [
"the methods",
"deep learning",
"traditional radiomics",
"feature extraction",
"a multimodal data prediction model",
"breast cancer clinical stage",
"the mr images",
"clinical staging data",
"breast cancer",
"the official websites",
"the american cancer center",
"tcia",
"a total",
"139 patient samples",
"the region",
"interest",
"the enhanced image",
"breast cancer",
"then the feature extraction",
"radiomics",
"deep learning",
"108 radiomics features",
"1024 deep-learning features",
"each case",
"feature screening",
"processing",
"clinical data",
"a machine-learning model",
"clinical stage",
"i",
"non-stage i.",
"26 radiomic features",
"12 deep features",
"staging",
"lasso algorithm",
"a classification model",
"xgboost machine learning",
"the patients",
"an accuracy rate",
"80.00%",
"the area",
"the curve",
"the receiver operating characteristic curve",
"it",
"the clinical stage",
"breast cancer",
"radiomics",
"deep-learning feature extraction and machine-learning technology",
"the classification model",
"multimodal data",
"machine-learning classifier",
"clinical stage",
"i",
"i",
"breast cancer",
"higher accuracy",
"this study",
"the feasibility",
"accuracy",
"data",
"different modalities",
"clinical staging prediction",
"the research contributions",
"the superiority",
"deep-learning models",
"feature extraction",
"classification",
"the potential",
"deep learning",
"traditional machine-learning algorithms",
"improved classification performance",
"american",
"139",
"108",
"1024",
"26",
"12",
"lasso",
"80.00%",
"0.833",
"distinguish"
] |
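A compact sketch of the two-step pipeline in the record above: LASSO screens a concatenated block of radiomics and deep features, then XGBoost classifies stage I versus non-stage I on the retained columns. The 139 samples and the 108 + 1024 feature split mirror the record; the data themselves are synthetic.

```python
# Sketch of the record's pipeline on synthetic stand-in data:
# LASSO feature screening, then XGBoost on the surviving columns.
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=139, n_features=1132,  # 108 + 1024
                           n_informative=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
keep = np.flatnonzero(lasso.coef_)           # columns LASSO retains
print(f"{keep.size} features survive LASSO screening")

clf = xgb.XGBClassifier(n_estimators=200, max_depth=3)
clf.fit(X_tr[:, keep], y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te[:, keep])[:, 1]))
```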
Enhancing deep learning classification performance of tongue lesions in imbalanced data: mosaic-based soft labeling with curriculum learning | [
"Sung-Jae Lee",
"Hyun Jun Oh",
"Young-Don Son",
"Jong-Hoon Kim",
"Ik-Jae Kwon",
"Bongju Kim",
"Jong-Ho Lee",
"Hang-Keun Kim"
] | BackgroundOral potentially malignant disorders (OPMDs) are associated with an increased risk of cancer of the oral cavity including the tongue. The early detection of oral cavity cancers and OPMDs is critical for reducing cancer-specific morbidity and mortality. Recently, there have been studies to apply the rapidly advancing technology of deep learning for diagnosing oral cavity cancer and OPMDs. However, several challenging issues such as class imbalance must be resolved to effectively train a deep learning model for medical imaging classification tasks. The aim of this study is to evaluate a new technique of artificial intelligence to improve the classification performance in an imbalanced tongue lesion dataset.MethodsA total of 1,810 tongue images were used for the classification. The class-imbalanced dataset consisted of 372 instances of cancer, 141 instances of OPMDs, and 1,297 instances of noncancerous lesions. The EfficientNet model was used as the feature extraction model for classification. Mosaic data augmentation, soft labeling, and curriculum learning (CL) were employed to improve the classification performance of the convolutional neural network.ResultsUtilizing a mosaic-augmented dataset in conjunction with CL, the final model achieved an accuracy rate of 0.9444, surpassing conventional oversampling and weight balancing methods. The relative precision improvement rate for the minority class OPMD was 21.2%, while the relative \({F}_{1}\) score improvement rate of OPMD was 4.9%.ConclusionsThe present study demonstrates that the integration of mosaic-based soft labeling and curriculum learning improves the classification performance of tongue lesions compared to previous methods, establishing a foundation for future research on effectively learning from imbalanced data. | 10.1186/s12903-024-03898-3 | enhancing deep learning classification performance of tongue lesions in imbalanced data: mosaic-based soft labeling with curriculum learning | backgroundoral potentially malignant disorders (opmds) are associated with an increased risk of cancer of the oral cavity including the tongue. the early detection of oral cavity cancers and opmds is critical for reducing cancer-specific morbidity and mortality. recently, there have been studies to apply the rapidly advancing technology of deep learning for diagnosing oral cavity cancer and opmds. however, several challenging issues such as class imbalance must be resolved to effectively train a deep learning model for medical imaging classification tasks. the aim of this study is to evaluate a new technique of artificial intelligence to improve the classification performance in an imbalanced tongue lesion dataset.methodsa total of 1,810 tongue images were used for the classification. the class-imbalanced dataset consisted of 372 instances of cancer, 141 instances of opmds, and 1,297 instances of noncancerous lesions. the efficientnet model was used as the feature extraction model for classification. mosaic data augmentation, soft labeling, and curriculum learning (cl) were employed to improve the classification performance of the convolutional neural network.resultsutilizing a mosaic-augmented dataset in conjunction with cl, the final model achieved an accuracy rate of 0.9444, surpassing conventional oversampling and weight balancing methods. 
the relative precision improvement rate for the minority class opmd was 21.2%, while the relative \({f}_{1}\) score improvement rate of opmd was 4.9%.conclusionsthe present study demonstrates that the integration of mosaic-based soft labeling and curriculum learning improves the classification performance of tongue lesions compared to previous methods, establishing a foundation for future research on effectively learning from imbalanced data. | [
"backgroundoral potentially malignant disorders",
"an increased risk",
"cancer",
"the oral cavity",
"the tongue",
"the early detection",
"oral cavity cancers",
"opmds",
"cancer-specific morbidity",
"mortality",
"studies",
"the rapidly advancing technology",
"deep learning",
"oral cavity cancer",
"opmds",
"several challenging issues",
"class imbalance",
"a deep learning model",
"medical imaging classification tasks",
"the aim",
"this study",
"a new technique",
"artificial intelligence",
"the classification performance",
"an imbalanced tongue lesion",
"dataset.methodsa total",
"1,810 tongue images",
"the classification",
"the class-imbalanced dataset",
"372 instances",
"cancer",
"141 instances",
"opmds",
"1,297 instances",
"noncancerous lesions",
"the efficientnet model",
"the feature extraction model",
"classification",
"mosaic data augmentation",
"soft labeling",
"curriculum learning",
"cl",
"the classification performance",
"the convolutional neural",
"a mosaic-augmented dataset",
"conjunction",
"cl",
"the final model",
"an accuracy rate",
"conventional oversampling",
"weight balancing methods",
"the relative precision improvement rate",
"the minority class",
"21.2%",
"the relative \\({f}_{1}\\) score improvement rate",
"4.9%.conclusionsthe present study",
"the integration",
"mosaic-based soft labeling",
"curriculum learning",
"the classification performance",
"tongue lesions",
"previous methods",
"a foundation",
"future research",
"imbalanced data",
"dataset.methodsa",
"1,810",
"372",
"141",
"1,297",
"mosaic data",
"mosaic",
"0.9444",
"21.2%"
] |
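The mosaic-based soft labeling named above can be sketched directly: four images are tiled around a random split point and the label becomes a mix of the four class labels. Weighting the mix by tile area is an assumption here, since the record does not spell out the weighting, and the curriculum-learning schedule is omitted.

```python
# Hedged sketch of mosaic augmentation with soft labels. Area-weighted
# mixing is an assumption, not confirmed by the record.
import numpy as np

def mosaic_with_soft_label(images, labels, n_classes, rng):
    """images: four HxWxC arrays of equal size; labels: four class ids."""
    h, w = images[0].shape[:2]
    cy = rng.integers(h // 4, 3 * h // 4)    # random split point
    cx = rng.integers(w // 4, 3 * w // 4)
    out = np.zeros_like(images[0])
    out[:cy, :cx] = images[0][:cy, :cx]
    out[:cy, cx:] = images[1][:cy, cx:]
    out[cy:, :cx] = images[2][cy:, :cx]
    out[cy:, cx:] = images[3][cy:, cx:]
    areas = np.array([cy * cx, cy * (w - cx),
                      (h - cy) * cx, (h - cy) * (w - cx)], dtype=float)
    soft = np.zeros(n_classes)
    for a, lbl in zip(areas / areas.sum(), labels):
        soft[lbl] += a                        # area-weighted soft label
    return out, soft

rng = np.random.default_rng(0)
imgs = [rng.random((64, 64, 3)) for _ in range(4)]
mosaic, soft = mosaic_with_soft_label(imgs, [0, 1, 2, 0], n_classes=3, rng=rng)
print(mosaic.shape, soft, soft.sum())         # soft label sums to 1
```

Because minority-class tiles contribute to many mosaics, the minority class is seen far more often than plain oversampling would allow, which is the lever behind the reported OPMD precision gain.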
Deep learning enables fast, gentle STED microscopy | [
"Vahid Ebrahimi",
"Till Stephan",
"Jiah Kim",
"Pablo Carravilla",
"Christian Eggeling",
"Stefan Jakobs",
"Kyu Young Han"
] | STED microscopy is widely used to image subcellular structures with super-resolution. Here, we report that restoring STED images with deep learning can mitigate photobleaching and photodamage by reducing the pixel dwell time by one or two orders of magnitude. Our method allows for efficient and robust restoration of noisy 2D and 3D STED images with multiple targets and facilitates long-term imaging of mitochondrial dynamics. | 10.1038/s42003-023-05054-z | deep learning enables fast, gentle sted microscopy | sted microscopy is widely used to image subcellular structures with super-resolution. here, we report that restoring sted images with deep learning can mitigate photobleaching and photodamage by reducing the pixel dwell time by one or two orders of magnitude. our method allows for efficient and robust restoration of noisy 2d and 3d sted images with multiple targets and facilitates long-term imaging of mitochondrial dynamics. | [
"sted microscopy",
"subcellular structures",
"super",
"-",
"resolution",
"we",
"sted images",
"deep learning",
"photobleaching",
"photodamage",
"the pixel",
"dwell time",
"one or two orders",
"magnitude",
"our method",
"efficient and robust restoration",
"noisy 2d",
"3d sted images",
"multiple targets",
"long-term imaging",
"mitochondrial dynamics",
"one",
"two",
"2d",
"3d"
] |
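The record above reports restoration of noisy, short-dwell-time STED frames but does not describe the network, so the block below is only an illustrative residual denoiser of the kind commonly trained on (noisy, clean) image pairs; it is not the authors' model.

```python
# Illustrative residual denoiser: the network predicts the noise and
# subtracts it from the input. A generic sketch, not the paper's model.
import tensorflow as tf

def tiny_denoiser(channels=1):
    x_in = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x_in)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    noise = tf.keras.layers.Conv2D(channels, 3, padding="same")(x)
    return tf.keras.Model(x_in, x_in - noise)  # restored = input - noise

model = tiny_denoiser()
model.compile(optimizer="adam", loss="mae")  # fit on (noisy, clean) pairs
model.summary()
```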
Biological gender identification in Turkish news text using deep learning models | [
"Pınar Tüfekci",
"Melike Bektaş Kösesoy"
] | Identifying the biological gender of authors based on the content of their written work is a crucial task in Natural Language Processing (NLP). Accurate biological gender identification finds numerous applications in fields such as linguistics, sociology, and marketing. However, achieving high accuracy in identifying the biological gender of the author is heavily dependent on the quality of the collected data and its proper splitting. Therefore, determining the best-performing model necessitates experimental evaluation. This study aimed to develop and evaluate four learning algorithms for biological gender identification in news texts. To this end, a comprehensive dataset, IAG-TNKU, was created from a Turkish newspaper, comprising 43,292 news articles. Four models utilizing popular machine learning algorithms, including Naive Bayes and Random Forest, and two deep learning algorithms, Long Short Term Memory and Convolutional Neural Networks, were developed and evaluated rigorously. The results indicated that the Long Short Term Memory (LSTM) algorithm outperformed the other three models, exhibiting an exceptional accuracy of 88.51%. This model's outstanding performance underpins the importance of utilizing innovative deep learning algorithms for biological gender identification tasks in NLP. The present study contributes to extant literature by developing a new dataset for biological gender identification in news texts and evaluating four machine learning algorithms. Our findings highlight the significance of utilizing innovative techniques for biological gender identification tasks. The dataset and deep learning algorithm can be applied in many areas such as sociolinguistics, marketing research, and journalism, where the identification of biological gender in written content plays a pivotal role. | 10.1007/s11042-023-17622-w | biological gender identification in turkish news text using deep learning models | identifying the biological gender of authors based on the content of their written work is a crucial task in natural language processing (nlp). accurate biological gender identification finds numerous applications in fields such as linguistics, sociology, and marketing. however, achieving high accuracy in identifying the biological gender of the author is heavily dependent on the quality of the collected data and its proper splitting. therefore, determining the best-performing model necessitates experimental evaluation. this study aimed to develop and evaluate four learning algorithms for biological gender identification in news texts. to this end, a comprehensive dataset, iag-tnku, was created from a turkish newspaper, comprising 43,292 news articles. four models utilizing popular machine learning algorithms, including naive bayes and random forest, and two deep learning algorithms, long short term memory and convolutional neural networks, were developed and evaluated rigorously. the results indicated that the long short term memory (lstm) algorithm outperformed the other three models, exhibiting an exceptional accuracy of 88.51%. this model's outstanding performance underpins the importance of utilizing innovative deep learning algorithms for biological gender identification tasks in nlp. the present study contributes to extant literature by developing a new dataset for biological gender identification in news texts and evaluating four machine learning algorithms. 
our findings highlight the significance of utilizing innovative techniques for biological gender identification tasks. the dataset and deep learning algorithm can be applied in many areas such as sociolinguistics, marketing research, and journalism, where the identification of biological gender in written content plays a pivotal role. | [
"the biological gender",
"authors",
"the content",
"their written work",
"a crucial task",
"natural language processing",
"nlp",
"accurate biological gender identification",
"numerous applications",
"fields",
"linguistics",
"sociology",
"marketing",
"high accuracy",
"the biological gender",
"the author",
"the quality",
"the collected data",
"its proper splitting",
"the best-performing model necessitates experimental evaluation",
"this study",
"four learning algorithms",
"biological gender identification",
"news texts",
"this end",
"a comprehensive dataset",
"iag",
"tnku",
"a turkish newspaper",
"43,292 news articles",
"four models",
"popular machine learning algorithms",
"naive bayes",
"random forest",
"two deep learning algorithms",
"long short term memory",
"convolutional neural networks",
"the results",
"the long short term memory",
"lstm",
"algorithm",
"the other three models",
"an exceptional accuracy",
"88.51%",
"this model's outstanding performance",
"the importance",
"innovative deep learning algorithms",
"biological gender identification tasks",
"nlp",
"the present study",
"extant literature",
"a new dataset",
"biological gender identification",
"news texts",
"four machine learning algorithms",
"our findings",
"the significance",
"innovative techniques",
"biological gender identification tasks",
"the dataset and deep learning algorithm",
"many areas",
"sociolinguistics",
"marketing research",
"journalism",
"the identification",
"biological gender",
"written content",
"a pivotal role",
"four",
"turkish",
"43,292",
"four",
"two",
"three",
"88.51%",
"four"
] |
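Of the four models compared above, the winning LSTM classifier is the easiest to sketch. Vocabulary size, sequence length, and layer widths below are illustrative guesses, not the paper's settings; the model outputs a probability for one of the two author-gender classes.

```python
# Minimal LSTM text classifier sketch; all dimensions are illustrative.
import numpy as np
import tensorflow as tf

vocab, maxlen = 20000, 200
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab, 128),      # token ids -> vectors
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
dummy = np.zeros((2, maxlen), dtype="int32")    # padded token-id batch
print(model(dummy).shape)                       # (2, 1) class probability
```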
Deep learning model for pleural effusion detection via active learning and pseudo-labeling: a multisite study | [
"Joseph Chang",
"Bo-Ru Lin",
"Ti-Hao Wang",
"Chung-Ming Chen"
] | BackgroundThe study aimed to develop and validate a deep learning-based Computer Aided Triage (CADt) algorithm for detecting pleural effusion in chest radiographs using an active learning (AL) framework. This is aimed at addressing the critical need for a clinical grade algorithm that can timely diagnose pleural effusion, which affects approximately 1.5 million people annually in the United States.MethodsIn this multisite study, 10,599 chest radiographs from 2006 to 2018 were retrospectively collected from an institution in Taiwan to train the deep learning algorithm. The AL framework utilized significantly reduced the need for expert annotations. For external validation, the algorithm was tested on a multisite dataset of 600 chest radiographs from 22 clinical sites in the United States and Taiwan, which were annotated by three U.S. board-certified radiologists.ResultsThe CADt algorithm demonstrated high effectiveness in identifying pleural effusion, achieving a sensitivity of 0.95 (95% CI: [0.92, 0.97]) and a specificity of 0.97 (95% CI: [0.95, 0.99]). The area under the receiver operating characteristic curve (AUC) was 0.97 (95% DeLong’s CI: [0.95, 0.99]). Subgroup analyses showed that the algorithm maintained robust performance across various demographics and clinical settings.ConclusionThis study presents a novel approach in developing clinical grade CADt solutions for the diagnosis of pleural effusion. The AL-based CADt algorithm not only achieved high accuracy in detecting pleural effusion but also significantly reduced the workload required for clinical experts in annotating medical data. This method enhances the feasibility of employing advanced technological solutions for prompt and accurate diagnosis in medical settings. | 10.1186/s12880-024-01260-1 | deep learning model for pleural effusion detection via active learning and pseudo-labeling: a multisite study | backgroundthe study aimed to develop and validate a deep learning-based computer aided triage (cadt) algorithm for detecting pleural effusion in chest radiographs using an active learning (al) framework. this is aimed at addressing the critical need for a clinical grade algorithm that can timely diagnose pleural effusion, which affects approximately 1.5 million people annually in the united states.methodsin this multisite study, 10,599 chest radiographs from 2006 to 2018 were retrospectively collected from an institution in taiwan to train the deep learning algorithm. the al framework utilized significantly reduced the need for expert annotations. for external validation, the algorithm was tested on a multisite dataset of 600 chest radiographs from 22 clinical sites in the united states and taiwan, which were annotated by three u.s. board-certified radiologists.resultsthe cadt algorithm demonstrated high effectiveness in identifying pleural effusion, achieving a sensitivity of 0.95 (95% ci: [0.92, 0.97]) and a specificity of 0.97 (95% ci: [0.95, 0.99]). the area under the receiver operating characteristic curve (auc) was 0.97 (95% delong’s ci: [0.95, 0.99]). subgroup analyses showed that the algorithm maintained robust performance across various demographics and clinical settings.conclusionthis study presents a novel approach in developing clinical grade cadt solutions for the diagnosis of pleural effusion. the al-based cadt algorithm not only achieved high accuracy in detecting pleural effusion but also significantly reduced the workload required for clinical experts in annotating medical data. 
this method enhances the feasibility of employing advanced technological solutions for prompt and accurate diagnosis in medical settings. | [
"backgroundthe study",
"a deep learning-based computer aided triage",
"cadt",
"algorithm",
"pleural effusion",
"chest",
"an active learning",
"(al) framework",
"this",
"the critical need",
"a clinical grade",
"algorithm",
"that",
"pleural effusion",
"which",
"approximately 1.5 million people",
"the united states.methodsin",
"this multisite study",
"10,599 chest",
"an institution",
"taiwan",
"the deep learning algorithm",
"the al framework",
"the need",
"expert annotations",
"external validation",
"the algorithm",
"a multisite dataset",
"600 chest",
"22 clinical sites",
"the united states",
"taiwan",
"which",
"radiologists.resultsthe cadt algorithm",
"high effectiveness",
"pleural effusion",
"a sensitivity",
"95% ci",
"a specificity",
"(95% ci",
"the area",
"the receiver operating characteristic curve",
"auc",
"(95% delong",
"subgroup analyses",
"the algorithm",
"robust performance",
"various demographics",
"clinical settings.conclusionthis study",
"a novel approach",
"clinical grade cadt solutions",
"the diagnosis",
"pleural effusion",
"the al-based cadt",
"algorithm",
"high accuracy",
"pleural effusion",
"the workload",
"clinical experts",
"medical data",
"this method",
"the feasibility",
"advanced technological solutions",
"prompt and accurate diagnosis",
"medical settings",
"approximately 1.5 million",
"annually",
"the united states.methodsin",
"10,599",
"2006",
"2018",
"taiwan",
"al",
"600",
"22",
"the united states",
"taiwan",
"three",
"u.s.",
"0.95",
"95%",
"0.92",
"0.97",
"0.97",
"95%",
"0.95",
"0.99",
"0.97",
"95%",
"0.95",
"0.99",
"al"
] |
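A schematic of the labeling economics described above: a model trained on a small expert-annotated seed pseudo-labels the unlabeled pool wherever it is confident, and only the remaining cases would go back to the radiologists. Logistic regression on synthetic features stands in for the chest-radiograph network, and the 0.95 confidence cutoff is an arbitrary choice.

```python
# Toy active-learning loop with pseudo-labeling. In a real pipeline the
# accepted pseudo-labels would be kept separate from ground truth.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:100] = True                        # small expert-annotated seed

for round_ in range(3):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[~labeled])[:, 1]
    confident = (proba > 0.95) | (proba < 0.05)
    idx = np.flatnonzero(~labeled)[confident]
    y[idx] = (proba[confident] > 0.5).astype(int)  # accept pseudo-labels
    labeled[idx] = True
    print(f"round {round_}: {labeled.sum()} training cases")
```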
Learning key steps to attack deep reinforcement learning agents | [
"Chien-Min Yu",
"Ming-Hsin Chen",
"Hsuan-Tien Lin"
] | Deep reinforcement learning agents are vulnerable to adversarial attacks. In particular, recent studies have shown that attacking a few key steps can effectively decrease the agent’s cumulative reward. However, all existing attacking methods define those key steps with human-designed heuristics, and it is not clear how more effective key steps can be identified. This paper introduces a novel reinforcement learning framework that learns key steps through interacting with the agent. The proposed framework does not require any human heuristics nor knowledge, and can be flexibly coupled with any white-box or black-box adversarial attack scenarios. Experiments on benchmark Atari games across different scenarios demonstrate that the proposed framework is superior to existing methods for identifying effective key steps. The results highlight the weakness of RL agents even under budgeted attacks. | 10.1007/s10994-023-06318-9 | learning key steps to attack deep reinforcement learning agents | deep reinforcement learning agents are vulnerable to adversarial attacks. in particular, recent studies have shown that attacking a few key steps can effectively decrease the agent’s cumulative reward. however, all existing attacking methods define those key steps with human-designed heuristics, and it is not clear how more effective key steps can be identified. this paper introduces a novel reinforcement learning framework that learns key steps through interacting with the agent. the proposed framework does not require any human heuristics nor knowledge, and can be flexibly coupled with any white-box or black-box adversarial attack scenarios. experiments on benchmark atari games across different scenarios demonstrate that the proposed framework is superior to existing methods for identifying effective key steps. the results highlight the weakness of rl agents even under budgeted attacks. | [
"deep reinforcement learning agents",
"adversarial attacks",
"recent studies",
"a few key steps",
"the agent’s cumulative reward",
"all existing attacking methods",
"those key steps",
"human-designed heuristics",
"it",
"how more effective key steps",
"this paper",
"a novel reinforcement learning framework",
"that",
"key steps",
"the agent",
"the proposed framework",
"any human heuristics",
"knowledge",
"any white-box or black-box adversarial attack scenarios",
"experiments",
"benchmark atari games",
"different scenarios",
"the proposed framework",
"existing methods",
"effective key steps",
"the results",
"the weakness",
"rl agents",
"budgeted attacks"
] |
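The attack framing in the record above treats step selection itself as a sequential decision problem. The toy loop below shows only that interface, an attacker policy that spends a perturbation budget across an episode; the paper's actual contribution, training that policy with reinforcement learning instead of hand-made heuristics, is not reproduced, and a random policy stands in.

```python
# Toy interface sketch: a budgeted attacker decides, step by step,
# whether to perturb the victim's observation. Everything here is a
# stand-in; no real agent or environment is attacked.
import numpy as np

rng = np.random.default_rng(0)

def victim_action(obs):                    # placeholder victim policy
    return int(obs.sum() > 0)

def run_episode(attack_policy, budget=5, horizon=50):
    total_reward, used = 0.0, 0
    for _ in range(horizon):
        obs = rng.normal(size=4)           # toy environment observation
        if used < budget and attack_policy(obs):
            obs = obs + rng.normal(scale=2.0, size=4)  # adversarial noise
            used += 1
        total_reward += float(victim_action(obs) == 1)  # toy reward signal
    return total_reward

random_attacker = lambda obs: rng.random() < 0.1        # heuristic baseline
print("victim return under random budgeted attack:",
      run_episode(random_attacker))
```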
An active learning approach to train a deep learning algorithm for tumor segmentation from brain MR images | [
"Andrew S. Boehringer",
"Amirhossein Sanaat",
"Hossein Arabi",
"Habib Zaidi"
] | PurposeThis study focuses on assessing the performance of active learning techniques to train a brain MRI glioma segmentation model.MethodsThe publicly available training dataset provided for the 2021 RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge was used in this study, consisting of 1251 multi-institutional, multi-parametric MR images. Post-contrast T1, T2, and T2 FLAIR images as well as ground truth manual segmentation were used as input for the model. The data were split into a training set of 1151 cases and testing set of 100 cases, with the testing set remaining constant throughout. Deep convolutional neural network segmentation models were trained using the NiftyNet platform. To test the viability of active learning in training a segmentation model, an initial reference model was trained using all 1151 training cases followed by two additional models using only 575 cases and 100 cases. The resulting predicted segmentations of these two additional models on the remaining training cases were then addended to the training dataset for additional training.ResultsIt was demonstrated that an active learning approach for manual segmentation can lead to comparable model performance for segmentation of brain gliomas (0.906 reference Dice score vs 0.868 active learning Dice score) while only requiring manual annotation for 28.6% of the data.ConclusionThe active learning approach when applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data.Critical relevance statementActive learning concepts were applied to a deep learning-assisted segmentation of brain gliomas from MR images to assess their viability in reducing the required amount of manually annotated ground truth data in model training.Key points• This study focuses on assessing the performance of active learning techniques to train a brain MRI glioma segmentation model.• The active learning approach for manual segmentation can lead to comparable model performance for segmentation of brain gliomas.• Active learning when applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data.Graphical Abstract | 10.1186/s13244-023-01487-6 | an active learning approach to train a deep learning algorithm for tumor segmentation from brain mr images | purposethis study focuses on assessing the performance of active learning techniques to train a brain mri glioma segmentation model.methodsthe publicly available training dataset provided for the 2021 rsna-asnr-miccai brain tumor segmentation (brats) challenge was used in this study, consisting of 1251 multi-institutional, multi-parametric mr images. post-contrast t1, t2, and t2 flair images as well as ground truth manual segmentation were used as input for the model. the data were split into a training set of 1151 cases and testing set of 100 cases, with the testing set remaining constant throughout. deep convolutional neural network segmentation models were trained using the niftynet platform. to test the viability of active learning in training a segmentation model, an initial reference model was trained using all 1151 training cases followed by two additional models using only 575 cases and 100 cases. 
the resulting predicted segmentations of these two additional models on the remaining training cases were then addended to the training dataset for additional training.resultsit was demonstrated that an active learning approach for manual segmentation can lead to comparable model performance for segmentation of brain gliomas (0.906 reference dice score vs 0.868 active learning dice score) while only requiring manual annotation for 28.6% of the data.conclusionthe active learning approach when applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data.critical relevance statementactive learning concepts were applied to a deep learning-assisted segmentation of brain gliomas from mr images to assess their viability in reducing the required amount of manually annotated ground truth data in model training.key points• this study focuses on assessing the performance of active learning techniques to train a brain mri glioma segmentation model.• the active learning approach for manual segmentation can lead to comparable model performance for segmentation of brain gliomas.• active learning when applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data.graphical abstract | [
"purposethis study",
"the performance",
"active learning techniques",
"a brain mri glioma segmentation",
"model.methodsthe publicly available training dataset",
"the 2021 rsna-asnr-miccai brain tumor segmentation (brats) challenge",
"this study",
"1251 multi-institutional, multi-parametric mr images",
"post-contrast t1",
"t2",
"flair images",
"ground truth manual segmentation",
"input",
"the model",
"the data",
"a training set",
"1151 cases",
"testing",
"100 cases",
"the testing",
"deep convolutional neural network segmentation models",
"the niftynet platform",
"the viability",
"active learning",
"a segmentation model",
"an initial reference model",
"all 1151 training cases",
"two additional models",
"only 575 cases",
"100 cases",
"the resulting predicted segmentations",
"these two additional models",
"the remaining training cases",
"the training dataset",
"additional training.resultsit",
"an active learning approach",
"manual segmentation",
"comparable model performance",
"segmentation",
"brain gliomas",
"0.906 reference dice score",
"0.868 active learning dice score",
"manual annotation",
"28.6%",
"the data.conclusionthe active learning approach",
"model training",
"the time",
"labor",
"preparation",
"ground truth",
"data.critical relevance statementactive learning concepts",
"a deep learning-assisted segmentation",
"brain gliomas",
"mr images",
"their viability",
"the required amount",
"manually annotated ground truth data",
"this study",
"the performance",
"active learning techniques",
"a brain mri glioma segmentation",
"the active learning approach",
"manual segmentation",
"comparable model performance",
"segmentation",
"brain gliomas.• active learning",
"model training",
"the time",
"labor",
"preparation",
"ground truth",
"data.graphical abstract",
"2021",
"1251",
"1151",
"100",
"1151",
"two",
"only 575",
"100",
"two",
"0.906 reference",
"0.868",
"28.6%"
] |
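For reference, the headline numbers above (0.906 versus 0.868) are Dice scores; a minimal implementation of that overlap metric for binary segmentation masks:

```python
# Dice coefficient between a predicted and a ground-truth binary mask.
import numpy as np

def dice(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64)); a[16:48, 16:48] = 1
b = np.zeros((64, 64)); b[20:52, 20:52] = 1
print(round(dice(a, b), 3))   # overlap of two shifted squares -> 0.766
```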
Efficient shallow learning as an alternative to deep learning | [
"Yuval Meir",
"Ofek Tevet",
"Yarden Tzach",
"Shiri Hodassman",
"Ronit D. Gross",
"Ido Kanter"
] | The realization of complex classification tasks requires training of deep learning (DL) architectures consisting of tens or even hundreds of convolutional and fully connected hidden layers, which is far from the reality of the human brain. According to the DL rationale, the first convolutional layer reveals localized patterns in the input and large-scale patterns in the following layers, until it reliably characterizes a class of inputs. Here, we demonstrate that with a fixed ratio between the depths of the first and second convolutional layers, the error rates of the generalized shallow LeNet architecture, consisting of only five layers, decay as a power law with the number of filters in the first convolutional layer. The extrapolation of this power law indicates that the generalized LeNet can achieve small error rates that were previously obtained for the CIFAR-10 database using DL architectures. A power law with a similar exponent also characterizes the generalized VGG-16 architecture. However, this results in a significantly increased number of operations required to achieve a given error rate with respect to LeNet. This power law phenomenon governs various generalized LeNet and VGG-16 architectures, hinting at its universal behavior and suggesting a quantitative hierarchical time–space complexity among machine learning architectures. Additionally, the conservation law along the convolutional layers, which is the square-root of their size times their depth, is found to asymptotically minimize error rates. The efficient shallow learning that is demonstrated in this study calls for further quantitative examination using various databases and architectures and its accelerated implementation using future dedicated hardware developments. | 10.1038/s41598-023-32559-8 | efficient shallow learning as an alternative to deep learning | the realization of complex classification tasks requires training of deep learning (dl) architectures consisting of tens or even hundreds of convolutional and fully connected hidden layers, which is far from the reality of the human brain. according to the dl rationale, the first convolutional layer reveals localized patterns in the input and large-scale patterns in the following layers, until it reliably characterizes a class of inputs. here, we demonstrate that with a fixed ratio between the depths of the first and second convolutional layers, the error rates of the generalized shallow lenet architecture, consisting of only five layers, decay as a power law with the number of filters in the first convolutional layer. the extrapolation of this power law indicates that the generalized lenet can achieve small error rates that were previously obtained for the cifar-10 database using dl architectures. a power law with a similar exponent also characterizes the generalized vgg-16 architecture. however, this results in a significantly increased number of operations required to achieve a given error rate with respect to lenet. this power law phenomenon governs various generalized lenet and vgg-16 architectures, hinting at its universal behavior and suggesting a quantitative hierarchical time–space complexity among machine learning architectures. additionally, the conservation law along the convolutional layers, which is the square-root of their size times their depth, is found to asymptotically minimize error rates. 
the efficient shallow learning that is demonstrated in this study calls for further quantitative examination using various databases and architectures and its accelerated implementation using future dedicated hardware developments. | [
"the realization",
"complex classification tasks",
"training",
"deep learning",
"dl",
"tens",
"even hundreds",
"convolutional and fully connected hidden layers",
"which",
"the reality",
"the human brain",
"the dl rationale",
"the first convolutional layer",
"localized patterns",
"the input",
"large-scale patterns",
"the following layers",
"it",
"a class",
"inputs",
"we",
"a fixed ratio",
"the depths",
"the first and second convolutional layers",
"the error rates",
"the generalized shallow lenet architecture",
"only five layers",
"a power law",
"the number",
"filters",
"the first convolutional layer",
"the extrapolation",
"this power law",
"the generalized lenet",
"small error rates",
"that",
"the cifar-10 database",
"dl architectures",
"a power law",
"a similar exponent",
"the generalized vgg-16 architecture",
"this",
"a significantly increased number",
"operations",
"a given error rate",
"respect",
"lenet",
"this power law phenomenon",
"various generalized lenet",
"vgg-16",
"architectures",
"its universal behavior",
"a quantitative hierarchical time–space complexity",
"machine learning architectures",
"the conservation law",
"the convolutional layers",
"which",
"the square-root",
"their size",
"their depth",
"error rates",
"the efficient shallow learning",
"that",
"this study",
"further quantitative examination",
"various databases",
"architectures",
"its accelerated implementation",
"future dedicated hardware developments",
"tens or even hundreds",
"first",
"first",
"second",
"only five",
"first",
"cifar-10",
"vgg-16"
] |
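The power-law claim above has a simple operational reading: if error decays as err = a * n_filters**(-b), a straight-line fit in log-log space recovers the exponent b and supports the extrapolation the record mentions. A sketch on synthetic error rates:

```python
# Fit err = a * n**(-b) by linear regression in log-log space.
# The error rates below are synthetic, not the paper's measurements.
import numpy as np

filters = np.array([8, 16, 32, 64, 128, 256])
noise = np.exp(np.random.default_rng(0).normal(0.0, 0.02, filters.size))
err = 0.9 * filters**-0.45 * noise

slope, intercept = np.polyfit(np.log(filters), np.log(err), 1)
print(f"exponent b ~ {-slope:.3f}, prefactor a ~ {np.exp(intercept):.3f}")
```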
Crop type classification with hyperspectral images using deep learning : a transfer learning approach | [
"Usha Patel",
"Mohib Pathan",
"Preeti Kathiria",
"Vibha Patel"
] | Crop classification plays a vital role in felicitating agriculture statistics to the state and national government in decision-making. In recent years, due to advancements in remote sensing, high-resolution hyperspectral images (HSIs) are available for land cover classification. HSIs can classify the different crop categories precisely due to their narrow and continuous spectral band reflection. With improvements in computing power and evolution in deep learning technology, Deep learning is rapidly being used for HSIs classification. However, to train deep neural networks, many labeled samples are needed. The labeling of HSIs is time-consuming and costly. A transfer learning approach is used in many applications where a labeled dataset is challenging. This paper opts for the heterogeneous transfer learning models on benchmark HSIs datasets to discuss the performance accuracy of well-defined deep learning models—VGG16, VGG19, ResNet, and DenseNet for crop classification. Also, it discusses the performance accuracy of customized 2-dimensional Convolutional neural network (2DCNN) and 3-dimensional Convolutional neural network (3DCNN) deep learning models using homogeneous transfer learning models on benchmark HSIs datasets for crop classification. The results show that although HSIs datasets contain few samples, the transfer learning models perform better with limited labeled samples. The results achieved 99% of accuracy for the Indian Pines and Pavia University dataset with 15% of labeled training samples with heterogeneous transfer learning. As per the overall accuracy, homogeneous transfer learning with 2DCNN and 3DCNN models pre-trained on the Indian Pines dataset and adjusted on the Salinas scene dataset performs far better than heterogeneous transfer learning. | 10.1007/s40808-022-01608-y | crop type classification with hyperspectral images using deep learning : a transfer learning approach | crop classification plays a vital role in felicitating agriculture statistics to the state and national government in decision-making. in recent years, due to advancements in remote sensing, high-resolution hyperspectral images (hsis) are available for land cover classification. hsis can classify the different crop categories precisely due to their narrow and continuous spectral band reflection. with improvements in computing power and evolution in deep learning technology, deep learning is rapidly being used for hsis classification. however, to train deep neural networks, many labeled samples are needed. the labeling of hsis is time-consuming and costly. a transfer learning approach is used in many applications where a labeled dataset is challenging. this paper opts for the heterogeneous transfer learning models on benchmark hsis datasets to discuss the performance accuracy of well-defined deep learning models—vgg16, vgg19, resnet, and densenet for crop classification. also, it discusses the performance accuracy of customized 2-dimensional convolutional neural network (2dcnn) and 3-dimensional convolutional neural network (3dcnn) deep learning models using homogeneous transfer learning models on benchmark hsis datasets for crop classification. the results show that although hsis datasets contain few samples, the transfer learning models perform better with limited labeled samples. the results achieved 99% of accuracy for the indian pines and pavia university dataset with 15% of labeled training samples with heterogeneous transfer learning. 
as per the overall accuracy, homogeneous transfer learning with 2dcnn and 3dcnn models pre-trained on the indian pines dataset and adjusted on the salinas scene dataset performs far better than heterogeneous transfer learning. | [
"crop classification",
"a vital role",
"agriculture statistics",
"the state and national government",
"decision-making",
"recent years",
"advancements",
"remote sensing",
"high-resolution hyperspectral images",
"hsis",
"land cover classification",
"hsis",
"the different crop categories",
"their narrow and continuous spectral band reflection",
"improvements",
"computing power",
"evolution",
"deep learning technology",
"deep learning",
"hsis classification",
"deep neural networks",
"many labeled samples",
"the labeling",
"hsis",
"a transfer learning approach",
"many applications",
"a labeled dataset",
"the heterogeneous transfer learning models",
"benchmark hsis datasets",
"the performance accuracy",
"well-defined deep learning models",
"vgg16",
"vgg19",
"resnet",
"densenet",
"crop classification",
"it",
"the performance accuracy",
"customized 2-dimensional convolutional neural network",
"3-dimensional convolutional neural network (3dcnn) deep learning models",
"homogeneous transfer learning models",
"benchmark hsis datasets",
"crop classification",
"the results",
"hsis datasets",
"few samples",
"the transfer learning models",
"limited labeled samples",
"the results",
"99%",
"accuracy",
"the indian pines",
"pavia university",
"15%",
"labeled training samples",
"heterogeneous transfer learning",
"the overall accuracy",
"3dcnn models",
"the indian pines",
"the salinas scene dataset",
"heterogeneous transfer learning",
"recent years",
"2",
"2dcnn",
"3",
"3dcnn",
"99%",
"indian",
"15%",
"2dcnn",
"3dcnn",
"indian"
] |
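Among the models listed above, the 3DCNN is the one specific to hyperspectral input, since it convolves jointly over the two spatial axes and the spectral axis. A minimal sketch follows; patch size, band count, class count, and kernel shapes are illustrative choices, not the paper's configuration.

```python
# Generic 3D CNN over hyperspectral patches (spatial x spatial x bands).
# All dimensions below are illustrative stand-ins.
import numpy as np
import tensorflow as tf

n_bands, n_classes = 30, 16
model = tf.keras.Sequential([
    tf.keras.layers.Conv3D(8, (3, 3, 7), activation="relu"),
    tf.keras.layers.Conv3D(16, (3, 3, 5), activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
patch = np.zeros((1, 11, 11, n_bands, 1), dtype="float32")
print(model(patch).shape)   # (1, 16) class probabilities
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

For homogeneous transfer learning of the kind the record describes, the same architecture would be fit on one scene and then fine-tuned on another.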
Novel deep learning models for yoga pose estimator | [
"Amira Samy Talaat"
] | Yoga pose recognition and correction are artificial intelligent techniques to provide standardized and appropriate yoga poses. Incorrect yoga poses can cause serious injuries and long-term complications. Analyzing human posture can identify and rectify abnormal positions, improving well-being at home. A posture estimator extracts yoga asana attributes from properly represented images. These extracted features are then utilized directly as inputs for various neural networks and machine learning models. These models serve the purpose of evaluating and predicting the accuracy of specific yoga poses. The objective of this research is to explore multiple methods for classifying yoga poses. The LGDeep model is introduced, which combines a novel residual convolutional neural network with three deep learning approaches: Xception, VGGNet, and SqueezeNet. Additionally, the LGDeep model incorporates feature extraction methods such as LDA and GDA. Experimental results demonstrate that the LGDeep classifier outperforms other approaches and achieves the highest classification accuracy ratio. | 10.1007/s42452-023-05581-8 | novel deep learning models for yoga pose estimator | yoga pose recognition and correction are artificial intelligent techniques to provide standardized and appropriate yoga poses. incorrect yoga poses can cause serious injuries and long-term complications. analyzing human posture can identify and rectify abnormal positions, improving well-being at home. a posture estimator extracts yoga asana attributes from properly represented images. these extracted features are then utilized directly as inputs for various neural networks and machine learning models. these models serve the purpose of evaluating and predicting the accuracy of specific yoga poses. the objective of this research is to explore multiple methods for classifying yoga poses. the lgdeep model is introduced, which combines a novel residual convolutional neural network with three deep learning approaches: xception, vggnet, and squeezenet. additionally, the lgdeep model incorporates feature extraction methods such as lda and gda. experimental results demonstrate that the lgdeep classifier outperforms other approaches and achieves the highest classification accuracy ratio. | [
"yoga",
"recognition",
"correction",
"artificial intelligent techniques",
"standardized and appropriate yoga poses",
"incorrect yoga poses",
"serious injuries",
"long-term complications",
"human posture",
"abnormal positions",
"well-being",
"home",
"a posture estimator",
"yoga asana attributes",
"properly represented images",
"these extracted features",
"various neural networks",
"machine learning models",
"these models",
"the purpose",
"the accuracy",
"specific yoga poses",
"the objective",
"this research",
"multiple methods",
"yoga poses",
"the lgdeep model",
"which",
"a novel residual convolutional neural network",
"three deep learning approaches",
"xception",
"vggnet",
"squeezenet",
"the lgdeep model",
"feature extraction methods",
"lda",
"gda",
"experimental results",
"the lgdeep classifier",
"other approaches",
"the highest classification accuracy ratio",
"three"
] |
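The record above passes pose-estimator features through LDA/GDA feature extraction before classification. Below is a minimal scikit-learn sketch of that pipeline stage, LDA ahead of a simple classifier; the keypoint dimensionality, class count, and synthetic data are placeholders, and the paper's LGDeep network itself is not reproduced here.

```python
# Sketch: pose-keypoint features -> LDA projection -> classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 34))            # e.g. 17 keypoints x (x, y)
y = rng.integers(0, 5, size=300)          # 5 yoga pose classes (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=4),  # <= classes - 1
                    LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```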
GyroFlow+: Gyroscope-Guided Unsupervised Deep Homography and Optical Flow Learning | [
"Haipeng Li",
"Kunming Luo",
"Bing Zeng",
"Shuaicheng Liu"
] | Existing homography and optical flow methods are erroneous in challenging scenes, such as fog, rain, night, and snow because the basic assumptions such as brightness and gradient constancy are broken. To address this issue, we present an unsupervised learning approach that fuses gyroscope into homography and optical flow learning. Specifically, we first convert gyroscope readings into motion fields named gyro field. Second, we design a self-guided fusion module (SGF) to fuse the background motion extracted from the gyro field with the optical flow and guide the network to focus on motion details. Meanwhile, we propose a homography decoder module (HD) to combine gyro field and intermediate results of SGF to produce the homography. To the best of our knowledge, this is the first deep learning framework that fuses gyroscope data and image content for both deep homography and optical flow learning. To validate our method, we propose a new dataset that covers regular and challenging scenes. Experiments show that our method outperforms the state-of-the-art methods in both regular and challenging scenes. The code and dataset are available at https://github.com/lhaippp/GyroFlowPlus. | 10.1007/s11263-023-01978-5 | gyroflow+: gyroscope-guided unsupervised deep homography and optical flow learning | existing homography and optical flow methods are erroneous in challenging scenes, such as fog, rain, night, and snow because the basic assumptions such as brightness and gradient constancy are broken. to address this issue, we present an unsupervised learning approach that fuses gyroscope into homography and optical flow learning. specifically, we first convert gyroscope readings into motion fields named gyro field. second, we design a self-guided fusion module (sgf) to fuse the background motion extracted from the gyro field with the optical flow and guide the network to focus on motion details. meanwhile, we propose a homography decoder module (hd) to combine gyro field and intermediate results of sgf to produce the homography. to the best of our knowledge, this is the first deep learning framework that fuses gyroscope data and image content for both deep homography and optical flow learning. to validate our method, we propose a new dataset that covers regular and challenging scenes. experiments show that our method outperforms the state-of-the-art methods in both regular and challenging scenes. the code and dataset are available at https://github.com/lhaippp/gyroflowplus. | [
"existing homography",
"optical flow methods",
"challenging scenes",
"fog",
"rain",
"night",
"snow",
"the basic assumptions",
"brightness and gradient constancy",
"this issue",
"we",
"an unsupervised learning approach",
"that",
"fuses",
"homography",
"optical flow learning",
"we",
"gyroscope readings",
"motion fields",
"gyro field",
"we",
"a self-guided fusion module",
"sgf",
"the background motion",
"the gyro field",
"the optical flow",
"the network",
"motion details",
"we",
"a homography decoder module",
"hd",
"gyro field",
"intermediate results",
"sgf",
"the homography",
"our knowledge",
"this",
"the first deep learning framework",
"that",
"data",
"image content",
"both deep homography",
"optical flow learning",
"our method",
"we",
"a new dataset",
"that",
"regular and challenging scenes",
"experiments",
"our method",
"the-art",
"both regular and challenging scenes",
"the code",
"dataset",
"https://github.com/lhaippp/gyroflowplus",
"night",
"first",
"second",
"first",
"deep homography"
] |
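The record above converts gyroscope readings into a motion field (the "gyro field"). For a purely rotating camera this is commonly realized through the rotation-only homography H = K R K^(-1); the NumPy/SciPy sketch below computes the induced per-pixel flow. The intrinsics K, angular velocity, and frame interval are made-up example values, and this is a generic formulation rather than the paper's exact pipeline.

```python
# Sketch: angular velocity + camera intrinsics -> rotation-only "gyro field".
import numpy as np
from scipy.spatial.transform import Rotation

def gyro_field(omega, dt, K, h, w):
    """Per-pixel flow induced by a pure camera rotation between two frames."""
    R = Rotation.from_rotvec(np.asarray(omega) * dt).as_matrix()
    H = K @ R @ np.linalg.inv(K)          # rotation-only homography
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    warped = H @ pts
    warped = warped[:2] / warped[2]       # perspective divide
    return (warped - pts[:2]).T.reshape(h, w, 2)

K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])
flow = gyro_field(omega=[0.0, 0.05, 0.0], dt=1 / 30, K=K, h=240, w=320)
print(flow.shape, flow[120, 160])         # (240, 320, 2), flow at image center
```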
An effective facial spoofing detection approach based on weighted deep ensemble learning | [
"My Abdelouahed Sabri",
"Assia Ennouni",
"Abdellah Aarab"
] | Deep learning has seen successful implementation in various domains, such as natural language processing, image classification, and object detection in recent times. In the field of biometrics, deep learning has also been used to develop effective anti-spoofing systems. Facial spoofing, the act of presenting fake facial information to deceive a biometric system, poses a significant threat to the security of face recognition systems. To address this challenge, we propose, in this paper, an effective and robust facial spoofing detection approach based on weighted deep ensemble learning. Our method combines the strengths of two powerful deep learning architectures, DenseNet201 and MiniVGG. The choice of these two architectures is based on a comparative study between DenseNet201, DenseNet169, VGG16, MiniVGG, and ResNet50, where DenseNet201 and MiniVGG obtained the best recall and precision scores, respectively. Our proposed weighted voting ensemble leverages each architecture-specific capabilities to make the final prediction. We assign weights to each classification model based on its performance, which are determined by a mathematical formulation considering the trade-off between recall and precision. To validate the effectiveness of our proposed approach, we evaluate it on the challenging ROSE-Youtu face liveness detection dataset. Our experimental results demonstrate that our proposed method achieves an impressive accuracy rate of 99% in accurately detecting facial spoofing attacks. | 10.1007/s11760-023-02818-2 | an effective facial spoofing detection approach based on weighted deep ensemble learning | deep learning has seen successful implementation in various domains, such as natural language processing, image classification, and object detection in recent times. in the field of biometrics, deep learning has also been used to develop effective anti-spoofing systems. facial spoofing, the act of presenting fake facial information to deceive a biometric system, poses a significant threat to the security of face recognition systems. to address this challenge, we propose, in this paper, an effective and robust facial spoofing detection approach based on weighted deep ensemble learning. our method combines the strengths of two powerful deep learning architectures, densenet201 and minivgg. the choice of these two architectures is based on a comparative study between densenet201, densenet169, vgg16, minivgg, and resnet50, where densenet201 and minivgg obtained the best recall and precision scores, respectively. our proposed weighted voting ensemble leverages each architecture-specific capabilities to make the final prediction. we assign weights to each classification model based on its performance, which are determined by a mathematical formulation considering the trade-off between recall and precision. to validate the effectiveness of our proposed approach, we evaluate it on the challenging rose-youtu face liveness detection dataset. our experimental results demonstrate that our proposed method achieves an impressive accuracy rate of 99% in accurately detecting facial spoofing attacks. | [
"deep learning",
"successful implementation",
"various domains",
"natural language processing",
"image classification",
"object",
"detection",
"recent times",
"the field",
"biometrics",
"deep learning",
"effective anti-spoofing systems",
"facial spoofing",
"the act",
"fake facial information",
"a biometric system",
"a significant threat",
"the security",
"face recognition systems",
"this challenge",
"we",
"this paper",
"an effective and robust facial spoofing detection approach",
"weighted deep ensemble learning",
"our method",
"the strengths",
"two powerful deep learning architectures",
"densenet201",
"the choice",
"these two architectures",
"a comparative study",
"densenet201",
"densenet169",
"vgg16",
"minivgg",
"resnet50",
"densenet201",
"minivgg",
"the best recall",
"precision scores",
"voting ensemble leverages",
"each architecture-specific capabilities",
"the final prediction",
"we",
"weights",
"each classification model",
"its performance",
"which",
"a mathematical formulation",
"the trade-off",
"recall",
"precision",
"the effectiveness",
"our proposed approach",
"we",
"it",
"the challenging rose-youtu",
"liveness detection dataset",
"our experimental results",
"our proposed method",
"an impressive accuracy rate",
"99%",
"facial spoofing attacks",
"two",
"two",
"resnet50",
"99%"
] |
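The record above assigns each classifier a weight from a formula trading off recall against precision before soft voting. The sketch below uses an F-beta-style weight as a stand-in for the paper's own (unspecified here) formulation and combines two models' class probabilities; all numbers are toy values.

```python
# Sketch: precision/recall-derived weights for soft-voting two classifiers.
import numpy as np

def model_weight(precision, recall, beta=1.0):
    # F-beta-style trade-off between recall and precision (assumed form;
    # the paper derives its own weighting formula).
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

w = np.array([model_weight(0.97, 0.99),    # e.g. a DenseNet201-like model
              model_weight(0.99, 0.96)])   # e.g. a MiniVGG-like model
w = w / w.sum()                            # normalize the voting weights

# Per-sample class probabilities from the two models (toy values).
p1 = np.array([[0.8, 0.2], [0.4, 0.6]])
p2 = np.array([[0.6, 0.4], [0.1, 0.9]])
ensemble = w[0] * p1 + w[1] * p2
print(ensemble.argmax(axis=1))             # weighted-vote predictions
```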
Deep active learning with high structural discriminability for molecular mutagenicity prediction | [
"Huiyan Xu",
"Yanpeng Zhao",
"Yixin Zhang",
"Junshan Han",
"Peng Zan",
"Song He",
"Xiaochen Bo"
] | The assessment of mutagenicity is essential in drug discovery, as it may lead to cancer and germ cells damage. Although in silico methods have been proposed for mutagenicity prediction, their performance is hindered by the scarcity of labeled molecules. However, experimental mutagenicity testing can be time-consuming and costly. One solution to reduce the annotation cost is active learning, where the algorithm actively selects the most valuable molecules from a vast chemical space and presents them to the oracle (e.g., a human expert) for annotation, thereby rapidly improving the model’s predictive performance with a smaller annotation cost. In this paper, we propose muTOX-AL, a deep active learning framework, which can actively explore the chemical space and identify the most valuable molecules, resulting in competitive performance with a small number of labeled samples. The experimental results show that, compared to the random sampling strategy, muTOX-AL can reduce the number of training molecules by about 57%. Additionally, muTOX-AL exhibits outstanding molecular structural discriminability, allowing it to pick molecules with high structural similarity but opposite properties. | 10.1038/s42003-024-06758-6 | deep active learning with high structural discriminability for molecular mutagenicity prediction | the assessment of mutagenicity is essential in drug discovery, as it may lead to cancer and germ cells damage. although in silico methods have been proposed for mutagenicity prediction, their performance is hindered by the scarcity of labeled molecules. however, experimental mutagenicity testing can be time-consuming and costly. one solution to reduce the annotation cost is active learning, where the algorithm actively selects the most valuable molecules from a vast chemical space and presents them to the oracle (e.g., a human expert) for annotation, thereby rapidly improving the model’s predictive performance with a smaller annotation cost. in this paper, we propose mutox-al, a deep active learning framework, which can actively explore the chemical space and identify the most valuable molecules, resulting in competitive performance with a small number of labeled samples. the experimental results show that, compared to the random sampling strategy, mutox-al can reduce the number of training molecules by about 57%. additionally, mutox-al exhibits outstanding molecular structural discriminability, allowing it to pick molecules with high structural similarity but opposite properties. | [
"the assessment",
"mutagenicity",
"drug discovery",
"it",
"silico methods",
"mutagenicity prediction",
"their performance",
"the scarcity",
"labeled molecules",
"experimental mutagenicity testing",
"one solution",
"the annotation cost",
"active learning",
"the algorithm",
"the most valuable molecules",
"a vast chemical space",
"them",
"the oracle",
"e.g., a human expert",
"annotation",
"the model’s predictive performance",
"a smaller annotation cost",
"this paper",
"we",
"mutox-al",
"a deep active learning framework",
"which",
"the chemical space",
"the most valuable molecules",
"competitive performance",
"a small number",
"labeled samples",
"the experimental results",
"the random sampling strategy",
"mutox-al",
"the number",
"training molecules",
"about 57%",
"mutox-al",
"outstanding molecular structural discriminability",
"it",
"molecules",
"high structural similarity",
"opposite properties",
"one",
"mutox-al",
"about 57%"
] |
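The record above actively selects the most valuable molecules from the pool for oracle labeling. A standard instantiation of that selection step is entropy-based uncertainty sampling, sketched below in NumPy; muTOX-AL's actual acquisition function may differ, and the pool probabilities here are synthetic.

```python
# Sketch: one uncertainty-sampling round of pool-based active learning.
import numpy as np

def select_most_uncertain(probs, k):
    """Pick the k pool items whose predicted distributions have highest entropy."""
    eps = 1e-12                                      # guard against log(0)
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(-entropy)[:k]

rng = np.random.default_rng(1)
pool_probs = rng.dirichlet([1.0, 1.0], size=1000)    # model outputs on the pool
query_idx = select_most_uncertain(pool_probs, k=32)
print(query_idx[:5])    # indices to send to the oracle for annotation
```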
A privacy-preserving approach for detecting smishing attacks using federated deep learning | [
"Mohamed Abdelkarim Remmide",
"Fatima Boumahdi",
"Bousmaha Ilhem",
"Narhimene Boustia"
] | Smishing is a type of social engineering attack that involves sending fraudulent SMS messages to trick recipients into revealing sensitive information. In recent years, it has become a significant threat to mobile communications. In this study, we introduce a novel smishing detection method based on federated learning, which is a decentralized approach ensuring data privacy. We develop a robust detection model within a federated learning framework based on deep learning methods such as Long Short-Term Memory (LSTM) and Bidirectional LSTM (Bi-LSTM). Our experiments show that the federated learning method using Bi-LSTM achieves an accuracy of 88.78%, highlighting its effectiveness in tackling smishing detection while preserving user privacy. This approach not only offers a promising solution to smishing attacks but also lays the groundwork for future research in mobile security and privacy-preserving machine learning. | 10.1007/s41870-024-02144-x | a privacy-preserving approach for detecting smishing attacks using federated deep learning | smishing is a type of social engineering attack that involves sending fraudulent sms messages to trick recipients into revealing sensitive information. in recent years, it has become a significant threat to mobile communications. in this study, we introduce a novel smishing detection method based on federated learning, which is a decentralized approach ensuring data privacy. we develop a robust detection model within a federated learning framework based on deep learning methods such as long short-term memory (lstm) and bidirectional lstm (bi-lstm). our experiments show that the federated learning method using bi-lstm achieves an accuracy of 88.78%, highlighting its effectiveness in tackling smishing detection while preserving user privacy. this approach not only offers a promising solution to smishing attacks but also lays the groundwork for future research in mobile security and privacy-preserving machine learning. | [
"a type",
"social engineering attack",
"that",
"fraudulent sms messages",
"recipients",
"sensitive information",
"recent years",
"it",
"a significant threat",
"mobile communications",
"this study",
"we",
"a novel",
"detection method",
"federated learning",
"which",
"a decentralized approach",
"data privacy",
"we",
"a robust detection model",
"a federated learning framework",
"deep learning methods",
"long short-term memory",
"lstm",
"bidirectional lstm",
"bi",
"-",
"lstm",
"our experiments",
"the federated learning method",
"bi",
"-",
"lstm",
"an accuracy",
"88.78%",
"its effectiveness",
"detection",
"user privacy",
"this approach",
"a promising solution",
"attacks",
"the groundwork",
"future research",
"mobile security",
"privacy-preserving machine learning",
"recent years",
"88.78%"
] |
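The record above trains a Bi-LSTM detector inside a federated learning framework. The PyTorch sketch below shows the two pieces: a small Bi-LSTM text classifier and a FedAvg-style aggregation that averages client weights into the global model. Vocabulary size, hidden sizes, and the three-client setup are illustrative assumptions.

```python
# Sketch: FedAvg over per-client copies of a small Bi-LSTM SMS classifier.
import copy
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab=5000, emb=64, hidden=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(self.emb(x))
        return self.fc(out[:, -1])          # last step, both directions

def fed_avg(global_model, client_models):
    """Replace global weights with the element-wise mean of client weights."""
    avg = copy.deepcopy(global_model.state_dict())
    for key in avg:
        avg[key] = torch.stack(
            [cm.state_dict()[key].float() for cm in client_models]).mean(0)
    global_model.load_state_dict(avg)

global_model = BiLSTMClassifier()
clients = [copy.deepcopy(global_model) for _ in range(3)]
# ... each client would train locally on its private SMS data here ...
fed_avg(global_model, clients)
x = torch.randint(0, 5000, (2, 20))         # two token-id sequences
print(global_model(x).shape)                # torch.Size([2, 2])
```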
SlumberNet: deep learning classification of sleep stages using residual neural networks | [
"Pawan K. Jha",
"Utham K. Valekunja",
"Akhilesh B. Reddy"
] | Sleep research is fundamental to understanding health and well-being, as proper sleep is essential for maintaining optimal physiological function. Here we present SlumberNet, a novel deep learning model based on residual network (ResNet) architecture, designed to classify sleep states in mice using electroencephalogram (EEG) and electromyogram (EMG) signals. Our model was trained and tested on data from mice undergoing baseline sleep, sleep deprivation, and recovery sleep, enabling it to handle a wide range of sleep conditions. Employing k-fold cross-validation and data augmentation techniques, SlumberNet achieved high levels of overall performance (accuracy = 97%; F1 score = 96%) in predicting sleep stages and showed robust performance even with a small and diverse training dataset. Comparison of SlumberNet's performance to manual sleep stage classification revealed a significant reduction in analysis time (~ 50 × faster), without sacrificing accuracy. Our study showcases the potential of deep learning to facilitate sleep research by providing a more efficient, accurate, and scalable method for sleep stage classification. Our work with SlumberNet further demonstrates the power of deep learning in mouse sleep research. | 10.1038/s41598-024-54727-0 | slumbernet: deep learning classification of sleep stages using residual neural networks | sleep research is fundamental to understanding health and well-being, as proper sleep is essential for maintaining optimal physiological function. here we present slumbernet, a novel deep learning model based on residual network (resnet) architecture, designed to classify sleep states in mice using electroencephalogram (eeg) and electromyogram (emg) signals. our model was trained and tested on data from mice undergoing baseline sleep, sleep deprivation, and recovery sleep, enabling it to handle a wide range of sleep conditions. employing k-fold cross-validation and data augmentation techniques, slumbernet achieved high levels of overall performance (accuracy = 97%; f1 score = 96%) in predicting sleep stages and showed robust performance even with a small and diverse training dataset. comparison of slumbernet's performance to manual sleep stage classification revealed a significant reduction in analysis time (~ 50 × faster), without sacrificing accuracy. our study showcases the potential of deep learning to facilitate sleep research by providing a more efficient, accurate, and scalable method for sleep stage classification. our work with slumbernet further demonstrates the power of deep learning in mouse sleep research. | [
"sleep research",
"health",
"well-being",
"proper sleep",
"optimal physiological function",
"we",
"slumbernet",
"a novel deep learning model",
"residual network",
"resnet) architecture",
"sleep states",
"mice",
"eeg",
"emg",
"our model",
"data",
"mice",
"baseline sleep",
"sleep deprivation",
"recovery sleep",
"it",
"a wide range",
"sleep conditions",
"k-fold cross-validation and data augmentation techniques",
"slumbernet",
"high levels",
"overall performance",
"f1 score",
"sleep stages",
"robust performance",
"a small and diverse training dataset",
"comparison",
"slumbernet's performance",
"manual sleep stage classification",
"a significant reduction",
"analysis time",
"accuracy",
"our study",
"the potential",
"deep learning",
"sleep research",
"a more efficient, accurate, and scalable method",
"sleep stage classification",
"our work",
"slumbernet",
"the power",
"deep learning",
"mouse sleep research",
"97%",
"96%",
"50"
] |
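The record above builds SlumberNet on residual (ResNet-style) blocks over EEG and EMG signals. Below is a minimal 1D residual block in PyTorch wired into a three-class (e.g. wake/NREM/REM) head; channel counts, kernel size, and the assumed epoch length are placeholders, not the published architecture.

```python
# Sketch: 1D residual block of the kind used for EEG/EMG sleep-epoch features.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, channels, kernel=7):
        super().__init__()
        pad = kernel // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)    # identity shortcut

# Two input channels (EEG + EMG); epoch length is an assumed 2560 samples.
net = nn.Sequential(nn.Conv1d(2, 32, 7, padding=3), ResBlock1d(32),
                    ResBlock1d(32), nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                    nn.Linear(32, 3))        # wake / NREM / REM
print(net(torch.randn(8, 2, 2560)).shape)    # torch.Size([8, 3])
```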
A comparative analysis and classification of cancerous brain tumors detection based on classical machine learning and deep transfer learning models | [
"Yajuvendra Pratap Singh",
"D.K Lobiyal"
] | Brain tumor can be fatal for human life. Therefore, proper and timely diagnosis and treatment is important to save human lives. The similarity and variety between the normal and tumor tissues make it difficult for diagnosis through human assisted techniques. In the recent past, machine learning and deep learning techniques have been applied for the classification and segmentation of brain tumor. These techniques have shown promising results by improving the accuracy of classification and segmentation. In this paper, we proposed to implement various classical machine-learning techniques support vector machine, Naive Bayes classifier, K- Nearest Neighbor, random forest, and deep-learning CNN-based models Xception, Inceptionv3, VGG19, and DenseNet201 techniques to classify gliomas, meningiomas, and pituitary tumors and compare their performance. Further, we proposed to modify Xception model to improve the performance of classification and segmentation. In this research, we used the standard Figshare dataset consisting of 3064 images of size \(112\times 112\) each. The performance of these models is measured and compared in terms of precision, recall, F1-score, and accuracy. The classical Machine learning model gives scores varying from 88% to 93%. For the above four metrics. However, all the deep learning models give their scores of more than 96% for the above metrics. Our proposed modified Xception model gives scores more than 98% in all the above four metrics, which is comparable to the best scores reported in the literature. | 10.1007/s11042-023-16637-7 | a comparative analysis and classification of cancerous brain tumors detection based on classical machine learning and deep transfer learning models | brain tumor can be fatal for human life. therefore, proper and timely diagnosis and treatment is important to save human lives. the similarity and variety between the normal and tumor tissues make it difficult for diagnosis through human assisted techniques. in the recent past, machine learning and deep learning techniques have been applied for the classification and segmentation of brain tumor. these techniques have shown promising results by improving the accuracy of classification and segmentation. in this paper, we proposed to implement various classical machine-learning techniques support vector machine, naive bayes classifier, k- nearest neighbor, random forest, and deep-learning cnn-based models xception, inceptionv3, vgg19, and densenet201 techniques to classify gliomas, meningiomas, and pituitary tumors and compare their performance. further, we proposed to modify xception model to improve the performance of classification and segmentation. in this research, we used the standard figshare dataset consisting of 3064 images of size \(112\times 112\) each. the performance of these models is measured and compared in terms of precision, recall, f1-score, and accuracy. the classical machine learning model gives scores varying from 88% to 93%. for the above four metrics. however, all the deep learning models give their scores of more than 96% for the above metrics. our proposed modified xception model gives scores more than 98% in all the above four metrics, which is comparable to the best scores reported in the literature. | [
"brain tumor",
"human life",
"proper and timely diagnosis",
"treatment",
"human lives",
"the similarity",
"variety",
"the normal and tumor tissues",
"it",
"diagnosis",
"human assisted techniques",
"the recent past",
"machine learning",
"deep learning techniques",
"the classification",
"segmentation",
"brain tumor",
"these techniques",
"promising results",
"the accuracy",
"classification",
"segmentation",
"this paper",
"we",
"various classical machine-learning techniques support vector machine",
"naive bayes classifier",
"neighbor",
"random forest",
"deep-learning cnn-based models xception",
"inceptionv3",
"vgg19",
"densenet201 techniques",
"gliomas",
"meningiomas",
"pituitary tumors",
"their performance",
"we",
"xception model",
"the performance",
"classification",
"segmentation",
"this research",
"we",
"the standard figshare dataset",
"3064 images",
"size",
"\\(112\\times 112\\",
"each",
"the performance",
"these models",
"terms",
"precision",
"f1-score",
"accuracy",
"the classical machine learning model",
"scores",
"88%",
"to 93%",
"the above four metrics",
"all the deep learning models",
"their scores",
"more than 96%",
"the above metrics",
"our proposed modified xception model",
"scores",
"more than 98%",
"all the above four metrics",
"which",
"the best scores",
"the literature",
"k-",
"cnn",
"inceptionv3",
"meningiomas",
"3064",
"112\\",
"88% to 93%",
"four",
"more than 96%",
"more than 98%",
"four"
] |
Implementing Deep Learning-Based Intelligent Inspection for Investment Castings | [
"Nabhan Yousef",
"Amit Sata"
] | In this study, a user-friendly intelligent inspection device was developed employing deep learning techniques to effectively detect and characterize surface defects in investment castings. The inspection techniques encompassed a range from basic visual inspection to more specialized methods like liquid penetration and magnetic particle tests. The developed device leveraged convolutional neural networks (CNN), residual neural networks (ResNet), and recurrent neural networks (R-CNN) models, with a dataset of 3600 images depicting industrial castings, both defective and defect-free. A majority of the dataset was allocated for training (approximately 80%), while the remainder was used for testing. Among the deep learning models considered, ResNet exhibited superior accuracy in defect inspection, trailed by CNN and R-CNN. The resultant intelligent inspection device was successfully implemented in an industrial setting, enabling efficient defect identification in investment castings through the trained ResNet model. | 10.1007/s13369-023-08240-7 | implementing deep learning-based intelligent inspection for investment castings | in this study, a user-friendly intelligent inspection device was developed employing deep learning techniques to effectively detect and characterize surface defects in investment castings. the inspection techniques encompassed a range from basic visual inspection to more specialized methods like liquid penetration and magnetic particle tests. the developed device leveraged convolutional neural networks (cnn), residual neural networks (resnet), and recurrent neural networks (r-cnn) models, with a dataset of 3600 images depicting industrial castings, both defective and defect-free. a majority of the dataset was allocated for training (approximately 80%), while the remainder was used for testing. among the deep learning models considered, resnet exhibited superior accuracy in defect inspection, trailed by cnn and r-cnn. the resultant intelligent inspection device was successfully implemented in an industrial setting, enabling efficient defect identification in investment castings through the trained resnet model. | [
"this study",
"a user-friendly intelligent inspection device",
"deep learning techniques",
"surface defects",
"investment castings",
"the inspection techniques",
"a range",
"basic visual inspection",
"more specialized methods",
"liquid penetration",
"magnetic particle tests",
"the developed device leveraged convolutional neural networks",
"cnn",
"residual neural networks",
"resnet",
"recurrent neural networks",
"r-cnn) models",
"a dataset",
"3600 images",
"industrial castings",
"a majority",
"the dataset",
"training",
"approximately 80%",
"the remainder",
"testing",
"the deep learning models",
"resnet",
"superior accuracy",
"defect inspection",
"cnn",
"r-cnn",
"the resultant intelligent inspection device",
"an industrial setting",
"efficient defect identification",
"investment castings",
"the trained resnet model",
"cnn",
"3600",
"approximately 80%",
"cnn"
] |
Revealing the mechanisms of semantic satiation with deep learning models | [
"Xinyu Zhang",
"Jing Lian",
"Zhaofei Yu",
"Huajin Tang",
"Dong Liang",
"Jizhao Liu",
"Jian K. Liu"
] | The phenomenon of semantic satiation, which refers to the loss of meaning of a word or phrase after being repeated many times, is a well-known psychological phenomenon. However, the microscopic neural computational principles responsible for these mechanisms remain unknown. In this study, we use a deep learning model of continuous coupled neural networks to investigate the mechanism underlying semantic satiation and precisely describe this process with neuronal components. Our results suggest that, from a mesoscopic perspective, semantic satiation may be a bottom-up process. Unlike existing macroscopic psychological studies that suggest that semantic satiation is a top-down process, our simulations use a similar experimental paradigm as classical psychology experiments and observe similar results. Satiation of semantic objectives, similar to the learning process of our network model used for object recognition, relies on continuous learning and switching between objects. The underlying neural coupling strengthens or weakens satiation. Taken together, both neural and network mechanisms play a role in controlling semantic satiation. | 10.1038/s42003-024-06162-0 | revealing the mechanisms of semantic satiation with deep learning models | the phenomenon of semantic satiation, which refers to the loss of meaning of a word or phrase after being repeated many times, is a well-known psychological phenomenon. however, the microscopic neural computational principles responsible for these mechanisms remain unknown. in this study, we use a deep learning model of continuous coupled neural networks to investigate the mechanism underlying semantic satiation and precisely describe this process with neuronal components. our results suggest that, from a mesoscopic perspective, semantic satiation may be a bottom-up process. unlike existing macroscopic psychological studies that suggest that semantic satiation is a top-down process, our simulations use a similar experimental paradigm as classical psychology experiments and observe similar results. satiation of semantic objectives, similar to the learning process of our network model used for object recognition, relies on continuous learning and switching between objects. the underlying neural coupling strengthens or weakens satiation. taken together, both neural and network mechanisms play a role in controlling semantic satiation. | [
"the phenomenon",
"semantic satiation",
"which",
"the loss",
"meaning",
"a word",
"phrase",
"a well-known psychological phenomenon",
"the microscopic neural computational principles",
"these mechanisms",
"this study",
"we",
"a deep learning model",
"continuous coupled neural networks",
"the mechanism",
"semantic satiation",
"this process",
"neuronal components",
"our results",
"a mesoscopic perspective",
"semantic satiation",
"a bottom-up process",
"existing macroscopic psychological studies",
"that",
"semantic satiation",
"a top-down process",
"our simulations",
"a similar experimental paradigm",
"classical psychology experiments",
"similar results",
"satiation",
"semantic objectives",
"the learning process",
"our network model",
"object recognition",
"continuous learning",
"switching",
"objects",
"the underlying neural coupling strengthens",
"satiation",
"both neural and network mechanisms",
"a role",
"semantic satiation"
] |
Incremental–decremental data transformation based ensemble deep learning model (IDT-eDL) for temperature prediction | [
"Vipin Kumar",
"Rana Kumar"
] | Human life heavily depends on weather conditions, which affect the necessary operations like agriculture, aviation, tourism, industries, etc., where the temperature plays a vital role in deciding the weather conditions along with other meteorological variables. Therefore, temperature forecasting has drawn considerable attention from researchers because of its significant effect on daily life activities and the ever-challenging forecasting task. These research objectives are to investigate the transformation of data based on incremental and decremental approaches and to find the practical ensemble approach over proposed models for effective temperature prediction, where the proposed model is called the Incremental–Decremental Data Transformation-Based Ensemble Deep Learning Model (IDT-eDL). The temperature dataset from Delhi, India, has been utilized to compare proposed and traditional deep learning models over various performance measures. The proposed IDT-eDL with BiLSTM deep learning model (i.e., IDT-eDL_BiLSTM ) has performed the best among the proposed models and traditional deep learning model and achieved Performance over measures MSE: 1.36, RMSE: 1.16, MAE: 0.89, MAPE: 4.13 and \(R^2\):0.999. Additionally, non-parametric statistical analysis of Friedman ranking is also performed to validate the effectiveness of the proposed IDT-eDL model, which also shows a higher ranking of the proposed model than the traditional deep learning models. | 10.1007/s40808-024-01953-0 | incremental–decremental data transformation based ensemble deep learning model (idt-edl) for temperature prediction | human life heavily depends on weather conditions, which affect the necessary operations like agriculture, aviation, tourism, industries, etc., where the temperature plays a vital role in deciding the weather conditions along with other meteorological variables. therefore, temperature forecasting has drawn considerable attention from researchers because of its significant effect on daily life activities and the ever-challenging forecasting task. these research objectives are to investigate the transformation of data based on incremental and decremental approaches and to find the practical ensemble approach over proposed models for effective temperature prediction, where the proposed model is called the incremental–decremental data transformation-based ensemble deep learning model (idt-edl). the temperature dataset from delhi, india, has been utilized to compare proposed and traditional deep learning models over various performance measures. the proposed idt-edl with bilstm deep learning model (i.e., idt-edl_bilstm ) has performed the best among the proposed models and traditional deep learning model and achieved performance over measures mse: 1.36, rmse: 1.16, mae: 0.89, mape: 4.13 and \(r^2\):0.999. additionally, non-parametric statistical analysis of friedman ranking is also performed to validate the effectiveness of the proposed idt-edl model, which also shows a higher ranking of the proposed model than the traditional deep learning models. | [
"human life",
"weather conditions",
"which",
"the necessary operations",
"agriculture",
"aviation",
"tourism",
"industries",
"the temperature",
"a vital role",
"the weather conditions",
"other meteorological variables",
"temperature forecasting",
"considerable attention",
"researchers",
"its significant effect",
"daily life activities",
"the ever-challenging forecasting task",
"these research objectives",
"the transformation",
"data",
"incremental and decremental approaches",
"the practical ensemble approach",
"proposed models",
"effective temperature prediction",
"the proposed model",
"the incremental–decremental data transformation-based ensemble deep learning model",
"idt-edl",
"the temperature",
"delhi",
"india",
"proposed and traditional deep learning models",
"various performance measures",
"the proposed idt-edl",
"bilstm deep learning model",
"i.e., idt-edl_bilstm",
"the proposed models",
"traditional deep learning model",
"performance",
"measures mse",
"rmse",
"mape",
"\\(r^2\\):0.999",
"non-parametric statistical analysis",
"friedman ranking",
"the effectiveness",
"the proposed idt-edl model",
"which",
"a higher ranking",
"the proposed model",
"the traditional deep learning models",
"delhi",
"india",
"1.36",
"rmse",
"1.16",
"0.89",
"4.13"
] |
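The record above reports MSE, RMSE, MAE, MAPE, and R^2 for the temperature models. For reference, the NumPy sketch below computes these measures as conventionally defined (the inputs are toy values); note that MAPE assumes no zero targets.

```python
# Sketch: the error measures reported above (MSE, RMSE, MAE, MAPE, R^2).
import numpy as np

def regression_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))   # assumes y_true != 0
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return dict(MSE=mse, RMSE=rmse, MAE=mae, MAPE=mape, R2=r2)

print(regression_metrics([20.1, 25.4, 30.2], [19.5, 26.0, 29.8]))
```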
Deep Learning Framework for Predicting Essential Proteins with Temporal Convolutional Networks | [
"Pengli Lu \n (卢鹏丽)",
"Peishi Yang \n (杨培实)",
"Yonggang Liao \n (廖永刚)"
] | Essential proteins are an indispensable part of cells and play an extremely significant role in genetic disease diagnosis and drug development. Therefore, the prediction of essential proteins has received extensive attention from researchers. Many centrality methods and machine learning algorithms have been proposed to predict essential proteins. Nevertheless, the topological characteristics learned by the centrality method are not comprehensive enough, resulting in low accuracy. In addition, machine learning algorithms need sufficient prior knowledge to select features, and the ability to solve imbalanced classification problems needs to be further strengthened. These two factors greatly affect the performance of predicting essential proteins. In this paper, we propose a deep learning framework based on temporal convolutional networks to predict essential proteins by integrating gene expression data and protein-protein interaction (PPI) network. We make use of the method of network embedding to automatically learn more abundant features of proteins in the PPI network. For gene expression data, we treat it as sequence data, and use temporal convolutional networks to extract sequence features. Finally, the two types of features are integrated and put into the multi-layer neural network to complete the final classification task. The performance of our method is evaluated by comparing with seven centrality methods, six machine learning algorithms, and two deep learning models. The results of the experiment show that our method is more effective than the comparison methods for predicting essential proteins. | 10.1007/s12204-023-2632-9 | deep learning framework for predicting essential proteins with temporal convolutional networks | essential proteins are an indispensable part of cells and play an extremely significant role in genetic disease diagnosis and drug development. therefore, the prediction of essential proteins has received extensive attention from researchers. many centrality methods and machine learning algorithms have been proposed to predict essential proteins. nevertheless, the topological characteristics learned by the centrality method are not comprehensive enough, resulting in low accuracy. in addition, machine learning algorithms need sufficient prior knowledge to select features, and the ability to solve imbalanced classification problems needs to be further strengthened. these two factors greatly affect the performance of predicting essential proteins. in this paper, we propose a deep learning framework based on temporal convolutional networks to predict essential proteins by integrating gene expression data and protein-protein interaction (ppi) network. we make use of the method of network embedding to automatically learn more abundant features of proteins in the ppi network. for gene expression data, we treat it as sequence data, and use temporal convolutional networks to extract sequence features. finally, the two types of features are integrated and put into the multi-layer neural network to complete the final classification task. the performance of our method is evaluated by comparing with seven centrality methods, six machine learning algorithms, and two deep learning models. the results of the experiment show that our method is more effective than the comparison methods for predicting essential proteins. | [
"essential proteins",
"an indispensable part",
"cells",
"an extremely significant role",
"genetic disease diagnosis and drug development",
"the prediction",
"essential proteins",
"extensive attention",
"researchers",
"many centrality methods",
"machine learning algorithms",
"essential proteins",
"the topological characteristics",
"the centrality method",
"low accuracy",
"addition",
"machine learning algorithms",
"sufficient prior knowledge",
"features",
"the ability",
"imbalanced classification problems",
"these two factors",
"the performance",
"essential proteins",
"this paper",
"we",
"a deep learning framework",
"temporal convolutional networks",
"essential proteins",
"gene expression data",
"protein-protein interaction",
"(ppi) network",
"we",
"use",
"the method",
"network",
"more abundant features",
"proteins",
"the ppi network",
"gene expression data",
"we",
"it",
"sequence data",
"temporal convolutional networks",
"sequence features",
"the two types",
"features",
"the multi-layer neural network",
"the final classification task",
"the performance",
"our method",
"seven centrality methods",
"six machine learning algorithms",
"two deep learning models",
"the results",
"the experiment",
"our method",
"the comparison methods",
"essential proteins",
"two",
"seven",
"six",
"two"
] |
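The record above treats gene expression as sequence data and extracts features with temporal convolutional networks. The defining ingredient of a TCN is causal, dilated 1D convolution, sketched below in PyTorch with a small essential/non-essential head; the channel widths, dilation schedule, and 36-step input length are illustrative assumptions.

```python
# Sketch: a causal, dilated temporal convolution stack (TCN style).
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    def __init__(self, c_in, c_out, kernel=3, dilation=1):
        super().__init__()
        self.pad = (kernel - 1) * dilation       # left padding only
        self.conv = nn.Conv1d(c_in, c_out, kernel, dilation=dilation)

    def forward(self, x):
        x = nn.functional.pad(x, (self.pad, 0))  # no peeking at future steps
        return self.conv(x)

tcn = nn.Sequential(CausalConv1d(1, 16, dilation=1), nn.ReLU(),
                    CausalConv1d(16, 16, dilation=2), nn.ReLU(),
                    CausalConv1d(16, 16, dilation=4), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                    nn.Linear(16, 2))            # essential vs. non-essential
x = torch.randn(4, 1, 36)                        # 36 expression time points
print(tcn(x).shape)                              # torch.Size([4, 2])
```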
Unsupervised deep representation learning enables phenotype discovery for genetic association studies of brain imaging | [
"Khush Patel",
"Ziqian Xie",
"Hao Yuan",
"Sheikh Muhammad Saiful Islam",
"Yaochen Xie",
"Wei He",
"Wanheng Zhang",
"Assaf Gottlieb",
"Han Chen",
"Luca Giancardo",
"Alexander Knaack",
"Evan Fletcher",
"Myriam Fornage",
"Shuiwang Ji",
"Degui Zhi"
] | Understanding the genetic architecture of brain structure is challenging, partly due to difficulties in designing robust, non-biased descriptors of brain morphology. Until recently, brain measures for genome-wide association studies (GWAS) consisted of traditionally expert-defined or software-derived image-derived phenotypes (IDPs) that are often based on theoretical preconceptions or computed from limited amounts of data. Here, we present an approach to derive brain imaging phenotypes using unsupervised deep representation learning. We train a 3-D convolutional autoencoder model with reconstruction loss on 6130 UK Biobank (UKBB) participants’ T1 or T2-FLAIR (T2) brain MRIs to create a 128-dimensional representation known as Unsupervised Deep learning derived Imaging Phenotypes (UDIPs). GWAS of these UDIPs in held-out UKBB subjects (n = 22,880 discovery and n = 12,359/11,265 replication cohorts for T1/T2) identified 9457 significant SNPs organized into 97 independent genetic loci of which 60 loci were replicated. Twenty-six loci were not reported in earlier T1 and T2 IDP-based UK Biobank GWAS. We developed a perturbation-based decoder interpretation approach to show that these loci are associated with UDIPs mapped to multiple relevant brain regions. Our results established unsupervised deep learning can derive robust, unbiased, heritable, and interpretable brain imaging phenotypes. | 10.1038/s42003-024-06096-7 | unsupervised deep representation learning enables phenotype discovery for genetic association studies of brain imaging | understanding the genetic architecture of brain structure is challenging, partly due to difficulties in designing robust, non-biased descriptors of brain morphology. until recently, brain measures for genome-wide association studies (gwas) consisted of traditionally expert-defined or software-derived image-derived phenotypes (idps) that are often based on theoretical preconceptions or computed from limited amounts of data. here, we present an approach to derive brain imaging phenotypes using unsupervised deep representation learning. we train a 3-d convolutional autoencoder model with reconstruction loss on 6130 uk biobank (ukbb) participants’ t1 or t2-flair (t2) brain mris to create a 128-dimensional representation known as unsupervised deep learning derived imaging phenotypes (udips). gwas of these udips in held-out ukbb subjects (n = 22,880 discovery and n = 12,359/11,265 replication cohorts for t1/t2) identified 9457 significant snps organized into 97 independent genetic loci of which 60 loci were replicated. twenty-six loci were not reported in earlier t1 and t2 idp-based uk biobank gwas. we developed a perturbation-based decoder interpretation approach to show that these loci are associated with udips mapped to multiple relevant brain regions. our results established unsupervised deep learning can derive robust, unbiased, heritable, and interpretable brain imaging phenotypes. | [
"the genetic architecture",
"brain structure",
"difficulties",
"robust, non-biased descriptors",
"brain morphology",
"brain measures",
"genome-wide association studies",
"gwas",
"traditionally expert-defined or software-derived image-derived phenotypes",
"idps",
"that",
"theoretical preconceptions",
"limited amounts",
"data",
"we",
"an approach",
"brain imaging phenotypes",
"unsupervised deep representation learning",
"we",
"a 3-d convolutional autoencoder model",
"reconstruction loss",
"6130 uk biobank (ukbb) participants’ t1 or t2-flair (t2) brain mris",
"a 128-dimensional representation",
"unsupervised deep learning derived imaging phenotypes",
"udips",
"gwas",
"these udips",
"held-out ukbb subjects",
"n = 22,880 discovery",
"n = 12,359/11,265 replication cohorts",
"t1/t2",
"9457 significant snps",
"97 independent genetic loci",
"which",
"60 loci",
"twenty-six loci",
"earlier t1 and t2 idp-based uk biobank gwas",
"we",
"a perturbation-based decoder interpretation approach",
"these loci",
"udips",
"multiple relevant brain regions",
"our results",
"unsupervised deep learning",
"robust, unbiased, heritable, and interpretable brain imaging phenotypes",
"3",
"6130",
"mris",
"128",
"22,880",
"12,359/11,265",
"9457",
"97",
"60",
"twenty-six"
] |
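The record above trains a 3-D convolutional autoencoder whose 128-dimensional bottleneck serves as the imaging phenotype (UDIP). The PyTorch sketch below shows that shape: an encoder compressing a volume into a 128-vector and a decoder trained with reconstruction loss. The 64^3 input size and layer widths are assumptions; UKBB MRIs are larger and the published network differs.

```python
# Sketch: 3D convolutional autoencoder with a 128-dimensional bottleneck.
import torch
import torch.nn as nn

class AE3D(nn.Module):
    def __init__(self, latent=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, latent),    # assumes 64^3 input volumes
        )
        self.dec = nn.Sequential(
            nn.Linear(latent, 32 * 8 * 8 * 8),
            nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.enc(x)                   # the learned phenotype vector
        return self.dec(z), z

model = AE3D()
recon, z = model(torch.randn(2, 1, 64, 64, 64))
print(recon.shape, z.shape)               # (2, 1, 64, 64, 64), (2, 128)
```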
Deep learning-based weld defect classification using VGG16 transfer learning adaptive fine-tuning | [
"Samuel Kumaresan",
"K. S. Jai Aultrin",
"S. S. Kumar",
"M. Dev Anand"
] | Welding is a vital joining process; however, occurrences of weld defects often degrade the quality of the welded part. The risk of occurrence of a variety of defects has led to the development of advanced weld defects detection systems such as automated weld defects detection and classification. The present work is a novel approach that proposes and investigates a unique image-centered method based on a deep learning model trained by a small X-ray image dataset. A data augmentation method able to process images on the go was used to offset the limitation of the small X-ray dataset. Fine-tuned transfer learning techniques were used to train two convolutional neural network based architectures with VGG16 and ResNet50 as the base models for the augmented sets. Out of the networks we fine-tuned, VGG16 based model performed well with a relatively higher average accuracy of 90%. Even though the small dataset was spread across 15 different classes in an unbalanced way, the learning curves showed acceptable model generalization characteristics. | 10.1007/s12008-023-01327-3 | deep learning-based weld defect classification using vgg16 transfer learning adaptive fine-tuning | welding is a vital joining process; however, occurrences of weld defects often degrade the quality of the welded part. the risk of occurrence of a variety of defects has led to the development of advanced weld defects detection systems such as automated weld defects detection and classification. the present work is a novel approach that proposes and investigates a unique image-centered method based on a deep learning model trained by a small x-ray image dataset. a data augmentation method able to process images on the go was used to offset the limitation of the small x-ray dataset. fine-tuned transfer learning techniques were used to train two convolutional neural network based architectures with vgg16 and resnet50 as the base models for the augmented sets. out of the networks we fine-tuned, vgg16 based model performed well with a relatively higher average accuracy of 90%. even though the small dataset was spread across 15 different classes in an unbalanced way, the learning curves showed acceptable model generalization characteristics. | [
"welding",
"a vital joining process",
"occurrences",
"weld defects",
"the quality",
"the welded part",
"the risk",
"occurrence",
"a variety",
"defects",
"the development",
"advanced weld defects detection systems",
"automated weld defects detection",
"classification",
"the present work",
"a novel approach",
"that",
"a unique image-centered method",
"a deep learning model",
"a small x-ray image dataset",
"a data augmentation method",
"images",
"the go",
"the limitation",
"the small x-ray dataset",
"fine-tuned transfer learning techniques",
"two convolutional neural network based architectures",
"vgg16",
"resnet50",
"the base models",
"the augmented sets",
"the networks",
"we",
"vgg16 based model",
"a relatively higher average accuracy",
"90%",
"the small dataset",
"15 different classes",
"an unbalanced way",
"the learning curves",
"acceptable model generalization characteristics",
"two",
"resnet50",
"90%",
"15"
] |
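The record above fine-tunes an ImageNet-pretrained VGG16 with on-the-fly augmentation and a 15-class head. The torchvision sketch below freezes the convolutional base and replaces the final classifier layer; it assumes torchvision >= 0.13 (for the weights enum), downloads ImageNet weights on first use, and the augmentation choices are examples rather than the paper's exact policy.

```python
# Sketch: VGG16 transfer learning with a replaced head and data augmentation.
import torch
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([                 # on-the-fly augmentation;
    transforms.RandomRotation(10),             # would be passed to the
    transforms.RandomHorizontalFlip(),         # training Dataset/ImageFolder
    transforms.ToTensor(),
])

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():          # freeze the convolutional base
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 15)      # 15 weld-defect classes

# Fine-tune only the new head (top blocks could be unfrozen later).
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
print(model(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 15])
```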
Deep-learning-assisted online surface roughness monitoring in ultraprecision fly cutting | [
"Adeel Shehzad",
"XiaoTing Rui",
"YuanYuan Ding",
"JianShu Zhang",
"Yu Chang",
"HanJing Lu",
"YiHeng Chen"
] | Surface roughness is one of the most critical attributes of machined components, especially those used in high-performance systems. Online surface roughness monitoring offers advancements comparable to post-process inspection methods, reducing inspection time and costs and concurrently reducing the likelihood of defects. Currently, online monitoring approaches for surface roughness are constrained by several limitations, including the reliance on handcrafted feature extraction, which necessitates the involvement of human experts and entails time-consuming processes. Moreover, the prediction models trained under one set of cutting conditions exhibit poor performance when applied to different experimental settings. To address these challenges, this work presents a novel deep-learning-assisted online surface roughness monitoring method for ultraprecision fly cutting of copper workpieces under different cutting conditions. Tooltip acceleration signals were acquired during each cutting experiment to develop two datasets, and no handcrafted features were extracted. Five deep learning models were developed and evaluated using standard performance metrics. A convolutional neural network stacked on a long short-term memory network outperformed all other network models, yielding exceptional results, including a mean absolute percentage error as low as 1.51% and an R2 value of 96.6%. Furthermore, the robustness of the proposed model was assessed via a validation cohort analysis using experimental data obtained using cutting parameters different from those previously employed. The performance of the model remained consistent and commendable under varied conditions, asserting its applicability in real-world scenarios. | 10.1007/s11431-023-2615-4 | deep-learning-assisted online surface roughness monitoring in ultraprecision fly cutting | surface roughness is one of the most critical attributes of machined components, especially those used in high-performance systems. online surface roughness monitoring offers advancements comparable to post-process inspection methods, reducing inspection time and costs and concurrently reducing the likelihood of defects. currently, online monitoring approaches for surface roughness are constrained by several limitations, including the reliance on handcrafted feature extraction, which necessitates the involvement of human experts and entails time-consuming processes. moreover, the prediction models trained under one set of cutting conditions exhibit poor performance when applied to different experimental settings. to address these challenges, this work presents a novel deep-learning-assisted online surface roughness monitoring method for ultraprecision fly cutting of copper workpieces under different cutting conditions. tooltip acceleration signals were acquired during each cutting experiment to develop two datasets, and no handcrafted features were extracted. five deep learning models were developed and evaluated using standard performance metrics. a convolutional neural network stacked on a long short-term memory network outperformed all other network models, yielding exceptional results, including a mean absolute percentage error as low as 1.51% and an r2 value of 96.6%. furthermore, the robustness of the proposed model was assessed via a validation cohort analysis using experimental data obtained using cutting parameters different from those previously employed. the performance of the model remained consistent and commendable under varied conditions, asserting its applicability in real-world scenarios. | [
"surface roughness",
"the most critical attributes",
"machined components",
"especially those",
"high-performance systems",
"online surface roughness monitoring",
"advancements",
"post-process inspection methods",
"inspection time",
"costs",
"the likelihood",
"defects",
"online monitoring approaches",
"surface roughness",
"several limitations",
"the reliance",
"handcrafted feature extraction",
"which",
"the involvement",
"human experts",
"time-consuming processes",
"the prediction models",
"one set",
"conditions",
"poor performance",
"different experimental settings",
"these challenges",
"this work",
"a novel deep-learning-assisted online surface roughness",
"method",
"ultraprecision fly cutting",
"copper workpieces",
"different cutting conditions",
"tooltip acceleration signals",
"each cutting experiment",
"two datasets",
"no handcrafted features",
"five deep learning models",
"standard performance metrics",
"a convolutional neural network",
"a long short-term memory network",
"all other network models",
"exceptional results",
"a mean absolute percentage error",
"1.51%",
"anr2 value",
"96.6%",
"the robustness",
"the proposed model",
"a validation cohort analysis",
"experimental data",
"cutting parameters",
"those",
"the performance",
"the model",
"varied conditions",
"its applicability",
"real-world scenarios",
"two",
"five",
"as low as 1.51%",
"96.6%"
] |
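The record above stacks a convolutional network on an LSTM to regress roughness from tooltip acceleration signals. The PyTorch sketch below follows that shape, Conv1d feature extraction feeding an LSTM and a scalar head; the window length, channel widths, and strides are assumptions, not the published model.

```python
# Sketch: Conv1d features feeding an LSTM, ending in one roughness value.
import torch
import torch.nn as nn

class ConvLSTMRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(                 # local signal patterns
            nn.Conv1d(1, 16, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, 9, stride=2, padding=4), nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, 1)               # predicted roughness value

    def forward(self, x):                          # x: (batch, 1, samples)
        f = self.conv(x).transpose(1, 2)           # -> (batch, steps, 32)
        out, _ = self.lstm(f)
        return self.head(out[:, -1]).squeeze(-1)   # last-step summary

model = ConvLSTMRegressor()
signal = torch.randn(8, 1, 4096)                   # acceleration windows (toy)
print(model(signal).shape)                         # torch.Size([8])
```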
Robust Deep Learning for Accurate Landslide Identification and Prediction | [
"T. Bhuvaneswari",
"R. Chandra Guru Sekar",
"M. Chengathir Selvi",
"J. Jemima Rubavathi",
"V. Kaviyaa"
] | Landslide is the most common natural risk in mountainous regions on all five continents and they can pose a serious threat in these areas. Strong earthquakes, unusual weather events such as storms and eruptions of volcanoes, and human-caused events such as creating roadways that crossed the slopes are the main causes of landslides and they cause significant dangers to residential properties and society as a whole. The Landslide4sense dataset is used for identifying landslides, which contains 3799 training samples and 245 testing samples. These image patches are taken from the Sentinel-2 sensor, while the slope and Digital Elevation Model (DEM) are from the ALOS PALSAR sensor. Data was gathered from four distinct geographical areas namely Kodagu, Iburi, Taiwan, and Gorkha. We use Deep Learning (DL) models such as ResNet18, U-Net, and VGG16 to predict the landslide. By comparing the above models with the evaluation metrics like loss, precision, recall, F1 score and accuracy, ResNet18 model is selected as the best model for landslide identification. | 10.1134/S1028334X23602961 | robust deep learning for accurate landslide identification and prediction | landslide is the most common natural risk in mountainous regions on all five continents and they can pose a serious threat in these areas. strong earthquakes, unusual weather events such as storms and eruptions of volcanoes, and human-caused events such as creating roadways that crossed the slopes are the main causes of landslides and they cause significant dangers to residential properties and society as a whole. the landslide4sense dataset is used for identifying landslides, which contains 3799 training samples and 245 testing samples. these image patches are taken from the sentinel-2 sensor, while the slope and digital elevation model (dem) are from the alos palsar sensor. data was gathered from four distinct geographical areas namely kodagu, iburi, taiwan, and gorkha. we use deep learning (dl) models such as resnet18, u-net, and vgg16 to predict the landslide. by comparing the above models with the evaluation metrics like loss, precision, recall, f1 score and accuracy, resnet18 model is selected as the best model for landslide identification. | [
"abstractlandslide",
"the most common natural risk",
"mountainous regions",
"all five continents",
"they",
"a serious threat",
"these areas",
"strong earthquakes",
"unusual weather events",
"storms",
"eruptions",
"volcanoes",
"human-caused events",
"roadways",
"that",
"the slopes",
"the main causes",
"landslides",
"they",
"significant dangers",
"residential properties",
"society",
"a whole",
"the landslide4sense dataset",
"landslides",
"which",
"3799 training samples",
"245 testing samples",
"these image patches",
"the sentinel-2 sensor",
"the slope",
"digital elevation model",
"dem",
"the alos palsar sensor",
"data",
"four distinct geographical areas",
"we",
"deep learning (dl) models",
"resnet18",
"u",
"-",
"net",
"vgg16",
"the landslide",
"the above models",
"the evaluation metrics",
"loss",
"precision",
"recall",
"f1 score",
"accuracy",
"resnet18 model",
"the best model",
"landslide identification",
"five",
"3799",
"245",
"dem",
"four",
"kodagu",
"iburi",
"taiwan",
"resnet18",
"resnet18"
] |
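Landslide4Sense patches stack Sentinel-2 bands with ALOS PALSAR DEM and slope layers, so an off-the-shelf ResNet18 needs its first convolution widened before it can classify them. A minimal PyTorch sketch of that adaptation; the channel count, patch size, and binary head are illustrative assumptions, not details taken from the record above:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

NUM_CHANNELS = 14   # assumed: 12 Sentinel-2 bands + DEM + slope
NUM_CLASSES = 2     # landslide vs. no landslide

model = resnet18(num_classes=NUM_CLASSES)
# Widen the stem so the network accepts multispectral patches
# instead of 3-channel RGB images.
model.conv1 = nn.Conv2d(NUM_CHANNELS, 64, kernel_size=7,
                        stride=2, padding=3, bias=False)

patches = torch.randn(8, NUM_CHANNELS, 128, 128)   # dummy batch
logits = model(patches)                            # shape: (8, 2)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()
```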
DeepEPhishNet: a deep learning framework for email phishing detection using word embedding algorithms | [
"M Somesha",
"Alwyn Roshan Pais"
] | Email phishing is a social engineering scheme that uses spoofed emails intended to trick the user into disclosing legitimate business and personal credentials. Many phishing email detection techniques exist based on machine learning, deep learning, and word embedding. In this paper, we propose a new technique for the detection of phishing emails using word embedding (Word2Vec, FastText, and TF-IDF) and deep learning techniques (DNN and BiLSTM network). Our proposed technique makes use of only four header based (From, Returnpath, Subject, Message-ID) features of the emails for the email classification. We applied several word embeddings for the evaluation of our models. From the experimental evaluation, we observed that the DNN model with FastText-SkipGram achieved an accuracy of 99.52% and BiLSTM model with FastText-SkipGram achieved an accuracy of 99.42%. Among these two techniques, DNN outperformed BiLSTM using the same word embedding (FastText-SkipGram) techniques with an accuracy of 99.52%. | 10.1007/s12046-024-02538-4 | deepephishnet: a deep learning framework for email phishing detection using word embedding algorithms | email phishing is a social engineering scheme that uses spoofed emails intended to trick the user into disclosing legitimate business and personal credentials. many phishing email detection techniques exist based on machine learning, deep learning, and word embedding. in this paper, we propose a new technique for the detection of phishing emails using word embedding (word2vec, fasttext, and tf-idf) and deep learning techniques (dnn and bilstm network). our proposed technique makes use of only four header based (from, returnpath, subject, message-id) features of the emails for the email classification. we applied several word embeddings for the evaluation of our models. from the experimental evaluation, we observed that the dnn model with fasttext-skipgram achieved an accuracy of 99.52% and bilstm model with fasttext-skipgram achieved an accuracy of 99.42%. among these two techniques, dnn outperformed bilstm using the same word embedding (fasttext-skipgram) techniques with an accuracy of 99.52%. | [
"email phishing",
"a social engineering scheme",
"that",
"spoofed emails",
"the user",
"legitimate business",
"personal credentials",
"many phishing email detection techniques",
"machine learning",
"deep learning",
"word",
"this paper",
"we",
"a new technique",
"the detection",
"emails",
"word",
"tf-idf",
"deep learning techniques",
"dnn and bilstm network",
"our proposed technique",
"use",
"only four header based (from, returnpath, subject, message-id) features",
"the emails",
"the email classification",
"we",
"several word embeddings",
"the evaluation",
"our models",
"the experimental evaluation",
"we",
"the dnn model",
"fasttext-skipgram",
"an accuracy",
"99.52%",
"bilstm model",
"fasttext-skipgram",
"an accuracy",
"99.42%",
"these two techniques",
"dnn",
"the same word",
"fasttext-skipgram",
"an accuracy",
"99.52%",
"only four",
"99.52%",
"99.42%",
"two",
"99.52%"
] |
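The record above feeds only four tokenized header fields into a DNN or BiLSTM on top of word embeddings. A hedged sketch of the BiLSTM variant in PyTorch, with a plain trainable embedding table standing in for the FastText-SkipGram vectors; the vocabulary size, dimensions, and sequence length are assumptions:

```python
import torch
import torch.nn as nn

class HeaderBiLSTM(nn.Module):
    """Classify an email as phishing/legitimate from tokenized
    header fields (From, Return-Path, Subject, Message-ID)."""
    def __init__(self, vocab_size=20000, embed_dim=100, hidden=64):
        super().__init__()
        # In the paper the vectors come from FastText-SkipGram;
        # here a randomly initialized table stands in for them.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 2)   # phishing vs. legitimate

    def forward(self, token_ids):            # (batch, seq_len)
        x = self.embed(token_ids)
        _, (h, _) = self.bilstm(x)           # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=1)   # join both directions
        return self.fc(h)

model = HeaderBiLSTM()
logits = model(torch.randint(1, 20000, (4, 50)))  # 4 emails, 50 tokens each
```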
Quantum mechanics-based deep learning framework considering near-zero variance data | [
"Eunseo Oh",
"Hyunsoo Lee"
] | With the development of automation technology, big data is collected during operation processes, and among various machine learning analysis techniques using such data, deep neural network (DNN) has high analysis performance. However, most industrial data has low-variance or near-zero variance data from the refined processes in the collected data itself. This reduces deep learning analysis performance, which is affected by data quality. To overcome this, in this study, the weight learning pattern of an applied DNN is modeled as a stochastic differential equation (SDE) based on quantum mechanics. Through the drift and diffuse terms of quantum mechanics, the patterns of the DNN and data are quickly acquired, and the data with near-zero variance is effectively analyzed simultaneously. To demonstrate the superiority of the proposed framework, DNN analysis was performed using data with near-zero variance issues, and it was proved that the proposed framework is effective in processing near-zero variance data compared with other existing algorithms. | 10.1007/s10489-024-05465-3 | quantum mechanics-based deep learning framework considering near-zero variance data | with the development of automation technology, big data is collected during operation processes, and among various machine learning analysis techniques using such data, deep neural network (dnn) has high analysis performance. however, most industrial data has low-variance or near-zero variance data from the refined processes in the collected data itself. this reduces deep learning analysis performance, which is affected by data quality. to overcome this, in this study, the weight learning pattern of an applied dnn is modeled as a stochastic differential equation (sde) based on quantum mechanics. through the drift and diffuse terms of quantum mechanics, the patterns of the dnn and data are quickly acquired, and the data with near-zero variance is effectively analyzed simultaneously. to demonstrate the superiority of the proposed framework, dnn analysis was performed using data with near-zero variance issues, and it was proved that the proposed framework is effective in processing near-zero variance data compared with other existing algorithms. | [
"the development",
"automation technology",
"big data",
"operation processes",
"various machine",
"analysis techniques",
"such data",
"deep neural network",
"dnn",
"high analysis performance",
"most industrial data",
"low-variance or near-zero variance data",
"the refined processes",
"the collected data",
"itself",
"this",
"deep learning analysis performance",
"which",
"data quality",
"this",
"this study",
"the weight learning pattern",
"an applied dnn",
"a stochastic differential equation",
"sde",
"quantum mechanics",
"the drift",
"diffuse",
"terms",
"quantum mechanics",
"the patterns",
"the dnn",
"data",
"the data",
"near-zero variance",
"the superiority",
"the proposed framework",
"dnn analysis",
"data",
"near-zero variance issues",
"it",
"the proposed framework",
"near-zero variance data",
"other existing algorithms.graphical abstract",
"abstractwith",
"quantum mechanics"
] |
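The record above models weight learning as an SDE with drift and diffuse terms. One hedged reading of such an update is a Langevin-style step, where the gradient supplies the drift and injected Gaussian noise the diffusion; the sketch below illustrates that general idea only, not the paper's actual derivation:

```python
import torch

def sde_step(params, grads, lr=1e-3, diffusion=1e-2):
    """One Euler-Maruyama update: dw = -grad * dt + sigma * sqrt(dt) * dW.
    The gradient acts as the drift term; the noise acts as the diffusion."""
    with torch.no_grad():
        for w, g in zip(params, grads):
            noise = torch.randn_like(w)
            w -= lr * g                           # drift term
            w += diffusion * (lr ** 0.5) * noise  # diffusion term

# usage with any model, after loss.backward():
# sde_step([p for p in model.parameters()],
#          [p.grad for p in model.parameters()])
```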
Deep learning-based network intrusion detection in smart healthcare enterprise systems | [
"Vinayakumar Ravi"
] | Network-based intrusion detection (N-IDS) is an essential system inside an organization in a smart healthcare enterprise system to prevent the system and its networks from network attacks. A survey of the literature shows that in recent days deep learning approaches are employed successfully for N-IDS using network connections. However, finding the right features from a network connection is a daunting task. This work proposes a multidimensional attention-based deep learning approach for N-IDS that extracts the optimal features for intrusion detection using network payload. The proposed approach includes an embedding that transforms every word in the payload into a 100-dimensional feature vector representation and embedding follows deep learning layers such as a convolutional neural network (CNN) and long short-term memory (LSTM) with attention to extracting optimal features for attack classification. Next, the features of CNN and LSTM layers are concatenated and passed into fully connected layers for intrusion detection. The proposed approach showed 99% accuracy on the KISTI enterprise network payload dataset. In addition, the proposed approach showed 98% accuracy and 99% accuracy on network-based datasets such as KDDCup-99, CICIDS-2017, and WSN-DS and UNSW-NB15 respectively. The good experimental results on various network-based datasets suggest that the proposed N-IDS in smart healthcare enterprise systems is robust and generalizable to detect attacks from different network environments. The proposed approach performed better in all the experiments than the other deep learning-based methods. The model showed a 5% accuracy performance improvement compared to the existing study using the KISTI dataset. In addition, the proposed model has shown similar performances on the other intrusion datasets. The proposed approach serves as a network monitoring tool for efficient and accurate detection of attacks inside an organization on a healthcare enterprise network system. | 10.1007/s11042-023-17300-x | deep learning-based network intrusion detection in smart healthcare enterprise systems | network-based intrusion detection (n-ids) is an essential system inside an organization in a smart healthcare enterprise system to prevent the system and its networks from network attacks. a survey of the literature shows that in recent days deep learning approaches are employed successfully for n-ids using network connections. however, finding the right features from a network connection is a daunting task. this work proposes a multidimensional attention-based deep learning approach for n-ids that extracts the optimal features for intrusion detection using network payload. the proposed approach includes an embedding that transforms every word in the payload into a 100-dimensional feature vector representation and embedding follows deep learning layers such as a convolutional neural network (cnn) and long short-term memory (lstm) with attention to extracting optimal features for attack classification. next, the features of cnn and lstm layers are concatenated and passed into fully connected layers for intrusion detection. the proposed approach showed 99% accuracy on the kisti enterprise network payload dataset. in addition, the proposed approach showed 98% accuracy and 99% accuracy on network-based datasets such as kddcup-99, cicids-2017, and wsn-ds and unsw-nb15 respectively. 
the good experimental results on various network-based datasets suggest that the proposed n-ids in smart healthcare enterprise systems is robust and generalizable to detect attacks from different network environments. the proposed approach performed better in all the experiments than the other deep learning-based methods. the model showed a 5% accuracy performance improvement compared to the existing study using the kisti dataset. in addition, the proposed model has shown similar performances on the other intrusion datasets. the proposed approach serves as a network monitoring tool for efficient and accurate detection of attacks inside an organization on a healthcare enterprise network system. | [
"network-based intrusion detection",
"-ids",
"an essential system",
"an organization",
"a smart healthcare enterprise system",
"the system",
"its networks",
"network attacks",
"a survey",
"the literature",
"recent days",
"deep learning approaches",
"n-ids",
"network connections",
"the right features",
"a network connection",
"a daunting task",
"this work",
"a multidimensional attention-based deep learning approach",
"n-ids",
"that",
"the optimal features",
"intrusion detection",
"network payload",
"the proposed approach",
"that",
"every word",
"the payload",
"a 100-dimensional feature vector representation",
"embedding",
"deep learning layers",
"a convolutional neural network",
"cnn",
"long short-term memory",
"lstm",
"attention",
"optimal features",
"attack classification",
"the features",
"cnn",
"lstm layers",
"fully connected layers",
"intrusion detection",
"the proposed approach",
"99% accuracy",
"the kisti enterprise network payload dataset",
"addition",
"the proposed approach",
"98% accuracy",
"99% accuracy",
"network-based datasets",
"kddcup-99",
"cicids-2017",
"wsn-ds and unsw-nb15",
"the good experimental results",
"various network-based datasets",
"the proposed n-ids",
"smart healthcare enterprise systems",
"attacks",
"different network environments",
"the proposed approach",
"all the experiments",
"the other deep learning-based methods",
"the model",
"a 5% accuracy performance improvement",
"the existing study",
"the kisti dataset",
"addition",
"the proposed model",
"similar performances",
"the other intrusion datasets",
"the proposed approach",
"a network monitoring tool",
"efficient and accurate detection",
"attacks",
"an organization",
"a healthcare enterprise network system",
"smart healthcare",
"recent days",
"100",
"cnn",
"cnn",
"99%",
"98%",
"99%",
"kddcup-99",
"cicids-2017",
"smart healthcare enterprise",
"5%"
] |
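The pipeline described above embeds each payload word into 100 dimensions, runs parallel CNN and LSTM-with-attention branches, concatenates their features, and classifies through fully connected layers. A compact PyTorch sketch along those lines; the kernel size, hidden widths, and the exact attention form are assumptions:

```python
import torch
import torch.nn as nn

class PayloadIDS(nn.Module):
    def __init__(self, vocab=50000, embed=100, hidden=128, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed, padding_idx=0)
        self.conv = nn.Sequential(                 # CNN branch
            nn.Conv1d(embed, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)           # simple attention scores
        self.fc = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, ids):                        # ids: (batch, seq)
        x = self.embed(ids)                        # (batch, seq, embed)
        c = self.conv(x.transpose(1, 2)).squeeze(-1)   # CNN features
        h, _ = self.lstm(x)                        # (batch, seq, hidden)
        w = torch.softmax(self.attn(h), dim=1)     # attention weights
        a = (w * h).sum(dim=1)                     # attended LSTM features
        return self.fc(torch.cat([c, a], dim=1))   # fuse both branches

logits = PayloadIDS()(torch.randint(1, 50000, (4, 200)))
```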
Implementing Deep Learning-Based Intelligent Inspection for Investment Castings | [
"Nabhan Yousef",
"Amit Sata"
] | In this study, a user-friendly intelligent inspection device was developed employing deep learning techniques to effectively detect and characterize surface defects in investment castings. The inspection techniques encompassed a range from basic visual inspection to more specialized methods like liquid penetration and magnetic particle tests. The developed device leveraged convolutional neural networks (CNN), residual neural networks (ResNet), and recurrent neural networks (R-CNN) models, with a dataset of 3600 images depicting industrial castings, both defective and defect-free. A majority of the dataset was allocated for training (approximately 80%), while the remainder was used for testing. Among the deep learning models considered, ResNet exhibited superior accuracy in defect inspection, trailed by CNN and R-CNN. The resultant intelligent inspection device was successfully implemented in an industrial setting, enabling efficient defect identification in investment castings through the trained ResNet model. | 10.1007/s13369-023-08240-7 | implementing deep learning-based intelligent inspection for investment castings | in this study, a user-friendly intelligent inspection device was developed employing deep learning techniques to effectively detect and characterize surface defects in investment castings. the inspection techniques encompassed a range from basic visual inspection to more specialized methods like liquid penetration and magnetic particle tests. the developed device leveraged convolutional neural networks (cnn), residual neural networks (resnet), and recurrent neural networks (r-cnn) models, with a dataset of 3600 images depicting industrial castings, both defective and defect-free. a majority of the dataset was allocated for training (approximately 80%), while the remainder was used for testing. among the deep learning models considered, resnet exhibited superior accuracy in defect inspection, trailed by cnn and r-cnn. the resultant intelligent inspection device was successfully implemented in an industrial setting, enabling efficient defect identification in investment castings through the trained resnet model. | [
"this study",
"a user-friendly intelligent inspection device",
"deep learning techniques",
"surface defects",
"investment castings",
"the inspection techniques",
"a range",
"basic visual inspection",
"more specialized methods",
"liquid penetration",
"magnetic particle tests",
"the developed device leveraged convolutional neural networks",
"cnn",
"residual neural networks",
"resnet",
"recurrent neural networks",
"r-cnn) models",
"a dataset",
"3600 images",
"industrial castings",
"a majority",
"the dataset",
"training",
"approximately 80%",
"the remainder",
"testing",
"the deep learning models",
"resnet",
"superior accuracy",
"defect inspection",
"cnn",
"r-cnn",
"the resultant intelligent inspection device",
"an industrial setting",
"efficient defect identification",
"investment castings",
"the trained resnet model",
"cnn",
"3600",
"approximately 80%",
"cnn"
] |
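Fine-tuned transfer learning with a ResNet50 base, as in the record above, typically means freezing the pretrained backbone and retraining a new classification head first, then optionally unfreezing deeper blocks at a lower learning rate. A hedged sketch using torchvision's pretrained-weights API (torchvision >= 0.13 assumed); the preprocessing values are the standard ImageNet ones, not the paper's:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Fine-tune a pretrained ResNet50 head for defective vs. defect-free castings.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():       # freeze the convolutional backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```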
Discovery of a structural class of antibiotics with explainable deep learning | [
"Felix Wong",
"Erica J. Zheng",
"Jacqueline A. Valeri",
"Nina M. Donghia",
"Melis N. Anahtar",
"Satotaka Omori",
"Alicia Li",
"Andres Cubillos-Ruiz",
"Aarti Krishnan",
"Wengong Jin",
"Abigail L. Manson",
"Jens Friedrichs",
"Ralf Helbig",
"Behnoush Hajian",
"Dawid K. Fiejtek",
"Florence F. Wagner",
"Holly H. Soutter",
"Ashlee M. Earl",
"Jonathan M. Stokes",
"Lars D. Renner",
"James J. Collins"
] | The discovery of novel structural classes of antibiotics is urgently needed to address the ongoing antibiotic resistance crisis. Deep learning approaches have aided in exploring chemical spaces; these typically use black box models and do not provide chemical insights. Here we reasoned that the chemical substructures associated with antibiotic activity learned by neural network models can be identified and used to predict structural classes of antibiotics. We tested this hypothesis by developing an explainable, substructure-based approach for the efficient, deep learning-guided exploration of chemical spaces. We determined the antibiotic activities and human cell cytotoxicity profiles of 39,312 compounds and applied ensembles of graph neural networks to predict antibiotic activity and cytotoxicity for 12,076,365 compounds. Using explainable graph algorithms, we identified substructure-based rationales for compounds with high predicted antibiotic activity and low predicted cytotoxicity. We empirically tested 283 compounds and found that compounds exhibiting antibiotic activity against Staphylococcus aureus were enriched in putative structural classes arising from rationales. Of these structural classes of compounds, one is selective against methicillin-resistant S. aureus (MRSA) and vancomycin-resistant enterococci, evades substantial resistance, and reduces bacterial titres in mouse models of MRSA skin and systemic thigh infection. Our approach enables the deep learning-guided discovery of structural classes of antibiotics and demonstrates that machine learning models in drug discovery can be explainable, providing insights into the chemical substructures that underlie selective antibiotic activity. | 10.1038/s41586-023-06887-8 | discovery of a structural class of antibiotics with explainable deep learning | the discovery of novel structural classes of antibiotics is urgently needed to address the ongoing antibiotic resistance crisis. deep learning approaches have aided in exploring chemical spaces; these typically use black box models and do not provide chemical insights. here we reasoned that the chemical substructures associated with antibiotic activity learned by neural network models can be identified and used to predict structural classes of antibiotics. we tested this hypothesis by developing an explainable, substructure-based approach for the efficient, deep learning-guided exploration of chemical spaces. we determined the antibiotic activities and human cell cytotoxicity profiles of 39,312 compounds and applied ensembles of graph neural networks to predict antibiotic activity and cytotoxicity for 12,076,365 compounds. using explainable graph algorithms, we identified substructure-based rationales for compounds with high predicted antibiotic activity and low predicted cytotoxicity. we empirically tested 283 compounds and found that compounds exhibiting antibiotic activity against staphylococcus aureus were enriched in putative structural classes arising from rationales. of these structural classes of compounds, one is selective against methicillin-resistant s. aureus (mrsa) and vancomycin-resistant enterococci, evades substantial resistance, and reduces bacterial titres in mouse models of mrsa skin and systemic thigh infection. 
our approach enables the deep learning-guided discovery of structural classes of antibiotics and demonstrates that machine learning models in drug discovery can be explainable, providing insights into the chemical substructures that underlie selective antibiotic activity. | [
"the discovery",
"novel structural classes",
"antibiotics",
"the ongoing antibiotic resistance crisis1,2,3,4,5,6,7,8,9",
"deep learning approaches",
"chemical spaces1,10,11,12,13,14,15",
"these",
"black box models",
"chemical insights",
"we",
"the chemical substructures",
"antibiotic activity",
"neural network models",
"structural classes",
"antibiotics",
"we",
"this hypothesis",
"an explainable, substructure-based approach",
"the efficient, deep learning-guided exploration",
"chemical spaces",
"we",
"the antibiotic activities",
"human cell cytotoxicity profiles",
"39,312 compounds",
"applied ensembles",
"graph neural networks",
"antibiotic activity",
"cytotoxicity",
"12,076,365 compounds",
"explainable graph algorithms",
"we",
"substructure-based rationales",
"compounds",
"high predicted antibiotic activity",
"cytotoxicity",
"we",
"283 compounds",
"compounds",
"antibiotic activity",
"staphylococcus aureus",
"putative structural classes",
"rationales",
"these structural classes",
"compounds",
"methicillin-resistant s. aureus",
"(mrsa",
"vancomycin-resistant enterococci",
"substantial resistance",
"bacterial titres",
"mouse models",
"mrsa skin",
"systemic thigh infection",
"our approach",
"the deep learning-guided discovery",
"structural classes",
"antibiotics",
"machine learning models",
"drug discovery",
"insights",
"the chemical substructures",
"that",
"selective antibiotic activity",
"deep learning",
"39,312",
"12,076,365",
"283",
"methicillin",
"s. aureus"
] |
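The screening described above rests on ensembles of graph neural networks that map a molecular graph to an antibiotic-activity score. The sketch below is a deliberately minimal message-passing layer in plain PyTorch that illustrates the general mechanism only; the paper's actual models, and its explainable substructure rationales, are far richer:

```python
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """One round of neighborhood averaging, graph-level mean pooling,
    and an activity head, for a single molecular graph."""
    def __init__(self, in_feats=16, hidden=32):
        super().__init__()
        self.msg = nn.Linear(in_feats, hidden)
        self.out = nn.Linear(hidden, 1)   # predicted activity score

    def forward(self, x, adj):
        # x: (num_atoms, in_feats) node features
        # adj: (num_atoms, num_atoms) adjacency with self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = torch.relu(self.msg(adj @ x / deg))        # average neighbors
        return torch.sigmoid(self.out(h.mean(dim=0)))  # pool atoms -> score

atoms, feats = 12, 16
adj = torch.eye(atoms)   # self-loops only; real bond edges would be added
score = TinyGNN()(torch.randn(atoms, feats), adj)
```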
Incremental–decremental data transformation based ensemble deep learning model (IDT-eDL) for temperature prediction | [
"Vipin Kumar",
"Rana Kumar"
] | Human life heavily depends on weather conditions, which affect the necessary operations like agriculture, aviation, tourism, industries, etc., where the temperature plays a vital role in deciding the weather conditions along with other meteorological variables. Therefore, temperature forecasting has drawn considerable attention from researchers because of its significant effect on daily life activities and the ever-challenging forecasting task. These research objectives are to investigate the transformation of data based on incremental and decremental approaches and to find the practical ensemble approach over proposed models for effective temperature prediction, where the proposed model is called the Incremental–Decremental Data Transformation-Based Ensemble Deep Learning Model (IDT-eDL). The temperature dataset from Delhi, India, has been utilized to compare proposed and traditional deep learning models over various performance measures. The proposed IDT-eDL with BiLSTM deep learning model (i.e., IDT-eDL_BiLSTM) has performed the best among the proposed models and traditional deep learning model and achieved performance over the measures MSE: 1.36, RMSE: 1.16, MAE: 0.89, MAPE: 4.13, and R²: 0.999. Additionally, non-parametric statistical analysis of Friedman ranking is also performed to validate the effectiveness of the proposed IDT-eDL model, which also shows a higher ranking of the proposed model than the traditional deep learning models. | 10.1007/s40808-024-01953-0 | incremental–decremental data transformation based ensemble deep learning model (idt-edl) for temperature prediction | human life heavily depends on weather conditions, which affect the necessary operations like agriculture, aviation, tourism, industries, etc., where the temperature plays a vital role in deciding the weather conditions along with other meteorological variables. therefore, temperature forecasting has drawn considerable attention from researchers because of its significant effect on daily life activities and the ever-challenging forecasting task. these research objectives are to investigate the transformation of data based on incremental and decremental approaches and to find the practical ensemble approach over proposed models for effective temperature prediction, where the proposed model is called the incremental–decremental data transformation-based ensemble deep learning model (idt-edl). the temperature dataset from delhi, india, has been utilized to compare proposed and traditional deep learning models over various performance measures. the proposed idt-edl with bilstm deep learning model (i.e., idt-edl_bilstm) has performed the best among the proposed models and traditional deep learning model and achieved performance over the measures mse: 1.36, rmse: 1.16, mae: 0.89, mape: 4.13, and r²: 0.999. additionally, non-parametric statistical analysis of friedman ranking is also performed to validate the effectiveness of the proposed idt-edl model, which also shows a higher ranking of the proposed model than the traditional deep learning models. | [
"human life",
"weather conditions",
"which",
"the necessary operations",
"agriculture",
"aviation",
"tourism",
"industries",
"the temperature",
"a vital role",
"the weather conditions",
"other meteorological variables",
"temperature forecasting",
"considerable attention",
"researchers",
"its significant effect",
"daily life activities",
"the ever-challenging forecasting task",
"these research objectives",
"the transformation",
"data",
"incremental and decremental approaches",
"the practical ensemble approach",
"proposed models",
"effective temperature prediction",
"the proposed model",
"the incremental–decremental data transformation-based ensemble deep learning model",
"idt-edl",
"the temperature",
"delhi",
"india",
"proposed and traditional deep learning models",
"various performance measures",
"the proposed idt-edl",
"bilstm deep learning model",
"i.e., idt-edl_bilstm",
"the proposed models",
"traditional deep learning model",
"performance",
"measures mse",
"rmse",
"mape",
"\\(r^2\\):0.999",
"non-parametric statistical analysis",
"friedman ranking",
"the effectiveness",
"the proposed idt-edl model",
"which",
"a higher ranking",
"the proposed model",
"the traditional deep learning models",
"delhi",
"india",
"1.36",
"rmse",
"1.16",
"0.89",
"4.13"
] |
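Setting aside the incremental–decremental transformation itself (the record's main contribution, not reproduced here), the best-performing component above is a BiLSTM that maps a window of past temperatures to the next value. A minimal sliding-window BiLSTM regressor in PyTorch; the window length and layer widths are assumptions:

```python
import torch
import torch.nn as nn

class BiLSTMForecaster(nn.Module):
    """Predict the next temperature from a window of past readings."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, window):           # window: (batch, steps, 1)
        h, _ = self.lstm(window)
        return self.head(h[:, -1, :])    # regress from the last time step

def make_windows(series, steps=30):
    """Turn a 1-D series into (window, next-value) training pairs."""
    xs = torch.stack([series[i:i + steps]
                      for i in range(len(series) - steps)])
    ys = series[steps:]
    return xs.unsqueeze(-1), ys.unsqueeze(-1)

x, y = make_windows(torch.randn(365))    # a year of dummy daily readings
pred = BiLSTMForecaster()(x)
loss = nn.MSELoss()(pred, y)
```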
Deep learning systems for forecasting the prices of crude oil and precious metals | [
"Parisa Foroutan",
"Salim Lahmiri"
] | Commodity markets, such as crude oil and precious metals, play a strategic role in the economic development of nations, with crude oil prices influencing geopolitical relations and the global economy. Moreover, gold and silver are argued to hedge the stock and cryptocurrency markets during market downsides. Therefore, accurate forecasting of crude oil and precious metals prices is critical. Nevertheless, due to the nonlinear nature, substantial fluctuations, and irregular cycles of crude oil and precious metals, predicting their prices is a challenging task. Our study contributes to the commodity market price forecasting literature by implementing and comparing advanced deep-learning models. We address this gap by including silver alongside gold in our analysis, offering a more comprehensive understanding of the precious metal markets. This research expands existing knowledge and provides valuable insights into predicting commodity prices. In this study, we implemented 16 deep- and machine-learning models to forecast the daily price of the West Texas Intermediate (WTI), Brent, gold, and silver markets. The employed deep-learning models are long short-term memory (LSTM), BiLSTM, gated recurrent unit (GRU), bidirectional gated recurrent units (BiGRU), T2V-BiLSTM, T2V-BiGRU, convolutional neural networks (CNN), CNN-BiLSTM, CNN-BiGRU, temporal convolutional network (TCN), TCN-BiLSTM, and TCN-BiGRU. We compared the forecasting performance of deep-learning models with the baseline random forest, LightGBM, support vector regression, and k-nearest neighborhood models using mean absolute error (MAE), mean absolute percentage error, and root mean squared error as evaluation criteria. By considering different sliding window lengths, we examine the forecasting performance of our models. Our results reveal that the TCN model outperforms the others for WTI, Brent, and silver, achieving the lowest MAE values of 1.444, 1.295, and 0.346, respectively. The BiGRU model performs best for gold, with an MAE of 15.188 using a 30-day input sequence. Furthermore, LightGBM exhibits comparable performance to TCN and is the best-performing machine-learning model overall. These findings are critical for investors, policymakers, mining companies, and governmental agencies to effectively anticipate market trends, mitigate risk, manage uncertainty, and make timely decisions and strategies regarding crude oil, gold, and silver markets. | 10.1186/s40854-024-00637-z | deep learning systems for forecasting the prices of crude oil and precious metals | commodity markets, such as crude oil and precious metals, play a strategic role in the economic development of nations, with crude oil prices influencing geopolitical relations and the global economy. moreover, gold and silver are argued to hedge the stock and cryptocurrency markets during market downsides. therefore, accurate forecasting of crude oil and precious metals prices is critical. nevertheless, due to the nonlinear nature, substantial fluctuations, and irregular cycles of crude oil and precious metals, predicting their prices is a challenging task. our study contributes to the commodity market price forecasting literature by implementing and comparing advanced deep-learning models. we address this gap by including silver alongside gold in our analysis, offering a more comprehensive understanding of the precious metal markets. this research expands existing knowledge and provides valuable insights into predicting commodity prices. 
in this study, we implemented 16 deep- and machine-learning models to forecast the daily price of the west texas intermediate (wti), brent, gold, and silver markets. the employed deep-learning models are long short-term memory (lstm), bilstm, gated recurrent unit (gru), bidirectional gated recurrent units (bigru), t2v-bilstm, t2v-bigru, convolutional neural networks (cnn), cnn-bilstm, cnn-bigru, temporal convolutional network (tcn), tcn-bilstm, and tcn-bigru. we compared the forecasting performance of deep-learning models with the baseline random forest, lightgbm, support vector regression, and k-nearest neighborhood models using mean absolute error (mae), mean absolute percentage error, and root mean squared error as evaluation criteria. by considering different sliding window lengths, we examine the forecasting performance of our models. our results reveal that the tcn model outperforms the others for wti, brent, and silver, achieving the lowest mae values of 1.444, 1.295, and 0.346, respectively. the bigru model performs best for gold, with an mae of 15.188 using a 30-day input sequence. furthermore, lightgbm exhibits comparable performance to tcn and is the best-performing machine-learning model overall. these findings are critical for investors, policymakers, mining companies, and governmental agencies to effectively anticipate market trends, mitigate risk, manage uncertainty, and make timely decisions and strategies regarding crude oil, gold, and silver markets. | [
"commodity markets",
"crude oil",
"precious metals",
"a strategic role",
"the economic development",
"nations",
"crude oil prices",
"geopolitical relations",
"the global economy",
"gold",
"silver",
"the stock and cryptocurrency markets",
"accurate forecasting",
"crude oil",
"precious metals prices",
"the nonlinear nature",
"substantial fluctuations",
"irregular cycles",
"crude oil",
"precious metals",
"their prices",
"a challenging task",
"our study",
"the commodity market price forecasting literature",
"advanced deep-learning models",
"we",
"this gap",
"silver",
"gold",
"our analysis",
"a more comprehensive understanding",
"the precious metal markets",
"this research",
"existing knowledge",
"valuable insights",
"commodity prices",
"this study",
"we",
"16 deep- and machine-learning models",
"the daily price",
"the west texas intermediate",
"(wti",
"brent",
"gold",
"silver markets",
"the employed deep-learning models",
"long short-term memory",
"lstm",
"bilstm",
"gated recurrent unit",
"gru",
"bidirectional gated recurrent units",
"bigru",
"t2v-bilstm",
"t2v-bigru",
"convolutional neural networks",
"cnn",
"cnn-bilstm",
"cnn-bigru, temporal convolutional network",
"tcn",
"tcn-bilstm",
"tcn-bigru",
"we",
"the forecasting performance",
"deep-learning models",
"the baseline random forest",
"lightgbm",
"vector regression",
"k-nearest neighborhood models",
"mean absolute error",
"mae",
"absolute percentage error",
"root mean squared error",
"evaluation criteria",
"different sliding window lengths",
"we",
"the forecasting performance",
"our models",
"our results",
"the tcn model",
"the others",
"wti",
"brent",
"silver",
"the lowest mae values",
"the bigru model",
"gold",
"an mae",
"a 30-day input sequence",
"lightgbm",
"comparable performance",
"the best-performing machine-learning model",
"these findings",
"investors",
"policymakers",
"mining companies",
"governmental agencies",
"market trends",
"mitigate risk",
"uncertainty",
"timely decisions",
"strategies",
"crude oil",
"gold",
"silver markets",
"16",
"daily",
"west texas",
"cnn",
"cnn",
"cnn",
"1.444",
"1.295",
"0.346",
"15.188",
"30-day"
] |
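The TCN that wins on WTI, Brent, and silver above is built from causal, dilated 1-D convolutions: padding is applied on the left only, so each output sees the past but never the future. A hedged single-stack sketch in PyTorch; the channel count and dilation schedule are assumptions:

```python
import torch
import torch.nn as nn

class CausalBlock(nn.Module):
    """Dilated causal conv: left-pad then convolve, so the output at
    time t depends only on inputs at times <= t."""
    def __init__(self, channels, kernel=3, dilation=1):
        super().__init__()
        self.pad = (kernel - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                # x: (batch, channels, time)
        y = nn.functional.pad(x, (self.pad, 0))   # pad the left side only
        return self.relu(self.conv(y)) + x        # residual connection

class TinyTCN(nn.Module):
    def __init__(self, channels=16, levels=4):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, 1)
        self.blocks = nn.Sequential(
            *[CausalBlock(channels, dilation=2 ** i) for i in range(levels)])
        self.head = nn.Linear(channels, 1)

    def forward(self, x):                # x: (batch, time, 1) price window
        h = self.blocks(self.inp(x.transpose(1, 2)))
        return self.head(h[:, :, -1])    # next-day price from the last step

pred = TinyTCN()(torch.randn(8, 30, 1))  # 8 windows of 30 trading days
```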
Deep learning-based weld defect classification using VGG16 transfer learning adaptive fine-tuning | [
"Samuel Kumaresan",
"K. S. Jai Aultrin",
"S. S. Kumar",
"M. Dev Anand"
] | Welding is a vital joining process; however, occurrences of weld defects often degrade the quality of the welded part. The risk of occurrence of a variety of defects has led to the development of advanced weld defects detection systems such as automated weld defects detection and classification. The present work is a novel approach that proposes and investigates a unique image-centered method based on a deep learning model trained by a small X-ray image dataset. A data augmentation method able to process images on the go was used to offset the limitation of the small X-ray dataset. Fine-tuned transfer learning techniques were used to train two convolutional neural network based architectures with VGG16 and ResNet50 as the base models for the augmented sets. Out of the networks we fine-tuned, VGG16 based model performed well with a relatively higher average accuracy of 90%. Even though the small dataset was spread across 15 different classes in an unbalanced way, the learning curves showed acceptable model generalization characteristics. | 10.1007/s12008-023-01327-3 | deep learning-based weld defect classification using vgg16 transfer learning adaptive fine-tuning | welding is a vital joining process; however, occurrences of weld defects often degrade the quality of the welded part. the risk of occurrence of a variety of defects has led to the development of advanced weld defects detection systems such as automated weld defects detection and classification. the present work is a novel approach that proposes and investigates a unique image-centered method based on a deep learning model trained by a small x-ray image dataset. a data augmentation method able to process images on the go was used to offset the limitation of the small x-ray dataset. fine-tuned transfer learning techniques were used to train two convolutional neural network based architectures with vgg16 and resnet50 as the base models for the augmented sets. out of the networks we fine-tuned, vgg16 based model performed well with a relatively higher average accuracy of 90%. even though the small dataset was spread across 15 different classes in an unbalanced way, the learning curves showed acceptable model generalization characteristics. | [
"welding",
"a vital joining process",
"occurrences",
"weld defects",
"the quality",
"the welded part",
"the risk",
"occurrence",
"a variety",
"defects",
"the development",
"advanced weld defects detection systems",
"automated weld defects detection",
"classification",
"the present work",
"a novel approach",
"that",
"a unique image-centered method",
"a deep learning model",
"a small x-ray image dataset",
"a data augmentation method",
"images",
"the go",
"the limitation",
"the small x-ray dataset",
"fine-tuned transfer learning techniques",
"two convolutional neural network based architectures",
"vgg16",
"resnet50",
"the base models",
"the augmented sets",
"the networks",
"we",
"vgg16 based model",
"a relatively higher average accuracy",
"90%",
"the small dataset",
"15 different classes",
"an unbalanced way",
"the learning curves",
"acceptable model generalization characteristics",
"two",
"resnet50",
"90%",
"15"
] |
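On-the-fly augmentation plus a fine-tuned VGG16 head, as in the record above, is a standard way to stretch a small X-ray dataset across 15 classes. A hedged torchvision sketch (torchvision >= 0.13 assumed); the specific augmentations and the paper's adaptive fine-tuning schedule are not reproduced here:

```python
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([           # processes images "on the go"
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():    # freeze the convolutional base
    p.requires_grad = False
# Replace the final classifier layer: 15 weld-defect classes.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 15)
```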
Transfer learning for emotion detection in conversational text: a hybrid deep learning approach with pre-trained embeddings | [
"Sheetal Kusal",
"Shruti Patil",
"Jyoti Choudrie",
"Ketan Kotecha",
"Deepali Vora"
] | Understanding the emotions and sentiments from conversations has relevance in many application areas. Specifically, conversational agents, question-answering systems, or areas where natural language inference is used. Therefore, techniques to detect emotions from conversations have become the need of the moment. The convolutional network and recurrent networks have shown different capabilities in text representation. This work proposes a hybrid deep learning network based on the convolutional-recurrent network used to detect the emotions of people based on conversational text. A convolutional network has the ability to capture local patterns and relationships and is inherently shift-invariant. At the same time, the recurrent network captures long-range dependencies in sequential information. This work also utilises the power of transfer learning by employing pre-trained embeddings from Neural Network Language Model models. These pre-trained representations, generated from vast text corpora, encode rich semantic information about words. This study investigates a novel approach towards text-based emotion detection using pre-trained Neural Network Language Model embeddings with hybrid convolutional-recurrent architecture. The proposed hybrid experimental setup has been evaluated on the Empathetic Dialogues dataset and contrasted with the state-of-the-art works. A comparative analysis reveals that the proposed Convolutional Neural Network with a Bidirectional Gated Recurrent Unit hybrid approach with Neural Network Language Model embeddings achieves superior performance and accuracy. | 10.1007/s41870-024-02027-1 | transfer learning for emotion detection in conversational text: a hybrid deep learning approach with pre-trained embeddings | understanding the emotions and sentiments from conversations has relevance in many application areas. specifically, conversational agents, question-answering systems, or areas where natural language inference is used. therefore, techniques to detect emotions from conversations have become the need of the moment. the convolutional network and recurrent networks have shown different capabilities in text representation. this work proposes a hybrid deep learning network based on the convolutional-recurrent network used to detect the emotions of people based on conversational text. a convolutional network has the ability to capture local patterns and relationships and is inherently shift-invariant. at the same time, the recurrent network captures long-range dependencies in sequential information. this work also utilises the power of transfer learning by employing pre-trained embeddings from neural network language model models. these pre-trained representations, generated from vast text corpora, encode rich semantic information about words. this study investigates a novel approach towards text-based emotion detection using pre-trained neural network language model embeddings with hybrid convolutional-recurrent architecture. the proposed hybrid experimental setup has been evaluated on the empathetic dialogues dataset and contrasted with the state-of-the-art works. a comparative analysis reveals that the proposed convolutional neural network with a bidirectional gated recurrent unit hybrid approach with neural network language model embeddings achieves superior performance and accuracy. | [
"the emotions",
"sentiments",
"conversations",
"relevance",
"many application areas",
"specifically, conversational agents",
"question-answering systems",
"areas",
"natural language inference",
"techniques",
"emotions",
"conversations",
"the need",
"the moment",
"the convolutional network",
"recurrent networks",
"different capabilities",
"text representation",
"this work",
"a hybrid deep learning network",
"the convolutional-recurrent network",
"the emotions",
"people",
"conversational text",
"a convolutional network",
"the ability",
"local patterns",
"relationships",
"the same time",
"the recurrent network",
"long-range dependencies",
"sequential information",
"this work",
"the power",
"pre-trained embeddings",
"neural network language model models",
"these pre-trained representations",
"vast text corpora",
"encode rich semantic information",
"words",
"this study",
"a novel approach",
"text-based emotion detection",
"pre-trained neural network language model embeddings",
"hybrid convolutional-recurrent architecture",
"the proposed hybrid experimental setup",
"the empathetic dialogues",
"the-art",
"a comparative analysis",
"the proposed convolutional neural network",
"a bidirectional gated recurrent unit hybrid approach",
"neural network language model embeddings",
"superior performance",
"accuracy",
"corpora",
"encode rich semantic information"
] |
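The winning configuration above pairs a convolutional branch (local patterns) with a bidirectional GRU branch (long-range dependencies) on top of pre-trained embeddings. A minimal PyTorch sketch of such a hybrid; the NNLM embeddings are replaced by a plain trainable table, and the class count is a placeholder:

```python
import torch
import torch.nn as nn

class CNNBiGRU(nn.Module):
    def __init__(self, vocab=30000, embed=128, hidden=64, classes=32):
        super().__init__()
        # A trainable table stands in for the pre-trained NNLM embeddings.
        self.embed = nn.Embedding(vocab, embed, padding_idx=0)
        self.conv = nn.Conv1d(embed, hidden, kernel_size=3, padding=1)
        self.gru = nn.GRU(embed, hidden, batch_first=True,
                          bidirectional=True)
        self.fc = nn.Linear(hidden + 2 * hidden, classes)

    def forward(self, ids):                       # ids: (batch, seq)
        x = self.embed(ids)
        c = torch.relu(self.conv(x.transpose(1, 2))).max(dim=2).values
        h, _ = self.gru(x)                        # (batch, seq, 2*hidden)
        g = h[:, -1, :]                           # last step, both directions
        return self.fc(torch.cat([c, g], dim=1))  # fuse both branches

logits = CNNBiGRU()(torch.randint(1, 30000, (4, 40)))
```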
Deep-learning model for evaluating histopathology of acute renal tubular injury | [
"Thi Thuy Uyen Nguyen",
"Anh-Tien Nguyen",
"Hyeongwan Kim",
"Yu Jin Jung",
"Woong Park",
"Kyoung Min Kim",
"Ilwoo Park",
"Won Kim"
] | Tubular injury is the most common cause of acute kidney injury. Histopathological diagnosis may help distinguish between the different types of acute kidney injury and aid in treatment. To date, a limited number of study has used deep-learning models to assist in the histopathological diagnosis of acute kidney injury. This study aimed to perform histopathological segmentation to identify the four structures of acute renal tubular injury using deep-learning models. A segmentation model was used to classify tubule-specific injuries following cisplatin treatment. A total of 45 whole-slide images with 400 generated patches were used in the segmentation model, and 27,478 annotations were created for four classes: glomerulus, healthy tubules, necrotic tubules, and tubules with casts. A segmentation model was developed using the DeepLabV3 architecture with a MobileNetv3-Large backbone to accurately identify the four histopathological structures associated with acute renal tubular injury in PAS-stained mouse samples. In the segmentation model for four structures, the highest Intersection over Union and the Dice coefficient were obtained for the segmentation of the “glomerulus” class, followed by “necrotic tubules,” “healthy tubules,” and “tubules with cast” classes. The overall performance of the segmentation algorithm for all classes in the test set included an Intersection over Union of 0.7968 and a Dice coefficient of 0.8772. The Dice scores for the glomerulus, healthy tubules, necrotic tubules, and tubules with cast are 91.78 ± 11.09, 87.37 ± 4.02, 88.08 ± 6.83, and 83.64 ± 20.39%, respectively. The utilization of deep learning in a predictive model has demonstrated promising performance in accurately identifying the degree of injured renal tubules. These results may provide new opportunities for the application of the proposed methods to evaluate renal pathology more effectively. | 10.1038/s41598-024-58506-9 | deep-learning model for evaluating histopathology of acute renal tubular injury | tubular injury is the most common cause of acute kidney injury. histopathological diagnosis may help distinguish between the different types of acute kidney injury and aid in treatment. to date, a limited number of study has used deep-learning models to assist in the histopathological diagnosis of acute kidney injury. this study aimed to perform histopathological segmentation to identify the four structures of acute renal tubular injury using deep-learning models. a segmentation model was used to classify tubule-specific injuries following cisplatin treatment. a total of 45 whole-slide images with 400 generated patches were used in the segmentation model, and 27,478 annotations were created for four classes: glomerulus, healthy tubules, necrotic tubules, and tubules with casts. a segmentation model was developed using the deeplabv3 architecture with a mobilenetv3-large backbone to accurately identify the four histopathological structures associated with acute renal tubular injury in pas-stained mouse samples. in the segmentation model for four structures, the highest intersection over union and the dice coefficient were obtained for the segmentation of the “glomerulus” class, followed by “necrotic tubules,” “healthy tubules,” and “tubules with cast” classes. the overall performance of the segmentation algorithm for all classes in the test set included an intersection over union of 0.7968 and a dice coefficient of 0.8772. 
the dice scores for the glomerulus, healthy tubules, necrotic tubules, and tubules with cast are 91.78 ± 11.09, 87.37 ± 4.02, 88.08 ± 6.83, and 83.64 ± 20.39%, respectively. the utilization of deep learning in a predictive model has demonstrated promising performance in accurately identifying the degree of injured renal tubules. these results may provide new opportunities for the application of the proposed methods to evaluate renal pathology more effectively. | [
"tubular injury",
"the most common cause",
"acute kidney injury",
"histopathological diagnosis",
"the different types",
"acute kidney injury",
"aid",
"treatment",
"date",
"a limited number",
"study",
"deep-learning models",
"the histopathological diagnosis",
"acute kidney injury",
"this study",
"histopathological segmentation",
"the four structures",
"acute renal tubular injury",
"deep-learning models",
"a segmentation model",
"tubule-specific injuries",
"cisplatin treatment",
"a total",
"45 whole-slide images",
"400 generated patches",
"the segmentation model",
"27,478 annotations",
"four classes",
"healthy tubules",
"necrotic tubules",
"tubules",
"casts",
"a segmentation model",
"the deeplabv3 architecture",
"a mobilenetv3-large backbone",
"the four histopathological structures",
"acute renal tubular injury",
"pas-stained mouse samples",
"the segmentation model",
"four structures",
"the highest intersection",
"union",
"the dice coefficient",
"the segmentation",
"the “glomerulus” class",
"necrotic tubules",
"” “healthy tubules",
"“tubules",
"cast” classes",
"the overall performance",
"the segmentation algorithm",
"all classes",
"an intersection",
"union",
"a dice coefficient",
"the dice scores",
"the glomerulus, healthy tubules",
"necrotic tubules",
"tubules",
"cast",
"91.78 ±",
"±",
"±",
"83.64 ±",
"20.39%",
"the utilization",
"deep learning",
"a predictive model",
"promising performance",
"the degree",
"injured renal tubules",
"these results",
"new opportunities",
"the application",
"the proposed methods",
"renal pathology",
"four",
"45",
"400",
"27,478",
"four",
"mobilenetv3",
"four",
"four",
"0.7968",
"0.8772",
"91.78",
"11.09",
"87.37",
"4.02",
"88.08",
"6.83",
"83.64",
"20.39%"
] |
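A minimal sketch (not the authors' released code) of the segmentation setup this record describes: DeepLabV3 with a MobileNetV3-Large backbone plus a per-class Dice metric. The class count (background plus the four annotated structures), the 512×512 patch size, and the untrained weights are assumptions.

```python
# Sketch of the record's segmentation setup: DeepLabV3 + MobileNetV3-Large
# with a per-class Dice metric. Class count (background + four structures)
# and the 512x512 patch size are assumptions.
import torch
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

NUM_CLASSES = 5  # assumed: background, glomerulus, healthy, necrotic, cast tubules
model = deeplabv3_mobilenet_v3_large(weights=None, num_classes=NUM_CLASSES).eval()

def dice_per_class(pred: torch.Tensor, target: torch.Tensor, cls: int) -> float:
    """Dice coefficient for one class, given integer label maps."""
    p, t = pred == cls, target == cls
    denom = p.sum().item() + t.sum().item()
    return 2.0 * (p & t).sum().item() / denom if denom else 1.0

with torch.no_grad():
    x = torch.randn(1, 3, 512, 512)       # stand-in for a PAS-stained patch
    pred = model(x)["out"].argmax(dim=1)  # (1, H, W) predicted label map
    print([round(dice_per_class(pred, pred, c), 3) for c in range(NUM_CLASSES)])
```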
Deep Learning–Assisted Identification of Femoroacetabular Impingement (FAI) on Routine Pelvic Radiographs | [
"Michael K. Hoy",
"Vishal Desai",
"Simukayi Mutasa",
"Robert C. Hoy",
"Richard Gorniak",
"Jeffrey A. Belair"
] | To use a novel deep learning system to localize the hip joints and detect findings of cam-type femoroacetabular impingement (FAI). A retrospective search of hip/pelvis radiographs obtained in patients to evaluate for FAI yielded 3050 total studies. Each hip was classified separately by the original interpreting radiologist in the following manner: 724 hips had severe cam-type FAI morphology, 962 moderate cam-type FAI morphology, 846 mild cam-type FAI morphology, and 518 hips were normal. The anteroposterior (AP) view from each study was anonymized and extracted. After localization of the hip joints by a novel convolutional neural network (CNN) based on the focal loss principle, a second CNN classified the images of the hip as cam positive, or no FAI. Accuracy was 74% for diagnosing normal vs. abnormal cam-type FAI morphology, with aggregate sensitivity and specificity of 0.821 and 0.669, respectively, at the chosen operating point. The aggregate AUC was 0.736. A deep learning system can be applied to detect FAI-related changes on single view pelvic radiographs. Deep learning is useful for quickly identifying and categorizing pathology on imaging, which may aid the interpreting radiologist. | 10.1007/s10278-023-00920-y | deep learning–assisted identification of femoroacetabular impingement (fai) on routine pelvic radiographs | to use a novel deep learning system to localize the hip joints and detect findings of cam-type femoroacetabular impingement (fai). a retrospective search of hip/pelvis radiographs obtained in patients to evaluate for fai yielded 3050 total studies. each hip was classified separately by the original interpreting radiologist in the following manner: 724 hips had severe cam-type fai morphology, 962 moderate cam-type fai morphology, 846 mild cam-type fai morphology, and 518 hips were normal. the anteroposterior (ap) view from each study was anonymized and extracted. after localization of the hip joints by a novel convolutional neural network (cnn) based on the focal loss principle, a second cnn classified the images of the hip as cam positive, or no fai. accuracy was 74% for diagnosing normal vs. abnormal cam-type fai morphology, with aggregate sensitivity and specificity of 0.821 and 0.669, respectively, at the chosen operating point. the aggregate auc was 0.736. a deep learning system can be applied to detect fai-related changes on single view pelvic radiographs. deep learning is useful for quickly identifying and categorizing pathology on imaging, which may aid the interpreting radiologist. | [
"a novel deep learning system",
"the hip joints",
"findings",
"cam-type femoroacetabular impingement (fai",
"a retrospective search",
"hip/pelvis radiographs",
"patients",
"fai",
"3050 total studies",
"each hip",
"the original interpreting radiologist",
"the following manner",
"724 hips",
"severe cam-type fai morphology",
"962 moderate cam-type fai morphology",
"846 mild cam-type fai morphology",
"518 hips",
"the anteroposterior (ap) view",
"each study",
"localization",
"the hip joints",
"a novel convolutional neural network",
"cnn",
"the focal loss principle",
"a second cnn",
"the images",
"the hip",
"cam",
"no fai. accuracy",
"74%",
"abnormal cam-type fai morphology",
"aggregate sensitivity",
"specificity",
"the chosen operating point",
"the aggregate auc",
"a deep learning system",
"fai-related changes",
"single view pelvic radiographs",
"deep learning",
"categorizing pathology",
"imaging",
"which",
"the interpreting radiologist",
"3050",
"724",
"962",
"846",
"518",
"cnn",
"second",
"cnn",
"accuracy",
"74%",
"abnormal cam-type",
"0.821",
"0.669",
"0.736"
] |
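The record above says hip localization uses a CNN based on the focal loss principle. Below is a minimal, generic binary focal loss sketch; the alpha and gamma defaults are the common RetinaNet values, assumed rather than taken from the paper.

```python
# Generic binary focal loss over raw logits (not the paper's exact formulation);
# alpha = 0.25 and gamma = 2.0 are common defaults, assumed here.
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)             # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()     # down-weights easy cases

logits = torch.randn(4)                        # e.g. per-hip cam-positive scores
targets = torch.tensor([1.0, 0.0, 1.0, 0.0])   # cam-positive vs. no FAI
print(focal_loss(logits, targets).item())
```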
A deep learning-based framework for road traffic prediction | [
"Redouane Benabdallah Benarmas",
"Kadda Beghdad Bey"
] | Due to the exponential rise in the number of vehicles and road segments in cities, traffic prediction becomes more difficult, necessitating the application of sophisticated algorithms such as deep learning (DL). The models used in the literature provide accurate predictions for specific cases when the data flow is properly prepared. However, in complex situations, these approaches fail, and thus the prediction must be developed through a process rather than a single prediction calculation method. In addition to using a pure and robust DL prediction model, an efficient approach could be built by taking into account two other factors, namely the relationships between road segments and the amount and quality of the training data. The main goal of our research is to develop a three-stage framework for road traffic prediction based on statistical and deep learning modules. First, a cross-correlation prediction with a Long Short-Term Memory model (LSTM) is implemented to predict the influential road segments; second, a deep generative model (DGM)-based data augmentation is used to improve the data of the related segments; and third, we adapt a Neural Basis Expansion Analysis for interpretable Time Series (N-BEATS) architecture to the resulting data to implement the prediction module. The framework components are trained and validated using the 6th Beijing road traffic dataset. | 10.1007/s11227-023-05718-x | a deep learning-based framework for road traffic prediction | due to the exponential rise in the number of vehicles and road segments in cities, traffic prediction becomes more difficult, necessitating the application of sophisticated algorithms such as deep learning (dl). the models used in the literature provide accurate predictions for specific cases when the data flow is properly prepared. however, in complex situations, these approaches fail, and thus the prediction must be developed through a process rather than a single prediction calculation method. in addition to using a pure and robust dl prediction model, an efficient approach could be built by taking into account two other factors, namely the relationships between road segments and the amount and quality of the training data. the main goal of our research is to develop a three-stage framework for road traffic prediction based on statistical and deep learning modules. first, a cross-correlation prediction with a long short-term memory model (lstm) is implemented to predict the influential road segments; second, a deep generative model (dgm)-based data augmentation is used to improve the data of the related segments; and third, we adapt a neural basis expansion analysis for interpretable time series (n-beats) architecture to the resulting data to implement the prediction module. the framework components are trained and validated using the 6th beijing road traffic dataset. | [
"the exponential rise",
"the number",
"vehicles",
"road segments",
"cities",
"traffic prediction",
"the application",
"sophisticated algorithms",
"deep learning",
"dl",
"the models",
"the literature",
"accurate predictions",
"specific cases",
"the data flow",
"complex situations",
"these approaches",
"the prediction",
"a process",
"a prediction calculation method",
"addition",
"a pure and robust dl prediction model",
"an efficient approach",
"account",
"two other factors",
"namely the relationships",
"road segments",
"the amount",
"quality",
"the training data",
"the main goal",
"our research",
"a three-stage framework",
"road traffic prediction",
"statistical and deep learning modules",
"first, a cross-correlation prediction",
"a long short-term memory model",
"lstm",
"the influential road segments",
"second, a deep generative model",
"dgm)-based data augmentation",
"the data",
"the related segments",
"we",
"a neural basis expansion analysis",
"interpretable time series (n-beats) architecture",
"the resulting data",
"the prediction module",
"the framework components",
"the 6th beijing road traffic dataset",
"two",
"three",
"first",
"second",
"third",
"6th"
] |
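For the first stage of the framework in the record above (LSTM-based prediction over road-segment series), a minimal forecaster might look like the sketch below; the hidden size, window length, and single-feature input are assumptions.

```python
# Minimal LSTM forecaster for one road-segment flow series; hidden size,
# window length, and single-feature input are assumptions for illustration.
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    def __init__(self, n_features: int = 1, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                      # next-step flow value

model = TrafficLSTM()
window = torch.randn(8, 24, 1)   # 8 segments, 24 past time steps each
print(model(window).shape)       # torch.Size([8, 1])
```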
Deep learning for Arabic healthcare: MedicalBot | [
"Mohammed Abdelhay",
"Ammar Mohammed",
"Hesham A. Hefny"
] | Since the COVID-19 pandemic, healthcare services, particularly remote and automated healthcare consultations, have gained increased attention. Medical bots, which provide medical advice and support, are becoming increasingly popular. They offer numerous benefits, including 24/7 access to medical counseling, reduced appointment wait times by providing quick answers to common questions or concerns, and cost savings associated with fewer visits or tests required for diagnosis and treatment plans. The success of medical bots depends on the quality of their learning, which in turn depends on the appropriate corpus within the domain of interest. Arabic is one of the most commonly used languages for sharing users’ internet content. However, implementing medical bots in Arabic faces several challenges, including the language’s morphological composition, the diversity of dialects, and the need for an appropriate and large enough corpus in the medical domain. To address this gap, this paper introduces the largest Arabic Healthcare Q&A dataset, called MAQA, consisting of over 430,000 questions distributed across 20 medical specializations. Furthermore, this paper adopts three deep learning models, namely LSTM, Bi-LSTM, and Transformers, for experimenting and benchmarking the proposed corpus MAQA. The experimental results demonstrate that the recent Transformer model outperforms the traditional deep learning models, achieving an average cosine similarity of 80.81% and a BLEU score of 58%. | 10.1007/s13278-023-01077-w | deep learning for arabic healthcare: medicalbot | since the covid-19 pandemic, healthcare services, particularly remote and automated healthcare consultations, have gained increased attention. medical bots, which provide medical advice and support, are becoming increasingly popular. they offer numerous benefits, including 24/7 access to medical counseling, reduced appointment wait times by providing quick answers to common questions or concerns, and cost savings associated with fewer visits or tests required for diagnosis and treatment plans. the success of medical bots depends on the quality of their learning, which in turn depends on the appropriate corpus within the domain of interest. arabic is one of the most commonly used languages for sharing users’ internet content. however, implementing medical bots in arabic faces several challenges, including the language’s morphological composition, the diversity of dialects, and the need for an appropriate and large enough corpus in the medical domain. to address this gap, this paper introduces the largest arabic healthcare q&a dataset, called maqa, consisting of over 430,000 questions distributed across 20 medical specializations. furthermore, this paper adopts three deep learning models, namely lstm, bi-lstm, and transformers, for experimenting and benchmarking the proposed corpus maqa. the experimental results demonstrate that the recent transformer model outperforms the traditional deep learning models, achieving an average cosine similarity of 80.81% and a bleu score of 58%. | [
"the covid-19 pandemic, healthcare services",
"particularly remote and automated healthcare consultations",
"increased attention",
"medical bots",
"which",
"medical advice",
"support",
"they",
"numerous benefits",
"24/7 access",
"medical counseling",
"reduced appointment",
"quick answers",
"common questions",
"concerns",
"cost savings",
"fewer visits",
"tests",
"diagnosis and treatment plans",
"the success",
"medical bots",
"the quality",
"their learning",
"which",
"turn",
"the appropriate corpus",
"the domain",
"interest",
"the most commonly used languages",
"users’ internet content",
"medical bots",
"several challenges",
"the language’s morphological composition",
"the diversity",
"dialects",
"the need",
"an appropriate and large enough corpus",
"the medical domain",
"this gap",
"this paper",
"the largest arabic healthcare q",
"a dataset",
"maqa",
"over 430,000 questions",
"20 medical specializations",
"this paper",
"three deep learning models",
"namely lstm",
"bi",
"-",
"lstm",
"transformers",
"the proposed corpus maqa",
"the experimental results",
"the recent transformer model",
"the traditional deep learning models",
"an average cosine similarity",
"80.81%",
"a bleu score",
"58%",
"covid-19",
"24/7",
"arabic",
"one",
"arabic",
"arabic",
"healthcare q &a",
"over 430,000",
"20",
"three",
"80.81%",
"58%"
] |
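The record above reports answer quality as an average cosine similarity. A minimal version of that metric over sentence embeddings is sketched below; the embedding dimension and the random stand-in vectors are assumptions.

```python
# Average cosine similarity between predicted and reference answer embeddings;
# the 768-dim embeddings and random stand-ins are assumptions for illustration.
import torch
import torch.nn.functional as F

def avg_cosine_similarity(pred_emb: torch.Tensor, ref_emb: torch.Tensor) -> float:
    return F.cosine_similarity(pred_emb, ref_emb, dim=-1).mean().item()

pred = torch.randn(32, 768)   # embeddings of 32 generated answers
ref = torch.randn(32, 768)    # embeddings of the 32 reference answers
print(avg_cosine_similarity(pred, ref))
```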
Optimization of news dissemination push mode by intelligent edge computing technology for deep learning | [
"JiLe DeGe",
"Sina Sang"
] | The Internet era is an era of information explosion. By 2022, global Internet users had surpassed 4 billion and social media users had exceeded 3 billion. People face a flood of news content every day, and it is almost impossible to find interesting information by browsing all of it. Against this background, personalized news recommendation technology has been widely used, but it still needs further optimization and improvement. To better push news content of interest to different readers, users' satisfaction with major news websites should be further improved. This study proposes a new recommendation algorithm that combines deep learning and reinforcement learning. First, a reinforcement learning (RL) algorithm is introduced on top of deep learning. Deep learning excels at processing large-scale data and at complex pattern recognition, but it often suffers from low sample efficiency in complex decision-making and sequential tasks. Reinforcement learning, by contrast, emphasizes learning optimal strategies through continuous trial and error while interacting with the environment. Compared with deep learning, RL is therefore better suited to scenarios that require long-term decision-making and trial-and-error learning: by feeding back the reward signal of each action, the system can better adapt to unknown environments and complex tasks, making up for the relative shortcomings of deep learning in these respects. Mapping states to actions solves the sequential decision problem in the news dissemination process. To let the news recommendation system track dynamic changes in users' interest in news content, the Deep Deterministic Policy Gradient (DDPG) algorithm is applied to the news recommendation scenario; opposing learning complements and combines the Deep Q-network with the policy network. On this basis, the paper puts forward an intelligent news dissemination and push mode and proposes a push process for news communication information based on edge computing technology. Finally, a Q-Learning Area Under Curve (AUC) indicator for RL models is proposed on the basis of the Area Under Curve. This indicator can efficiently measure the strengths and weaknesses of RL models and facilitates comparing models and evaluating offline experiments. The results show that the DDPG algorithm improves the click-through rate by 2.586% compared with a conventional recommendation algorithm, indicating that the proposed algorithm has clear advantages in making accurate recommendations for users. By optimizing the push mode of intelligent news dissemination, this paper effectively improves the efficiency of news dissemination. In addition, the paper studies the innovative application of intelligent edge technology in news communication, bringing new ideas and practices to the development of news communication methods. Optimizing the push mode of intelligent news dissemination not only improves the user experience but also provides strong support for applying intelligent edge technology in this field, which has important practical application prospects. | 10.1038/s41598-024-53859-7 | optimization of news dissemination push mode by intelligent edge computing technology for deep learning | the internet era is an era of information explosion. by 2022, global internet users had surpassed 4 billion and social media users had exceeded 3 billion. people face a flood of news content every day, and it is almost impossible to find interesting information by browsing all of it. against this background, personalized news recommendation technology has been widely used, but it still needs further optimization and improvement. to better push news content of interest to different readers, users' satisfaction with major news websites should be further improved. this study proposes a new recommendation algorithm that combines deep learning and reinforcement learning. first, a reinforcement learning (rl) algorithm is introduced on top of deep learning. deep learning excels at processing large-scale data and at complex pattern recognition, but it often suffers from low sample efficiency in complex decision-making and sequential tasks. reinforcement learning, by contrast, emphasizes learning optimal strategies through continuous trial and error while interacting with the environment. compared with deep learning, rl is therefore better suited to scenarios that require long-term decision-making and trial-and-error learning: by feeding back the reward signal of each action, the system can better adapt to unknown environments and complex tasks, making up for the relative shortcomings of deep learning in these respects. mapping states to actions solves the sequential decision problem in the news dissemination process. to let the news recommendation system track dynamic changes in users' interest in news content, the deep deterministic policy gradient (ddpg) algorithm is applied to the news recommendation scenario; opposing learning complements and combines the deep q-network with the policy network. on this basis, the paper puts forward an intelligent news dissemination and push mode and proposes a push process for news communication information based on edge computing technology. finally, a q-learning area under curve (auc) indicator for rl models is proposed on the basis of the area under curve. this indicator can efficiently measure the strengths and weaknesses of rl models and facilitates comparing models and evaluating offline experiments. the results show that the ddpg algorithm improves the click-through rate by 2.586% compared with a conventional recommendation algorithm, indicating that the proposed algorithm has clear advantages in making accurate recommendations for users. by optimizing the push mode of intelligent news dissemination, this paper effectively improves the efficiency of news dissemination. in addition, the paper studies the innovative application of intelligent edge technology in news communication, bringing new ideas and practices to the development of news communication methods. optimizing the push mode of intelligent news dissemination not only improves the user experience but also provides strong support for applying intelligent edge technology in this field, which has important practical application prospects. | [
"the internet era",
"an era",
"information explosion",
"the global internet users",
"the social media users",
"people",
"a lot",
"news content",
"it",
"interesting information",
"all the news content",
"this background",
"personalized news recommendation technology",
"it",
"order",
"the news content",
"interest",
"different readers",
"users' satisfaction",
"major news websites",
"this study",
"a new recommendation algorithm",
"deep learning",
"reinforcement learning",
"the rl algorithm",
"deep learning",
"deep learning",
"large-scale data",
"complex pattern recognition",
"it",
"the challenge",
"low sample efficiency",
"it",
"complex decision-making",
"sequential tasks",
"reinforcement learning",
"(rl",
"optimization strategies",
"continuous trial",
"error",
"interactive learning",
"the environment",
"deep learning",
"rl",
"scenes",
"that",
"long-term decision-making and trial-and-error learning",
"the reward signal",
"the action",
"the system",
"the unknown environment",
"complex tasks",
"which",
"the relative shortcomings",
"deep learning",
"these aspects",
"a scenario",
"an action",
"the sequential decision problem",
"the news dissemination process",
"order",
"the news recommendation system",
"the dynamic changes",
"users' interest",
"news content",
"the deep deterministic policy gradient algorithm",
"the news recommendation scenario",
"complements",
"deep q-network",
"the strategic network",
"the basis",
"thinking",
"this paper",
"the mode",
"intelligent news dissemination",
"push",
"the push process",
"news communication information",
"edge computing technology",
"area",
"curve",
"a q-leaning area",
"curve",
"rl models",
"this indicator",
"the strengths",
"weaknesses",
"rl models",
"models",
"offline experiments",
"the results",
"the ddpg algorithm",
"the click-through rate",
"2.586%",
"the conventional recommendation algorithm",
"it",
"the algorithm",
"this paper",
"more obvious advantages",
"accurate recommendation",
"users",
"this paper",
"the efficiency",
"news dissemination",
"the push mode",
"intelligent news dissemination",
"addition",
"the paper",
"the innovative application",
"intelligent edge technology",
"news communication",
"which",
"new ideas",
"practices",
"the development",
"news communication methods",
"the push mode",
"intelligent news dissemination",
"the user experience",
"strong support",
"the application",
"intelligent edge technology",
"this field",
"which",
"important practical application prospects",
"2022",
"more than 4 billion",
"3 billion",
"firstly",
"2.586%"
] |
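The record above applies Deep Deterministic Policy Gradient to recommendation. The core of DDPG, a deterministic actor plus a Q-value critic, is sketched below; the state/action dimensions and layer sizes are assumptions, not values from the paper.

```python
# Core DDPG components: a deterministic actor and a Q-value critic.
# State/action dimensions and layer widths are assumptions for illustration.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 32, 8   # e.g. user-state features / recommendation action

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))

state = torch.randn(1, STATE_DIM)
action = actor(state)                                  # deterministic policy output
q_value = critic(torch.cat([state, action], dim=-1))   # critic scores (state, action)
print(action.shape, q_value.shape)
```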
Deep learning pose detection model for sow locomotion | [
"Tauana Maria Carlos Guimarães de Paula",
"Rafael Vieira de Sousa",
"Marisol Parada Sarmiento",
"Ton Kramer",
"Edson José de Souza Sardinha",
"Leandro Sabei",
"Júlia Silvestrini Machado",
"Mirela Vilioti",
"Adroaldo José Zanella"
] | Lameness affects animal mobility, causing pain and discomfort. Lameness in early stages often goes undetected due to a lack of observation, precision, and reliability. Automated and non-invasive systems offer precision and detection ease and may improve animal welfare. This study was conducted to create a repository of images and videos of sows with different locomotion scores. Our goal is to develop a computer vision model for automatically identifying specific points on the sow's body. The automatic identification and ability to track specific body areas will allow us to conduct kinematic studies with the aim of facilitating the detection of lameness using deep learning. The video database was collected on a pig farm with a scenario built to allow filming of sows in locomotion with different lameness scores. Two stereo cameras were used to record 2D video images. Thirteen locomotion experts assessed the videos using the Locomotion Score System developed by Zinpro Corporation. From this annotated repository, computational models were trained and tested using the open-source deep learning-based animal pose tracking framework SLEAP (Social LEAP Estimates Animal Poses). The top-performing models were constructed using the LEAP architecture to accurately track 6 (lateral view) and 10 (dorsal view) skeleton keypoints. The architecture achieved average precision values of 0.90 and 0.72, average distances of 6.83 and 11.37 pixels, and similarities of 0.94 and 0.86 for the lateral and dorsal views, respectively. These computational models are proposed as a Precision Livestock Farming tool and method for identifying and estimating postures in pigs automatically and objectively. The 2D video image repository with different pig locomotion scores can be used as a tool for teaching and research. Based on our skeleton keypoint classification results, an automatic system could be developed. This could contribute to the objective assessment of locomotion scores in sows, improving their welfare. | 10.1038/s41598-024-62151-7 | deep learning pose detection model for sow locomotion | lameness affects animal mobility, causing pain and discomfort. lameness in early stages often goes undetected due to a lack of observation, precision, and reliability. automated and non-invasive systems offer precision and detection ease and may improve animal welfare. this study was conducted to create a repository of images and videos of sows with different locomotion scores. our goal is to develop a computer vision model for automatically identifying specific points on the sow's body. the automatic identification and ability to track specific body areas will allow us to conduct kinematic studies with the aim of facilitating the detection of lameness using deep learning. the video database was collected on a pig farm with a scenario built to allow filming of sows in locomotion with different lameness scores. two stereo cameras were used to record 2d video images. thirteen locomotion experts assessed the videos using the locomotion score system developed by zinpro corporation. from this annotated repository, computational models were trained and tested using the open-source deep learning-based animal pose tracking framework sleap (social leap estimates animal poses). the top-performing models were constructed using the leap architecture to accurately track 6 (lateral view) and 10 (dorsal view) skeleton keypoints. the architecture achieved average precision values of 0.90 and 0.72, average distances of 6.83 and 11.37 pixels, and similarities of 0.94 and 0.86 for the lateral and dorsal views, respectively. these computational models are proposed as a precision livestock farming tool and method for identifying and estimating postures in pigs automatically and objectively. the 2d video image repository with different pig locomotion scores can be used as a tool for teaching and research. based on our skeleton keypoint classification results, an automatic system could be developed. this could contribute to the objective assessment of locomotion scores in sows, improving their welfare. | [
"animal mobility",
"pain",
"discomfort",
"early stages",
"a lack",
"observation",
"precision",
"reliability",
"automated and non-invasive systems",
"precision and detection ease",
"animal welfare",
"this study",
"a repository",
"images",
"videos",
"sows",
"different locomotion scores",
"our goal",
"a computer vision model",
"specific points",
"the sow's body",
"the automatic identification",
"ability",
"specific body areas",
"us",
"kinematic studies",
"the aim",
"the detection",
"lameness",
"deep learning",
"the video database",
"a pig farm",
"a scenario",
"filming",
"sows",
"locomotion",
"different lameness scores",
"two stereo cameras",
"2d videos images",
"thirteen locomotion experts",
"the videos",
"the locomotion score system",
"zinpro corporation",
"this annotated repository",
"computational models",
"the open-source deep learning-based animal",
"framework sleap",
"social leap",
"animal poses",
"the top-performing models",
"the leap architecture",
"6 (lateral view",
"10 (dorsal view) skeleton keypoints",
"the architecture",
"average precisions values",
"average distances",
"pixel",
"similarities",
"the lateral and dorsal views",
"these computational models",
"a precision livestock farming tool",
"method",
"postures",
"pigs",
"the 2d video image repository",
"different pig locomotion scores",
"a tool",
"teaching",
"research",
"our skeleton keypoint classification results",
"an automatic system",
"this",
"the objective assessment",
"locomotion scores",
"sows",
"their welfare",
"two",
"2d",
"thirteen",
"zinpro corporation",
"6",
"10",
"0.90",
"0.72",
"6.83",
"11.37",
"0.94",
"0.86",
"2d"
] |
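The record above evaluates pose models by average pixel distance between predicted and annotated keypoints. A minimal version of that metric is sketched below; the keypoint count, array shapes, and synthetic data are assumptions.

```python
# Mean Euclidean pixel distance between predicted and annotated keypoints;
# the (frames, keypoints, 2) layout and synthetic data are assumptions.
import numpy as np

def mean_keypoint_distance(pred: np.ndarray, true: np.ndarray) -> float:
    return float(np.linalg.norm(pred - true, axis=-1).mean())

rng = np.random.default_rng(0)
true = rng.uniform(0, 640, size=(100, 10, 2))    # 10 dorsal-view keypoints per frame
pred = true + rng.normal(0, 5, size=true.shape)  # predictions a few pixels off
print(mean_keypoint_distance(pred, true))
```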
Low Dose CT Image Reconstruction Using Deep Convolutional Residual Learning Network | [
"Shalini Ramanathan",
"Mohan Ramasundaram"
] | Image reconstruction from computed tomography measurement is formulated as a thought-provoking statistical inverse problem. Deep learning algorithms are well suited to ill-posed statistical inverse problems and presently achieve state-of-the-art reconstruction results. The challenging task is to lower the potentially harmful radiation a patient is exposed to during the CT scan. In recently available CT scanners, Low-Dose CT (LDCT) reconstruction is presented with a post-processing approach, which uses deep learning-based medical image reconstruction methods to reduce the dose level without compromising the image quality. Therefore, this paper proposes a deep learning-based post-processing method called Deep Convolutional Neural Network with Residual Learning (DCNN-RL). The method trains the network on a newly available low-dose CT benchmark dataset (LoDoPaB-CT). It also enables comparison with other benchmark CT datasets such as AAPM LDCT and COVIDx-CT. The proposed architecture optimizes the filtering part to minimize the error function. It learns the parameters of the residual network through extensive training to maximize the efficiency of production. This paper compares noise methods on DCNN-RL using various LDCT datasets of the same domain (human chest CT scans) to analyze the image quality. The experimental findings suggest that the Adagrad optimizer is the best for LDCT images, and that Gaussian noise with a small variance performs best on the medical image reconstruction task. Here, it has been demonstrated that this approach with these benchmark datasets drastically improves the medical CT image quality, shown through qualitative and quantitative outcomes. | 10.1007/s42979-023-02210-4 | low dose ct image reconstruction using deep convolutional residual learning network | image reconstruction from computed tomography measurement is formulated as a thought-provoking statistical inverse problem. deep learning algorithms are well suited to ill-posed statistical inverse problems and presently achieve state-of-the-art reconstruction results. the challenging task is to lower the potentially harmful radiation a patient is exposed to during the ct scan. in recently available ct scanners, low-dose ct (ldct) reconstruction is presented with a post-processing approach, which uses deep learning-based medical image reconstruction methods to reduce the dose level without compromising the image quality. therefore, this paper proposes a deep learning-based post-processing method called deep convolutional neural network with residual learning (dcnn-rl). the method trains the network on a newly available low-dose ct benchmark dataset (lodopab-ct). it also enables comparison with other benchmark ct datasets such as aapm ldct and covidx-ct. the proposed architecture optimizes the filtering part to minimize the error function. it learns the parameters of the residual network through extensive training to maximize the efficiency of production. this paper compares noise methods on dcnn-rl using various ldct datasets of the same domain (human chest ct scans) to analyze the image quality. the experimental findings suggest that the adagrad optimizer is the best for ldct images, and that gaussian noise with a small variance performs best on the medical image reconstruction task. here, it has been demonstrated that this approach with these benchmark datasets drastically improves the medical ct image quality, shown through qualitative and quantitative outcomes. | [
"image reconstruction",
"computed tomography measurement",
"a thought-provoking statistical inverse problem",
"deep learning algorithms",
"ill-posed statistical inverse problems",
"that",
"art",
"the challenging task",
"the potentially harmful radiation",
"a patient",
"the ct scan",
"recently available ct scanners",
"low-dose ct (ldct) reconstruction",
"a post-processing approach",
"which",
"deep learning-based medical image reconstruction methods",
"the dose level",
"the image quality",
"this paper",
"a deep learning-based post-processing method",
"deep convolutional neural network",
"residual learning",
"dcnn-rl",
"the method",
"the network",
"a newly available low-dose ct benchmark dataset",
"lodopab-ct",
"it",
"other benchmark ct datasets",
"aapm ldct",
"covidx-ct",
"the proposed architecture",
"the filtering part",
"the error function",
"it",
"the parameters",
"the residual network",
"numerous training",
"the efficiency",
"production",
"this paper",
"noise methods",
"dcnn-rl",
"various ldct datasets",
"the same domain",
"human being's chest",
"ct scan",
"the image quality",
"the experiment findings",
"the adagrad optimizer",
"ldct images",
"gaussian noise",
"a minor variance",
"the medical image reconstruction task",
"it",
"this approach",
"these benchmark datasets",
"the medical ct image quality",
"qualitative and quantitative outcomes",
"gaussian"
] |
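Residual learning, as used by the DCNN-RL method in the record above, trains a network to predict the noise component and subtract it from the low-dose reconstruction. A toy version is sketched below; the depth, width, and single-channel 128×128 input are assumptions.

```python
# Toy residual post-processing net: predict the noise, subtract it from the
# low-dose slice. Depth, width, and the 128x128 input size are assumptions.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x - self.body(x)   # residual learning: clean = noisy - predicted noise

x = torch.randn(1, 1, 128, 128)   # stand-in LDCT slice
print(ResidualDenoiser()(x).shape)
```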
Transforming clinical virology with AI, machine learning and deep learning: a comprehensive review and outlook | [
"Abhishek Padhi",
"Ashwini Agarwal",
"Shailendra K. Saxena",
"C. D. S. Katoch"
] | In the rapidly evolving field of clinical virology, technological advancements have always played a pivotal role in driving transformative changes. This comprehensive review delves into the burgeoning integration of artificial intelligence (AI), machine learning, and deep learning into virological research and practice. As we elucidate, these computational tools have significantly enhanced diagnostic precision, therapeutic interventions, and epidemiological monitoring. Through in-depth analyses of notable case studies, we showcase how algorithms can optimize viral genome sequencing, accelerate drug discovery, and offer predictive insights into viral outbreaks. However, with these advancements come inherent challenges, particularly in data security, algorithmic biases, and ethical considerations. Addressing these challenges head-on, we discuss potential remedial measures and underscore the significance of interdisciplinary collaboration between virologists, data scientists, and ethicists. Conclusively, this review posits an outlook that anticipates a symbiotic relationship between AI-driven tools and virology, heralding a new era of proactive and personalized patient care. | 10.1007/s13337-023-00841-y | transforming clinical virology with ai, machine learning and deep learning: a comprehensive review and outlook | in the rapidly evolving field of clinical virology, technological advancements have always played a pivotal role in driving transformative changes. this comprehensive review delves into the burgeoning integration of artificial intelligence (ai), machine learning, and deep learning into virological research and practice. as we elucidate, these computational tools have significantly enhanced diagnostic precision, therapeutic interventions, and epidemiological monitoring. through in-depth analyses of notable case studies, we showcase how algorithms can optimize viral genome sequencing, accelerate drug discovery, and offer predictive insights into viral outbreaks. however, with these advancements come inherent challenges, particularly in data security, algorithmic biases, and ethical considerations. addressing these challenges head-on, we discuss potential remedial measures and underscore the significance of interdisciplinary collaboration between virologists, data scientists, and ethicists. conclusively, this review posits an outlook that anticipates a symbiotic relationship between ai-driven tools and virology, heralding a new era of proactive and personalized patient care. | [
"the rapidly evolving field",
"clinical virology",
"technological advancements",
"a pivotal role",
"transformative changes",
"this comprehensive review",
"the burgeoning integration",
"artificial intelligence",
"ai",
"machine learning",
"deep learning",
"virological research",
"practice",
"we",
"these computational tools",
"diagnostic precision",
"therapeutic interventions",
"epidemiological monitoring",
"-depth",
"notable case studies",
"we",
"algorithms",
"drug discovery",
"predictive insights",
"viral outbreaks",
"these advancements",
"inherent challenges",
"data security",
"algorithmic biases",
"ethical considerations",
"these challenges",
"we",
"potential remedial measures",
"the significance",
"interdisciplinary collaboration",
"virologists",
"data scientists",
"ethicists",
"this review",
"an outlook",
"that",
"a symbiotic relationship",
"ai-driven tools",
"virology",
"a new era",
"proactive and personalized patient care"
] |
A secured deep learning based smart home automation system | [
"Chitukula Sanjay",
"Konda Jahnavi",
"Shyam Karanth"
] | With the expansion of modern technologies and the Internet of Things (IoT), the concept of smart homes has gained tremendous popularity with a view to making people’s lives easier by ensuring a secured environment. Several home automation systems have been developed to report suspicious activities by capturing the movements of residents. However, these systems are associated with challenges such as weak security, lack of interoperability and integration with IoT devices, timely reporting of suspicious movements, etc. Therefore, this paper proposes a novel smart home automation framework for controlling home appliances by integrating with sensors, IoT devices, and microcontrollers, which would in turn monitor the movements and send notifications about suspicious movements on the resident’s smartphone. The proposed framework makes use of convolutional neural networks (CNNs) for motion detection and classification based on pre-processing of images. The images related to the movements of residents are captured by a spy camera installed in the system. It helps in identification of outsiders based on differentiation of motion patterns. The performance of the framework is compared with existing deep learning models used in recent studies based on evaluation metrics such as accuracy (%), precision (%), recall (%), and F1-measure (%). The results show that the proposed framework attains the highest accuracy (98.67%), thereby surpassing the existing deep learning models used in smart home automation systems. | 10.1007/s41870-024-02097-1 | a secured deep learning based smart home automation system | with the expansion of modern technologies and the internet of things (iot), the concept of smart homes has gained tremendous popularity with a view to making people’s lives easier by ensuring a secured environment. several home automation systems have been developed to report suspicious activities by capturing the movements of residents. however, these systems are associated with challenges such as weak security, lack of interoperability and integration with iot devices, timely reporting of suspicious movements, etc. therefore, this paper proposes a novel smart home automation framework for controlling home appliances by integrating with sensors, iot devices, and microcontrollers, which would in turn monitor the movements and send notifications about suspicious movements on the resident’s smartphone. the proposed framework makes use of convolutional neural networks (cnns) for motion detection and classification based on pre-processing of images. the images related to the movements of residents are captured by a spy camera installed in the system. it helps in identification of outsiders based on differentiation of motion patterns. the performance of the framework is compared with existing deep learning models used in recent studies based on evaluation metrics such as accuracy (%), precision (%), recall (%), and f1-measure (%). the results show that the proposed framework attains the highest accuracy (98.67%), thereby surpassing the existing deep learning models used in smart home automation systems. | [
"the expansion",
"modern technologies",
"the internet",
"things",
"iot",
"the concept",
"smart homes",
"tremendous popularity",
"a view",
"people",
"a secured environment",
"several home automation systems",
"suspicious activities",
"the movements",
"residents",
"these systems",
"challenges",
"weak security",
"lack",
"interoperability",
"integration",
"iot devices",
"timely reporting",
"suspicious movements",
"the given paper",
"a novel smart home automation framework",
"home appliances",
"sensors",
"iot devices",
"microcontrollers",
"which",
"turn",
"the movements",
"notifications",
"suspicious movements",
"the resident’s smartphone",
"the proposed framework",
"use",
"convolutional neural networks",
"cnns",
"motion detection",
"classification",
"pre",
"processing",
"images",
"the images",
"the movements",
"residents",
"a spy camera",
"the system",
"it",
"identification",
"outsiders",
"differentiation",
"motion patterns",
"the performance",
"the framework",
"existing deep learning models",
"recent studies",
"evaluation metrics",
"accuracy",
"precision",
"recall",
"f-1 measure",
"the results",
"the proposed framework",
"the highest accuracy",
"98.67%",
"the existing deep learning models",
"smart home automation systems",
"98.67%"
] |
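A tiny CNN classifier in the spirit of the record above, labeling a pre-processed camera frame as resident or outsider motion; the architecture, the 64×64 input, and the two-class setup are assumptions rather than the paper's design.

```python
# Tiny frame classifier (resident vs. outsider motion); architecture, 64x64
# input, and the two-class setup are assumptions for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),       # 64x64 input pooled twice -> 16x16 maps
)

frame = torch.randn(1, 3, 64, 64)     # stand-in pre-processed camera frame
print(model(frame).softmax(dim=-1))   # [p(resident), p(outsider)]
```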
A hyperspectral deep learning attention model for predicting lettuce chlorophyll content | [
"Ziran Ye",
"Xiangfeng Tan",
"Mengdi Dai",
"Xuting Chen",
"Yuanxiang Zhong",
"Yi Zhang",
"Yunjie Ruan",
"Dedong Kong"
] | Background: The phenotypic traits of leaves are the direct reflection of the agronomic traits in the growth process of leafy vegetables, which plays a vital role in the selection of high-quality leafy vegetable varieties. The current image-based phenotypic traits extraction research mainly focuses on the morphological and structural traits of plants or leaves, and there are few studies on the phenotypes of physiological traits of leaves. The current research has developed a deep learning model aimed at predicting the total chlorophyll of greenhouse lettuce directly from the full spectrum of hyperspectral images. Results: A CNN-based one-dimensional deep learning model with spectral attention module was utilized for the estimate of the total chlorophyll of greenhouse lettuce from the full spectrum of hyperspectral images. Experimental results demonstrate that the deep neural network with spectral attention module outperformed the existing standard approaches, including partial least squares regression (PLSR) and random forest (RF), with an average R2 of 0.746 and an average RMSE of 2.018. Conclusions: This study unveils the capability of leveraging deep attention networks and hyperspectral imaging for estimating lettuce chlorophyll levels. This approach offers a convenient, non-destructive, and effective estimation method for the automatic monitoring and production management of leafy vegetables. | 10.1186/s13007-024-01148-9 | a hyperspectral deep learning attention model for predicting lettuce chlorophyll content | background: the phenotypic traits of leaves are the direct reflection of the agronomic traits in the growth process of leafy vegetables, which plays a vital role in the selection of high-quality leafy vegetable varieties. the current image-based phenotypic traits extraction research mainly focuses on the morphological and structural traits of plants or leaves, and there are few studies on the phenotypes of physiological traits of leaves. the current research has developed a deep learning model aimed at predicting the total chlorophyll of greenhouse lettuce directly from the full spectrum of hyperspectral images. results: a cnn-based one-dimensional deep learning model with spectral attention module was utilized for the estimate of the total chlorophyll of greenhouse lettuce from the full spectrum of hyperspectral images. experimental results demonstrate that the deep neural network with spectral attention module outperformed the existing standard approaches, including partial least squares regression (plsr) and random forest (rf), with an average r2 of 0.746 and an average rmse of 2.018. conclusions: this study unveils the capability of leveraging deep attention networks and hyperspectral imaging for estimating lettuce chlorophyll levels. this approach offers a convenient, non-destructive, and effective estimation method for the automatic monitoring and production management of leafy vegetables. | [
"backgroundthe phenotypic traits",
"leaves",
"the direct reflection",
"the agronomic traits",
"the growth process",
"leafy vegetables",
"which",
"a vital role",
"the selection",
"high-quality leafy vegetable varieties",
"the current image-based phenotypic traits extraction research",
"the morphological and structural traits",
"plants",
"leaves",
"few studies",
"the phenotypes",
"physiological traits",
"leaves",
"the current research",
"a deep learning model",
"the total chlorophyll",
"greenhouse lettuce",
"the full spectrum",
"hyperspectral",
"images.resultsa cnn-based one-dimensional deep learning model",
"spectral attention module",
"the estimate",
"the total chlorophyll",
"greenhouse lettuce",
"the full spectrum",
"hyperspectral images",
"experimental results",
"the deep neural network",
"spectral attention module",
"the existing standard approaches",
"partial least squares regression",
"plsr",
"random forest",
"rf",
"an average r2",
"an average rmse",
"2.018.conclusionsthis study",
"the capability",
"deep attention networks",
"hyperspectral imaging",
"lettuce chlorophyll levels",
"this approach",
"destructive, and effective estimation method",
"the automatic monitoring",
"production management",
"leafy vegetables",
"cnn",
"0.746"
] |
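The record above combines a 1D CNN with a spectral attention module over hyperspectral bands. One common way to realize such a module is squeeze-and-excitation-style per-band weighting, sketched below; the band count and reduction ratio are assumptions.

```python
# Squeeze-and-excitation-style spectral attention: learn a weight in [0, 1]
# per band and rescale the spectrum. Band count and reduction are assumptions.
import torch
import torch.nn as nn

class SpectralAttention1D(nn.Module):
    def __init__(self, n_bands: int = 224, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_bands, n_bands // reduction), nn.ReLU(),
            nn.Linear(n_bands // reduction, n_bands), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, 1, n_bands)
        weights = self.fc(x.squeeze(1))                   # per-band attention
        return x * weights.unsqueeze(1)

spectrum = torch.randn(4, 1, 224)   # 4 pixels, 224 assumed hyperspectral bands
print(SpectralAttention1D()(spectrum).shape)
```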
Meta-heuristic-based hybrid deep learning model for vulnerability detection and prevention in software system | [
"Lijin Shaji",
"R. Suji Pramila"
] | Software vulnerabilities are flaws that may be exploited to cause loss or harm. Various automated machine-learning techniques have been developed in preceding studies to detect software vulnerabilities. This work develops a technique for securing software on the basis of its already-known vulnerabilities, using a hybrid deep learning model to detect them. Moreover, certain countermeasures are suggested based on the type of vulnerability to prevent further attacks. For the different software projects taken as the dataset, feature fusion is performed using canonical correlation analysis together with a Deep Residual Network (DRN). A hybrid deep learning technique trained using the AdamW-Rat Swarm Optimizer (AdamW-RSO) is designed to detect software vulnerability. The hybrid deep learning makes use of the Deep Belief Network (DBN) and Generative Adversarial Network (GAN). For every vulnerability, its location of occurrence within the software development process and techniques for alleviating it via implementation-level or design-level activities are described. Thus, it helps in understanding how vulnerabilities appear, suggests the use of various countermeasures during the initial phases of software design, and thereby assures software security. Evaluated on recall, precision, and F-measure, the proposed vulnerability detection technique is found to be more effective than the existing methods. | 10.1007/s10878-024-01185-z | meta-heuristic-based hybrid deep learning model for vulnerability detection and prevention in software system | software vulnerabilities are flaws that may be exploited to cause loss or harm. various automated machine-learning techniques have been developed in preceding studies to detect software vulnerabilities. this work develops a technique for securing software on the basis of its already-known vulnerabilities, using a hybrid deep learning model to detect them. moreover, certain countermeasures are suggested based on the type of vulnerability to prevent further attacks. for the different software projects taken as the dataset, feature fusion is performed using canonical correlation analysis together with a deep residual network (drn). a hybrid deep learning technique trained using the adamw-rat swarm optimizer (adamw-rso) is designed to detect software vulnerability. the hybrid deep learning makes use of the deep belief network (dbn) and generative adversarial network (gan). for every vulnerability, its location of occurrence within the software development process and techniques for alleviating it via implementation-level or design-level activities are described. thus, it helps in understanding how vulnerabilities appear, suggests the use of various countermeasures during the initial phases of software design, and thereby assures software security. evaluated on recall, precision, and f-measure, the proposed vulnerability detection technique is found to be more effective than the existing methods. | [
"software vulnerabilities",
"flaws",
"that",
"loss",
"harm",
"various automated machine-learning techniques",
"studies",
"software vulnerabilities",
"this work",
"a technique",
"the software",
"the basis",
"their vulnerabilities",
"that",
"a hybrid deep learning model",
"those vulnerabilities",
"certain countermeasures",
"the types",
"vulnerability",
"the attack",
"different software projects",
"the dataset",
"feature fusion",
"canonical correlation analysis",
"deep residual network",
"drn",
"a hybrid deep learning technique",
"adamw-rat swarm optimizer",
"adamw-rso",
"software vulnerability",
"hybrid deep learning",
"use",
"the deep belief network",
"dbn",
"adversarial network",
"gan",
"every vulnerability",
"its location",
"occurrence",
"the software development procedures",
"techniques",
"alleviation",
"implementation level or design level activities",
"it",
"the appearance",
"vulnerabilities",
"the use",
"various countermeasures",
"the initial phases",
"software design",
"therefore, assures software security",
"the performance",
"vulnerability detection",
"the proposed technique",
"recall",
"precision",
"f-measure",
"it",
"the existing methods"
] |
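The record above fuses two feature views through canonical correlation analysis before classification. A minimal sketch of that fusion step follows, using scikit-learn's CCA; the two feature matrices, their dimensions, and the number of components are illustrative assumptions, not the authors' configuration.

```python
# Illustrative CCA-based feature fusion (a sketch, not the paper's implementation).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
feats_code = rng.normal(size=(200, 32))   # hypothetical hand-crafted code features
feats_drn = rng.normal(size=(200, 64))    # hypothetical DRN-extracted features

# Project both views onto maximally correlated components, then concatenate.
cca = CCA(n_components=16)
proj_code, proj_drn = cca.fit_transform(feats_code, feats_drn)
fused = np.concatenate([proj_code, proj_drn], axis=1)  # fused classifier input
print(fused.shape)  # (200, 32)
```

Concatenating the correlated projections keeps information from both views while discarding directions that do not co-vary, which is the usual rationale for CCA-style fusion.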
Deep learning pose detection model for sow locomotion | [
"Tauana Maria Carlos Guimarães de Paula",
"Rafael Vieira de Sousa",
"Marisol Parada Sarmiento",
"Ton Kramer",
"Edson José de Souza Sardinha",
"Leandro Sabei",
"Júlia Silvestrini Machado",
"Mirela Vilioti",
"Adroaldo José Zanella"
] | Lameness affects animal mobility, causing pain and discomfort. Lameness in early stages often goes undetected due to a lack of observation, precision, and reliability. Automated and non-invasive systems offer precision and detection ease and may improve animal welfare. This study was conducted to create a repository of images and videos of sows with different locomotion scores. Our goal is to develop a computer vision model for automatically identifying specific points on the sow's body. The automatic identification and ability to track specific body areas will allow us to conduct kinematic studies with the aim of facilitating the detection of lameness using deep learning. The video database was collected on a pig farm with a scenario built to allow filming of sows in locomotion with different lameness scores. Two stereo cameras were used to record 2D video images. Thirteen locomotion experts assessed the videos using the Locomotion Score System developed by Zinpro Corporation. From this annotated repository, computational models were trained and tested using the open-source deep learning-based animal pose tracking framework SLEAP (Social LEAP Estimates Animal Poses). The top-performing models were constructed using the LEAP architecture to accurately track 6 (lateral view) and 10 (dorsal view) skeleton keypoints. The architecture achieved average precision values of 0.90 and 0.72, average distances of 6.83 and 11.37 in pixels, and similarities of 0.94 and 0.86 for the lateral and dorsal views, respectively. These computational models are proposed as a Precision Livestock Farming tool and method for identifying and estimating postures in pigs automatically and objectively. The 2D video image repository with different pig locomotion scores can be used as a tool for teaching and research. Based on our skeleton keypoint classification results, an automatic system could be developed. This could contribute to the objective assessment of locomotion scores in sows, improving their welfare. | 10.1038/s41598-024-62151-7 | deep learning pose detection model for sow locomotion | lameness affects animal mobility, causing pain and discomfort. lameness in early stages often goes undetected due to a lack of observation, precision, and reliability. automated and non-invasive systems offer precision and detection ease and may improve animal welfare. this study was conducted to create a repository of images and videos of sows with different locomotion scores. our goal is to develop a computer vision model for automatically identifying specific points on the sow's body. the automatic identification and ability to track specific body areas will allow us to conduct kinematic studies with the aim of facilitating the detection of lameness using deep learning. the video database was collected on a pig farm with a scenario built to allow filming of sows in locomotion with different lameness scores. two stereo cameras were used to record 2d video images. thirteen locomotion experts assessed the videos using the locomotion score system developed by zinpro corporation. from this annotated repository, computational models were trained and tested using the open-source deep learning-based animal pose tracking framework sleap (social leap estimates animal poses). the top-performing models were constructed using the leap architecture to accurately track 6 (lateral view) and 10 (dorsal view) skeleton keypoints. the architecture achieved average precision values of 0.90 and 0.72, average distances of 6.83 and 11.37 in pixels, and similarities of 0.94 and 0.86 for the lateral and dorsal views, respectively. these computational models are proposed as a precision livestock farming tool and method for identifying and estimating postures in pigs automatically and objectively. the 2d video image repository with different pig locomotion scores can be used as a tool for teaching and research. based on our skeleton keypoint classification results, an automatic system could be developed. this could contribute to the objective assessment of locomotion scores in sows, improving their welfare. | [
"animal mobility",
"pain",
"discomfort",
"early stages",
"a lack",
"observation",
"precision",
"reliability",
"automated and non-invasive systems",
"precision and detection ease",
"animal welfare",
"this study",
"a repository",
"images",
"videos",
"sows",
"different locomotion scores",
"our goal",
"a computer vision model",
"specific points",
"the sow's body",
"the automatic identification",
"ability",
"specific body areas",
"us",
"kinematic studies",
"the aim",
"the detection",
"lameness",
"deep learning",
"the video database",
"a pig farm",
"a scenario",
"filming",
"sows",
"locomotion",
"different lameness scores",
"two stereo cameras",
"2d videos images",
"thirteen locomotion experts",
"the videos",
"the locomotion score system",
"zinpro corporation",
"this annotated repository",
"computational models",
"the open-source deep learning-based animal",
"framework sleap",
"social leap",
"animal poses",
"the top-performing models",
"the leap architecture",
"6 (lateral view",
"10 (dorsal view) skeleton keypoints",
"the architecture",
"average precisions values",
"average distances",
"pixel",
"similarities",
"the lateral and dorsal views",
"these computational models",
"a precision livestock farming tool",
"method",
"postures",
"pigs",
"the 2d video image repository",
"different pig locomotion scores",
"a tool",
"teaching",
"research",
"our skeleton keypoint classification results",
"an automatic system",
"this",
"the objective assessment",
"locomotion scores",
"sows",
"their welfare",
"two",
"2d",
"thirteen",
"zinpro corporation",
"6",
"10",
"0.90",
"0.72",
"6.83",
"11.37",
"0.94",
"0.86",
"2d"
] |
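The record above evaluates pose models by average precision, pixel distance, and similarity. A simplified sketch of two such keypoint metrics is given below; the array shapes, the 10-pixel threshold, and the synthetic data are assumptions for illustration (SLEAP computes its own metrics internally).

```python
# Sketch of keypoint-tracking metrics: mean pixel error and a PCK-style similarity.
import numpy as np

def keypoint_metrics(pred, true, threshold=10.0):
    """pred, true: (n_frames, n_keypoints, 2) pixel coordinates."""
    dists = np.linalg.norm(pred - true, axis=-1)   # per-frame, per-keypoint distance
    return dists.mean(), (dists < threshold).mean()

true = np.random.rand(100, 6, 2) * 500             # 6 lateral-view keypoints (hypothetical)
pred = true + np.random.randn(100, 6, 2) * 5       # predictions with ~5 px noise
mean_err, pck = keypoint_metrics(pred, true)
print(f"mean pixel error: {mean_err:.2f}, fraction within 10 px: {pck:.2f}")
```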
Sentiment analysis of Canadian maritime case law: a sentiment case law and deep learning approach | [
"Bola Abimbola",
"Qing Tan",
"Enrique A. De La Cal Marín"
] | Historical information in the Canadian Maritime Judiciary increases with time because of the need to archive data to be utilized in case references and for later application when determining verdicts for similar cases. However, such data are typically stored in multiple systems, making them technically difficult to access. Utilizing technologies like deep learning and sentiment analysis provides opportunities to facilitate faster access to court records. Such practice enhances impartial verdicts, minimizes workloads for court employees, and decreases the time used in legal proceedings for claims during maritime contracts such as shipping disputes between parties. This paper seeks to develop a sentiment analysis framework that uses deep learning, distributed learning, and machine learning to improve access to statutes, laws, and cases used by maritime judges in making judgments to back their claims. The suggested approach uses deep learning models, including convolutional neural networks (CNNs), deep neural networks, long short-term memory (LSTM), and recurrent neural networks. It extracts court records having crucial sentiments or statements for maritime court verdicts. The suggested approach has been used successfully during sentiment analysis by emphasizing feature selection from a legal repository. The LSTM + CNN model has shown promising results in obtaining sentiments and records from multiple devices and sufficiently proposing practical guidance to judicial personnel regarding the regulations applicable to various situations. | 10.1007/s41870-024-01820-2 | sentiment analysis of canadian maritime case law: a sentiment case law and deep learning approach | historical information in the canadian maritime judiciary increases with time because of the need to archive data to be utilized in case references and for later application when determining verdicts for similar cases. however, such data are typically stored in multiple systems, making them technically difficult to access. utilizing technologies like deep learning and sentiment analysis provides opportunities to facilitate faster access to court records. such practice enhances impartial verdicts, minimizes workloads for court employees, and decreases the time used in legal proceedings for claims during maritime contracts such as shipping disputes between parties. this paper seeks to develop a sentiment analysis framework that uses deep learning, distributed learning, and machine learning to improve access to statutes, laws, and cases used by maritime judges in making judgments to back their claims. the suggested approach uses deep learning models, including convolutional neural networks (cnns), deep neural networks, long short-term memory (lstm), and recurrent neural networks. it extracts court records having crucial sentiments or statements for maritime court verdicts. the suggested approach has been used successfully during sentiment analysis by emphasizing feature selection from a legal repository. the lstm + cnn model has shown promising results in obtaining sentiments and records from multiple devices and sufficiently proposing practical guidance to judicial personnel regarding the regulations applicable to various situations. | [
"historical information",
"the canadian maritime judiciary",
"time",
"the need",
"data",
"case references",
"later application",
"verdicts",
"similar cases",
"such data",
"multiple systems",
"its reachability",
"technologies",
"deep learning",
"sentiment analysis",
"chances",
"faster access",
"court records",
"such practice",
"impartial verdicts",
"workloads",
"court employees",
"the time",
"legal proceedings",
"claims",
"maritime contracts",
"shipping disputes",
"parties",
"this paper",
"a sentiment analysis framework",
"that",
"deep learning",
"learning",
"access",
"statutes",
"laws",
"cases",
"maritime judges",
"judgments",
"their claims",
"the suggested approach",
"deep learning models",
"convolutional neural networks",
"cnns",
"deep neural networks",
"long short-term memory",
"lstm",
"neural networks",
"it",
"court records",
"crucial sentiments",
"statements",
"maritime court verdicts",
"the suggested approach",
"sentiment analysis",
"feature selection",
"a legal repository",
"the lstm + cnn model",
"promising results",
"sentiments",
"records",
"multiple devices",
"practical guidance",
"judicial personnel",
"the regulations",
"various situations",
"canadian"
] |
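The record above reports an LSTM + CNN model for legal-text sentiment. Below is a minimal Keras sketch of that architecture family; the vocabulary size, sequence length, layer widths, and the binary output head are assumptions standing in for whatever configuration the authors used.

```python
# Minimal LSTM + CNN text classifier sketch (Keras); hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, seq_len = 20000, 256                 # hypothetical tokenizer settings
model = models.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, 128),           # token embeddings
    layers.Conv1D(64, 5, activation="relu"),     # local n-gram features (the CNN part)
    layers.MaxPooling1D(2),
    layers.LSTM(64),                             # long-range dependencies (the LSTM part)
    layers.Dense(1, activation="sigmoid"),       # e.g. relevant vs. not relevant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Placing the convolution before the LSTM shortens the sequence the recurrent layer must process, a common efficiency motivation for this pairing.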
A structural reliability analysis method under non-parameterized P-box based on double-loop deep learning models | [
"Hao Hu",
"Minya Deng",
"Weichuan Sun",
"Jinwen Li",
"Huichao Xie",
"Haibo Liu"
] | Structural reliability analysis, when accounting for non-parameterized probability box (P-box) uncertainty, typically entails multiple calls to performance functions and poses significant computational hurdles, largely attributable to its inherently nested double-loop structure. Therefore, this paper proposes a new reliability analysis method tailored for structures with uncertain parameters represented using non-parameterized P-boxes. This method leverages double-loop deep learning models to efficiently calculate both the upper and lower bounds of the failure probability. In the development phase of the double-loop deep learning model, an active learning function is devised that integrates the local prediction uncertainty of the deep learning model, based on the K-fold cross-validation principle, with the proximity of training samples to candidate sample points. Different stopping criteria are formulated at distinct stages of the model construction process. Firstly, within the inner loop, a deep learning model is established to represent the original performance function in relation to the input parameters. Secondly, based on the inner-loop deep learning model for the performance function, an outer-loop deep learning model is established for the auxiliary response function corresponding to the P-box bound curves of the performance function response with respect to standard uniform distribution variables. Thirdly, utilizing the outer-loop deep learning approximate model, the Monte Carlo simulation technique is employed to compute the upper and lower bounds of the structural failure probability. Finally, the effectiveness of the proposed method is validated through the investigation of two numerical examples and a practical engineering problem. The influence of parameters in the active learning function, threshold values for stopping criteria, and the number of sample points on the computational results is deliberated. | 10.1007/s00158-024-03854-3 | a structural reliability analysis method under non-parameterized p-box based on double-loop deep learning models | structural reliability analysis, when accounting for non-parameterized probability box (p-box) uncertainty, typically entails multiple calls to performance functions and poses significant computational hurdles, largely attributable to its inherently nested double-loop structure. therefore, this paper proposes a new reliability analysis method tailored for structures with uncertain parameters represented using non-parameterized p-boxes. this method leverages double-loop deep learning models to efficiently calculate both the upper and lower bounds of the failure probability. in the development phase of the double-loop deep learning model, an active learning function is devised that integrates the local prediction uncertainty of the deep learning model, based on the k-fold cross-validation principle, with the proximity of training samples to candidate sample points. different stopping criteria are formulated at distinct stages of the model construction process. firstly, within the inner loop, a deep learning model is established to represent the original performance function in relation to the input parameters. secondly, based on the inner-loop deep learning model for the performance function, an outer-loop deep learning model is established for the auxiliary response function corresponding to the p-box bound curves of the performance function response with respect to standard uniform distribution variables. thirdly, utilizing the outer-loop deep learning approximate model, the monte carlo simulation technique is employed to compute the upper and lower bounds of the structural failure probability. finally, the effectiveness of the proposed method is validated through the investigation of two numerical examples and a practical engineering problem. the influence of parameters in the active learning function, threshold values for stopping criteria, and the number of sample points on the computational results is deliberated. | [
"structural reliability analysis",
"p-box",
"multiple calls",
"performance functions",
"significant computational hurdles",
"its inherently nested double-loop structure",
"this paper",
"a new reliability analysis method",
"structures",
"uncertain parameters",
"non-parameterized p-boxes",
"this method",
"double-loop deep learning models",
"both the upper and lower bounds",
"the failure probability",
"the development phase",
"the double-loop deep learning model",
"an active learning function",
"that",
"the local prediction uncertainty",
"the deep learning model",
"the k-fold cross-validation principle",
"the proximity",
"training samples",
"sample points",
"different stopping criteria",
"distinct stages",
"the model construction process",
"the inner loop",
"a deep learning model",
"the original performance function",
"relation",
"the input parameters",
"the inner-loop deep learning model",
"the performance function",
"an outer-loop deep learning model",
"the auxiliary response function",
"the p-box bound curves",
"the performance function response",
"respect",
"standard uniform distribution variables",
"approximate model",
"the monte carlo simulation technique",
"the upper and lower bounds",
"the structural failure probability",
"the effectiveness",
"the proposed method",
"the investigation",
"two numerical examples",
"a practical engineering problem",
"the influence",
"parameters",
"the active learning function",
"criteria",
"the number",
"sample points",
"the computational results",
"the k-fold",
"firstly",
"secondly",
"thirdly",
"two"
] |
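For orientation, the brute-force double loop that the surrogate models in the record above are built to avoid looks roughly like the sketch below: an outer loop over epistemic realizations and an inner Monte Carlo loop over aleatory samples. The performance function, the parameter intervals (a parameterized stand-in for a true non-parameterized P-box), and the sample counts are all assumptions.

```python
# Brute-force double-loop Monte Carlo for failure-probability bounds (illustrative only).
import numpy as np

def g(x):                      # hypothetical performance function; failure when g < 0
    return 6.0 - x.sum(axis=1)

rng = np.random.default_rng(1)
pf = []
for _ in range(50):                                # outer loop: epistemic realizations
    mu = rng.uniform([1.5, 1.5], [2.5, 2.5])       # interval-valued means (assumed)
    sd = rng.uniform([0.2, 0.2], [0.5, 0.5])       # interval-valued std devs (assumed)
    x = rng.normal(mu, sd, size=(10_000, 2))       # inner loop: aleatory Monte Carlo
    pf.append((g(x) < 0.0).mean())

print("estimated failure-probability bounds:", min(pf), max(pf))
```

Each outer iteration costs a full inner Monte Carlo run over the performance function, which is exactly the expense the paper's inner- and outer-loop surrogates are meant to remove.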
A comprehensive survey on deep-learning-based visual captioning | [
"Bowen Xin",
"Ning Xu",
"Yingchen Zhai",
"Tingting Zhang",
"Zimu Lu",
"Jing Liu",
"Weizhi Nie",
"Xuanya Li",
"An-An Liu"
] | Generating a description for an image/video is termed the visual captioning task. It requires the model to capture the semantic information of visual content and translate it into syntactically and semantically correct human language. Connecting both research communities of computer vision (CV) and natural language processing (NLP), visual captioning presents the big challenge of bridging the gap between low-level visual features and high-level language information. Thanks to recent advances in deep learning, which are widely applied to the fields of visual and language modeling, visual captioning methods based on deep neural networks have demonstrated state-of-the-art performance. In this paper, we aim to present a comprehensive survey of existing deep learning-based visual captioning methods. Relying on the adopted mechanism and technique to narrow the semantic gap, we divide visual captioning methods into various groups. Representative categories in each group are summarized, and their strengths and limitations are discussed. The quantitative evaluations of state-of-the-art approaches on popular benchmark datasets are also presented and analyzed. Furthermore, we provide discussions on future research directions. | 10.1007/s00530-023-01175-x | a comprehensive survey on deep-learning-based visual captioning | generating a description for an image/video is termed the visual captioning task. it requires the model to capture the semantic information of visual content and translate it into syntactically and semantically correct human language. connecting both research communities of computer vision (cv) and natural language processing (nlp), visual captioning presents the big challenge of bridging the gap between low-level visual features and high-level language information. thanks to recent advances in deep learning, which are widely applied to the fields of visual and language modeling, visual captioning methods based on deep neural networks have demonstrated state-of-the-art performance. in this paper, we aim to present a comprehensive survey of existing deep learning-based visual captioning methods. relying on the adopted mechanism and technique to narrow the semantic gap, we divide visual captioning methods into various groups. representative categories in each group are summarized, and their strengths and limitations are discussed. the quantitative evaluations of state-of-the-art approaches on popular benchmark datasets are also presented and analyzed. furthermore, we provide discussions on future research directions. | [
"a description",
"an image/video",
"the visual captioning task",
"it",
"the model",
"the semantic information",
"visual content",
"them",
"syntactically and semantically human language",
"both research communities",
"computer vision",
"cv",
"natural language processing",
"nlp",
"visual captioning",
"the big challenge",
"the gap",
"low-level visual features",
"high-level language information",
"recent advances",
"deep learning",
"which",
"the fields",
"visual and language modeling",
"the visual captioning methods",
"the deep neural networks",
"the-art",
"this paper",
"we",
"a comprehensive survey",
"existing deep learning-based visual captioning methods",
"the adopted mechanism",
"technique",
"the semantic gap",
"we",
"visual captioning methods",
"various groups",
"representative categories",
"each group",
"their strengths",
"limitations",
"the quantitative evaluations",
"the-art",
"popular benchmark datasets",
"we",
"the discussions",
"future research directions"
] |
Thermoplastic waste segregation classification system using deep learning techniques | [
"M. Monica Subashini",
"R. S. Vignesh"
] | This research proposes a deep learning-based system, named deep CNN architecture, for the automated classification of the plastic resin in plastic waste. The system aims to detect and recognize objects such as drinking water bottles, detergent bottles, squeezable bottles, and plastic plates, and segregate them into PET, PE-HD, PE-LD, and other resin categories. The process involves capturing input images through a camera and using deep learning or traditional algorithms to detect and recognize the objects by comparing them with a trained database containing labeled objects. Unrecognized objects are dynamically trained, labeled, and updated in the database. The proposed system is implemented using Python, a versatile open-source programming language. Python’s functional and aspect-oriented programming paradigms are leveraged to develop the models. The performance of the proposed architecture is evaluated against existing works, demonstrating a classification accuracy of 92.66% according to experimental results. | 10.1007/s11042-023-16237-5 | thermoplastic waste segregation classification system using deep learning techniques | this research proposes a deep learning-based system, named deep cnn architecture, for the automated classification of the plastic resin in plastic waste. the system aims to detect and recognize objects such as drinking water bottles, detergent bottles, squeezable bottles, and plastic plates, and segregate them into pet, pe-hd, pe-ld, and other resin categories. the process involves capturing input images through a camera and using deep learning or traditional algorithms to detect and recognize the objects by comparing them with a trained database containing labeled objects. unrecognized objects are dynamically trained, labeled, and updated in the database. the proposed system is implemented using python, a versatile open-source programming language. python’s functional and aspect-oriented programming paradigms are leveraged to develop the models. the performance of the proposed architecture is evaluated against existing works, demonstrating a classification accuracy of 92.66% according to experimental results. | [
"this research",
"a deep learning-based system",
"deep cnn architecture",
"the automated classification",
"the plastic resin",
"plastic waste",
"the system",
"objects",
"drinking water bottles",
"detergent bottles",
"squeezable bottles",
"plastic plates",
"them",
"other resin categories",
"the process",
"input images",
"a camera",
"deep learning",
"traditional algorithms",
"the objects",
"them",
"a trained database",
"labeled objects",
"unrecognized objects",
"the database",
"the proposed system",
"python",
"python’s functional and aspect-oriented programming paradigms",
"the models",
"the performance",
"the proposed architecture",
"existing works",
"a classification accuracy",
"92.66%",
"experimental results",
"cnn",
"92.66%"
] |
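The record above classifies plastic waste into four resin categories with a deep CNN. A small convolutional sketch of that kind of classifier follows; the image size, layer sizes, and depth are illustrative assumptions, not the paper's deep CNN architecture.

```python
# Small CNN sketch for four-way resin classification (PET, PE-HD, PE-LD, other).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),   # PET, PE-HD, PE-LD, other
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```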
Multi-step framework for glaucoma diagnosis in retinal fundus images using deep learning | [
"Sanli Yi",
"Lingxiang Zhou"
] | Glaucoma is one of the most common causes of blindness in the world. Screening glaucoma from retinal fundus images based on deep learning is a common method at present. In the diagnosis of glaucoma based on deep learning, the blood vessels within the optic disc interfere with the diagnosis, and there is also some pathological information outside the optic disc in fundus images. Therefore, integrating the original fundus image with the vessel-removed optic disc image can improve diagnostic efficiency. In this paper, we propose a novel multi-step framework named MSGC-CNN that can better diagnose glaucoma. In the framework, (1) we combine glaucoma pathological knowledge with a deep learning model, fuse the features of the original fundus image and the optic disc region in which the interference of blood vessels is specifically removed by U-Net, and make glaucoma diagnosis based on the fused features. (2) Aiming at the characteristics of glaucoma fundus images, such as small amount of data, high resolution, and rich feature information, we design a new feature extraction network RA-ResNet and combine it with transfer learning. In order to verify our method, we conduct binary classification experiments on three public datasets, Drishti-GS, RIM-ONE-R3, and ACRIMA, with accuracies of 92.01%, 93.75%, and 97.87%, respectively. The results demonstrate a significant improvement over earlier results. | 10.1007/s11517-024-03172-2 | multi-step framework for glaucoma diagnosis in retinal fundus images using deep learning | glaucoma is one of the most common causes of blindness in the world. screening glaucoma from retinal fundus images based on deep learning is a common method at present. in the diagnosis of glaucoma based on deep learning, the blood vessels within the optic disc interfere with the diagnosis, and there is also some pathological information outside the optic disc in fundus images. therefore, integrating the original fundus image with the vessel-removed optic disc image can improve diagnostic efficiency. in this paper, we propose a novel multi-step framework named msgc-cnn that can better diagnose glaucoma. in the framework, (1) we combine glaucoma pathological knowledge with a deep learning model, fuse the features of the original fundus image and the optic disc region in which the interference of blood vessels is specifically removed by u-net, and make glaucoma diagnosis based on the fused features. (2) aiming at the characteristics of glaucoma fundus images, such as small amount of data, high resolution, and rich feature information, we design a new feature extraction network ra-resnet and combine it with transfer learning. in order to verify our method, we conduct binary classification experiments on three public datasets, drishti-gs, rim-one-r3, and acrima, with accuracies of 92.01%, 93.75%, and 97.87%, respectively. the results demonstrate a significant improvement over earlier results. | [
"glaucoma",
"the most common causes",
"blindness",
"the world",
"glaucoma",
"retinal fundus images",
"deep learning",
"a common method",
"present",
"the diagnosis",
"glaucoma",
"deep learning",
"the blood vessels",
"the optic disc",
"the diagnosis",
"some pathological information",
"the optic disc",
"fundus images",
"the original fundus image",
"the vessel-removed optic disc image",
"diagnostic efficiency",
"this paper",
"we",
"a novel multi-step framework",
"msgc-cnn",
"that",
"glaucoma",
"the framework",
"we",
"glaucoma pathological knowledge",
"deep learning model",
"the features",
"original fundus image",
"optic disc region",
"which",
"the interference",
"blood vessel",
"u",
"-",
"net",
"glaucoma diagnosis",
"the fused features",
"the characteristics",
"glaucoma fundus images",
"small amount",
"data",
"high resolution",
"rich feature information",
"we",
"a new feature extraction network",
"ra-resnet",
"it",
"transfer learning",
"order",
"our method",
"we",
"binary classification experiments",
"three public datasets",
"drishti-gs",
"rim-one-r3",
"acrima",
"accuracy",
"92.01%",
"93.75%",
"97.87%",
"the results",
"a significant improvement",
"glaucoma",
"msgc-cnn",
"1",
"2",
"three",
"one-r3",
"92.01%",
"93.75%",
"97.87%"
] |
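The record above fuses features from the full fundus image with features from a vessel-removed optic-disc crop before classification. A minimal two-branch fusion sketch follows; the branch depth, widths, and input sizes are assumptions, and the paper's RA-ResNet backbone is not reproduced here.

```python
# Sketch of two-branch feature fusion: one branch sees the full fundus image,
# the other the vessel-removed optic-disc crop; pooled features are concatenated.
import tensorflow as tf
from tensorflow.keras import layers, models

def branch(name):
    inp = layers.Input((224, 224, 3), name=f"{name}_in")
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D(4)(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return models.Model(inp, x, name=name)

fundus = layers.Input((224, 224, 3), name="fundus")
disc = layers.Input((224, 224, 3), name="vessel_removed_disc")
fused = layers.Concatenate()([branch("full_branch")(fundus),
                              branch("disc_branch")(disc)])
output = layers.Dense(1, activation="sigmoid")(fused)   # glaucoma vs. normal
model = models.Model([fundus, disc], output)
```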
Deep learning-based power usage effectiveness optimization for IoT-enabled data center | [
"Yu Sun",
"Yanyi Wang",
"Gaoxiang Jiang",
"Bo Cheng",
"Haibo Zhou"
] | The proliferation of data centers is driving increased energy consumption, leading to environmentally unacceptable carbon emissions. As the use of Internet-of-Things (IoT) techniques for extensive data collection in data centers continues to grow, deep learning-based solutions have emerged as attractive alternatives to suboptimal traditional methods. However, existing approaches suffer from unsatisfactory performance, unrealistic assumptions, and an inability to address practical data center optimization. In this paper, we focus on power usage effectiveness (PUE) optimization in IoT-enabled data centers using deep learning algorithms. We first develop a deep learning-based PUE optimization framework tailored to IoT-enabled data centers. We then formulate the general PUE optimization problem, simplifying and specifying it for the minimization of long-term energy consumption in chiller cooling systems. Additionally, we introduce a transformer-based prediction network designed for energy consumption forecasting. Subsequently, we transform this formulation into a Markov decision process (MDP) and present the branching double dueling deep Q-network. This approach effectively tackles the challenges posed by enormous action spaces within MDP by branching actions into sub-actions. Extensive experiments conducted on real-world datasets demonstrate the exceptional performance of our algorithms, excelling in prediction precision, optimization convergence, and optimality while effectively managing a substantial number of actions on the order of \(10^{13}\). | 10.1007/s12083-024-01663-5 | deep learning-based power usage effectiveness optimization for iot-enabled data center | the proliferation of data centers is driving increased energy consumption, leading to environmentally unacceptable carbon emissions. as the use of internet-of-things (iot) techniques for extensive data collection in data centers continues to grow, deep learning-based solutions have emerged as attractive alternatives to suboptimal traditional methods. however, existing approaches suffer from unsatisfactory performance, unrealistic assumptions, and an inability to address practical data center optimization. in this paper, we focus on power usage effectiveness (pue) optimization in iot-enabled data centers using deep learning algorithms. we first develop a deep learning-based pue optimization framework tailored to iot-enabled data centers. we then formulate the general pue optimization problem, simplifying and specifying it for the minimization of long-term energy consumption in chiller cooling systems. additionally, we introduce a transformer-based prediction network designed for energy consumption forecasting. subsequently, we transform this formulation into a markov decision process (mdp) and present the branching double dueling deep q-network. this approach effectively tackles the challenges posed by enormous action spaces within mdp by branching actions into sub-actions. extensive experiments conducted on real-world datasets demonstrate the exceptional performance of our algorithms, excelling in prediction precision, optimization convergence, and optimality while effectively managing a substantial number of actions on the order of \(10^{13}\). | [
"the proliferation",
"data centers",
"increased energy consumption",
"environmentally unacceptable carbon emissions",
"the use",
"things",
"iot",
"extensive data collection",
"data centers",
"deep learning-based solutions",
"attractive alternatives",
"suboptimal traditional methods",
"existing approaches",
"unsatisfactory performance",
"unrealistic assumptions",
"an inability",
"practical data center optimization",
"this paper",
"we",
"power usage effectiveness",
"(pue) optimization",
"iot-enabled data centers",
"deep learning algorithms",
"we",
"a deep learning-based pue optimization framework",
"iot-enabled data centers",
"we",
"the general pue optimization problem",
"it",
"the minimization",
"long-term energy consumption",
"chiller cooling systems",
"we",
"a transformer-based prediction network",
"energy consumption forecasting",
"we",
"this formulation",
"a markov decision process",
"mdp",
"the branching",
"deep q-network",
"this approach",
"the challenges",
"enormous action spaces",
"mdp",
"actions",
"sub",
"-",
"actions",
"extensive experiments",
"real-world datasets",
"the exceptional performance",
"our algorithms",
"prediction precision",
"optimization convergence",
"optimality",
"a substantial number",
"actions",
"the order",
"\\(10^{13}\\",
"first"
] |
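The branching double dueling deep Q-network in the record above factorizes a huge joint action space into per-branch sub-actions. A PyTorch sketch of the core network idea follows: a shared trunk, one state-value head, and one advantage head per action branch; all dimensions are illustrative assumptions.

```python
# Sketch of an action-branching dueling Q-network (illustrative dimensions).
import torch
import torch.nn as nn

class BranchingDuelingQNet(nn.Module):
    def __init__(self, state_dim, n_branches, actions_per_branch):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                                   nn.Linear(256, 256), nn.ReLU())
        self.value = nn.Linear(256, 1)                      # shared state value
        self.advantages = nn.ModuleList(
            [nn.Linear(256, actions_per_branch) for _ in range(n_branches)])

    def forward(self, s):
        h = self.trunk(s)
        v = self.value(h)                                   # (batch, 1)
        qs = []
        for adv_head in self.advantages:
            a = adv_head(h)
            qs.append(v + a - a.mean(dim=1, keepdim=True))  # dueling aggregation
        return torch.stack(qs, dim=1)                       # (batch, branches, actions)

q = BranchingDuelingQNet(state_dim=16, n_branches=4, actions_per_branch=8)
print(q(torch.randn(2, 16)).shape)   # torch.Size([2, 4, 8])
```

With n branches of k sub-actions each, the network outputs n·k values instead of enumerating all k^n joint actions, which is what keeps action spaces on the order of 10^13 tractable.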
Question classification task based on deep learning models with self-attention mechanism | [
"Subhash Mondal",
"Manas Barman",
"Amitava Nag"
] | Question classification (QC) is a process that involves classifying questions based on their type to enable systems to provide accurate responses by matching the question type with relevant information. To understand and respond to natural language questions posed by humans, machine learning models or systems must comprehend the type of information requested, which can often be inferred from the structure and wording of the question. The high dimensionality and sparse nature of text data lead to challenges for text classification. These tasks can be improved using deep learning (DL) approaches to process complex patterns and features within input data. By training on large amounts of labeled data, deep learning algorithms can automatically extract relevant features and representations from text, resulting in more accurate and robust classification. This study utilizes a dataset comprising 5452 instances of questions and six output labels and uses two different word embedding techniques, like GloVe and Word2Vec, tested on the dataset using three deep learning models, LSTM, BiLSTM, and GRU, followed by a convolution layer. Additionally, a self-attention layer is included, which helps the model to focus on more relevant information when making predictions. Finally, an analytical discussion of the proposed models and their performance results provide insight into how GloVe and Word2Vec perform on the above-mentioned models. The GloVe embedding outperforms by achieving 97.68% accuracy and a moderate loss of 16.98 with the GRU model. | 10.1007/s11042-024-19239-z | question classification task based on deep learning models with self-attention mechanism | question classification (qc) is a process that involves classifying questions based on their type to enable systems to provide accurate responses by matching the question type with relevant information. to understand and respond to natural language questions posed by humans, machine learning models or systems must comprehend the type of information requested, which can often be inferred from the structure and wording of the question. the high dimensionality and sparse nature of text data lead to challenges for text classification. these tasks can be improved using deep learning (dl) approaches to process complex patterns and features within input data. by training on large amounts of labeled data, deep learning algorithms can automatically extract relevant features and representations from text, resulting in more accurate and robust classification. this study utilizes a dataset comprising 5452 instances of questions and six output labels and uses two different word embedding techniques, like glove and word2vec, tested on the dataset using three deep learning models, lstm, bilstm, and gru, followed by a convolution layer. additionally, a self-attention layer is included, which helps the model to focus on more relevant information when making predictions. finally, an analytical discussion of the proposed models and their performance results provide insight into how glove and word2vec perform on the above-mentioned models. the glove embedding outperforms by achieving 97.68% accuracy and a moderate loss of 16.98 with the gru model. | [
"question classification",
"qc",
"a process",
"that",
"questions",
"their type",
"systems",
"accurate responses",
"the question type",
"relevant information",
"natural language questions",
"humans",
"machine learning models",
"systems",
"the type",
"information",
"which",
"the structure",
"wording",
"the question",
"the high dimensionality",
"sparse nature",
"text data",
"challenges",
"text classification",
"these tasks",
"dl",
"complex patterns",
"features",
"input data",
"training",
"large amounts",
"labeled data",
"deep learning algorithms",
"relevant features",
"representations",
"text",
"more accurate and robust classification",
"this study",
"a dataset",
"5452 instances",
"questions",
"six output labels",
"two different word",
"techniques",
"glove",
"word2vec",
"the dataset",
"three deep learning models",
"lstm",
"bilstm",
"gru",
"a convolution layer",
"a self-attention layer",
"which",
"the model",
"more relevant information",
"predictions",
"an analytical discussion",
"the proposed models",
"their performance results",
"insight",
"how glove",
"the above-mentioned models",
"the glove",
"outperforms",
"97.68% accuracy",
"a moderate loss",
"the gru model",
"5452",
"six",
"two",
"three",
"97.68%",
"16.98"
] |
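The record above feeds pretrained word embeddings through a recurrent layer, a convolution layer, and a self-attention layer for six-way question classification. A Keras sketch of that pipeline (the GRU variant) follows; vocabulary size, sequence length, and head counts are assumptions, and the embedding matrix would in practice be initialized from GloVe or Word2Vec rather than learned from scratch.

```python
# Sketch of a GRU + convolution + self-attention question classifier.
import tensorflow as tf
from tensorflow.keras import layers, models

vocab, seq_len, emb_dim = 30000, 40, 100     # emb_dim 100 matches common GloVe vectors
inp = layers.Input((seq_len,))
x = layers.Embedding(vocab, emb_dim)(inp)    # would be loaded from GloVe/Word2Vec
x = layers.GRU(128, return_sequences=True)(x)
x = layers.Conv1D(64, 3, padding="same", activation="relu")(x)
x = layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)   # self-attention
x = layers.GlobalAveragePooling1D()(x)
out = layers.Dense(6, activation="softmax")(x)   # six question-type labels
model = models.Model(inp, out)
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
```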
A new content-aware image resizing based on Rényi entropy and deep learning | [
"Jila Ayubi",
"Mehdi Chehel Amirani",
"Morteza Valizadeh"
] | One of the most popular techniques for changing the purpose of an image or resizing a digital image with content awareness is the seam-carving method. The performance of image resizing algorithms based on seam machining shows that these algorithms are highly dependent on the extraction of importance map techniques and the detection of salient objects. So far, various algorithms have been proposed to extract the importance map. In this paper, a new method based on Rényi entropy is proposed to extract the importance map. Also, a deep learning network has been used to detect salient objects. The simulator results showed that combining Rényi’s importance map with a deep network of salient object detection performed better than classical seam-carving and other extended seam-carving algorithms based on deep learning. | 10.1007/s00521-024-09517-0 | a new content-aware image resizing based on rényi entropy and deep learning | one of the most popular techniques for changing the purpose of an image or resizing a digital image with content awareness is the seam-carving method. the performance of image resizing algorithms based on seam machining shows that these algorithms are highly dependent on the extraction of importance map techniques and the detection of salient objects. so far, various algorithms have been proposed to extract the importance map. in this paper, a new method based on rényi entropy is proposed to extract the importance map. also, a deep learning network has been used to detect salient objects. the simulator results showed that combining rényi’s importance map with a deep network of salient object detection performed better than classical seam-carving and other extended seam-carving algorithms based on deep learning. | [
"the most popular techniques",
"the purpose",
"an image",
"a digital image",
"content awareness",
"the seam-carving method",
"the performance",
"image resizing algorithms",
"seam machining",
"these algorithms",
"the extraction",
"importance map techniques",
"the detection",
"salient objects",
"various algorithms",
"the importance map",
"this paper",
"a new method",
"rényi entropy",
"the importance map",
"a deep learning network",
"salient objects",
"the simulator results",
"rényi’s importance map",
"a deep network",
"salient object detection",
"classical seam-carving",
"other extended seam-carving algorithms",
"deep learning",
"one"
] |
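The record above builds its importance map from Rényi entropy, which for order α ≠ 1 is H_α(p) = (1/(1−α)) · log Σᵢ pᵢ^α over a local intensity histogram. A sliding-window sketch of such a map follows; the window size, α = 2, bin count, and normalized-intensity assumption are illustrative choices, not the paper's settings.

```python
# Sketch: sliding-window Rényi entropy as a content-importance map.
import numpy as np

def renyi_entropy(patch, alpha=2.0, bins=32):
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def importance_map(img, win=9, alpha=2.0):
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):          # naive double loop; fine for a sketch
        for j in range(img.shape[1]):
            out[i, j] = renyi_entropy(padded[i:i + win, j:j + win], alpha)
    return out

img = np.random.rand(64, 64)               # grayscale image with intensities in [0, 1]
imp = importance_map(img)                  # higher entropy = more important content
```

In seam carving, such a map replaces the usual gradient-energy function, so seams are routed through low-entropy (visually uninformative) regions.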
A deep learning framework for students' academic performance analysis | [
"Sumati Pathak",
"Hiral Raja",
"Sumit Srivastava",
"Neelam Sahu",
"Rohit Raja",
"Amit Kumar Dewangan"
] | Students Performance (SP) analysis is regarded as one of the most important steps in the educational system for supporting students' academic success and the institutions' overall outcomes. Nevertheless, it is tremendously challenging due to the numerous details that many students have. Data Mining (DM) is the most widely used approach for SP prediction that extracts imperative information from a bigger raw data set. Even though there are various DM-centered performance prediction approaches, they all have low accuracy and high training time and don't produce the desired output. This paper proposes a hybrid deep learning framework using Deer Hunting Optimization based Deep Learning Neural Networks (DH-DLNN). A self-structured questionnaire covers all aspects of using information and communication technology, including increased access, knowledge building, learning, performance, motivation, classroom management and interaction, collaborative learning, and satisfaction. Data Cleaning and data conversion preprocess the dataset. The prediction of the student's level is then performed by extracting imperative features from the preprocessed data, followed by feature ranking using entropy calculations. The obtained entropy values are inputted into the DH-DLNN, which predicts the students' academic performance. Finally, the accuracy of the proposed system is evaluated using K-fold cross-validation. The experiment results revealed that DH-DLNN outperforms the other classification approaches with an accuracy of 96.33%. | 10.1007/s40012-023-00388-9 | a deep learning framework for students' academic performance analysis | students performance (sp) analysis is regarded as one of the most important steps in the educational system for supporting students' academic success and the institutions' overall outcomes. nevertheless, it is tremendously challenging due to the numerous details that many students have. data mining (dm) is the most widely used approach for sp prediction that extracts imperative information from a bigger raw data set. even though there are various dm-centered performance prediction approaches, they all have low accuracy and high training time and don't produce the desired output. this paper proposes a hybrid deep learning framework using deer hunting optimization based deep learning neural networks (dh-dlnn). a self-structured questionnaire covers all aspects of using information and communication technology, including increased access, knowledge building, learning, performance, motivation, classroom management and interaction, collaborative learning, and satisfaction. data cleaning and data conversion preprocess the dataset. the prediction of the student's level is then performed by extracting imperative features from the preprocessed data, followed by feature ranking using entropy calculations. the obtained entropy values are inputted into the dh-dlnn, which predicts the students' academic performance. finally, the accuracy of the proposed system is evaluated using k-fold cross-validation. the experiment results revealed that dh-dlnn outperforms the other classification approaches with an accuracy of 96.33%. | [
"students",
"performance (sp) analysis",
"the most important steps",
"the educational system",
"students' academic success",
"the institutions' overall outcomes",
"it",
"the numerous details",
"many students",
"data mining",
"(dm",
"the most widely used approach",
"sp prediction",
"that",
"imperative information",
"a bigger raw data set",
"various dm-centered performance prediction approaches",
"they",
"all",
"low accuracy",
"high training time",
"the desired output",
"this paper",
"a hybrid deep learning framework",
"deer hunting optimization",
"neural networks",
"dh-dlnn",
"a self-structured questionnaire",
"all aspects",
"information and communication technology",
"increased access",
"knowledge building",
"learning",
"performance",
"motivation",
"classroom management",
"interaction",
"collaborative learning",
"satisfaction",
"data cleaning",
"data conversion",
"the dataset",
"the prediction",
"the student's level",
"imperative features",
"the preprocessed data",
"feature",
"entropy calculations",
"the obtained entropy values",
"the dh-dlnn",
"which",
"the students' academic performance",
"the accuracy",
"the proposed system",
"k",
"fold cross",
"-",
"validation",
"the experiment results",
"dh-dlnn",
"the other classification approaches",
"an accuracy",
"96.33%",
"one",
"96.33%"
] |
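The record above ranks features by entropy calculations before feeding them to DH-DLNN. A common entropy-based ranking is information gain, sketched below as an illustrative stand-in; the binning scheme and synthetic questionnaire data are assumptions.

```python
# Sketch of entropy-based feature ranking via information gain.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(feature, labels, bins=4):
    edges = np.histogram_bin_edges(feature, bins)[1:-1]   # interior bin edges
    binned = np.digitize(feature, edges)
    cond = 0.0
    for v in np.unique(binned):                           # conditional entropy
        mask = binned == v
        cond += mask.mean() * entropy(labels[mask])
    return entropy(labels) - cond

X = np.random.rand(300, 8)                                # hypothetical questionnaire features
y = (X[:, 2] + 0.1 * np.random.rand(300) > 0.6).astype(int)
ranking = np.argsort([-information_gain(X[:, j], y) for j in range(X.shape[1])])
print(ranking)                                            # feature 2 should rank near the top
```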
Deep multi-metric training: the need of multi-metric curve evaluation to avoid weak learning | [
"Michail Mamalakis",
"Abhirup Banerjee",
"Surajit Ray",
"Craig Wilkie",
"Richard H. Clayton",
"Andrew J. Swift",
"George Panoutsos",
"Bart Vorselaars"
] | The development and application of artificial intelligence-based computer vision systems in medicine, environment, and industry are playing an increasingly prominent role. Hence, the need for optimal and efficient hyperparameter tuning strategies is more than crucial to deliver the highest performance of the deep learning networks in large and demanding datasets. In our study, we have developed and evaluated a new training methodology named deep multi-metric training (DMMT) for enhanced training performance. The DMMT delivers a state of robust learning for deep networks using a new important criterion of multi-metric performance evaluation. We have tested the DMMT methodology in multi-class (three, four, and ten), multi-vendors (different X-ray imaging devices), and multi-size (large, medium, and small) datasets. The validity of the DMMT methodology has been tested in three different classification problems: (i) medical disease classification, (ii) environmental classification, and (iii) ecological classification. For disease classification, we have used two large COVID-19 chest X-rays datasets, namely the BIMCV COVID-19+ and Sheffield hospital datasets. The environmental application is related to the classification of weather images in cloudy, rainy, shine or sunrise conditions. The ecological classification task involves a classification of three animal species (cat, dog, wild) and a classification of ten animals and transportation vehicles categories (CIFAR-10). We have used state-of-the-art networks of DenseNet-121, ResNet-50, VGG-16, VGG-19, and DenResCov-19 (DenRes-131) to verify that our novel methodology is applicable in a variety of different deep learning networks. To the best of our knowledge, this is the first work that proposes a training methodology to deliver robust learning, over a variety of deep learning networks and multi-field classification problems. | 10.1007/s00521-024-10182-6 | deep multi-metric training: the need of multi-metric curve evaluation to avoid weak learning | the development and application of artificial intelligence-based computer vision systems in medicine, environment, and industry are playing an increasingly prominent role. hence, the need for optimal and efficient hyperparameter tuning strategies is more than crucial to deliver the highest performance of the deep learning networks in large and demanding datasets. in our study, we have developed and evaluated a new training methodology named deep multi-metric training (dmmt) for enhanced training performance. the dmmt delivers a state of robust learning for deep networks using a new important criterion of multi-metric performance evaluation. we have tested the dmmt methodology in multi-class (three, four, and ten), multi-vendors (different x-ray imaging devices), and multi-size (large, medium, and small) datasets. the validity of the dmmt methodology has been tested in three different classification problems: (i) medical disease classification, (ii) environmental classification, and (iii) ecological classification. for disease classification, we have used two large covid-19 chest x-rays datasets, namely the bimcv covid-19+ and sheffield hospital datasets. the environmental application is related to the classification of weather images in cloudy, rainy, shine or sunrise conditions. the ecological classification task involves a classification of three animal species (cat, dog, wild) and a classification of ten animals and transportation vehicles categories (cifar-10). we have used state-of-the-art networks of densenet-121, resnet-50, vgg-16, vgg-19, and denrescov-19 (denres-131) to verify that our novel methodology is applicable in a variety of different deep learning networks. to the best of our knowledge, this is the first work that proposes a training methodology to deliver robust learning, over a variety of deep learning networks and multi-field classification problems. | [
"the development",
"application",
"artificial intelligence-based computer vision systems",
"medicine",
"environment",
"industry",
"an increasingly prominent role",
"the need",
"optimal and efficient hyperparameter tuning strategies",
"the highest performance",
"the deep learning networks",
"large and demanding datasets",
"our study",
"we",
"a new training methodology",
"deep multi-metric training",
"dmmt",
"enhanced training performance",
"the dmmt",
"a state",
"robust learning",
"deep networks",
"a new important criterion",
"multi-metric performance evaluation",
"we",
"the dmmt methodology",
"-",
"-",
"vendors (different x-ray imaging devices",
"multi-size (large, medium, and small) datasets",
"the validity",
"the dmmt methodology",
"three different classification problems",
"(i) medical disease classification",
"(ii) environmental classification",
"(iii) ecological classification",
"disease classification",
"we",
"two large covid-19 chest x-rays datasets",
"namely the bimcv covid-19",
"hospital datasets",
"the environmental application",
"the classification",
"weather images",
"shine",
"sunrise conditions",
"the ecological classification task",
"a classification",
"three animal species",
"a classification",
"ten animals",
"transportation vehicles categories",
"cifar-10",
"we",
"the-art",
"resnet-50",
"vgg-16",
"vgg-19",
"denrescov-19",
"(denres-131",
"our novel methodology",
"a variety",
"different deep learning networks",
"our knowledge",
"this",
"the first work",
"that",
"a training methodology",
"robust learning",
"a variety",
"deep learning networks",
"multi-field classification problems",
"three",
"four",
"ten",
"three",
"two",
"covid-19",
"covid-19+",
"three",
"ten",
"cifar-10",
"densenet-121, resnet-50",
"vgg-16",
"vgg-19",
"denrescov-19",
"first"
] |
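The record above selects models by multi-metric performance evaluation rather than a single score. The sketch below illustrates one way such a criterion can work, keeping a checkpoint only if it is non-inferior on every tracked metric; the metric set, tolerance, and synthetic data are assumptions, and the paper's exact DMMT criterion is not reproduced.

```python
# Sketch of a multi-metric model-selection criterion (illustrative, not DMMT itself).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def multi_metric(y_true, y_prob, threshold=0.5):
    y_pred = (y_prob >= threshold).astype(int)
    return {"acc": accuracy_score(y_true, y_pred),
            "f1": f1_score(y_true, y_pred),
            "auc": roc_auc_score(y_true, y_prob)}

def improves(candidate, best, eps=1e-4):
    """Accept only if no tracked metric degrades beyond the tolerance."""
    return all(candidate[k] >= best[k] - eps for k in best)

y_true = np.random.randint(0, 2, 500)
y_prob = np.clip(y_true * 0.6 + np.random.rand(500) * 0.5, 0.0, 1.0)
best = multi_metric(y_true, y_prob)
print(best, improves(best, best))
```

The point of such a rule is to avoid "weak learning" in which one headline metric improves while others silently regress.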
Deep Reinforcement Learning Model for Stock Portfolio Management Based on Data Fusion | [
"Haifeng Li",
"Mo Hai"
] | Deep reinforcement learning (DRL) can be used to extract deep features that can be incorporated into reinforcement learning systems to enable improved decision-making; DRL can therefore also be used for managing stock portfolios. Traditional methods cannot fully exploit the advantages of DRL because they are generally based on real-time stock quotes, which do not have sufficient features for making comprehensive decisions. In this study, in addition to stock quotes, we introduced stock financial indices as additional stock features. Moreover, we used Markowitz mean-variance theory for determining stock correlation. A three-agent deep reinforcement learning model called Collaborative Multi-agent reinforcement learning-based stock Portfolio management System (CMPS) was designed and trained based on fused data. In CMPS, each agent was implemented with a deep Q-network to obtain the features of time-series stock data, and a self-attention network was used to combine the output of each agent. We added a risk-free asset strategy to CMPS to prevent risks and referred to this model as CMPS-Risk Free (CMPS-RF). We conducted experiments under different market conditions using the stock data of China Shanghai Stock Exchange 50 and compared our model with the state-of-the-art models. The results showed that CMPS could obtain better profits than the compared benchmark models, and CMPS-RF was able to accurately recognize the market risk and achieved the best Sharpe and Calmar ratios. The study findings are expected to aid in the development of an efficient investment-trading strategy. | 10.1007/s11063-024-11582-4 | deep reinforcement learning model for stock portfolio management based on data fusion | deep reinforcement learning (drl) can be used to extract deep features that can be incorporated into reinforcement learning systems to enable improved decision-making; drl can therefore also be used for managing stock portfolios. traditional methods cannot fully exploit the advantages of drl because they are generally based on real-time stock quotes, which do not have sufficient features for making comprehensive decisions. in this study, in addition to stock quotes, we introduced stock financial indices as additional stock features. moreover, we used markowitz mean-variance theory for determining stock correlation. a three-agent deep reinforcement learning model called collaborative multi-agent reinforcement learning-based stock portfolio management system (cmps) was designed and trained based on fused data. in cmps, each agent was implemented with a deep q-network to obtain the features of time-series stock data, and a self-attention network was used to combine the output of each agent. we added a risk-free asset strategy to cmps to prevent risks and referred to this model as cmps-risk free (cmps-rf). we conducted experiments under different market conditions using the stock data of china shanghai stock exchange 50 and compared our model with the state-of-the-art models. the results showed that cmps could obtain better profits than the compared benchmark models, and cmps-rf was able to accurately recognize the market risk and achieved the best sharpe and calmar ratios. the study findings are expected to aid in the development of an efficient investment-trading strategy. | [
"deep reinforcement learning",
"drl",
"deep features",
"that",
"reinforcement learning systems",
"improved decision-making",
"drl",
"stock portfolios",
"traditional methods",
"the advantages",
"drl",
"they",
"real-time stock quotes",
"which",
"sufficient features",
"comprehensive decisions",
"this study",
"addition",
"stock quotes",
"we",
"stock financial indices",
"additional stock features",
"we",
"markowitz mean-variance theory",
"stock correlation",
"a three-agent deep reinforcement learning model",
"collaborative multi-agent reinforcement learning-based stock portfolio management system",
"cmps",
"fused data",
"cmps",
"each agent",
"a deep q-network",
"the features",
"time-series stock data",
"a self-attention network",
"the output",
"each agent",
"we",
"a risk-free asset strategy",
"cmps",
"risks",
"this model",
"cmps-risk free (cmps",
"we",
"experiments",
"different market conditions",
"the stock data",
"china shanghai stock exchange",
"our model",
"the-art",
"the results",
"cmps",
"better profits",
"the compared benchmark models",
"cmps-rf",
"the market risk",
"the best sharpe",
"calmar ratios",
"the study findings",
"the development",
"an efficient investment-trading strategy",
"three",
"china shanghai stock exchange",
"50",
"sharpe"
] |
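The record above uses Markowitz mean-variance theory to determine stock correlation. The sketch below computes the standard mean-variance quantities (log returns, covariance, and global minimum-variance weights from w ∝ Σ⁻¹1); the price series is synthetic and the weighting rule is a textbook illustration, not the CMPS agents' policy.

```python
# Sketch of Markowitz mean-variance quantities for a small stock universe.
import numpy as np

prices = np.cumprod(1 + 0.01 * np.random.randn(250, 5), axis=0)  # 5 hypothetical stocks
rets = np.diff(np.log(prices), axis=0)          # daily log returns
mu = rets.mean(axis=0)                          # expected returns
cov = np.cov(rets, rowvar=False)                # covariance / correlation structure

ones = np.ones(len(mu))
w = np.linalg.solve(cov, ones)
w /= w.sum()                                    # global minimum-variance portfolio
print("weights:", w)
print("expected return:", w @ mu, "volatility:", np.sqrt(w @ cov @ w))
```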
Prediction of non-muscle invasive bladder cancer recurrence using deep learning of pathology image | [
"Guang-Yue Wang",
"Jing-Fei Zhu",
"Qi-Chao Wang",
"Jia-Xin Qin",
"Xin-Lei Wang",
"Xing Liu",
"Xin-Yu Liu",
"Jun-Zhi Chen",
"Jie-Fei Zhu",
"Shi-Chao Zhuo",
"Di Wu",
"Na Li",
"Liu Chao",
"Fan-Lai Meng",
"Hao Lu",
"Zhen-Duo Shi",
"Zhi-Gang Jia",
"Cong-Hui Han"
] | We aimed to build a deep learning-based pathomics model to predict the early recurrence of non-muscle-infiltrating bladder cancer (NMIBC) in this work. A total of 147 patients from Xuzhou Central Hospital were enrolled as the training cohort, and 63 patients from Suqian Affiliated Hospital of Xuzhou Medical University were enrolled as the test cohort. Based on two consecutive phases of patch-level prediction and WSI-level prediction, we built a pathomics model, with the initial model developed in the training cohort and subjected to transfer learning, and then the test cohort was validated for generalization. The features extracted from the visualization model were used for model interpretation. After migration learning, the area under the receiver operating characteristic curve for the deep learning-based pathomics model in the test cohort was 0.860 (95% CI 0.752–0.969), with good agreement between the migration training cohort and the test cohort in predicting recurrence, and the predicted values matched well with the observed values, with p values of 0.667766 and 0.140233 for the Hosmer–Lemeshow test, respectively. Good clinical applicability was observed using a decision curve analysis method. The deep learning-based pathomics model we developed showed promising performance in predicting recurrence within one year in NMIBC patients. The 10 pathology features used to predict NMIBC recurrence can be visualized, which may be used to facilitate personalized management of NMIBC patients and to avoid ineffective or unnecessary treatment for the benefit of patients. | 10.1038/s41598-024-66870-9 | prediction of non-muscle invasive bladder cancer recurrence using deep learning of pathology image | we aimed to build a deep learning-based pathomics model to predict the early recurrence of non-muscle-infiltrating bladder cancer (nmibc) in this work. a total of 147 patients from xuzhou central hospital were enrolled as the training cohort, and 63 patients from suqian affiliated hospital of xuzhou medical university were enrolled as the test cohort. based on two consecutive phases of patch-level prediction and wsi-level prediction, we built a pathomics model, with the initial model developed in the training cohort and subjected to transfer learning, and then the test cohort was validated for generalization. the features extracted from the visualization model were used for model interpretation. after migration learning, the area under the receiver operating characteristic curve for the deep learning-based pathomics model in the test cohort was 0.860 (95% ci 0.752–0.969), with good agreement between the migration training cohort and the test cohort in predicting recurrence, and the predicted values matched well with the observed values, with p values of 0.667766 and 0.140233 for the hosmer–lemeshow test, respectively. good clinical applicability was observed using a decision curve analysis method. the deep learning-based pathomics model we developed showed promising performance in predicting recurrence within one year in nmibc patients. the 10 pathology features used to predict nmibc recurrence can be visualized, which may be used to facilitate personalized management of nmibc patients and to avoid ineffective or unnecessary treatment for the benefit of patients. | [
"we",
"a deep learning-based pathomics model",
"the early recurrence",
"non-muscle-infiltrating bladder cancer",
"nmibc",
"this work",
"a total",
"147 patients",
"xuzhou central hospital",
"the training cohort",
"63 patients",
"suqian affiliated hospital",
"xuzhou medical university",
"the test cohort",
"two consecutive phases",
"patch level prediction",
"wsi-level predictione",
"we",
"a pathomics model",
"the initial model",
"the training cohort",
"learning",
"the test cohort",
"generalization",
"the features",
"the visualization model",
"model interpretation",
"migration learning",
"the area",
"the receiver operating characteristic curve",
"the deep learning-based pathomics model",
"the test cohort",
"(95%",
"good agreement",
"the migration training cohort",
"the test cohort",
"recurrence",
"the predicted values",
"the observed values",
"p values",
"the hosmer",
"lemeshow test",
"the good clinical application",
"a decision curve analysis method",
"we",
"a deep learning-based pathomics model",
"promising performance",
"recurrence",
"one year",
"nmibc patients",
"10 state prediction nmibc recurrence group pathology features",
"which",
"personalized management",
"nmibc patients",
"ineffective or unnecessary treatment",
"the benefit",
"patients",
"147",
"xuzhou",
"63",
"xuzhou medical university",
"two",
"0.860",
"95%",
"0.667766",
"0.140233",
"one year",
"10"
] |
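The two consecutive phases above (patch-level prediction, then WSI-level prediction) can be illustrated with a minimal sketch: score patches independently, then aggregate the patch probabilities into one slide-level recurrence score. The top-k mean rule, the k value, and the toy probabilities are illustrative assumptions, not the authors' published aggregation method.

```python
import numpy as np

def wsi_score(patch_probs, top_k=10):
    """Phase 2 (WSI level): aggregate patch-level recurrence probabilities by
    averaging the top-k most suspicious patches (one common heuristic)."""
    top = np.sort(np.asarray(patch_probs))[-top_k:]
    return float(top.mean())

# Phase 1 stand-in: hypothetical patch probabilities from a patch classifier.
patch_probs = np.random.default_rng(0).uniform(0.0, 1.0, size=500)
print("WSI-level recurrence score:", round(wsi_score(patch_probs), 3))
```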
Building trust in deep learning-based immune response predictors with interpretable explanations | [
"Piyush Borole",
"Ajitha Rajan"
] | The ability to predict whether a peptide will get presented on Major Histocompatibility Complex (MHC) class I molecules has profound implications in designing vaccines. Numerous deep learning-based predictors for peptide presentation on MHC class I molecules exist with high levels of accuracy. However, these MHC class I predictors are treated as black-box functions, providing little insight into their decision making. To build trust in these predictors, it is crucial to understand the rationale behind their decisions with human-interpretable explanations. We present MHCXAI, eXplainable AI (XAI) techniques to help interpret the outputs from MHC class I predictors in terms of input peptide features. In our experiments, we explain the outputs of four state-of-the-art MHC class I predictors over a large dataset of peptides and MHC alleles. Additionally, we evaluate the reliability of the explanations by comparing against ground truth and checking their robustness. MHCXAI seeks to increase understanding of deep learning-based predictors in the immune response domain and build trust with validated explanations. | 10.1038/s42003-024-05968-2 | building trust in deep learning-based immune response predictors with interpretable explanations | the ability to predict whether a peptide will get presented on major histocompatibility complex (mhc) class i molecules has profound implications in designing vaccines. numerous deep learning-based predictors for peptide presentation on mhc class i molecules exist with high levels of accuracy. however, these mhc class i predictors are treated as black-box functions, providing little insight into their decision making. to build trust in these predictors, it is crucial to understand the rationale behind their decisions with human-interpretable explanations. we present mhcxai, explainable ai (xai) techniques to help interpret the outputs from mhc class i predictors in terms of input peptide features. in our experiments, we explain the outputs of four state-of-the-art mhc class i predictors over a large dataset of peptides and mhc alleles. additionally, we evaluate the reliability of the explanations by comparing against ground truth and checking their robustness. mhcxai seeks to increase understanding of deep learning-based predictors in the immune response domain and build trust with validated explanations. | [
"the ability",
"a peptide",
"major histocompatibility complex (mhc) class",
"i molecules",
"profound implications",
"vaccines",
"numerous deep learning-based predictors",
"peptide presentation",
"mhc class",
"i molecules",
"high levels",
"accuracy",
"these mhc class",
"i predictors",
"black-box functions",
"little insight",
"their decision making",
"turst",
"these predictors",
"it",
"the rationale",
"their decisions",
"human-interpretable explanations",
"we",
"mhcxai, explainable ai (xai) techniques",
"the outputs",
"mhc class",
"i",
"terms",
"input peptide features",
"our experiments",
"we",
"the outputs",
"the-art",
"i",
"a large dataset",
"peptides",
"mhc alleles",
"we",
"the reliability",
"the explanations",
"ground truth",
"their robustness",
"mhcxai",
"understanding",
"deep learning-based predictors",
"the immune response domain",
"trust",
"validated explanations",
"four"
] |
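One simple, model-agnostic way to attribute a predictor's output to input peptide features, in the spirit of the XAI techniques above, is occlusion: mask each residue in turn and record how much the binding score drops. The toy predictor, peptide, and mask token below are assumptions for illustration; MHCXAI's actual methods are not reproduced here.

```python
def occlusion_importance(predict, peptide, mask="X"):
    """Per-position importance: score drop when each residue is masked."""
    base = predict(peptide)
    return [round(base - predict(peptide[:i] + mask + peptide[i + 1:]), 3)
            for i in range(len(peptide))]

def toy_predictor(pep):
    """Stand-in MHC-I binding scorer favouring hydrophobic anchors at P2/P9."""
    return sum(0.4 for i in (1, 8) if i < len(pep) and pep[i] in "LVIMF")

print(occlusion_importance(toy_predictor, "SLYNTVATL"))  # anchors stand out
```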
Deep learning approach to detect cyberbullying on twitter | [
"Çinare Oğuz Aliyeva",
"Mete Yağanoğlu"
] | In recent years, especially children and adolescents have shown increased interest in social media, making them a potential risk group for cyberbullying. Cyberbullying posts spread very quickly, often taking a long time to be deleted and sometimes remaining online indefinitely. Cyberbullying can have severe mental, psychological, and emotional effects on children and adolescents, and in extreme cases, it can lead to suicide. Turkey is among the top 10 countries with the highest number of children who are victims of cyberbullying. However, there are very few studies conducted in the Turkish language on this topic. This study aims to identify cyberbullying in Turkish Twitter posts. The Multi-Layer Perceptron (MLP) based model was evaluated using a dataset of 5000 tweets. The model was trained using both social media features and textual features extracted from the dataset. Textual features were obtained using various feature extraction methods such as Bag of Words (BOW), Term Frequency-Inverse Document Frequency (TF-IDF), Hashing Vectorizer, N-gram, and word embedding. These features were utilized in training the model, and their effectiveness was evaluated. The experiments revealed that the features obtained from TF-IDF and unigram methods significantly improved the model’s performance. Subsequently, unnecessary features were eliminated using the Chi-Square feature selection method. The proposed model achieved a higher accuracy of 93.2% compared to machine learning (ML) methods used in previous studies on the same dataset. Additionally, the proposed model was compared with popular deep learning models in the literature, such as LSTM, BLSTM, and CNN, demonstrating promising results. | 10.1007/s11042-024-19869-3 | deep learning approach to detect cyberbullying on twitter | in recent years, especially children and adolescents have shown increased interest in social media, making them a potential risk group for cyberbullying. cyberbullying posts spread very quickly, often taking a long time to be deleted and sometimes remaining online indefinitely. cyberbullying can have severe mental, psychological, and emotional effects on children and adolescents, and in extreme cases, it can lead to suicide. turkey is among the top 10 countries with the highest number of children who are victims of cyberbullying. however, there are very few studies conducted in the turkish language on this topic. this study aims to identify cyberbullying in turkish twitter posts. the multi-layer perceptron (mlp) based model was evaluated using a dataset of 5000 tweets. the model was trained using both social media features and textual features extracted from the dataset. textual features were obtained using various feature extraction methods such as bag of words (bow), term frequency-inverse document frequency (tf-idf), hashing vectorizer, n-gram, and word embedding. these features were utilized in training the model, and their effectiveness was evaluated. the experiments revealed that the features obtained from tf-idf and unigram methods significantly improved the model’s performance. subsequently, unnecessary features were eliminated using the chi-square feature selection method. the proposed model achieved a higher accuracy of 93.2% compared to machine learning (ml) methods used in previous studies on the same dataset. additionally, the proposed model was compared with popular deep learning models in the literature, such as lstm, blstm, and cnn, demonstrating promising results. | [
"recent years",
"especially children",
"adolescents",
"increased interest",
"social media",
"them",
"cyberbullying posts",
"a long time",
"cyberbullying",
"severe mental, psychological, and emotional effects",
"children",
"adolescents",
"extreme cases",
"it",
"suicide",
"turkey",
"the top 10 countries",
"the highest number",
"children",
"who",
"victims",
"very few studies",
"the turkish language",
"this topic",
"this study",
"turkish twitter posts",
"the multi-layer detection",
"mlp) based model",
"a dataset",
"5000 tweets",
"the model",
"both social media features",
"textual features",
"the dataset",
"textual features",
"various feature extraction methods",
"bag",
"words",
"bow",
"tf-idf",
"vectorizer",
"word",
"these features",
"the model",
"their effectiveness",
"the experiments",
"the features",
"tf-idf and unigram methods",
"the model’s performance",
"unnecessary features",
"the chi-square feature selection method",
"the proposed model",
"a higher accuracy",
"93.2%",
"machine learning (ml) methods",
"previous studies",
"the same dataset",
"the proposed model",
"popular deep learning models",
"the literature",
"lstm",
"blstm",
"cnn",
"promising results",
"recent years",
"turkey",
"10",
"5000",
"n-gram",
"93.2%",
"cnn"
] |
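A minimal scikit-learn sketch of the pipeline described above: unigram TF-IDF features, chi-square feature selection, and an MLP classifier. The toy tweets, k value, and network size are assumptions, not the study's data or hyperparameters.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier

texts = ["you are awesome", "nobody likes you loser",
         "great game today", "go away idiot"]
labels = [0, 1, 0, 1]  # 1 = cyberbullying (toy labels)

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 1))),  # unigram TF-IDF
    ("chi2", SelectKBest(chi2, k=5)),                # chi-square selection
    ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                          random_state=0)),
])
model.fit(texts, labels)
print(model.predict(["you are a loser"]))
```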
A comparative study: prediction of parkinson’s disease using machine learning, deep learning and nature inspired algorithm | [
"Pankaj Kumar Keserwani",
"Suman Das",
"Nairita Sarkar"
] | Parkinson’s Disease (PD) is a degenerative and progressive neurological disorder that worsens over time. This disease initially affects people over 55 years old. Patients with PD often exhibit a variety of non-motor and motor symptoms and are diagnosed based on those motor and non-motor symptoms as well as numerous clinical indicators. Advancement in medical science has produced medicines for many diseases, but to date no significant remedies have been discovered for Parkinson disease. It is essential to detect PD at an early phase and take precautions accordingly to reduce its harmful impact and improve the patient’s life style to a considerable level. In this direction, Artificial Intelligence (AI) based approaches have recently attracted many researchers, as AI can handle vast amounts of data and generate accurate statistical predictions. Addressing this imperative, researchers have turned their focus toward Artificial Intelligence (AI) as a promising avenue. AI’s capacity to manage vast datasets and generate precise statistical predictions makes it an invaluable tool for PD detection. This article aims to provide a comprehensive survey and in-depth analysis of various AI-based approaches. Leveraging machine learning (ML), deep learning (DL), and meta-heuristic algorithms, these approaches contribute to the prediction of PD. Additionally, the article delves into current research directions. As the pursuit of advancements continues, the integration of AI holds promise in revolutionizing early detection methods and subsequently improving the lives of individuals grappling with Parkinson’s disease. | 10.1007/s11042-024-18186-z | a comparative study: prediction of parkinson’s disease using machine learning, deep learning and nature inspired algorithm | parkinson’s disease (pd) is a degenerative and progressive neurological disorder that worsens over time. this disease initially affects people over 55 years old. patients with pd often exhibit a variety of non-motor and motor symptoms and are diagnosed based on those motor and non-motor symptoms as well as numerous clinical indicators. advancement in medical science has produced medicines for many diseases, but to date no significant remedies have been discovered for parkinson disease. it is essential to detect pd at an early phase and take precautions accordingly to reduce its harmful impact and improve the patient’s life style to a considerable level. in this direction, artificial intelligence (ai) based approaches have recently attracted many researchers, as ai can handle vast amounts of data and generate accurate statistical predictions. addressing this imperative, researchers have turned their focus toward artificial intelligence (ai) as a promising avenue. ai’s capacity to manage vast datasets and generate precise statistical predictions makes it an invaluable tool for pd detection. this article aims to provide a comprehensive survey and in-depth analysis of various ai-based approaches. leveraging machine learning (ml), deep learning (dl), and meta-heuristic algorithms, these approaches contribute to the prediction of pd. additionally, the article delves into current research directions. as the pursuit of advancements continues, the integration of ai holds promise in revolutionizing early detection methods and subsequently improving the lives of individuals grappling with parkinson’s disease. | [
"parkinson’s disease",
"pd",
"a degenerative and progressive neurological disorder",
"time",
"this disease",
"people",
"patients",
"pd",
"a variety",
"non-motor and motor symptoms",
"those motor and non-motor symptoms",
"numerous clinical indicators",
"advancement",
"medical science",
"medicines",
"many diseases",
"no significant remedies",
"parkinson disease",
"it",
"pd",
"early phase",
"precautions",
"its harmful impact",
"the patient’s life style",
"a considerable level",
"this direction",
"artificial intelligence",
"ai) based approaches",
"many researchers",
"ai",
"vast amounts",
"data",
"accurate statistical predictions",
"this imperative",
"researchers",
"their focus",
"artificial intelligence",
"(ai",
"a promising avenue",
"ai’s capacity",
"vast datasets",
"precise statistical predictions",
"it",
"pd detection",
"this article",
"a comprehensive survey",
"-depth",
"various ai-based approaches",
"machine learning",
"ml",
"deep learning",
"dl",
"meta-heuristic algorithms",
"these approaches",
"the prediction",
"pd",
"the article",
"current research directions",
"the pursuit",
"advancements",
"the integration",
"ai",
"promise",
"early detection methods",
"the lives",
"individuals",
"parkinson’s disease",
"55 years old"
] |
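To make the nature-inspired family surveyed above concrete, here is a minimal mutate-and-select (evolutionary hill-climbing) feature-selection loop on synthetic data. It is a generic illustration of the idea, not any specific algorithm from the surveyed literature, and the dataset is synthetic rather than a PD dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

mask = rng.integers(0, 2, X.shape[1])   # random initial feature subset
best = fitness(mask)
for _ in range(30):                     # mutate one bit; keep if no worse
    child = mask.copy()
    child[rng.integers(0, mask.size)] ^= 1
    score = fitness(child)
    if score >= best:
        mask, best = child, score
print("selected features:", np.flatnonzero(mask), "CV accuracy:", round(best, 3))
```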
A hybrid approach to detecting Parkinson's disease using spectrogram and deep learning CNN-LSTM network | [
"V. Shibina",
"T. M. Thasleema"
] | Parkinson’s disease (PD) is a common illness that affects brain neurons. Medical practitioners and caregivers face challenges in detecting Parkinson's disease promptly, either in its early or late stages. There is an urgent need for non-invasive PD diagnostic technologies because timely diagnosis substantially impacts patient outcomes. This research aims to provide an efficient way of identifying Parkinson's disease by transforming voice inputs into spectrograms using the Short-Time Fourier Transform and applying deep learning algorithms. The identification of Parkinson's disease can be done by leveraging deep learning architectures such as Convolutional Neural Networks and Long Short-Term Memory networks. The experiment produced positive findings, with 95.67% accuracy, 97.62% precision, 94.67% recall, and an F1-score of 95.91%. The outcomes indicate that the suggested deep learning method is more successful in PD identification, surpassing the results of traditional classification methods. | 10.1007/s10772-024-10128-2 | a hybrid approach to detecting parkinson's disease using spectrogram and deep learning cnn-lstm network | parkinson’s disease (pd) is a common illness that affects brain neurons. medical practitioners and caregivers face challenges in detecting parkinson's disease promptly, either in its early or late stages. there is an urgent need for non-invasive pd diagnostic technologies because timely diagnosis substantially impacts patient outcomes. this research aims to provide an efficient way of identifying parkinson's disease by transforming voice inputs into spectrograms using the short-time fourier transform and applying deep learning algorithms. the identification of parkinson's disease can be done by leveraging deep learning architectures such as convolutional neural networks and long short-term memory networks. the experiment produced positive findings, with 95.67% accuracy, 97.62% precision, 94.67% recall, and an f1-score of 95.91%. the outcomes indicate that the suggested deep learning method is more successful in pd identification, surpassing the results of traditional classification methods. | [
"parkinson’s disease",
"pd",
"a common illness",
"that",
"brain neurons",
"medical practitioners",
"caregivers",
"challenges",
"parkinson's disease",
"its early or late stages",
"an urgent need",
"non-invasive pd diagnostic technologies",
"timely diagnosis",
"patient outcomes",
"this research",
"an efficient way",
"parkinson's disease",
"voice inputs",
"spectrograms",
"short term fourier transform",
"deep learning algorithms",
"the identification",
"parkinson's disease",
"the deep learning architectures",
"convolutional neural networks",
"long short-term memory networks",
"the experiment",
"positive findings",
"95.67% accuracy",
"97.62% precision",
"94.67% recall",
"an f1-score",
"95.91%",
"the outcomes",
"the suggested deep learning method",
"pd identification",
"the results",
"traditional classification methods",
"95.67%",
"97.62%",
"94.67%",
"95.91%"
] |
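The CNN-LSTM pairing above can be sketched as follows: a small CNN extracts per-frame features from an STFT spectrogram, and an LSTM models the frame sequence before a two-class head. All layer sizes and the input shape are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Toy CNN-LSTM: CNN over (freq, time), LSTM over time, PD/healthy head."""
    def __init__(self, n_freq=64, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        self.lstm = nn.LSTM(32 * (n_freq // 4), hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, spec):                   # spec: (batch, 1, freq, time)
        f = self.cnn(spec)                     # (batch, 32, freq//4, time)
        f = f.flatten(1, 2).transpose(1, 2)    # (batch, time, features)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])           # logits from last time step

logits = CNNLSTM()(torch.randn(2, 1, 64, 100))
print(logits.shape)  # torch.Size([2, 2])
```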
Thermoplastic waste segregation classification system using deep learning techniques | [
"M. Monica Subashini",
"R. S. Vignesh"
] | This research proposes a deep learning-based system, named deep CNN architecture, for the automated classification of the plastic resin in plastic waste. The system aims to detect and recognize objects such as drinking water bottles, detergent bottles, squeezable bottles, and plastic plates, and segregate them into PET, PE-HD, PE-LD, and other resin categories. The process involves capturing input images through a camera and using deep learning or traditional algorithms to detect and recognize the objects by comparing them with a trained database containing labeled objects. Unrecognized objects are dynamically trained, labeled, and updated in the database. The proposed system is implemented using Python, a versatile open-source programming language. Python’s functional and aspect-oriented programming paradigms are leveraged to develop the models. The performance of the proposed architecture is evaluated against existing works, demonstrating a classification accuracy of 92.66% according to experimental results. | 10.1007/s11042-023-16237-5 | thermoplastic waste segregation classification system using deep learning techniques | this research proposes a deep learning-based system, named deep cnn architecture, for the automated classification of the plastic resin in plastic waste. the system aims to detect and recognize objects such as drinking water bottles, detergent bottles, squeezable bottles, and plastic plates, and segregate them into pet, pe-hd, pe-ld, and other resin categories. the process involves capturing input images through a camera and using deep learning or traditional algorithms to detect and recognize the objects by comparing them with a trained database containing labeled objects. unrecognized objects are dynamically trained, labeled, and updated in the database. the proposed system is implemented using python, a versatile open-source programming language. python’s functional and aspect-oriented programming paradigms are leveraged to develop the models. the performance of the proposed architecture is evaluated against existing works, demonstrating a classification accuracy of 92.66% according to experimental results. | [
"this research",
"a deep learning-based system",
"deep cnn architecture",
"the automated classification",
"the plastic resin",
"plastic waste",
"the system",
"objects",
"drinking water bottles",
"detergent bottles",
"squeezable bottles",
"plastic plates",
"them",
"other resin categories",
"the process",
"input images",
"a camera",
"deep learning",
"traditional algorithms",
"the objects",
"them",
"a trained database",
"labeled objects",
"unrecognized objects",
"the database",
"the proposed system",
"python",
"python’s functional and aspect-oriented programming paradigms",
"the models",
"the performance",
"the proposed architecture",
"existing works",
"a classification accuracy",
"92.66%",
"experimental results",
"cnn",
"92.66%"
] |
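The dynamic-training loop described above (unrecognized objects are labeled and added to the database for retraining) needs a rule for deciding when a prediction counts as unrecognized. A simple, assumed mechanism is a confidence threshold on the classifier's output probabilities:

```python
import numpy as np

RESIN_LABELS = ["PET", "PE-HD", "PE-LD", "other"]

def route_prediction(probs, threshold=0.6):
    """Accept confident predictions; queue uncertain ones for manual labeling
    so the database can be updated and the model retrained."""
    probs = np.asarray(probs)
    if probs.max() < threshold:
        return "unrecognized: queue for labeling and retraining"
    return RESIN_LABELS[int(probs.argmax())]

print(route_prediction([0.85, 0.05, 0.05, 0.05]))  # confident -> PET
print(route_prediction([0.30, 0.28, 0.22, 0.20]))  # uncertain -> queued
```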
Deep learning model for detection of hotspots using infrared thermographic images of electrical installations | [
"Ezechukwu Kalu Ukiwe",
"Steve A. Adeshina",
"Tsado Jacob",
"Bukola Babatunde Adetokun"
] | Hotspots in electrical power equipment or installations are a major issue whenever they occur within the power system. Factors responsible for this phenomenon are many, sometimes inter-related and at other times isolated. Electrical hotspots caused by poor connections are common. Deep learning models have become popular for diagnosing anomalies in physical and biological systems, by the instrumentality of feature extraction of images in convolutional neural networks. In this work, a VGG-16 deep neural network model is applied for identifying electrical hotspots by means of transfer learning. This model was achieved by first augmenting the acquired infrared thermographic images, using the pre-trained ImageNet weights of the VGG-16 algorithm with additional global average pooling in place of conventional fully connected layers and a softmax layer at the output. With the categorical cross-entropy loss function, the model was implemented using the Adam optimizer at a learning rate of 0.0001 as well as some variants of the Adam optimization algorithm. On evaluation with a test IRT image dataset, and in comparison with similar works, the research showed that a better accuracy of 99.98% in identification of electrical hotspots was achieved. The model shows good scores in performance metrics like accuracy, precision, recall, and F1-score. The obtained results proved the potential of deep learning using computer vision parameters for infrared thermographic identification of electrical hotspots in power system installations. Also, there is a need for careful selection of the IR sensor’s thermal range during image acquisition, and a suitable choice of color palette would make for easy hotspot isolation, reduce the pixel-to-pixel temperature differential across any of the images, and easily highlight the critical region of interest with high pixel values. However, it makes edge detection difficult for human visual perception, which a computer vision-based deep learning model could overcome. | 10.1186/s43067-024-00148-y | deep learning model for detection of hotspots using infrared thermographic images of electrical installations | hotspots in electrical power equipment or installations are a major issue whenever they occur within the power system. factors responsible for this phenomenon are many, sometimes inter-related and at other times isolated. electrical hotspots caused by poor connections are common. deep learning models have become popular for diagnosing anomalies in physical and biological systems, by the instrumentality of feature extraction of images in convolutional neural networks. in this work, a vgg-16 deep neural network model is applied for identifying electrical hotspots by means of transfer learning. this model was achieved by first augmenting the acquired infrared thermographic images, using the pre-trained imagenet weights of the vgg-16 algorithm with additional global average pooling in place of conventional fully connected layers and a softmax layer at the output. with the categorical cross-entropy loss function, the model was implemented using the adam optimizer at a learning rate of 0.0001 as well as some variants of the adam optimization algorithm. on evaluation with a test irt image dataset, and in comparison with similar works, the research showed that a better accuracy of 99.98% in identification of electrical hotspots was achieved. the model shows good scores in performance metrics like accuracy, precision, recall, and f1-score. the obtained results proved the potential of deep learning using computer vision parameters for infrared thermographic identification of electrical hotspots in power system installations. also, there is a need for careful selection of the ir sensor’s thermal range during image acquisition, and a suitable choice of color palette would make for easy hotspot isolation, reduce the pixel-to-pixel temperature differential across any of the images, and easily highlight the critical region of interest with high pixel values. however, it makes edge detection difficult for human visual perception, which a computer vision-based deep learning model could overcome. | [
"hotspots",
"electrical power equipment",
"installations",
"a major issue",
"it",
"the power system",
"factors",
"this phenomenon",
"many, sometimes inter-related and other times",
"they",
"electrical hotspots",
"poor connections",
"deep learning models",
"anomalies",
"physical and biological systems",
"the instrumentality",
"feature extraction",
"images",
"convolutional neural networks",
"this work",
"a vgg-16 deep neural network model",
"electrical hotspots",
"means",
"transfer learning",
"this model",
"the acquired infrared thermographic images",
"the pre-trained imagenet weights",
"the vgg-16 algorithm",
"additional global average pooling",
"place",
"conventional fully connected layers",
"a softmax layer",
"the output",
"the categorical cross-entropy loss function",
"the model",
"the adam optimizer",
"rate",
"some variants",
"the adam optimization algorithm",
"evaluation",
"a test irt image dataset",
"a comparison",
"similar works",
"the research",
"a better accuracy",
"99.98%",
"identification",
"electrical hotspots",
"the model",
"good score",
"performance metrics",
"accuracy",
"precision",
"recall",
"f1-score",
"the obtained results",
"the potential",
"deep learning",
"computer vision parameters",
"infrared thermographic identification",
"electrical hotspots",
"power system installations",
"need",
"careful selection",
"the ir sensor’s thermal range",
"image acquisition",
"suitable choice",
"color palette",
"easy hotspot isolation",
"the pixel",
"temperature differential",
"any",
"the images",
"the critical region",
"interest",
"high pixel values",
"it",
"edge detection",
"human visual perception",
"which computer vision-based deep learning model",
"first",
"vgg-16",
"0.0001",
"99.98%"
] |
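The record spells out most of the architecture (ImageNet-pretrained VGG-16 backbone, global average pooling replacing the fully connected layers, a softmax output, categorical cross-entropy, Adam at a learning rate of 0.0001), so a close Keras sketch is possible. The 224x224 input size, frozen backbone, and two-class head are still assumptions.

```python
from tensorflow import keras

base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: keep pretrained features

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),        # replaces FC layers
    keras.layers.Dense(2, activation="softmax"),  # hotspot vs normal (assumed)
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```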
Automatic retinoblastoma screening and surveillance using deep learning | [
"Ruiheng Zhang",
"Li Dong",
"Ruyue Li",
"Kai Zhang",
"Yitong Li",
"Hongshu Zhao",
"Jitong Shi",
"Xin Ge",
"Xiaolin Xu",
"Libin Jiang",
"Xuhan Shi",
"Chuan Zhang",
"Wenda Zhou",
"Liangyuan Xu",
"Haotian Wu",
"Heyan Li",
"Chuyao Yu",
"Jing Li",
"Jianmin Ma",
"Wenbin Wei"
] | Background: Retinoblastoma is the most common intraocular malignancy in childhood. With the advanced management strategy, the globe salvage and overall survival have significantly improved, which poses subsequent challenges regarding long-term surveillance and offspring screening. This study aimed to apply a deep learning algorithm to reduce the burden of follow-up and offspring screening. Methods: This cohort study includes retinoblastoma patients who visited Beijing Tongren Hospital from March 2018 to January 2022 for deep learning algorithm development. Clinical-suspected and treated retinoblastoma patients from February 2022 to June 2022 were prospectively collected for prospective validation. Images from the posterior pole and peripheral retina were collected, and reference standards were made according to the consensus of the multidisciplinary management team. A deep learning algorithm was trained to identify “normal fundus”, “stable retinoblastoma” in which specific treatment is not required, and “active retinoblastoma” in which specific treatment is required. The performance of each classifier included sensitivity, specificity, accuracy, and cost-utility. Results: A total of 36,623 images were included for developing the Deep Learning Assistant for Retinoblastoma Monitoring (DLA-RB) algorithm. In internal fivefold cross-validation, DLA-RB achieved an area under the curve (AUC) of 0.998 (95% confidence interval [CI] 0.986–1.000) in distinguishing normal fundus and active retinoblastoma, and 0.940 (95% CI 0.851–0.996) in distinguishing stable and active retinoblastoma. From February 2022 to June 2022, 139 eyes of 103 patients were prospectively collected. In identifying active retinoblastoma tumours from all clinical-suspected patients and active retinoblastoma from all treated retinoblastoma patients, the AUC of DLA-RB reached 0.991 (95% CI 0.970–1.000) and 0.962 (95% CI 0.915–1.000), respectively. The combination between ophthalmologists and DLA-RB significantly improved the accuracy of competent ophthalmologists and residents regarding both binary tasks. Cost-utility analysis revealed that the DLA-RB-based diagnosis mode is cost-effective in both retinoblastoma diagnosis and active retinoblastoma identification. Conclusions: DLA-RB achieved high accuracy and sensitivity in identifying active retinoblastoma from the normal and stable retinoblastoma fundus. It can be used to surveil the activity of retinoblastoma during follow-up and screen high-risk offspring. Compared with referral procedures to ophthalmologic centres, DLA-RB-based screening and surveillance is cost-effective and can be incorporated within telemedicine programs. Clinical Trial Registration: This study was registered on ClinicalTrials.gov (NCT05308043). | 10.1038/s41416-023-02320-z | automatic retinoblastoma screening and surveillance using deep learning | background: retinoblastoma is the most common intraocular malignancy in childhood. with the advanced management strategy, the globe salvage and overall survival have significantly improved, which poses subsequent challenges regarding long-term surveillance and offspring screening. this study aimed to apply a deep learning algorithm to reduce the burden of follow-up and offspring screening. methods: this cohort study includes retinoblastoma patients who visited beijing tongren hospital from march 2018 to january 2022 for deep learning algorithm development. clinical-suspected and treated retinoblastoma patients from february 2022 to june 2022 were prospectively collected for prospective validation. images from the posterior pole and peripheral retina were collected, and reference standards were made according to the consensus of the multidisciplinary management team. a deep learning algorithm was trained to identify “normal fundus”, “stable retinoblastoma” in which specific treatment is not required, and “active retinoblastoma” in which specific treatment is required. the performance of each classifier included sensitivity, specificity, accuracy, and cost-utility. results: a total of 36,623 images were included for developing the deep learning assistant for retinoblastoma monitoring (dla-rb) algorithm. in internal fivefold cross-validation, dla-rb achieved an area under the curve (auc) of 0.998 (95% confidence interval [ci] 0.986–1.000) in distinguishing normal fundus and active retinoblastoma, and 0.940 (95% ci 0.851–0.996) in distinguishing stable and active retinoblastoma. from february 2022 to june 2022, 139 eyes of 103 patients were prospectively collected. in identifying active retinoblastoma tumours from all clinical-suspected patients and active retinoblastoma from all treated retinoblastoma patients, the auc of dla-rb reached 0.991 (95% ci 0.970–1.000) and 0.962 (95% ci 0.915–1.000), respectively. the combination between ophthalmologists and dla-rb significantly improved the accuracy of competent ophthalmologists and residents regarding both binary tasks. cost-utility analysis revealed that the dla-rb-based diagnosis mode is cost-effective in both retinoblastoma diagnosis and active retinoblastoma identification. conclusions: dla-rb achieved high accuracy and sensitivity in identifying active retinoblastoma from the normal and stable retinoblastoma fundus. it can be used to surveil the activity of retinoblastoma during follow-up and screen high-risk offspring. compared with referral procedures to ophthalmologic centres, dla-rb-based screening and surveillance is cost-effective and can be incorporated within telemedicine programs. clinical trial registration: this study was registered on clinicaltrials.gov (nct05308043). | [
"backgroundretinoblastoma",
"the most common intraocular malignancy",
"childhood",
"the advanced management strategy",
"the globe salvage",
"overall survival",
"which",
"subsequent challenges",
"long-term surveillance",
"offspring screening",
"this study",
"a deep learning algorithm",
"the burden",
"follow-up",
"screening.methodsthis cohort study",
"retinoblastoma patients",
"who",
"beijing tongren hospital",
"march",
"january",
"deep learning algorism development",
"clinical-suspected and treated retinoblastoma patients",
"february",
"june",
"prospective validation",
"images",
"the posterior pole",
"peripheral retina",
"reference standards",
"the consensus",
"the multidisciplinary management team",
"a deep learning algorithm",
"normal fundus”, “stable retinoblastoma",
"which",
"specific treatment",
"“active retinoblastoma",
"which",
"specific treatment",
"the performance",
"each classifier",
"sensitivity",
"specificity",
"accuracy",
"cost-utility.resultsa total",
"36,623 images",
"the deep learning assistant",
"retinoblastoma monitoring",
"dla-rb",
"-",
"dla-rb",
"an area",
"curve",
"auc",
"(95% confidence interval",
"normal fundus",
"active retinoblastoma",
"0.940 (95%",
"ci 0.851–0.996",
"stable and active retinoblastoma",
"february",
"june",
"139 eyes",
"103 patients",
"active retinoblastoma tumours",
"all clinical-suspected patients",
"active retinoblastoma",
"all treated retinoblastoma patients",
"the auc",
"dla-rb",
"0.991 (95%",
"ci 0.970–1.000",
"ci 0.915–1.000",
"the combination",
"ophthalmologists",
"dla-rb",
"the accuracy",
"competent ophthalmologists",
"residents",
"both binary tasks",
"cost-utility analysis",
"dla-rb-based diagnosis mode",
"both retinoblastoma diagnosis",
"active retinoblastoma",
"high accuracy",
"sensitivity",
"active retinoblastoma",
"the normal and stable retinoblastoma fundus",
"it",
"the activity",
"retinoblastoma",
"follow-up and screen high-risk offspring",
"referral procedures",
"ophthalmologic centres",
"dla-rb-based screening",
"surveillance",
"telemedicine programs.clinical trial registrationthis study",
"nct05308043",
"march 2018 to",
"january 2022",
"february 2022 to june 2022",
"36,623",
"0.998",
"95%",
"0.940",
"95%",
"0.851–0.996",
"february 2022 to june 2022",
"139",
"103",
"0.991",
"95%",
"0.962",
"95%",
"0.915–1.000"
] |
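The AUCs above are reported with 95% confidence intervals; one standard way to obtain such intervals (assumed here, not stated in the record) is case bootstrapping. The sketch runs on synthetic labels and scores, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)                       # 1 = active retinoblastoma (toy)
s = np.clip(y * 0.5 + rng.normal(0.25, 0.2, 300), 0, 1)

aucs = []
for _ in range(1000):                             # resample cases with replacement
    idx = rng.integers(0, len(y), len(y))
    if y[idx].min() == y[idx].max():              # skip single-class resamples
        continue
    aucs.append(roc_auc_score(y[idx], s[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC {roc_auc_score(y, s):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```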
Curriculum learning and evolutionary optimization into deep learning for text classification | [
"Alfredo Arturo Elías-Miranda",
"Daniel Vallejo-Aldana",
"Fernando Sánchez-Vega",
"A. Pastor López-Monroy",
"Alejandro Rosales-Pérez",
"Victor Muñiz-Sanchez"
] | The exponential growth of social networks has given rise to a wide variety of content. Some social content violates the integrity and dignity of users; therefore, this task has become challenging, given the need to deal with short texts, poorly written language, unbalanced classes, and non-thematic aspects. These can lead to overfitting in deep neural network (DNN) models used for classification tasks. Empirical evidence in previous studies indicates that some of these problems can be overcome by improving the optimization process of the DNN weights to avoid overfitting. Moreover, a well-defined learning process in the input examples could improve the order of the patterns learned throughout the optimization process. In this paper, we propose four Curriculum Learning strategies and a new Hybrid Genetic–Gradient Algorithm that proved to improve the performance of DNN models detecting the class of interest even in highly imbalanced datasets. | 10.1007/s00521-023-08632-8 | curriculum learning and evolutionary optimization into deep learning for text classification | the exponential growth of social networks has given rise to a wide variety of content. some social content violates the integrity and dignity of users; therefore, this task has become challenging, given the need to deal with short texts, poorly written language, unbalanced classes, and non-thematic aspects. these can lead to overfitting in deep neural network (dnn) models used for classification tasks. empirical evidence in previous studies indicates that some of these problems can be overcome by improving the optimization process of the dnn weights to avoid overfitting. moreover, a well-defined learning process in the input examples could improve the order of the patterns learned throughout the optimization process. in this paper, we propose four curriculum learning strategies and a new hybrid genetic–gradient algorithm that proved to improve the performance of dnn models detecting the class of interest even in highly imbalanced datasets. | [
"the exponential growth",
"social networks",
"rise",
"a wide variety",
"content",
"some social content",
"the integrity",
"dignity",
"users",
"this task",
"the need",
"short texts",
"poorly written language",
"unbalanced classes",
"non-thematic aspects",
"these",
"deep neural network (dnn) models",
"classification tasks",
"empirical evidence",
"previous studies",
"some",
"these problems",
"the optimization process",
"the dnn weights",
"a well-defined learning process",
"the input examples",
"the order",
"the patterns",
"the optimization process",
"this paper",
"we",
"four curriculum",
"strategies",
"a new hybrid genetic–gradient algorithm",
"that",
"the performance",
"dnn models",
"the class",
"interest",
"highly imbalanced datasets",
"four"
] |
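A minimal sketch of the curriculum idea above: present training examples easiest-first, here under the assumption that text length approximates difficulty. This is a generic illustration; the paper's four strategies and the hybrid genetic-gradient optimizer are not reproduced.

```python
import numpy as np

def curriculum_batches(texts, labels, batch_size=2, difficulty=len):
    """Yield mini-batches ordered easiest-first by a pluggable difficulty proxy."""
    order = np.argsort([difficulty(t) for t in texts])
    for i in range(0, len(order), batch_size):
        idx = order[i:i + batch_size]
        yield [texts[j] for j in idx], [labels[j] for j in idx]

texts = ["ok", "short post", "hi there",
         "a long, noisy, non-thematic social media rant!!!"]
labels = [0, 0, 0, 1]
for batch in curriculum_batches(texts, labels):
    print(batch)
```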
Deep learning-based power usage effectiveness optimization for IoT-enabled data center | [
"Yu Sun",
"Yanyi Wang",
"Gaoxiang Jiang",
"Bo Cheng",
"Haibo Zhou"
] | The proliferation of data centers is driving increased energy consumption, leading to environmentally unacceptable carbon emissions. As the use of Internet-of-Things (IoT) techniques for extensive data collection in data centers continues to grow, deep learning-based solutions have emerged as attractive alternatives to suboptimal traditional methods. However, existing approaches suffer from unsatisfactory performance, unrealistic assumptions, and an inability to address practical data center optimization. In this paper, we focus on power usage effectiveness (PUE) optimization in IoT-enabled data centers using deep learning algorithms. We first develop a deep learning-based PUE optimization framework tailored to IoT-enabled data centers. We then formulate the general PUE optimization problem, simplifying and specifying it for the minimization of long-term energy consumption in chiller cooling systems. Additionally, we introduce a transformer-based prediction network designed for energy consumption forecasting. Subsequently, we transform this formulation into a Markov decision process (MDP) and present the branching double dueling deep Q-network. This approach effectively tackles the challenges posed by enormous action spaces within MDP by branching actions into sub-actions. Extensive experiments conducted on real-world datasets demonstrate the exceptional performance of our algorithms, excelling in prediction precision, optimization convergence, and optimality while effectively managing a substantial number of actions on the order of \(10^{13}\). | 10.1007/s12083-024-01663-5 | deep learning-based power usage effectiveness optimization for iot-enabled data center | the proliferation of data centers is driving increased energy consumption, leading to environmentally unacceptable carbon emissions. as the use of internet-of-things (iot) techniques for extensive data collection in data centers continues to grow, deep learning-based solutions have emerged as attractive alternatives to suboptimal traditional methods. however, existing approaches suffer from unsatisfactory performance, unrealistic assumptions, and an inability to address practical data center optimization. in this paper, we focus on power usage effectiveness (pue) optimization in iot-enabled data centers using deep learning algorithms. we first develop a deep learning-based pue optimization framework tailored to iot-enabled data centers. we then formulate the general pue optimization problem, simplifying and specifying it for the minimization of long-term energy consumption in chiller cooling systems. additionally, we introduce a transformer-based prediction network designed for energy consumption forecasting. subsequently, we transform this formulation into a markov decision process (mdp) and present the branching double dueling deep q-network. this approach effectively tackles the challenges posed by enormous action spaces within mdp by branching actions into sub-actions. extensive experiments conducted on real-world datasets demonstrate the exceptional performance of our algorithms, excelling in prediction precision, optimization convergence, and optimality while effectively managing a substantial number of actions on the order of \(10^{13}\). | [
"the proliferation",
"data centers",
"increased energy consumption",
"environmentally unacceptable carbon emissions",
"the use",
"things",
"iot",
"extensive data collection",
"data centers",
"deep learning-based solutions",
"attractive alternatives",
"suboptimal traditional methods",
"existing approaches",
"unsatisfactory performance",
"unrealistic assumptions",
"an inability",
"practical data center optimization",
"this paper",
"we",
"power usage effectiveness",
"(pue) optimization",
"iot-enabled data centers",
"deep learning algorithms",
"we",
"a deep learning-based pue optimization framework",
"iot-enabled data centers",
"we",
"the general pue optimization problem",
"it",
"the minimization",
"long-term energy consumption",
"chiller cooling systems",
"we",
"a transformer-based prediction network",
"energy consumption forecasting",
"we",
"this formulation",
"a markov decision process",
"mdp",
"the branching",
"deep q-network",
"this approach",
"the challenges",
"enormous action spaces",
"mdp",
"actions",
"sub",
"-",
"actions",
"extensive experiments",
"real-world datasets",
"the exceptional performance",
"our algorithms",
"prediction precision",
"optimization convergence",
"optimality",
"a substantial number",
"actions",
"the order",
"\\(10^{13}\\",
"first"
] |
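The action-branching idea that keeps an action space on the order of 10^13 tractable can be sketched as a dueling Q-network with one advantage head per action dimension, so N branches of K choices replace a single K**N-way output. Sizes below are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class BranchingDuelingQNet(nn.Module):
    """Shared trunk, one state-value head, one advantage head per branch."""
    def __init__(self, state_dim, branches, choices, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.adv = nn.ModuleList(nn.Linear(hidden, choices)
                                 for _ in range(branches))

    def forward(self, state):
        h = self.trunk(state)
        v = self.value(h)                       # shared V(s)
        qs = []
        for head in self.adv:                   # per-branch A_b(s, a_b)
            a = head(h)
            qs.append(v + a - a.mean(dim=-1, keepdim=True))
        return qs                               # list of (batch, choices)

net = BranchingDuelingQNet(state_dim=8, branches=5, choices=10)
q = net(torch.randn(4, 8))
print(len(q), q[0].shape)  # 5 branches, each torch.Size([4, 10])
```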